Generative AI tools raise two concerns related to misinformation: they can be used to deliberately create convincing falsehoods, and they can produce falsehoods unintentionally through errors and hallucinations.
Given this potential for intentional and unintentional falsehoods, how can media consumers trust what they see and read?
Scholarly publishers are choosing to require human authorship and to hold those authors accountable for any AI-generated content and errors.
See also references under Scholarly Publication on the Higher Education Impacts tab.
A coalition of media and content-authoring companies is working on a system for digitally watermarking media objects with information about their provenance and alteration history.
Others are looking for ways to empower consumers to detect AI-generated content themselves, although such detection tools have notable pitfalls and limitations.