The Associated Press published standards today for generative AI use in its newsroom. The organization, which has a licensing agreement with ChatGPT maker OpenAI, outlines a fairly strict and sensible set of measures around the emerging technology while warning its staff not to use AI to create publishable content. Although none of the new guidelines are particularly controversial, less scrupulous outlets may see the AP's blessing as a license to use generative AI excessively or poorly.
The organization’s AI manifesto emphasizes a belief that artificial intelligence content should be treated as the flawed tool that it is — not a replacement for trained writers, editors and reporters exercising their best judgment. “We do not see AI as a replacement of journalists in any way,” the AP’s Vice President for Standards and Inclusion, Amanda Barrett, wrote in an article about its approach to AI today. “It is the responsibility of AP journalists to be accountable for the accuracy and fairness of the information we share.”
The article instructs AP reporters to view AI-generated content as “unvetted source material,” to which editorial staff “should apply their editorial judgment and AP’s sourcing standards when considering any information for publication.” It says employees may “experiment with ChatGPT with caution” but not create publishable content with it. That extends to images, too. “In accordance with our standards, we do not alter any elements of our photos, video or audio,” it states. “Therefore, we do not allow the use of generative AI to add or subtract any elements.” However, it carves out an exception for stories in which AI illustrations or art are themselves the subject — and even then, the content must be clearly labeled as such.
Barrett warns about AI’s potential to spread misinformation. To prevent the accidental publishing of anything AI-created that appears authentic, she says AP journalists “should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media.” To protect privacy, the guidelines also prohibit writers from entering “confidential or sensitive information into AI tools.”
While those are a fairly common-sense and non-controversial set of rules, other media outlets have been less discerning. CNET was caught early this year publishing error-ridden AI-generated financial explainer articles (marked as computer-generated only if you clicked on the article’s byline). Gizmodo found itself in a similar spotlight this summer when it ran a Star Wars article full of inaccuracies. It’s not hard to imagine other outlets — desperate for an edge in the increasingly competitive media landscape — seeing the AP’s (tightly restricted) AI use as a green light to make robot journalism a central fixture in their newsrooms, publishing poorly edited or inaccurate content, or failing to label AI-generated work as such.