The Associated Press clarifies its guidelines for using AI to write articles

Associated Press guidelines
Image source: Gizmodo

TL;DR

  • AP’s stance on AI in journalism: Enhancing operations without replacing human journalists.
  • Amanda Barrett’s insights: Experimenting with AI agents, but not for publishable content.
  • Scrutinizing AI influence: AP journalists’ responsibility to verify sources and images.
  • AI in article generation: Balancing automation with editorial oversight.
  • AP’s unique approach: Collaborating with OpenAI to train language models.

The Associated Press has taken a firm stance against its journalists using AI systems to compose articles. That stance, however, has not stopped the organization from monetizing its archive by licensing it to train generative AI models.

In a blog post published Wednesday, Amanda Barrett, the AP’s Vice President of Standards, writes that while the wire service sees artificial intelligence as a tool to improve how it operates, generative AI will not replace the work of its journalists.

In an itemized list, Barrett spells out how AP staff may “experiment” with ChatGPT and other conversational AI agents, albeit with restraint. The tools cannot be used to create publishable content directly, and their output is treated as “unvetted source material,” meaning journalists must obtain factual sources to corroborate any information that comes from them. Standard verification processes still apply.

AP journalists are also expected to scrutinize their sources for signs of AI influence, which means running reverse image searches and checking for corroborating reports from other sources.

Fundamentally, AI requires AP journalists to be even more vigilant about the authenticity of their sources. The new directives follow an open letter signed by prominent media organizations, including the AP, calling on lawmakers to require consent before AI models are trained on their journalism.

Barrett prescribes the same circumspection and skepticism journalists apply in their customary work: trying to identify the provenance of original content, running reverse image searches to establish where an image came from, and comparing reports with similar content from reputable media outlets.

The AP does use AI to auto-generate some articles, but through a system that has been in place for nearly a decade and is limited to brief, data-driven items such as corporate earnings summaries and local sports results. Over time, the AP has added other AI tools that condense stories into short synopses and scan social media feeds for analysis.

As for generative AI and images, the AP says AI will not be used to alter photos, videos, or audio, and it will not publish AI-generated images that present a distorted depiction of reality. AI-generated visuals may be used, however, when the AI-generated material is itself the subject of the story.

Clearly, the AP is far from shunning AI’s potential. The organization has struck a two-year agreement with OpenAI, the maker of ChatGPT and GPT-4, allowing its archive of content to be used to train language models. OpenAI has also made modest philanthropic contributions to U.S. journalism nonprofits working to bring AI tools into local newsrooms.

This approach differs from how other major news organizations are grappling with the rise of generative AI. The New York Times recently amended its Terms of Service to prohibit the use of its articles for AI training.

Google has reportedly tried to interest major news organizations, such as The New York Times and The Washington Post, in adopting AI tools. Other experiments, such as CNET’s, produced entirely AI-written articles that turned out to be riddled with inaccuracies. G/O Media’s sites, including Gizmodo, likewise published AI-written articles in July that were criticized for their many errors.

The Associated Press carries considerable weight as the preeminent news wire agency in the United States. Its content is syndicated by more than a thousand smaller news outlets nationwide, and it maintains bureaus across many countries, publishing news in English, Spanish, and Arabic.

The AP Stylebook also shapes writing conventions across much of the news industry. Whether other outlets will follow the AP’s lead and resist the lure of fast, cheap, soulless, and error-prone AI-written copy remains to be seen.

Source(s): Gizmodo

Adam Pierce

Adam Pierce is a seasoned technology journalist and professional content writer who has a genuine passion for delivering the latest tech news and updates. With a wealth of experience in the field, Adam is committed to providing NwayNews readers with accessible, informative, and engaging content. He aims to keep readers well-informed about the latest breakthroughs, gadget releases, and industry trends through his articles.
