October 5, 2024

Google, whose work in artificial intelligence helped make A.I.-generated content far easier to create and spread, now wants to ensure that such content is traceable as well.

The tech giant said on Thursday that it was joining an effort to develop credentials for digital content, a sort of “nutrition label” that identifies when and how a photograph, a video, an audio clip or another file was produced or altered — including with A.I. The company will collaborate with companies like Adobe, the BBC, Microsoft and Sony to fine-tune the technical standards.

The announcement follows a similar pledge made on Tuesday by Meta, which like Google has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels that identified such material.

Google, which spent years pouring money into its artificial intelligence initiatives, said it would explore how to incorporate the digital certification into its own products and services, though it did not specify its timing or scope. Its Bard chatbot is connected to some of the company’s most popular consumer services, such as Gmail and Docs. On YouTube, which Google owns and which will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars pontificating on current events in voices powered by text-to-speech services.

Recognizing where online content originates and how it changes is a high priority for lawmakers and tech watchdogs in 2024, when billions of people will vote in major elections around the world. After years of disinformation and polarization, realistic A.I.-generated images and audio, along with unreliable A.I. detection tools, have led people to doubt even further the authenticity of what they see and hear on the internet.

Configuring digital files to include a verified record of their history could make the digital ecosystem more trustworthy, according to those who back a universal certification standard. Google is joining the steering committee for one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standards have been supported by news organizations such as The New York Times as well as by camera manufacturers, banks and advertising agencies.

Laurie Richardson, Google’s vice president of trust and safety, said in a statement that the company hoped its work would “provide important context to people, helping them make more informed decisions.” She noted Google’s other endeavors to provide users with more information about the online content they encountered, including labeling A.I. material on YouTube and offering details about images in Search.

Efforts to attach credentials to metadata — the underlying information embedded in digital files — are not flawless.

OpenAI said this week that its A.I. image-generation tools would soon add watermarks to images according to the C2PA standards. Beginning on Monday, the company said, images generated by its online chatbot, ChatGPT, and the stand-alone image-generation technology, DALL-E, will include both a visual watermark and hidden metadata designed to identify them as created by artificial intelligence. The move, however, “is not a silver bullet to address issues of provenance,” OpenAI said, adding that the tags “can easily be removed either accidentally or intentionally.”
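OpenAI's caveat is easy to demonstrate. Provenance credentials that live in a file's metadata travel alongside the pixels rather than inside them, so any tool that rewrites the file without copying those metadata chunks discards the record. The sketch below is an illustration, not the actual C2PA format (real C2PA manifests are embedded in dedicated, cryptographically signed structures, and the chunk name and label here are placeholders): it builds a tiny PNG by hand with a text metadata chunk standing in for a provenance tag, then strips that chunk while leaving the image data untouched.

```python
import struct
import zlib

def png_chunk(ctype, data):
    # A PNG chunk: 4-byte length, 4-byte type, data, CRC-32 of type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png(text_chunks):
    # Minimal 1x1 grayscale PNG with optional tEXt metadata chunks.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # one scanline: filter byte + one pixel
    out = b"\x89PNG\r\n\x1a\n" + png_chunk(b"IHDR", ihdr)
    for key, value in text_chunks:
        out += png_chunk(b"tEXt", key + b"\x00" + value)
    return out + png_chunk(b"IDAT", idat) + png_chunk(b"IEND", b"")

def strip_text(png):
    # Walk the chunk list and drop every tEXt chunk; pixels are untouched.
    out, pos = png[:8], 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length
        if ctype != b"tEXt":
            out += png[pos:end]
        pos = end
    return out

# "provenance" / "c2pa-manifest-placeholder" are stand-in names, not C2PA fields.
tagged = make_png([(b"provenance", b"c2pa-manifest-placeholder")])
clean = strip_text(tagged)
assert b"c2pa-manifest-placeholder" in tagged
assert b"c2pa-manifest-placeholder" not in clean
```

The stripped file is still a valid PNG that renders identically; only the label is gone. This is why C2PA pairs metadata with visible watermarks and why OpenAI calls neither one a silver bullet.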

(The New York Times Company is suing OpenAI and Microsoft for copyright infringement, accusing the tech companies of using Times articles to train A.I. systems.)

There is “a shared sense of urgency” to shore up trust in digital content, according to a blog post last month from Andy Parsons, the senior director of the Content Authenticity Initiative at Adobe. The company released artificial intelligence tools last year, including its A.I. art-generation software Adobe Firefly and a Photoshop tool known as generative fill, which uses A.I. to expand a photo beyond its borders.

“The stakes have never been higher,” Mr. Parsons wrote.

Cade Metz contributed reporting.