Google, Meta and OpenAI announce measures to identify content created with artificial intelligence
Google, Meta and OpenAI have announced various measures in recent days to make it easier to identify images or files produced or retouched with artificial intelligence (AI). These initiatives are driven by the fear that the indiscriminate spread of false content could influence the results of elections or have other unintended consequences. The three companies had already announced that they would try to prevent AI from being misused in the 2024 electoral processes.
After years of investing billions of euros in improving the capabilities of generative artificial intelligence, the three platforms have joined the Coalition for Content Provenance and Authenticity (C2PA), which offers a standard certificate and brings together a large part of the digital industry, from media outlets such as the BBC and The New York Times to banks and camera manufacturers. The companies admit in their statements that there is currently no single solution that is effective in all cases for identifying AI-generated content.
The initiatives range from marks visible on the images themselves to signals hidden in a file's metadata or embedded in the generated pixels. Google also claims that SynthID, its tool still in beta, can identify AI-generated audio.
The C2PA standard for embedding provenance information in image metadata has many technical weaknesses, which its developers have identified in detail. OpenAI itself, which will use C2PA in DALL·E 3 (its image generator), warns in its statement against placing too much confidence in it: “Metadata like C2PA is not a miracle solution for provenance problems. It can easily be removed, either accidentally or intentionally. An image lacking this metadata may or may not have been generated with ChatGPT or our API,” said the company behind the chatbot that popularized generative artificial intelligence.
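OpenAI's warning is easy to reproduce: most image tools drop metadata on re-save unless it is explicitly carried over. A minimal sketch using the Pillow library (the file names are illustrative; this is not OpenAI's or C2PA's actual tooling):

```python
# Minimal sketch of how fragile metadata-based provenance is: re-saving an
# image with a common library silently drops EXIF/XMP-style metadata unless
# it is explicitly passed through. File names are hypothetical.
from PIL import Image

original = Image.open("generated.jpg")  # hypothetical AI-generated file
print("metadata bytes before:", len(original.info.get("exif", b"")))

# Save without forwarding the metadata: the provenance record is gone.
original.save("resaved.jpg", quality=90)

print("metadata bytes after:",
      len(Image.open("resaved.jpg").info.get("exif", b"")))  # typically 0
```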
The companies give no firm deadlines for fully implementing their measures. 2024 is packed with elections in major countries, and the influence of AI-generated content could be explosive, making preventive safeguards all the more urgent. “We are expanding this feature, and in the coming months we will begin applying labels in all languages supported by each application,” said Nick Clegg, Meta's president of global affairs, in his statement.
Meta promises not only to visually label images generated with its own AI tools, but also to detect and identify, where possible, AI-generated images published on its networks: Instagram, Facebook and Threads. “We will be able to label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as they implement their plans to add metadata to images created by their tools,” Clegg says.
Google will apply its methods to YouTube as well. While these approaches represent the state of the art of what is technically possible today, the company believes its SynthID-based method can survive deliberate attempts to remove it, such as “adding filters, changing colors, and saving with various lossy compression schemes, most commonly used for JPEG files.”
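SynthID itself is not publicly available, but the idea behind a pixel-domain watermark that survives such edits can be illustrated with a toy spread-spectrum sketch: a secret low-amplitude noise pattern is added to the image, and a detector correlates the image against that same pattern after re-encoding. Everything below (the synthetic image, the key, the scoring) is an illustrative assumption, far simpler and far less robust than Google's method:

```python
# Toy spread-spectrum watermark: NOT SynthID, just an illustration of how a
# pixel-domain mark can still correlate with its key after lossy compression.
import numpy as np
from PIL import Image

SEED = 42  # the secret "key" shared by embedder and detector

def key_pattern(shape):
    # Regenerate the same pseudorandom pattern from the key.
    return np.random.default_rng(SEED).standard_normal(shape)

def embed(pixels, strength=3.0):
    # Add a faint noise pattern; invisible at this amplitude.
    return np.clip(pixels + strength * key_pattern(pixels.shape), 0, 255)

def score(pixels):
    # Correlation with the key: near `strength` if marked, near 0 otherwise.
    return float(np.mean((pixels - pixels.mean()) * key_pattern(pixels.shape)))

# Synthetic stand-in image (a grayscale gradient).
h, w = 256, 256
base = np.tile(np.linspace(0, 255, w), (h, 1))
marked = embed(base)

print(f"unmarked score: {score(base):.2f}")
for quality in (95, 75, 50):
    Image.fromarray(marked.astype(np.uint8)).save("wm.jpg", quality=quality)
    decoded = np.asarray(Image.open("wm.jpg"), dtype=float)
    print(f"JPEG quality {quality}: score {score(decoded):.2f}")
```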