Social media giant Meta, formerly known as Facebook, will include an invisible watermark on all images it creates using artificial intelligence (AI) as it takes steps to prevent misuse of the technology.
In a December 6 report detailing updates for Meta AI, the company's virtual assistant, Meta revealed that it will soon add invisible watermarking to all AI-generated images created with the “imagine with Meta AI” experience. Like many other AI image generators, Meta AI creates images and content based on user prompts, and Meta aims to prevent bad actors from treating the service as yet another tool to deceive the public.
The new watermarking feature is designed to be difficult for a creator to remove.
“In the coming weeks, we will add invisible watermarking to the imagine with Meta AI experience for more transparency and traceability.”
Meta said it will use a deep learning model to apply watermarks to images created with its AI tool. The watermarks are invisible to the human eye but can be detected by a corresponding model.
Unlike traditional watermarks, Meta claims its AI watermarks “can handle common image manipulations such as cropping, color changes (brightness, contrast, etc.), screenshots and more.” While the watermarking will initially launch for images created through Meta AI, the company plans to bring the feature to other Meta services that use AI-generated images.
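As a toy illustration of the general idea, the sketch below embeds and detects an invisible mark using least-significant-bit (LSB) encoding of pixel values. This is not Meta's technique; Meta describes a learned, deep-learning-based watermark that survives cropping and color changes, whereas LSB embedding is fragile under such edits. The bit pattern and function names here are hypothetical, chosen only to show the core concept: a change imperceptible to the eye that a matching detector can recover.

```python
# Toy invisible-watermark sketch (NOT Meta's method): hide a bit pattern
# in the least significant bits of pixel values. Each pixel changes by at
# most 1 out of 255, so the mark is invisible to the eye, yet a detector
# that knows the pattern can recover it exactly.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit tag

def embed(pixels):
    """Return a copy of the pixel list with the tag written into the
    least significant bits of the first len(WATERMARK) pixels."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the tag bit
    return out

def detect(pixels):
    """True if the first pixels carry the watermark bit pattern."""
    return [p & 1 for p in pixels[: len(WATERMARK)]] == WATERMARK

# Toy 8-bit grayscale "image" as a flat pixel list.
image = [200, 13, 54, 90, 151, 33, 77, 240, 18, 66]
marked = embed(image)

print(detect(marked))                                    # watermark found
print(max(abs(a - b) for a, b in zip(marked, image)))    # per-pixel change is at most 1
```

A real robust watermark spreads the signal redundantly across the whole image in a way a trained decoder can still read after cropping or recompression, which is why simple edits that defeat this toy scheme do not defeat Meta's.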
In its latest update, Meta AI also introduced a “reimagine” feature for Facebook Messenger and Instagram, which lets users send and receive AI-generated images. As a result, both messaging services will also get the invisible watermarking.
Related: Tom Hanks, MrBeast and other celebrities warn about deep fake AI scams
AI services such as DALL-E and Midjourney already allow traditional watermarks to be added to the content they generate. However, such watermarks can be removed simply by cropping the edges of the image. Additionally, certain AI tools can automatically strip watermarks from images, which Meta claims is not possible with its output.
Since generative AI tools went mainstream, many entrepreneurs and celebrities have raised the alarm about AI-powered scam campaigns. Scammers use readily available tools to create fake videos, audio and images of popular figures and spread them on the internet.
In May, an AI-generated image showing an explosion near the Pentagon — the headquarters of the United States Department of Defense — caused the stock market to fall briefly.
Prime example of the dangers of the pay-to-verify system: This account, which tweeted a (probably AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legitimate Bloomberg news feed. pic.twitter.com/SThErCln0p
– Andy Campbell (@AndyBCampbell) May 22, 2023
The fake image, as shown above, was later picked up and circulated by other news media outlets, resulting in a snowball effect. However, local authorities, including the Pentagon Force Protection Agency, which oversees the security of the building, said they were aware of the circulating report and confirmed that “no explosion or incident” had occurred.
@PFPAOfficial and the ACFD became aware of a social media report circulating online about an explosion near the Pentagon. NO explosions or incidents have occurred on or near the Pentagon reservation, and there is no immediate threat or danger to the public. pic.twitter.com/uznY0s7deL
– Arlington Fire & EMS (@ArlingtonVaFD) May 22, 2023
In the same month, human rights advocacy group Amnesty International fell for AI-generated images depicting police brutality and used them to run campaigns against the authorities.
“We removed the images from the social media posts, because we did not want the criticism of the use of AI-generated images to distract from the core message of supporting the victims and their calls for justice in Colombia,” said Erika Guevara Rosas, Amnesty’s Americas director.
Magazine: Lawmakers’ fear and skepticism are driving proposed US crypto regulations