As AI image generators become more advanced, spotting deepfakes is more challenging than ever. Law enforcement and world leaders continue to sound the alarm about the dangers of AI-generated deepfakes on social media and in conflict zones.
“We are reaching a point where we can no longer believe what we see,” Marko Jak, co-founder and CEO of Secta Labs, told Decrypt in an interview. “Right now, it’s easier, because deepfakes are not very good yet.”
According to Jak, we’re not too far away, perhaps within a year, from the point where recognizing a fake image at first sight is no longer possible. And he should know: Jak is the CEO of an AI image generation company.
Jak co-founded Secta Labs in 2022. The Austin-based generative AI startup focuses on creating high-quality AI-generated images: users can upload photos of themselves and turn them into AI-generated headshots and avatars.
As Jak explained, Secta Labs sees users as the owners of the AI models generated from their data, with the company acting merely as a custodian that helps create images from those models.
The potential misuse of more advanced AI models has led world leaders to call for immediate action on AI regulation and prompted some companies to withhold their most advanced tools from the public.
Last week, after announcing Voicebox, its new AI-generated voice platform, Meta said it would not release the AI to the public.
“While we believe it is important to be open with the AI community and share our research to advance the state of the art in AI,” a Meta spokesperson told Decrypt in an email, “it’s also necessary to strike the right balance between openness and responsibility.”
Earlier this month, the U.S. Federal Bureau of Investigation warned of AI deepfake extortion scams, in which criminals use photos and videos taken from social media to create fake content.
The answer to fighting deepfakes, according to Jak, may lie not in spotting a deepfake but in exposing one.
“AI is the first way you could spot [a deepfake],” Jak said. “There are people building artificial intelligence where you can put in an image, like a video, and the AI can tell you if it was generated by AI.”
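As a rough illustration of what such a detector's interface might look like, not a tool named in the article, a classifier could be invoked through an off-the-shelf image-classification pipeline; the model name below is a hypothetical placeholder:

```python
# Illustrative sketch only: "example-org/ai-image-detector" is a
# hypothetical placeholder, not a real checkpoint from the article.
from transformers import pipeline

# Load an (assumed) classifier trained to label images as
# human-made or AI-generated.
detector = pipeline("image-classification", model="example-org/ai-image-detector")

# The pipeline returns a list of {label, score} predictions for the image.
for prediction in detector("suspect_photo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.2%}")
```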
Generative AI and the potential use of AI-generated images in film and television are a hot topic in the entertainment industry. SAG-AFTRA members voted to authorize a strike before entering contract negotiations, with artificial intelligence a key concern.
Jak added that the challenge is an AI arms race: as the technology improves, bad actors create ever more sophisticated deepfakes to counter the tools designed to detect them.
Acknowledging that blockchain has been overused—some might say overhyped—as a solution to real-world problems, Jak said the technology and cryptography could help solve the deepfake problem.
But while technology can solve many of the issues with deepfakes, Jak said a lower-tech solution, the wisdom of the crowd, may be the key.
“One of the things I’ve seen Twitter do, which I think is a great idea, is Community Notes, where people can add notes to give context to someone’s tweet,” Jak said. “A tweet could be misinformation, like a deepfake.” He added that it would benefit social media companies to think of ways to use their communities to validate whether the content being spread is authentic.
“Blockchain can address certain issues, but cryptography can help authenticate the origin of an image,” he said. “This can be a practical solution, because it deals with source verification rather than the content of the image, no matter how sophisticated the deepfake.”
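As a minimal sketch of that source-verification idea, assuming for illustration that a camera or publishing tool signs each image with an Ed25519 key via Python's cryptography package, a verifier checks where the bytes came from rather than how the image looks:

```python
# Minimal sketch of signing-based provenance, assuming the creator
# holds an Ed25519 keypair; any edit to the image bytes breaks the check.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The creator (e.g., a camera or publishing tool) signs the image once.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = open("original_photo.jpg", "rb").read()
signature = private_key.sign(image_bytes)

# Anyone with the public key can later verify the file's origin.
def is_authentic(data: bytes) -> bool:
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes))                # True: untouched original
print(is_authentic(image_bytes + b"tampered"))  # False: content was altered
```

Because any change to the file invalidates the signature, the check holds no matter how convincing the fake appears, which is the property Jak points to.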