A few years ago, deepfakes were a novelty, and creating them required extreme computing power. Today, deepfakes are ubiquitous and can be used for disinformation, hacking, and other nefarious purposes.
Intel Labs has developed real-time deepfake detection technology to combat this growing problem. Ilke Demir, a senior research scientist at Intel, explains the technology behind deepfakes, Intel’s detection methods, and the ethical considerations involved in developing and implementing such tools.
Deepfakes are videos, speech, or images where the actor or action is not real but created by artificial intelligence (AI). Deepfakes use complex deep learning architectures, such as generative adversarial networks, variational auto-encoders, and other AI models, to create highly realistic and believable content. These models create synthetic personalities, lip-sync videos, and even text-to-image conversions, making it challenging to distinguish between real and fake content.
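To make the "adversarial" part of that architecture concrete, here is a minimal, illustrative sketch of how a generative adversarial network pits a generator against a discriminator. It is a toy PyTorch example with tiny networks and random placeholder data, not the architecture of any particular deepfake tool.

```python
# Toy GAN sketch: a generator learns to fool a discriminator.
# Networks and "real" data here are stand-ins; production deepfake
# models are far larger and train on face imagery.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, data_dim)        # placeholder for real training data
    fake = G(torch.randn(32, latent_dim))   # generator's attempt at a fake

    # Discriminator learns to separate real from fake.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```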
The term deepfake is sometimes applied to real content that has been altered, such as a 2019 video of former House Speaker Nancy Pelosi, which was doctored to make her appear drunk.
Demir’s team investigates computational deepfakes, which are synthetic forms of content created by machines. “The reason it’s called a deepfake is that it has this complex deep learning architecture of generative AI that creates all the content,” she said.
Cybercriminals and other bad actors often misuse deepfake technology. Use cases include political misinformation, adult content featuring celebrities or non-consenting individuals, market manipulation, and impersonation for financial gain. These harms highlight the need for effective deepfake detection methods.
Intel Labs has developed one of the world’s first real-time deepfake detection platforms. Instead of looking for artifacts of fakery, the technology focuses on detecting what’s real, such as heart rate. Using a technique called photoplethysmography, the detection system analyzes the subtle color changes in the veins caused by changing oxygen content, changes that are computationally detectable. This allows the technology to determine whether the person on screen is a real human or a synthetic one.
“We try to see what is true and real. Heart rate is one of [those signals],” said Demir. “So when your heart pumps blood, it goes to your veins, and the veins change color because of the oxygen content. It is invisible to our eyes; I can’t just watch this video and see your heartbeat. But that color change is computationally visible.”
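For readers who want a feel for how a photoplethysmography-style signal can be pulled out of ordinary video, the rough sketch below averages the green channel over a fixed face region in each frame and looks for a dominant frequency in the normal human pulse range. It is only an illustration of the general PPG idea, not Intel’s detection pipeline; the file name and face region are assumptions, and a real system would detect and track the face.

```python
# Rough PPG illustration: estimate a pulse rate from subtle color
# changes in a face region of a video. Not Intel's pipeline.
import cv2
import numpy as np

cap = cv2.VideoCapture("face_video.mp4")   # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
signal = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:200, 100:200]          # placeholder face region
    signal.append(roi[:, :, 1].mean())     # mean green-channel intensity
cap.release()

sig = np.asarray(signal) - np.mean(signal)  # remove the DC component
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)

# Keep only frequencies in a plausible heart-rate band (about 45-180 bpm).
band = (freqs >= 0.75) & (freqs <= 3.0)
if band.any():
    bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
    print(f"Estimated pulse: {bpm:.1f} bpm")
```

A detector built on this idea asks whether such a physiological signal is present and consistent across the face; synthetic faces tend not to carry it.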
Intel’s deepfake detection technology is being implemented across a variety of sectors and platforms, including social media tools, news agencies, broadcasters, content creation tools, startups, and nonprofits. By integrating the technology into their workflows, these organizations can better identify and reduce the spread of deepfakes and misinformation.
Despite the potential for misuse, deepfake technology has legitimate applications. One of the primary uses is creating avatars to better represent individuals in digital environments. Demir points to a specific use case called “MyFace, MyChoice,” which uses deepfakes to enhance privacy on online platforms.
In simple terms, this approach lets individuals control their appearance in online photos, replacing their face with a “quantifiably dissimilar deepfake” if they wish to stay anonymous. This gives people more privacy and control over their identity, and helps defeat automatic facial recognition algorithms.
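The phrase “quantifiably dissimilar” implies a measurable distance between the original face and its replacement. The sketch below shows one plausible way to check that, using cosine distance between face embeddings. The get_embedding function is a hypothetical stand-in for a real face-recognition model, and the threshold is arbitrary; the snippet uses random stand-in data so it runs on its own.

```python
# Sketch of a "quantifiably dissimilar" check: only accept a replacement
# face whose embedding sits far from the original's in embedding space.
import numpy as np

def get_embedding(face_image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a face-recognition network here.
    rng = np.random.default_rng(abs(hash(face_image.tobytes())) % (2**32))
    return rng.standard_normal(128)

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_sufficiently_dissimilar(original, replacement, threshold=0.6) -> bool:
    """Accept the replacement only if it is far from the original in
    embedding space, so recognition systems cannot link the two."""
    return cosine_distance(get_embedding(original),
                           get_embedding(replacement)) > threshold

original_face = np.random.rand(112, 112, 3)   # stand-in image arrays
candidate_face = np.random.rand(112, 112, 3)
print(is_sufficiently_dissimilar(original_face, candidate_face))
```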
Ensuring the ethical development and implementation of AI technologies is essential. Intel’s Trusted Media team works with anthropologists, social scientists, and user researchers to evaluate and refine the technology. The company also has a Responsible AI Council, which reviews AI systems against responsible and ethical principles, checking for potential biases, limitations, and potentially harmful use cases. This multidisciplinary approach helps ensure that AI technologies, such as deepfake detection, benefit people rather than harm them.
“We’ve got legal people, we’ve got social scientists, we’ve got psychologists, and they all come together to find the limits, to find out if there’s bias: algorithmic bias, systematic bias, data bias, whatever kind of bias,” said Demir. The team scans the code to find “any possible use cases of a technology that could harm people.”
As deepfakes become more widespread and sophisticated, developing and deploying detection technologies to combat misinformation and other harms is increasingly important. Intel Labs’ real-time deepfake detection technology offers a scalable and effective solution to this growing problem.
By incorporating ethical considerations and collaborating with experts in various disciplines, Intel is working towards a future where AI technologies are used responsibly and for the betterment of society.