As AI technology advances, it's becoming increasingly difficult to tell which content is real and which isn't. While many deepfakes are made purely for entertainment, a growing number are being used to defraud people or influence elections.
In partnership with Comcast NBCUniversal LIFT Labs, The Information’s Margaux MacColl sat down to discuss misinformation and deepfakes with three experts:
Ben Colman: Comcast NBCUniversal LIFT Labs Accelerator Alum & co-founder and CEO of Reality Defender, a deepfake detection platform for enterprise and government users.
Henry Ajder: Globally recognized expert and adviser on generative AI, deepfakes and the synthetic media revolution.
Julia Kostova, PhD: Director of publishing and head of the U.S. division at Frontiers, the third most cited and sixth-largest scholarly publisher in the world.
Level Setting: What Is Generative AI?
MacColl started off the discussion by asking each of the panelists how they define generative AI. Henry Ajder, known as the first “gen AI cartographer,” boiled down his definition to this: “Generative AI is essentially hyperrealistic synthetic content, which often when used maliciously is referred to as deepfakes.”
Ben Colman agreed with Ajder and warned that creating misinformation with AI has become easier than ever. “In previous years, you had to be an expert. But now anybody with access to a browser and Google can just search for something and then be off to the races, doing things that in previous years were impossible.”
Risk Without Regulation
It’s clear that generative AI can fall into the wrong hands. But what, if any, guardrails are in place to prevent bad actors from using this technology with ill intent?
Currently, Colman sees the generative AI space as a bit of a Wild West, with few regulations: “The proliferation of incredibly free and easy-to-use tools is complemented in a negative way by a lack of regulations on those same tools. And large companies are taking a bit of a posturing approach to claiming they’re on the right side of what may happen, but in reality [they] are trying to avoid onerous regulations.”
Julia Kostova spoke of the risks in the context of scientific publishing. “There needs to be transparency around the data used to train models, and to make sure that data is being used legitimately, whether it’s with respect to privacy laws and civil rights. … A lot can and should be done at the governance level, at the policy level, both nationally and globally.”
Tool Talk
What tools exist now to combat misinformation? And what’s a wish list for the future? Ajder puts his trust in content provenance. “That’s a bottom-up approach, which involves essentially capturing at the source—when an image or video is taken or audio is recorded. A certain amount of metadata is cryptographically secured to that piece of media and acts as a receipt.”
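To make Ajder's description concrete, here is a minimal sketch of how a capture-time provenance "receipt" could work: the media's hash and its metadata are signed with a device key, and anyone holding the matching public key can later verify that neither the media nor the metadata has been altered. This is an illustration only, not the actual content-credentials specification used in industry; the function names and the use of the third-party `cryptography` package are assumptions for the example.

```python
# Sketch of a capture-time provenance "receipt": hash the media, bind metadata
# to that hash, and sign the bundle with the capture device's private key.
# Illustrative only; assumes `pip install cryptography`.
import json
import hashlib

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature


def make_receipt(media_bytes, metadata, device_key):
    """Hash the media, attach the metadata, and sign the bundle at capture time."""
    payload = json.dumps(
        {"media_sha256": hashlib.sha256(media_bytes).hexdigest(), "metadata": metadata},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": device_key.sign(payload)}


def verify_receipt(media_bytes, receipt, device_public_key):
    """Return True only if the signature is genuine and the media still matches the signed hash."""
    try:
        device_public_key.verify(receipt["signature"], receipt["payload"])
    except InvalidSignature:
        return False  # receipt was forged or tampered with
    claimed = json.loads(receipt["payload"])
    return claimed["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()


# Usage: a camera signs an image at capture; a downstream platform verifies it.
device_key = ed25519.Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
receipt = make_receipt(image, {"captured_at": "2024-05-01T12:00:00Z", "device": "camera-01"}, device_key)

print(verify_receipt(image, receipt, device_key.public_key()))            # True: untouched media
print(verify_receipt(image + b"edit", receipt, device_key.public_key()))  # False: media was altered
```

In this sketch, verification fails for any post-capture edit to the media or its metadata, which is what lets the signed bundle act like the tamper-evident "receipt" Ajder describes.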
But Colman doesn’t believe full-scale provenance is feasible. Instead, he operates on the presumption that there will never be a ground truth, which is why Reality Defender focuses on detecting manipulation in the content itself rather than verifying its origin.
For Kostova, the risks in her world revolve around detecting plagiarism. She believes the human touch is still an essential detection tool: “I think there needs to be a human intervention—a human review—to produce better outcomes.”
Can Deepfakes Influence Elections?
Election interference that uses deepfakes is a problem all over the world. It’s certainly going to be an issue as we face the next U.S. presidential election. But how much of a threat is it? Ajder said he doesn’t think deepfakes will be able to flip an election. However, he noted, “We’re starting to see disinformation—like getting Joe Biden's voice to say transphobic remarks, or to make it appear that Elizabeth Warren [is] saying that Republicans shouldn’t have the right to vote—going viral.”
Colman added, “Once [deepfakes] go viral, they take on a life of their own. But I don’t think there’s going to be a 30% swing in a particular candidate’s public perception. But even swings as low as 3% or 4% at the right time can have a relatively binary result in very, very close races.”
Regulation Is Needed and Is (Probably) Coming Soon
As deepfakes rise, so too will regulation. All the panelists were optimistic about regulatory frameworks coming into play internationally, but agreed those frameworks won’t be in place for several years. Until then, everyone would be wise to keep one eye open when it comes to assessing what’s real—and what’s not.