
Recap: Combating Misinformation, Deepfakes and the Proliferation of Generative AI

With AI technology progressing, it’s getting increasingly difficult to tell what content is real and what’s not. While many deepfakes are for entertainment purposes, more and more of them are being used to defraud people or influence elections.

In partnership with Comcast NBCUniversal LIFT Labs, The Information’s Margaux MacColl sat down to discuss misinformation and deepfakes with three experts:

Ben Colman: Comcast NBCUniversal LIFT Labs Accelerator Alum & co-founder and CEO of Reality Defender, a deepfake detection platform for enterprise and government users.

Henry Ajder: Globally recognized expert and adviser on generative AI, deepfakes and the synthetic media revolution.

Julia Kostova, PhD: Director of publishing and head of the U.S. division at Frontiers, the third most cited and sixth-largest scholarly publisher in the world.

Level Setting: What Is Generative AI?

MacColl started off the discussion by asking each of the panelists how they define generative AI. Henry Ajder, known as the first “gen AI cartographer,” boiled down his definition to this: “Generative AI is essentially hyperrealistic synthetic content, which often when used maliciously is referred to as deepfakes.”

Ben Colman agreed with Henry and warned that creating misinformation with AI has become easier than ever. “In previous years, you had to be an expert. But now anybody with access to a browser and Google can just search for something and then be off to the races, doing things that in previous years were impossible.”

Risk Without Regulation

It’s clear that generative AI can fall into the wrong hands. But what, if any, guardrails are in place to prevent bad actors from using this technology with ill intent?

Currently, Colman sees the generative AI space as a bit of a wild west, with few regulations: “The proliferation of incredibly free and easy-to-use tools is complemented in a negative way by a lack of regulations on those same tools. And large companies are taking a bit of a posturing approach to claiming they’re on the right side of what may happen, but in reality [they] are trying to avoid onerous regulations.”

Julia Kostova spoke of the risks in terms of scientific publications. “There needs to be transparency around the data used to train models, and to make sure that data is being used legitimately, whether it’s with respect to privacy laws and civil rights.…A lot can and should be done at the governance level, at the policy level, both nationally and globally.”

Tool Talk

What tools exist now to combat misinformation? And what’s on the wish list for the future? Ajder puts his trust in content provenance. “That’s a bottom-up approach, which involves essentially capturing at the source—when an image or video is taken or audio is recorded. A certain amount of metadata is cryptographically secured to that piece of media and acts as a receipt.”
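To make the provenance idea more concrete, here is a minimal sketch of how capture metadata could be cryptographically bound to a media file and later checked. This is not the C2PA standard or any vendor’s product; the field names, key handling and the use of the third-party `cryptography` package are illustrative assumptions.

```python
# Minimal illustration of content provenance: bind capture metadata to a media
# file with a digital signature so later edits can be detected.
# Field names and key handling are illustrative, not a real standard.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(media_bytes: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Return a provenance 'receipt': a media hash plus metadata, signed at capture time."""
    payload = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,
    }
    message = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = key.sign(message).hex()
    return payload


def verify_capture(media_bytes: bytes, receipt: dict, pub: Ed25519PublicKey) -> bool:
    """Check that the media still matches its hash and the signature is valid."""
    claimed = {
        "media_sha256": receipt["media_sha256"],
        "metadata": receipt["metadata"],
    }
    if hashlib.sha256(media_bytes).hexdigest() != claimed["media_sha256"]:
        return False
    try:
        pub.verify(
            bytes.fromhex(receipt["signature"]),
            json.dumps(claimed, sort_keys=True).encode(),
        )
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    device_key = Ed25519PrivateKey.generate()  # in practice this would sit in secure hardware
    photo = b"...raw image bytes straight from the sensor..."
    receipt = sign_capture(
        photo,
        {"device": "camera-01", "taken_at": "2023-09-27T12:00:00Z"},
        device_key,
    )

    print(verify_capture(photo, receipt, device_key.public_key()))             # True
    print(verify_capture(photo + b"edited", receipt, device_key.public_key())) # False
```

In a real deployment the private key would stay inside the capture device and the receipt would travel with the file, which is the “receipt” role Ajder describes.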

But Colman doesn’t believe full-scale provenance is feasible. Instead, he operates on the presumption that there will never be a ground truth.

For Kostova, the risks in her world revolve around detecting plagiarism. She believes the human touch is still an essential detecting tool: “I think there needs to be a human intervention—a human review—to produce better outcomes.”
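As a rough illustration of the flag-then-review workflow Kostova describes, the sketch below uses Python’s standard-library difflib to surface suspiciously similar passages for a human reviewer. The threshold and sample texts are made up for the example; real plagiarism screening is far more sophisticated.

```python
# Rough sketch of a flag-then-review workflow: automatic similarity scoring
# surfaces candidate passages, but a human reviewer makes the final call.
# The threshold and sample texts are illustrative only.
from difflib import SequenceMatcher

REVIEW_THRESHOLD = 0.8  # similarity ratio above which a human should take a look


def flag_for_review(submission: str, corpus: dict[str, str]) -> list[tuple[str, float]]:
    """Return (source_id, similarity) pairs that exceed the review threshold."""
    flagged = []
    for source_id, source_text in corpus.items():
        ratio = SequenceMatcher(None, submission.lower(), source_text.lower()).ratio()
        if ratio >= REVIEW_THRESHOLD:
            flagged.append((source_id, ratio))
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    corpus = {
        "paper-123": "Generative models can produce hyperrealistic synthetic media.",
        "paper-456": "Peer review remains central to scholarly publishing.",
    }
    submission = "Generative models can produce hyper-realistic synthetic media."
    for source_id, score in flag_for_review(submission, corpus):
        print(f"Route {source_id} to a human reviewer (similarity {score:.2f})")
```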

Can Deepfakes Influence Elections?

Election interference that uses deepfakes is a problem all over the world, and it’s certain to be an issue in the next U.S. presidential election. But how much of a threat is it? Ajder said he doesn’t think deepfakes will be able to flip an election. However, he noted, “We’re starting to see disinformation—like getting Joe Biden’s voice to say transphobic remarks, or to make it appear that Elizabeth Warren [is] saying that Republicans shouldn’t have the right to vote—going viral.”

Colman added, “Once [deepfakes] go viral, they take on a life of their own. But I don’t think there’s going to be a 30% swing in a particular candidate’s public perception. But even swings as low as 3% or 4% at the right time can have a relatively binary result in very, very close races.”

Regulation Is Needed and Is (Probably) Coming Soon

As deepfakes rise, so too will regulation. All the panelists were optimistic about regulatory frameworks coming into play internationally, but agreed those frameworks won’t be in place for several years. Until then, everyone would be wise to keep one eye open when it comes to assessing what’s real—and what’s not.

