Partner Content

Event Recap: Open Source Safety, Shallow Fakes and What’s Next in Responsible AI

By The Information Partnerships

As the artificial intelligence boom continues, questions about safety and ethics are more important than ever. In partnership with Salesforce, Aaron Holmes of The Information held a virtual panel to discuss the intersection of safety, technology and regulatory frameworks for AI with three powerhouses in the industry:

  • Kathy Baxter, principal architect, Ethical AI Practice, Salesforce
  • Dr. Sara Hooker, VP of research and head of Cohere for AI
  • Dr. Sasha Luccioni, AI and climate lead, Hugging Face

What are the biggest concerns for AI professionals?

Baxter said one of her biggest concerns was the proliferation of shallow fakes, which are subtle changes to video or audio, such as making someone look older or slowing down speech so it seems like the speaker is impaired.

And here’s where the danger lies, as Baxter sees it: “How do those videos impact individuals who don’t think twice about whether or not this content is trustworthy? We end up dealing with the liar's dividend—where you can’t tell the difference between what is truth and what is a lie.”

Luccioni agreed, saying that another of her pressing concerns is the lack of a framework to help non-technical buyers make informed decisions about the safest models to use.

Hooker talked about companies’ need to mitigate risk as the AI models are released for general use. “Once a model is in the wild, how do you even know that all the precautions that you built in are still able to be preserved in downstream use?”

How do developers know which AI models are the safest?

Holmes asked the panelists about the risks surrounding open-source models, which come from many different developers around the world.

The first step, Luccioni responded, is to document each model’s intended use. She added that things like model cards and evaluations are also useful. “I'm a huge evaluation geek—above and beyond just performance,” she said.

This allows developers to better understand the AI models they’re considering. Luccioni added: “Providing this information transparently and making it an accepted practice to document models and data sets is really picking up traction.”
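For a concrete picture of what that documentation looks like, here is a minimal sketch of reading a model card programmatically with the huggingface_hub library. The model ID is an arbitrary public example, not one the panel discussed.

```python
# A minimal sketch of inspecting a model card with huggingface_hub.
# "bert-base-uncased" is an arbitrary, well-documented public model,
# used here only as an example.
from huggingface_hub import ModelCard

card = ModelCard.load("bert-base-uncased")

# Structured YAML metadata: license, tags, datasets, evaluation results.
print(card.data.to_dict())

# Free-text body: intended use, limitations, known biases.
print(card.text[:500])
```

The structured metadata is what leaderboards and search tools consume; the free-text body is where developers document the intended use Luccioni describes.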

But how do developers know which models have been tested for safety? Luccioni touted the leaderboards Hugging Face uses to help people compare models more easily.

Baxter said she thought that leaderboards don’t go far enough. “You need to be able to also combine the quant with the qual,” she said. “You may get a particular score for toxicity or bias. But once you do, you might want to do a hackathon or another type of adversarial testing. That makes it real.” She warned that “if you're just trying to chase that leaderboard, you don't know what the model is actually going to look like once you put it out into the real world.”
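As an illustration of the kind of quantitative score Baxter is referring to, here is a minimal sketch using Hugging Face’s evaluate library and its toxicity measurement, which downloads a pretrained toxicity classifier. The sample outputs below are invented for illustration.

```python
# A sketch of computing a quantitative toxicity score over model outputs.
# Requires the `evaluate` library; the first call downloads a pretrained
# toxicity classifier. The sample strings are invented for illustration.
import evaluate

toxicity = evaluate.load("toxicity", module_type="measurement")

model_outputs = [
    "Thanks for asking; here is a summary of the report.",
    "Only an idiot would ask a question like that.",
]

results = toxicity.compute(predictions=model_outputs)
for text, score in zip(model_outputs, results["toxicity"]):
    print(f"{score:.3f}  {text}")
```

A score like this supplies the quant; the hackathons and adversarial testing Baxter describes supply the qual that a single number cannot.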

Hooker pointed out that “a lot of traditional benchmarks have been focused on narrow notions of harm like toxicity. But now the notion of harm can be much more complex. So I see in the medium term there's a need for standards.”

The changing face of AI safety

At the end of the discussion, all the panelists said they felt optimistic about the future of AI safety, while acknowledging that bad actors will always try to jailbreak models. But safety will take work.

As Baxter said, “It's not inevitable that things get worse, but we have to pull on our big-kid pants and work together in a way to make sure that we create technology that benefits everyone equally, and not just a small number of individuals in the most powerful positions.”

Luccioni then brought up something striking: that everyone on the panel is a woman, and she’d never been on a tech panel where that was the case. “It’s so refreshing,” she said.

Baxter broadened the point to the makeup of the AI safety field itself. “This entire field of responsible AI, it really has been uplifted by women—and women of color in particular—that have done some of this groundbreaking work. And I think the field is better for it.”
