AI Agenda Live Recap: IBM’s Ritika Gunnar on Where Enterprises Can Find ROI With AI

It seems that the more enterprises spend on AI, the more they have to spend on AI. The startup costs of deploying AI models are quickly surpassed by the costs of maintaining the technology, from acquiring GPUs to building out the storage to house all that sensitive data. In 2025, companies are adding billions of dollars of investment to the billions they’ve already spent on the technology. It’s little wonder that many of them are asking when they might see a return on that investment.
To answer that question, The Information Executive Editor Amir Efrati sat down with Ritika Gunnar, general manager of data & AI at IBM, to understand how one of the world’s largest, most innovative tech companies extracts value from AI, and how it helps other organizations do the same.
On the (Use) Case
For the last few years, IBM has been laser-focused on pairing its customers with enterprise-ready AI solutions, from watsonx—an AI studio that helps businesses build, train and deploy their AI models—to its Granite series of small AI models, which help organizations achieve results at a fraction of the cost of larger models.
To better understand how these technologies can serve customers, IBM battle-tests its products internally, said Gunnar.
“We use a term called Client Zero, because we use our own technologies across several different functions,” she noted.
The most successful use cases are also the most obvious: customer service (think a triage system that uses chatbots to field inquiries) and HR (AI is particularly good at screening candidates). Gunnar said IBM has also found efficiencies by integrating AI into its sales organization to provide targeted training and enablement for sellers, and by using watsonx Code Assistant, which automates about 6% of code across all IBM products.
“That may seem small,” said Gunnar, “but when you think about the fact that IBM services large enterprises, that’s billions in productivity that we’ve been able to reinvest into R&D.”
When Smaller Is Better
For enterprises looking to get the most bang for their buck, Gunnar said that when it comes to choosing the right-sized model, “bigger isn’t always better.”
When combined with alignment techniques, she said, smaller, more specialized models often outperform their larger counterparts. As an example, she pointed to a financial institution that recently swapped out GPT-4 for one of IBM’s fine-tuned Granite models.
“They were able to get 6% improved accuracy at 2% of the cost,” she said, adding that smaller models are easier to integrate with enterprise applications and data while offering greater transparency and governance, both “top concerns from clients.”
“Our perspective is that fit-for-purpose models are extremely important to enterprise use cases, especially as we scale them out,” she said.
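To make the “fit-for-purpose” idea concrete, here is a minimal, illustrative sketch of prompting one of IBM’s small, openly released Granite 3.2 instruct models through the Hugging Face transformers library. The model ID, the prompt and the generation settings are assumptions for illustration only; this is not the client deployment Gunnar describes, and it does not use IBM’s watsonx tooling.

```python
# Illustrative sketch: prompting a small, openly released Granite instruct model
# via the Hugging Face transformers library. The model ID below is assumed to be
# one of IBM's public Granite 3.2 checkpoints; swap in whichever size fits.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.2-2b-instruct"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs `accelerate`

# A narrow, customer-service-style task: the kind of job where a
# fit-for-purpose small model can stand in for a much larger one.
messages = [
    {
        "role": "user",
        "content": "A customer reports a duplicate charge on their invoice. "
                   "Draft a short, polite triage response.",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Running a model of this size on modest hardware, rather than calling a frontier model for every request, is the kind of difference behind the “2% of the cost” figure Gunnar cites.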
The Need for Speed?
The speed at which companies implement AI varies greatly by use case. Some applications, like customer care, are easier to deploy quickly because they provide immediate benefits to users. Others, especially those involving financial transactions or highly regulated environments, require more rigorous testing.
“If you’re using AI in your trading application, a lot more testing and iterations need to happen before it’s ready to be put into production,” Gunnar explained, adding that even within a single company, different AI applications may demand different levels of caution, with the level of risk tolerance shaping the deployment.
AI’s Post-DeepSeek Future
Asked how the release of DeepSeek, China’s open-source AI model, would change the game, Gunnar said that while she still believes there’s room for innovation in AI models, most of the advances, and the opportunity for enterprises, lie in next-generation AI agents that bring enhanced reasoning to the mix, underpinned by LLMs like IBM’s Granite 3.2 models.
“One of the things that DeepSeek proved for us is [that] the strategy of using small language models is really essential for enterprises to drive the outcomes they need,” said Gunnar, adding that IBM’s current focus is on enhancing AI’s capabilities via agentic reasoning and multi-agent orchestration.
For example, IBM prototyped a financial AI assistant that could analyze investment options, calculate tax implications and even authenticate trades via voice commands.
“That just gives you an idea of the impact this technology is going to have,” Gunnar said.
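For a sense of what “multi-agent orchestration” can mean in practice, here is a toy sketch of a router that hands different parts of a request to specialized agents. Every agent name and routing rule below is a hypothetical illustration, not IBM’s prototype; a production system would typically rely on an LLM-driven planner rather than keyword matching.

```python
# Toy sketch of multi-agent orchestration, loosely inspired by the financial
# assistant described above. All agents and routing rules are hypothetical.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    description: str
    run: Callable[[str], str]


def analyze_investments(request: str) -> str:
    return f"[analysis agent] comparing options mentioned in: {request!r}"


def estimate_taxes(request: str) -> str:
    return f"[tax agent] estimating tax implications for: {request!r}"


def authenticate_trade(request: str) -> str:
    return f"[auth agent] voice authentication required before executing: {request!r}"


AGENTS = {
    "analysis": Agent("analysis", "analyze and compare investment options", analyze_investments),
    "tax": Agent("tax", "calculate tax implications of a trade", estimate_taxes),
    "auth": Agent("auth", "authenticate and execute a trade", authenticate_trade),
}


def orchestrate(request: str) -> list[str]:
    """Naive keyword router standing in for an LLM-driven planner."""
    text = request.lower()
    plan = []
    if "analyze" in text or "compare" in text:
        plan.append("analysis")
    if "tax" in text:
        plan.append("tax")
    if "buy" in text or "sell" in text or "trade" in text:
        plan.append("auth")
    return [AGENTS[name].run(request) for name in plan]


if __name__ == "__main__":
    for step in orchestrate("Analyze these two ETFs, check the tax hit, then buy 10 shares."):
        print(step)
```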