In the race to build large language models, one big tech name has not surfaced as a contestant: Apple. Until now.
My colleague Wayne Ma reported on Wednesday that Apple created a team four years ago, long before the current generative AI hype cycle, to develop large language models. In other words, Apple may not be as much of an LLM laggard as people thought.
That shouldn’t be surprising, given that Apple has been a visionary in the past, most obviously by bringing touchscreens to the mainstream with the iPhone, which helped create the foundation for countless mobile applications like Uber and TikTok.
Apple’s LLM team today is just 16 people strong, but it has a budget in the millions of dollars a day for training Apple’s most advanced models. (To put that in perspective, OpenAI CEO Sam Altman has said it cost more than $100 million to train OpenAI’s most advanced model, GPT-4.) And the team is just one of at least four at the company developing language or image models, a sign of the financial and personnel resources Apple has been quietly funneling into this area.