The App Store for Brains
Apple is reportedly planning to let users pick and choose which third-party large language models power core iPhone features in iOS 27, including Siri and Writing Tools. Internally codenamed “Extensions,” the feature turns the walled garden into a bustling agora for AI. Google and Anthropic models are already being tested. The company, under incoming CEO John Ternus, seems to be admitting it cannot build its own foundation model as effectively as it can curate your choices.
The Fox in the Henhouse
This “choose your own adventure” approach conveniently sidesteps Apple’s massive AI infrastructure bill, but it also sidesteps accountability for privacy. Apple wants users to believe this is about giving them freedom. It is more likely about offloading liability. If Google’s model misbehaves or Anthropic’s model leaks data, Apple can shrug and point to the model’s developer. Meanwhile, ChatGPT remains conspicuously present as just another option, not the default. That is a tactical retreat, not a revolution.
The Real Motive Hiding in Plain Sight
Apple’s playbook has always been to control the rails, not the train. By hosting rival models on its hardware, Apple collects valuable telemetry on which AI functions users actually invoke, when, and for what purpose. This is not about openness. It is about analyzing your intent signals without building its own costly frontier model. The privacy promise rings hollow when your on-device AI is a fork of Google’s Gemini. CVE-2026-29262 and CVE-2026-29345 highlight that even sandboxed models can exfiltrate system prompts.
Source: TechCrunch
