Apple doesn't give up control easily. The
iOS 27 Extensions framework is as close as the company gets.
Summary
- Bloomberg's Mark Gurman confirmed that iOS 27, iPadOS 27, and macOS 27 will introduce a system called Extensions, letting users choose from multiple third-party AI models — including Google Gemini and Anthropic's Claude — to power Siri, Writing Tools, and Image Playground.
- Extensions works through the App Store: AI providers add support to their existing apps, and once installed, users can set their preferred model in the Apple Intelligence and Siri section of Settings — routing tasks across Apple Intelligence features to the chosen provider.
- This ends ChatGPT's exclusive position as the only external AI model integrated into Apple Intelligence, a setup that has been in place since iOS 18 and attracted an antitrust lawsuit from Elon Musk's xAI over alleged competitive harm.
- Gemini retains a separate privileged position as the native model powering Apple's rebuilt Siri and Apple Intelligence features — reportedly backed by a deal worth approximately $1 billion per year — meaning Extensions sits on top of an existing hierarchy rather than replacing it.
- Users will also be able to assign distinct Siri voices to different AI models, making it audibly clear whether Apple's own system, Gemini, Claude, or another provider is responding to a query.
What Extensions Actually Is — and How It Works
The name suggests something modest. The reality is a structural shift in how Apple Intelligence operates. Apple's own description of the feature in test builds of iOS 27 reads: "Extensions allow you to access generative AI capabilities from installed apps on demand, through Apple Intelligence features such as Siri, Writing Tools, Image Playground and more." In practical terms, a user who installs the Claude app and enables Extensions support can route Writing Tools tasks to Claude rather than Apple's default model — the same way ChatGPT has functioned as a Siri fallback since iOS 18, but extended across the full Apple Intelligence stack and opened to competing providers simultaneously.
The pathway runs through the App Store. Providers implement Extensions support in their apps; Apple hosts a dedicated AI apps section in the store to surface eligible options; users configure their preferences in Settings. Apple controls eligibility criteria, the discovery interface, and what ships as default — a level of platform leverage that ensures openness on Apple's own terms.
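Apple has published no Extensions API, so any concrete code is speculation. Purely as an illustrative model — every type, case, and function name below is invented for this sketch, not Apple's — the routing described above amounts to a per-feature preference lookup that falls back to Apple's default model when no third-party provider has been selected:

```swift
// Hypothetical model of Extensions routing. All names here are
// invented for illustration; Apple has not published this API.

enum Provider: String {
    case appleDefault = "Apple Intelligence"
    case chatGPT = "ChatGPT"
    case claude = "Claude"
    case gemini = "Gemini"
}

enum Feature {
    case siri, writingTools, imagePlayground
}

struct ExtensionsSettings {
    // Per-feature choice, as a user might configure it in the
    // Apple Intelligence and Siri section of Settings.
    var preferred: [Feature: Provider] = [:]

    // Features with no third-party selection fall back to
    // Apple's own model.
    func provider(for feature: Feature) -> Provider {
        preferred[feature] ?? .appleDefault
    }
}

var settings = ExtensionsSettings()
settings.preferred[.writingTools] = .claude

print(settings.provider(for: .writingTools).rawValue) // Claude
print(settings.provider(for: .siri).rawValue)         // Apple Intelligence
```

The fallback line is the point: openness is layered on top of Apple's default, not substituted for it, which mirrors the platform leverage described above.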
The Gemini Distinction Worth Understanding
Here's the nuance that matters. Gemini operates at two levels in iOS 27 — and they're different. At the native level, Apple licensed Gemini to power its rebuilt, more personalized Siri and Apple Intelligence foundation models, reportedly paying around $1 billion annually for the arrangement. That integration runs regardless of what users configure in Settings. At the Extensions level, Gemini can also appear as a user-selectable option like Claude or ChatGPT — but that's a separate, opt-in access layer.
A user who never touches the Extensions settings may still be routed to Gemini through the native integration. A user who selects Claude via Extensions has made an active choice. That distinction matters for how we interpret what Apple has actually opened up here.
The Voice Distinction and What It Signals
The per-model voice differentiation is a small but revealing design choice. Assigning separate Siri voices to separate AI providers makes the provider's identity audible rather than abstracting it behind a unified Apple Intelligence brand. It's an unusual acknowledgment that users might care who is answering their question — and an implicit admission that these models aren't interchangeable. That's Apple being more transparent than usual about the complexity underneath the interface.
The full Extensions architecture — eligibility criteria, privacy handling, and default behavior — is expected to be formally unveiled at WWDC 2026, which runs June 8–12.