ai - legal insight

#38 A Quiet AI Takeover? How Gemini Could Become the Hidden Infrastructure of Every Smartphone

Rumors have been swirling since the summer about an unprecedented partnership between Apple and Google in the field of artificial intelligence. Specifically, industry insiders suggest that Apple’s voice assistant Siri may soon be supercharged by Google’s Gemini, a state-of-the-art large language model. Such an integration, if it happens, would effectively put Google’s AI technology into the hands of nearly every smartphone user in the Western world – spanning both Android devices and iPhones. The prospect is exciting from a technology standpoint: it promises a smarter, more capable Siri that could rival advanced chatbots like ChatGPT. Yet it also raises serious European legal concerns, notably in the realms of data protection (privacy) and competition law, given the outsized roles of Apple and Google. This article explores the speculation around the Siri–Gemini tie-up and delves into its potential implications under European law.

Rumors of an Apple–Google AI Alliance

How the Story Emerged

The idea that Apple might lean on Google for AI firepower would have sounded far-fetched not long ago, but multiple reports in 2025 suggest exactly that. In late June 2025, Bloomberg’s Mark Gurman reported that Apple was considering outside help to bolster Siri’s intelligence. Apple had been struggling to develop its own advanced large-language model for Siri and even delayed a planned Siri overhaul earlier that year. Gurman revealed that Apple approached OpenAI and Anthropic to create versions of their models for testing on Apple’s private cloud infrastructure. Internally, Apple ran a kind of bake-off: the Siri team tested whether these third-party models, or Google’s Gemini, handled requests better than Apple’s own prototypes. Interestingly, Anthropic’s Claude was apparently deemed the most promising in terms of raw capability, outperforming both OpenAI’s and Google’s models in Apple’s tests. However, Apple’s decision was not based on quality alone.

Why Gemini – And Not Claude or GPT?

By autumn 2025, the rumor mill converged on Google’s Gemini as Apple’s chosen partner. Reports emerged that Apple was finalizing a deal with Google to use a custom Gemini AI model to power Siri’s next-generation features. The scale of this AI model is immense: Bloomberg reported it would be a 1.2-trillion-parameter model developed by Google, dwarfing the size of any model Apple has built so far. For comparison, the machine-learning model behind Apple’s current Apple Intelligence features has around 150 billion parameters. This suggests that the Google-built “brain” for Siri could be almost an order of magnitude larger, enabling far more complex and nuanced understanding of queries. Apple apparently decided that leveraging Google’s expertise was the fastest way to catch up in the AI race. Apple’s own AI efforts have lagged to the point that even its executives acknowledged that the large-language-model version of Siri was not meeting expectations and had to be delayed for improvements.

Price and existing business ties seemed decisive. Bloomberg’s reporting indicates that Anthropic’s fees to power Siri would have exceeded $1.5 billion per year, significantly more than Google’s offer. Google, by contrast, was willing to license a custom Gemini model for around $1 billion annually. While that is still an enormous sum, it is relatively small compared to the roughly $20 billion per year that Google already pays Apple to remain the default search engine on Apple devices. This pre-existing search deal likely made both companies comfortable doing business and may have given Google a foot in the door to offer favorable terms for Gemini. Apple reportedly concluded that Anthropic’s technology was superior, but Google “made more sense financially” partly because of the deep relationship the two giants already share.

The Quiet, Almost Invisible Partnership

By early November 2025, the story reached mainstream tech media. Bloomberg, Reuters, The Verge and others reported that Apple is poised to use Google’s Gemini to power a “smarter, more capable” Siri, likely debuting in 2026. Apple is said to be paying on the order of $1 billion per year for Google’s AI expertise. The Gemini model would be custom-built for Apple and tasked with specific functions such as summarizing information and performing multi-step planning and execution tasks for Siri. Apple’s own in-house models will still be used for certain Siri features, especially those involving personal or device-specific data.

The planned technical architecture is telling. Apple intends to run the Gemini AI on its own Private Cloud Compute servers, rather than on Google’s infrastructure. This means Apple retains physical and technical control over how the model operates. Reports explicitly state that because the AI model will run within Apple’s cloud environment, Google will not have access to Apple data. In effect, Apple is licensing the model weights and architecture from Google, but not outsourcing Siri’s operations or data processing to Google’s own cloud. To users, Siri will still look and feel like Apple’s assistant; there will be no “Google inside” branding.

Perhaps the most intriguing detail is that both companies seemingly plan to keep this partnership quiet. According to Gurman, neither Apple nor Google wants to publicly highlight the collaboration. Siri will not be marketed as “powered by Google Gemini,” even if, technically, that is true. Apple reportedly calls the project “Glenwood” internally and plans to position the upgraded Siri as Apple’s own technology, running on Apple’s servers with an Apple-designed interface. Launch is rumored for spring 2026, likely via an iOS update.

This quiet alliance has enormous implications. If it goes ahead as described, Google’s AI technology will, in practice, sit behind both Android phones and iPhones. That effectively gives Google technical reach into the AI experience of almost every smartphone user in the Western world. The rest of this article examines how that intersects with European data protection and competition law.

Data Protection Implications under European Law

Apple’s Privacy Image Meets Google’s Data DNA

Apple has spent years carefully crafting a privacy-first narrative. On billboards and in keynotes, the company reminds us that privacy is a “fundamental human right” and that what happens on your iPhone should stay on your iPhone. Google, by contrast, built its empire on data. While Google invests heavily in privacy engineering and offers tools like privacy controls and auto-delete, its core business still depends on collecting, analysing and monetising enormous amounts of user data.

The Apple–Google AI partnership therefore raises a core question: can a company whose brand is built on minimising data access successfully cooperate with a company whose business thrives on data? In European legal terms, this translates into a more precise question: who controls the data, who processes it, and on what legal basis under the GDPR?

The key technical choice Apple appears to have made is highly relevant here. By running the Gemini model on Apple’s Private Cloud Compute servers, and not on Google’s servers, Apple is trying to ensure that Siri queries do not flow into Google’s data centers. Freevacy, a UK-based privacy resource, summarised this by noting that Apple will “run all Gemini computations on its proprietary private cloud compute (PCC) servers”. In other words, Gemini is meant to be a tool that Apple hosts and controls, not a remote Google service that receives users’ raw voice requests.

Controller, Processor and the GDPR Lens

Under the GDPR, the notion of who is a “controller” and who is a “processor” is central. A controller decides “why” and “how” personal data is processed, whereas a processor acts only on the controller’s behalf and instructions. In the Siri–Gemini setup, Apple would almost certainly be the main controller: it decides that Siri should process user voice queries, using an AI model to interpret and respond. Google, if it never sees identifiable data and only supplies the model, could arguably be positioned as a mere technology vendor or, in some cases, as a processor operating under Apple’s strict instructions.

If Google has no independent access to Siri data and no say in how it is used, the legal risk shifts mostly to Apple. Apple would be responsible for ensuring a lawful basis for processing (for example, contract performance or legitimate interest for running Siri), for respecting users’ rights and for keeping data secure. Google’s role would be largely contractual and technical.

However, if in practice Google needs to receive some form of data – for instance, anonymised logs to fine-tune the model, aggregated statistics for debugging or telemetry about failure cases – then the arrangement becomes more complex. At that point, a data processing agreement under Article 28 GDPR would be necessary, and Apple would have to ensure that any transfer of data to Google (especially if outside the EU) complies with international data transfer rules. Apple would need to verify that what is shared is genuinely anonymised where possible, or covered by appropriate safeguards otherwise.

Another complication is the concept of joint controllership. If Google not only provides the model but also influences how it is updated, what it learns and to what end, a case could be made that Apple and Google jointly determine certain processing purposes. In that scenario, Article 26 GDPR would apply, requiring a transparent arrangement that spells out each party’s responsibilities. Even if this remains largely behind the scenes, European regulators may be interested in whether Google is truly just a supplier, or something closer to a partner with its own stake in the data.

Consent, Transparency and Apple’s Own Rules

GDPR emphasises transparency and, in many cases, consent. Users must be informed in a clear and accessible way about who processes their data, for what purpose, and on what legal basis. It is easy to imagine that many iPhone users would be surprised to learn that when they speak to Siri, Google’s AI model is involved behind the scenes, even if Google never sees the raw recordings.

Apple recently tightened its own rules for third-party apps, requiring them to obtain user permission before sending personal data to third-party AI services. App developers that use, say, OpenAI or another AI provider must now explicitly inform and ask users if they want their data sent to that AI. This move clearly reflects GDPR-inspired caution: Apple wants to ensure that developers do not quietly stream user content off-device to unknown AI providers.

The expectation in Europe will be that Apple holds itself to at least the same standard. Even if Apple can argue that Siri data remains within Apple’s infrastructure and never leaves for Google, questions will arise about how clearly this is explained to users. Apple might not want to highlight Google’s involvement in marketing, but in its privacy notices and legal documentation it will almost certainly need to address the use of third-party AI technology.

Whether explicit user consent will be needed depends on the architecture. If Apple can credibly argue that Gemini runs inside Apple’s environment and that there is no “sharing” of personal data with Google, it may rely on contractual necessity or legitimate interest as a legal basis for processing Siri requests. If, however, any use of data for improving the Gemini model involves Google receiving data in a way that goes beyond mere processing on Apple’s behalf, then consent could become necessary to remain GDPR-compliant. That is a scenario Apple is likely keen to avoid, both for legal and reputational reasons.

Data Minimisation and Privacy by Design

GDPR also enshrines the principles of data minimisation and purpose limitation. Data minimisation means processing only what is necessary for the given purpose. Purpose limitation means not repurposing data for incompatible use without a new legal basis. Large AI models, by their nature, are hungry for data. Training them and fine-tuning them often involves vast amounts of text, audio and interaction logs. This tension is at the heart of many privacy debates around AI.

Apple seems to be tackling this through architectural separation. According to reports, Apple will use its own smaller models to handle personal and device-specific information, while Gemini will be used for more general tasks such as understanding the structure of a question, planning steps across apps or summarising content. In practice, that could mean that Siri’s knowledge of your calendar, health data or messages is interpreted by an Apple-controlled system, and only a stripped-down version of the query – without direct identifiers – is passed to Gemini.
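The kind of architectural split described above can be illustrated with a tiny sketch. The snippet below is purely hypothetical: the `minimise_query` function, the regex patterns and the placeholder tokens are invented for this example and say nothing about Apple's actual implementation, which would rely on far more robust on-device processing. It simply shows the general idea of replacing direct identifiers with placeholders before a query leaves the personal layer for a general-purpose model.

```python
import re

# Invented patterns for two kinds of direct identifiers; a real system
# would use far more sophisticated on-device entity recognition.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
}

def minimise_query(query: str) -> str:
    """Replace direct identifiers with placeholder tokens so that only
    the stripped-down structure of the request reaches the large model."""
    for label, pattern in PATTERNS.items():
        query = pattern.sub(f"<{label}>", query)
    return query

# The personal details stay with the local, Apple-controlled layer;
# only the redacted query would be forwarded onwards.
print(minimise_query("Email jane.doe@example.com and call +44 20 7946 0958"))
# → Email <EMAIL> and call <PHONE>
```

If anything along these lines is in place, the large model never sees the raw identifiers, which is exactly the kind of structural safeguard that the privacy-by-design principle calls for.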

If done properly, this design can significantly reduce the amount of personal data exposed to the large model. It aligns with the idea of privacy by design: build systems in such a way that privacy risks are structurally reduced rather than fixed later with policies. However, the proof is in the implementation. Regulators may want technical assurances that data passed to the Gemini-running environment truly cannot be re-linked to individuals, and that the system cannot accidentally retain sensitive traces.

Another concern is model improvement. If Siri’s interactions help refine the model, Apple must ensure that users’ data is not being reused in a way that conflicts with the original purpose. Anonymised or synthetic data might be necessary for model tuning, rather than real user conversations. Otherwise, purpose limitation and user expectations may be breached.

How European Regulators Might React

It is almost certain that European data protection authorities will pay attention to this integration. Apple’s lead supervisory authority in the EU, the Irish Data Protection Commission, could seek clarification on key points: whether any personal data is shared with Google, how consent is handled, whether users are clearly informed of third-party AI involvement, and how long Siri data is stored and for what purposes.

If Apple’s answers demonstrate robust safeguards – for example, that Google is contractually barred from using any Siri data for its own purposes, that data remains in EU data centres for EU users, and that data minimisation is applied in practice – regulators may accept the arrangement as compatible with GDPR. If not, the risk of an investigation or corrective measures increases.

The EU AI Act adds another layer. Although not a data protection law, it imposes obligations on providers and deployers of AI systems, especially general-purpose models like Gemini, with those obligations phasing in over several years. Transparency obligations, risk monitoring, documentation and even registration may apply. Apple, as a deployer of a general-purpose AI model on EU devices, will likely bear responsibilities under that Act as well. The Siri–Gemini integration could therefore become one of the first major real-world tests of how the AI Act and GDPR interact.

From a European privacy perspective, this partnership is less about whether collaboration is allowed and more about whether it is done with enough safeguards, transparency and user control. Apple has positioned itself as the privacy champion; in Europe, that is both a commercial asset and a regulatory expectation. Siri plus Gemini will show whether that reputation can survive a high-stakes alliance with Google.

Competition Law Implications for Big Tech and AI Rivals

Two Gatekeepers, One Brain

Beyond privacy, the integration raises major competition law questions. The EU has long viewed Apple and Google as “gatekeepers” of digital ecosystems. Apple controls the iOS environment, including the App Store and the default voice assistant, Siri. Google controls Android, its own app store and Google Assistant, alongside a dominant position in online search.

Historically, Siri and Google Assistant were competing products, each tied to their own platform. Now, Apple is poised to adopt Google’s core technology for Siri’s intelligence. On a functional level, the distinction between Siri and Google’s own AI starts to blur: Siri could become, in part, a different face on Google’s model. That does not mean Siri will become Google Assistant, but it does mean that Google’s AI will shape the experience of using Siri.

Under EU competition law, particularly Article 102 TFEU, the Commission can intervene if a dominant company abuses its position. Google is already considered dominant in several markets, including search. If, through this partnership, Google’s influence extends deep into iOS, regulators might worry about a new form of dominance: control over AI infrastructure across both major mobile platforms.

The broader context makes this even more striking. Android devices, especially Samsung phones, already rely heavily on Google services. During a US antitrust trial, it emerged that Google pays Samsung a substantial sum to preinstall Gemini AI on Galaxy devices. So Google is not just present as a service; it is paying to occupy that position. If Apple also brings Gemini into Siri, Google’s AI effectively has a footprint in both main smartphone ecosystems.

Impact on Other Large Language Models

The potential competitive impact on other AI providers is clear. OpenAI, Anthropic, Meta and various smaller AI developers were all vying for partnerships with hardware and platform players. Being chosen as the brain behind Siri would have been a game-changing deal for any of them. Losing that deal, especially to a direct rival like Google, means losing access to an enormous and highly valuable user base.

In practical terms, this reduces the available “routes to market” for large language models. If Android phones predominantly run Google’s AI, and iPhones also start using Google’s AI under the hood, then alternative AI providers may only reach users via standalone apps or specific integrations. That is much weaker than having default or system-level status. We have seen similar patterns in the past: default search engine agreements, for example, made it extremely hard for rivals to gain market share because most users simply stick with the default.
Competition law examines whether such arrangements have the effect of foreclosing rivals.

The EU has previously penalised Google for tying its apps and services to Android in a way that disadvantaged other browsers and search engines. It also scrutinised the massive payments Google makes to Apple for search defaults. The Siri–Gemini deal can be viewed in that light: another layer of integration between two giants which might make it harder for other AI firms to compete on equal terms.

The pricing dimension adds another twist. Google reportedly outbid Anthropic by offering a much lower price for providing the AI model. For a smaller firm like Anthropic, charging more may have been necessary to cover costs and sustain its business.

For Google, offering a lower price may still be strategically attractive because winning the deal helps cement its position as a leading AI infrastructure provider. In competition language, the question is whether Google is simply being competitive, or whether it is engaging in a form of exclusionary pricing that makes it artificially hard for others to win strategic contracts.

The Digital Markets Act and Voice Assistants

The EU’s Digital Markets Act brings a further dimension. The DMA is specifically aimed at gatekeepers and requires them to avoid self-preferencing and to keep markets contestable. Voice assistants are increasingly seen as “core platform services,” and the DMA is already influencing how Apple designs its systems.

One important development is that Apple is reportedly preparing to allow EU users to select third-party voice assistants as their default on iPhones. Under this emerging regime, an EU iPhone user might choose to make another assistant – potentially Google Assistant, Amazon Alexa or even a chatbot like ChatGPT – the default voice interface instead of Siri. This is a direct response to the DMA’s requirements to give users choice and not tie them exclusively to the gatekeeper’s in-house assistant.

In theory, this mitigates some competition concerns. If Siri becomes much better using Gemini, but users can still switch to competing assistants, the market remains open and contestable. However, system-level integration always has advantages. Siri is deeply baked into iOS; third-party assistants, at least for now, may not receive identical access to OS-level features. If Siri, powered by Gemini, can control system settings, open apps and act seamlessly across the device while a third-party assistant is more constrained, users might stick with Siri even if they technically have alternatives. Regulators will be keen to see whether Apple truly levels the playing field for other assistants or merely meets the letter of the DMA while preserving a practical advantage for Siri.

Furthermore, the DMA might force Apple to expose certain hooks or interfaces that allow third-party AI services deeper integration. If Apple does that well, it could soften the competitive edge of the Google partnership, because rival AI providers could still integrate meaningfully into iOS. If Apple implements this in a restrictive or purely formal way, the Commission could view that as non-compliance with the DMA.

Short-Term Consumer Gains vs Long-Term Market Risks

From a consumer perspective, the immediate effect of a Siri–Gemini alliance is likely positive. Siri has long been perceived as lagging behind Google Assistant and Amazon Alexa. Many users have simply stopped using Siri for anything beyond basic tasks. A major leap in capability, thanks to Gemini’s advanced reasoning and language understanding, could finally make Siri a genuinely useful conversational assistant. Better productivity, richer answers, more natural interactions – those are clear benefits.

European competition law does not ignore such benefits. Efficiency gains and innovation are recognised as positive outcomes. Apple could argue that this partnership increases competition in the voice assistant market because Siri becomes a stronger challenger rather than a weak also-ran.

The long-term risk, however, is that this stronger Siri is still powered by one of the two already-dominant players. Over time, if Google’s AI becomes entrenched as the de facto standard across devices, other AI providers may find it nearly impossible to catch up. The industry could drift toward an AI duopoly (Google on most devices, OpenAI/Microsoft in some Windows and enterprise contexts), with little room for others. That would be problematic from the EU’s perspective, both in terms of innovation and strategic autonomy.

European regulators therefore face a difficult balancing act. They must weigh short-term consumer benefits against potential long-term foreclosure effects. The DMA gives them tools to keep the platform side open. Traditional antitrust law gives them tools to examine contractual terms and market effects. For now, the Siri–Gemini deal seems to be framed as a non-exclusive, interim solution; Apple insists it is working on its own large models, expected perhaps in 2026. If Apple eventually replaces Gemini with an in-house AI, that would restore a more diverse AI provider landscape. If, on the other hand, Apple renews or expands the partnership, the Commission may become more concerned about structural market effects.

Final Thoughts

The potential integration of Google’s Gemini AI into Apple’s Siri is more than just a product upgrade story. It is a vivid illustration of how quickly alliances can form in the AI era, even between long-standing rivals, and how law and policy must adapt to keep up. Technically, the partnership promises to transform Siri from a sometimes-frustrating assistant into a powerful, context-aware tool capable of complex reasoning. Strategically, it gives Google an unprecedented reach into both major mobile ecosystems, subtly turning it into the invisible backbone of AI on billions of devices.

For European law, this development is both a challenge and an opportunity. On the data protection side, the Siri–Gemini collaboration will test whether GDPR’s principles can be upheld when cutting-edge AI models are provided by third parties. Apple’s choice to run Gemini on its own Private Cloud Compute, to keep data within Apple’s control and away from Google’s servers, is a strong signal that GDPR is influencing system design at the highest levels. If Apple can show that user data remains protected, that purpose limitation and data minimisation are respected, and that users retain meaningful control over their data, this partnership could become a model for privacy-preserving AI integration. If not, it may prompt enforcement actions and stricter guidance on how such collaborations should be governed.

On the competition side, Europe’s role will be to ensure that this powerful alliance does not choke off diversity and innovation in the AI ecosystem. The Commission and national authorities will observe whether alternative AI providers still have viable routes to market on iOS and Android, whether user choice of assistants is real or merely theoretical, and whether any contractual terms between Apple and Google have exclusionary effects. The DMA, already pushing Apple to open up defaults and interoperability, will be a key instrument in keeping the market contestable. Traditional antitrust law remains a backstop if the partnership is used to reinforce dominance or foreclose rivals.

The broader lesson is that AI is not just a technical field but a geopolitical and regulatory one. The Siri–Gemini story shows how quickly control over AI infrastructure can concentrate in the hands of a few companies – and how vital it is that law, especially in Europe, keeps that power accountable. For users, the hope is that we get the best of both worlds: assistants that are genuinely helpful and intelligent, and legal frameworks that keep our rights and freedoms intact in the background. For policymakers, the task is to craft rules and decisions that foster innovation while preventing the emergence of unassailable AI gatekeepers.

Whether the future of Siri is powered by Gemini for a few years or for a decade, one thing is clear: this partnership will shape not only the experience of using a smartphone but also the legal and policy debates about AI in Europe. It will be a reference point for how to do – or how not to do – high-stakes AI integration in a way that respects privacy, competition and, ultimately, the people whose lives these systems touch.

Stay curious, stay informed, and let's keep exploring the fascinating world of AI together.

This post was written with the help of different AI tools.

Check out previous posts for more exciting insights!