Apple unintelligent for linking Siri to ChatGPT

By Darío Maestro

Apple’s new integration between Siri and ChatGPT, introduced with iOS 18.2, represents a fundamental break from the privacy principles the company has marketed for decades. From “1984” to “What happens on your iPhone, stays on your iPhone,” Apple distinguished itself by positioning privacy as a premium product in a market saturated with surveillance. Unlike Google or Meta, Apple didn’t need your data to sell you ads. It sold hardware, not your attention. But now, as it scrambles to stay competitive in AI, Apple has aligned with OpenAI—a platform dogged by concerns over excessive data retention, security vulnerabilities, and inadequate anonymization. By pursuing this integration, Apple risks reducing its users to raw material for an AI system designed to profit off their data.

Steve Jobs famously said in 2010: "Privacy means people know what they’re signing up for, in plain English, and repeatedly." Since then, every Face ID scan, every encrypted iMessage, and every privacy label on the App Store has become a symbol of Apple’s commitment to protecting users from the ravenous data appetites of Big Tech.

On some level, of course, its application of these privacy policies has been selective. While inflexible in Europe and the United States, the company is generally happy to do business in China, where government mandates on data access are non-negotiable. Still, until now the company has managed to cultivate an image of privacy guardian and to educate users about the many ways their data might travel.

In the limited Big Tech AI sphere, Apple could be either a differentiator or a collaborator. A differentiator would leverage its on-device processing and privacy-preserving cloud computing infrastructure to protect user data. A collaborator would outsource to others, no matter how invasive their policies, and risk breaking the privacy promise its users have bought into. With the release of iOS 18.2, it's clear which path Apple has chosen.

Some at Apple believe the company is more than two years behind the industry leaders. In a scramble to reassure shareholders, Apple has flooded its marketing with buzzwords and hastily launched AI-powered features that are riddled with problems—like the iPhone's latest alert system, which frequently delivers inaccurate notifications. Criticism from journalists at the BBC and from Reporters Without Borders has already forced Apple to update these flawed products. Among the rollouts is the new integration between Siri and ChatGPT, part of Apple Intelligence, now available on its latest iPhones, iPads, and Macs. Once it is enabled, users' requests to Siri are rerouted to ChatGPT whenever Apple's assistant can't produce an answer on its own.

Here's how it works: users can either use ChatGPT anonymously, without logging in, or sign in with an existing ChatGPT account via the Settings app. In the anonymous mode, IP addresses are obscured on OpenAI's servers and only the contents of requests are sent to ChatGPT for processing. But once users exhaust their initial free requests, they will be prompted to log in to a ChatGPT account and upgrade to a paid subscription. When logged in, OpenAI's data retention policies apply; at that point, users' questions can be used to train its models or, in the future, to serve targeted ads.

On a recent visit to Lisbon, I prompted Siri to tell me about the National Tile Museum, the relations between Portugal and Spain, and the uses of tiling in Portuguese culture. Each time, Siri asked if I wanted to send the query to ChatGPT. Having no AI solution of its own, Apple invariably deferred the task and transferred my data to the servers of OpenAI.

Users unwilling to pay for ChatGPT or agree to its terms are relegated to the old Siri, which simply declines to answer requests it can't handle. Apple is thus forcing users to choose between a lesser product and handing over personal data to use a more powerful one. That is precisely the choice those of us who pay a premium for Apple products hoped to avoid.

If we are to follow Apple’s marketing on AI, we are to trust it with our most intimate thoughts. We are encouraged to use it to rehearse a job interview, get a second opinion on our writing, or learn about a dietary restriction before cooking dinner for friends. But those intimate conversations could now end up stored on OpenAI’s servers. How does that square with Apple’s privacy promise? Is the company even considering this conflict, or is it too focused on staying competitive at any cost?

If Apple wants to maintain its identity as a privacy-first innovator, it needs to do more than borrow tools from OpenAI. It needs to invest in building its own AI models that adhere to its privacy principles and are trained only on licensed data. It needs to demand that software partners minimize data collection in exchange for access to its ecosystem. But that's hard work, and right now, Apple seems more focused on keeping up with Google and Microsoft than on staying true to its core principles.

The market doesn’t need more empty slogans about privacy. It needs real solutions. And if Apple won’t provide them, someone else will swing the hammer.

Maestro is a senior legal fellow at the Surveillance Technology Oversight Project (S.T.O.P.). 
