
Elon Musk blasts Apple’s OpenAI deal over alleged privacy issues. Does he have a point?

Elon Musk, Tesla CEO, at the opening of the Tesla factory Berlin Brandenburg in Gruenheide, Germany, in 2022.
(Patrick Pleul / Associated Press)

When Apple holds its annual Worldwide Developers Conference, its software announcements typically elicit cheers and excitement from tech enthusiasts.

But there was one notable exception this year — Elon Musk.

The Tesla and SpaceX chief executive threatened to ban all Apple devices from his companies, alleging that a new partnership between Apple and Microsoft-backed startup OpenAI could pose security risks. As part of its upcoming operating system update, Apple said users who ask Siri a question can opt in to have Siri pull additional information from ChatGPT.

“Apple has no clue what’s actually going on once they hand your data over to OpenAI,” Musk wrote on X. “They’re selling you down the river.”


The partnership allows Siri to ask iPhone, Mac and iPad users if the digital assistant can surface answers from OpenAI’s ChatGPT to help address a question. The new feature, which will be available on certain Apple devices, is part of the company’s operating system update due later this year.

“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” Musk wrote on X. “That is an unacceptable security violation.”


Representatives for Musk and Apple did not respond to requests for comment.

In a keynote presentation at its developers conference on Monday, Apple said ChatGPT would be free for iPhone, Mac and iPad users. Under the partnership, Apple device users would not need to set up a ChatGPT account to use it with Siri.


“Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests,” Apple said on its website. “ChatGPT’s data-use policies apply for users who choose to connect their account.”

Many of Apple’s AI models and features, which the company collectively calls “Apple Intelligence,” run on the device itself, but some inquiries will require information to be sent through the cloud. Apple said that data is not stored or made accessible to Apple and that independent experts can inspect the code that runs on the servers to verify this.

Apple Intelligence will be available on certain Apple devices, including the iPhone 15 Pro and iPhone 15 Pro Max, as well as iPad and Mac models with M1 or later chips.


So does Musk have a point? Technology and security experts who spoke to The Times offered mixed opinions.

Some pushed back on Musk’s assertion that Apple’s OpenAI deal poses security risks, citing a lack of evidence.

“Like a lot of things that Elon Musk says, it’s not based upon any kind of technical reality now, it’s really just based upon his political beliefs,” said Alex Stamos, chief trust officer at Mountain View, Calif.-based cybersecurity company SentinelOne. “There’s no real factual basis for what he said.”

Stamos, who is also a computer science lecturer at Stanford University and a former chief security officer at Facebook, said he was impressed with Apple’s data protection efforts, adding, “They’re promising a level of transparency that nobody’s really ever provided.

“It’s hard to totally prove at this point, but what they’ve laid out is about the best you could do to provide this level of AI services running on people’s private data while protecting their privacy,” Stamos said.

“To do the things that people have become accustomed to from ChatGPT, you just can’t do that on phones yet,” Stamos added. “We’re years away from being able to run those kinds of models on something that fits in your pocket and doesn’t burn a hole in your jeans from the amount of power it burns.”



Musk has been critical of OpenAI. He sued the company in February for breach of contract and fiduciary duty, alleging it had strayed from its founding agreement to develop artificial general intelligence “for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.” On Tuesday, Musk, who was a co-founder of and investor in OpenAI, withdrew his lawsuit. Musk’s San Francisco company, xAI, is a competitor to OpenAI in the fast-growing field of artificial intelligence.

Musk has taken aim at Apple in the past, calling it a “Tesla graveyard” because, according to him, Apple had hired people that Tesla had fired. “If you don’t make it at Tesla, you go work at Apple,” Musk said in an interview with German newspaper Handelsblatt in 2015. “I’m not kidding.”

Still, Rayid Ghani, a machine learning and public policy professor at Carnegie Mellon University, said that, at a high level, he thinks the concerns Musk raised about the OpenAI-Apple partnership are legitimate.

While Apple said that OpenAI is not storing Siri requests, “I don’t think we should just take that at face value,” Ghani said. “I think we need to ask for evidence of that. How does Apple ensure that processes are there in place? What is the recourse if it doesn’t happen? Who’s liable, Apple or OpenAI, and how do we deal with issues?”

Some industry observers have also raised questions about the option for Apple users with a ChatGPT subscription to link their accounts on their iPhones, and about what information OpenAI collects in that case.

“We have to be careful with that one — linking your account on your mobile phone is a big deal,” said Pam Dixon, executive director of the World Privacy Forum. “I personally would not link until there is a lot more clarity about what happens to the data.”



OpenAI pointed to a statement on its website that says, “Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.” The company declined further comment.

Under its privacy policy, OpenAI says it collects personal information included in inputs, file uploads or feedback when account holders use its service. ChatGPT lets users opt out of having their inquiries used to train AI models.

As the use of AI becomes more entwined with people’s lives, industry observers say that it will be crucial to provide transparency for customers and test the trustworthiness of the AI tools.

“We’re going to have to understand something about AI. It’s going to be a lot like plumbing. It’s going to be built into our devices and our lives everywhere,” Dixon said. “The AI is going to have to be trustworthy and we’re going to need to be able to test that trustworthiness.”

Night Archiving Supervisor Valerie Hood contributed to this report.
