Apple Intelligence is the big buzz at WWDC. But when it comes to AI and the cloud, if you aren’t a huge enterprise or well-funded government, privacy and data security have always been a challenge when using any cloud service. With the introduction of Private Cloud Compute (PCC), Apple just did cloud services right — and put a real competitive moat in place.
Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding new layers of insecurity. It had to: Apple needed a cloud infrastructure on which to run generative AI (genAI) models that demand more processing power than its devices can supply, while still protecting user privacy.
While you can also use ChatGPT with Apple Intelligence, you do not need to, and OpenAI has said it won't store requests made through the integration; PCC is there to run Apple's own genAI models instead.
The Apple achievement explained
“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior Vice President of Software Engineering Craig Federighi said when announcing the service Monday at WWDC.
To achieve this, Apple has poured what I imagine is more or less a nation-state-level security budget into creating a highly secure cloud-based system that provides the computational power some tasks will require.
The introduction comes at a time when providers are rolling out a range of trusted cloud and data sovereignty solutions to answer similar challenges across enterprise IT, and as security experts warn against unconstrained use of cloud-based genAI services in the absence of a privacy guarantee. Apple's Private Cloud Compute service represents the best attempt yet to bring trusted cloud access to the mass market.
(Remarkably, Elon Musk’s first reaction on hearing of Apple Intelligence was to label it a security threat, when that is precisely what it has been built not to be — you don’t have to use OpenAI at all, and I expect device management tools will be able to close off access to doing so. Perhaps a PCC-style service will eventually form part of the ecosystem for autonomous vehicles that are actually safe?)
What is Private Cloud Compute?
Private Cloud Compute consists of a network of Apple Silicon servers, powered by renewable energy, that Apple is deploying across US data centers. These servers run Apple's own genAI models remotely when a query demands more computational power than is available on an Apple device. (We don't expect some of the newly introduced Apple Intelligence services to be available outside the US until 2025, likely reflecting the time it will take to deploy servers locally to support them.)
The idea is that while many Apple Intelligence tasks will run quite happily at the edge, on your device, some queries will require more computational power, and that's where PCC kicks in.
But what about the data you share when making a query? Apple says you don’t need to worry, promising that the information you provide isn’t accessible to anyone other than the user, not even to Apple.
This has been achieved through a combination of hardware, software, and an all-new operating system, the latter specially tailored to support large language model (LLM) workloads while presenting a very limited attack surface.
At its core, this is the power of server-side Unix, coupled with Apple's own proprietary system security software and a set of hardened, highly secure on-device and on-system components.
How does this all fit together?
The hardware itself is built around Apple Silicon, which means the company has been able to protect the servers with built-in security protections such as Secure Enclave and Secure Boot. These systems are also protected by iOS security tools, such as Code Signing and sandboxing.
To provide additional protection, Apple has closed down traditional tools such as remote shells and replaced them with purpose-built proprietary tools. The company's cloud services also run on what Apple calls Swift on Server; the use of Swift, Apple says, ensures memory safety, which further limits the attack surface.
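As a generic illustration of what that memory safety buys (this is ordinary Swift, not Apple's PCC code): bounds-checked collections and optionals close off the buffer over-reads and null dereferences behind many memory-corruption exploits in C-based server stacks.

```swift
// Generic Swift memory-safety illustration; not Apple's PCC code.
let tokens = ["prompt", "context"]
print(tokens[0])                 // in-bounds access is fine
// let oops = tokens[5]          // would trap at runtime instead of silently
//                               // reading adjacent memory like a C over-read

// Optionals make "no value" explicit, so it can't be dereferenced by accident.
let maybeReply: String? = nil
if let reply = maybeReply {
    print(reply)
} else {
    print("no reply yet")        // the missing case must be handled
}
```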
This is what happens when you make an Apple Intelligence request (a rough code sketch of the flow follows these steps):
Your device figures out if it can process the request itself.
If it needs more computational power, it will get help from PCC.
In doing so, the request is routed through an Oblivious HTTP (OHTTP) relay operated by an independent third party, which helps conceal the IP address from which the request came.
It will only send data relevant to your task to the PCC servers.
Your data is not stored at any point, including in server metrics or error logs; is not accessible; and is destroyed once the request is fulfilled.
That also means no data retention (unlike other cloud providers), no privileged access, and a masked user identity.
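Here is a minimal sketch of that flow in Swift, using entirely hypothetical names; Apple has not published a client API for this, so the relay URL, attestation flag, and request types below are illustrative stand-ins only.

```swift
import Foundation

// A hypothetical sketch of the request flow described above; none of these
// types or names correspond to a real Apple API.
struct IntelligenceRequest {
    let taskData: Data            // only the data relevant to this one task
}

enum RouteDecision {
    case onDevice                 // handled locally, at the edge
    case privateCloud             // needs PCC's larger server-side models
}

enum PCCSketchError: Error { case serverNotAttested }

// Step 1: the device works out whether it can handle the request itself.
func decideRoute(estimatedCost: Int, onDeviceBudget: Int) -> RouteDecision {
    estimatedCost <= onDeviceBudget ? .onDevice : .privateCloud
}

// Steps 2-4: go via a third-party OHTTP relay (which hides the client IP),
// refuse to talk to any node that hasn't proven it runs reviewable software,
// and send only the task-relevant data. The no-retention and no-logging
// guarantees are enforced server-side, so they can't be shown in client code.
func sendToPCC(_ request: IntelligenceRequest,
               relay: URL,
               serverAttested: Bool) throws -> Data {
    guard serverAttested else { throw PCCSketchError.serverNotAttested }
    print("Sending \(request.taskData.count) bytes via relay \(relay.absoluteString)")
    return Data("placeholder response".utf8)
}

// Example use of the sketch:
let request = IntelligenceRequest(taskData: Data("summarize this note".utf8))
switch decideRoute(estimatedCost: 10, onDeviceBudget: 5) {
case .onDevice:
    print("handled on device")
case .privateCloud:
    _ = try? sendToPCC(request,
                       relay: URL(string: "https://relay.example")!,
                       serverAttested: true)
}
```

The point of the sketch is the ordering: the on-device check comes first, the attestation gate comes before any data leaves the device, and only task-specific data is ever packaged up.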
Where Apple really seems to have made big steps is in how it protects its users against being targeted. Attackers cannot compromise data that belongs to a specific Private Cloud Compute user without compromising the entire PCC system. That doesn't just cover remote attacks, but also attempts made on site, such as when an attacker has gained physical access to the data center; there are no privileged credentials for an attacker to grab and use to mount an attack.
What about the hardware?
Apple has also made the entire system open to independent security and privacy review; indeed, unless a PCC server can prove it is running software that has been published for such review, your device will not transmit any data to it, so no spoofed PCC for you.
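Conceptually, that gate amounts to checking a node's software measurement against a published list before any payload leaves the device. A hypothetical sketch follows; Apple's actual mechanism relies on cryptographic attestation and a transparency log, and this only captures the "no proof, no data" rule.

```swift
// Hypothetical illustration of the "no proof, no data" rule; not Apple's code.
let publishedMeasurements: Set<String> = [
    "sha256-image-a",             // software builds released for inspection
    "sha256-image-b",
]

func mayTransmit(to nodeMeasurement: String) -> Bool {
    // Refuse to send anything to a node whose software image is not one of
    // the publicly reviewable builds.
    publishedMeasurements.contains(nodeMeasurement)
}

print(mayTransmit(to: "sha256-image-a"))   // true
print(mayTransmit(to: "sha256-spoof"))     // false: a spoofed PCC node gets nothing
```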
The company didn’t stop there. “We supplement the built-in protections of Apple Silicon with a hardened supply chain for PCC hardware, so that performing a hardware attack at scale would be both prohibitively expensive and likely to be discovered,” the company said. “Private Cloud Compute hardware security starts at manufacturing, where we inventory and perform high-resolution imaging of the components of the PCC node before each server is sealed and its tamper switch is activated.”
What that means: Apple has put protections in place to maintain server security all the way from the factory where those servers are made. That's a huge step in its own right.
What about Apple’s genAI software?
Apple also maintained a focus on user security while developing the tools it makes available within Apple Intelligence. These follow what it calls its “Responsible AI principles,” as explained on the company site. These are:
“Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals.
“Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.
“Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback.
“Protect privacy: We protect our users’ privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users’ private personal data or user interactions when training our foundation models.”
Apple has also thought about whose data it uses to train its models and promises that its Apple Intelligence LLMs are trained on licensed data, “as well as publicly available data collected by our web-crawler, AppleBot.” (If you don’t want your content crawled by Applebot for use in training models, you can opt out, as explained here.)
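For site owners, the opt-out works through the standard robots.txt mechanism. A minimal example that blocks Applebot from crawling a site entirely is below; Apple's Applebot documentation also describes more granular controls, so check it before relying on this.

```
# Blocks Apple's crawler from the whole site.
# See Apple's Applebot documentation for finer-grained options,
# including controls specific to AI training.
User-agent: Applebot
Disallow: /
```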
While researchers will kick Apple’s systems around, it looks very much like the company has crafted a highly secure approach to genAI from the device used to request a service all the way to the cloud, with software and hardware protections in place every step of the way.
What Apple has achieved
There is a lot more to consider — take a look at this white paper — but Apple has achieved something potentially very good here: an ecosystem that provides private genAI services, and can be extended over time.
I’m seeing positive reaction from across the security community to Apple’s news.
“If you gave an excellent team a huge pile of money and told them to build the best ‘private’ cloud in the world, it would probably look like this,” Johns Hopkins cryptographer Matthew Green said. He also warned that the right to opt out of using Apple Intelligence should be more visible, and suggested that Apple’s move will effectively drive more use of cloud services.
“We believe this is the most advanced security architecture ever deployed for cloud AI compute at scale,” said Apple Head of Security Engineering and Architecture Ivan Krstić.
Is that really the case? Perhaps. Apple has promised additional transparency to back up its security claims. Though I do wonder how this service will gel with those bandit nations (such as the UK) that legislate for pretty much constant data surveillance in order to protect nothing much at all.
But in terms of end-user protection and a fully thought-through system to support cloud services, Apple’s new offering sets a bar every other cloud service should aspire to meet, and ideally exceed.
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.