Apple’s artificial/machine/generative AI research team seems to be opening up as it explores new frontiers in the field, publishing more than 20 new Core ML models for on-device AI through the popular AI community site Hugging Face.
It’s a real change from the company’s customary reticence about what it’s doing, and it seems likely the move comes in response to demands from its research teams for a little more transparency.
Cutting-edge AI capabilities
As first reported by VentureBeat, Apple has released dozens of Core ML models, complementing them with extensive datasets. The company seems to be adding to these collections at a rapid clip; the latest item appeared within the last 24 hours. The collection is extensive and highlights two of the main aims of Apple’s teams: to build models that will eventually run on the device, and to ensure those models also preserve user privacy.
Some of the AI functions promised by all this code include tools for image classification, depth estimation, semantic segmentation, text analysis, translation, and more.
What, who, why?
The models cover a wide range of applications, including FastViT for image classification, Depth Anything for monocular depth estimation, and DETR for semantic segmentation.
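To make that concrete, here is a minimal sketch, in Python, of pulling one of these Core ML packages down from Hugging Face and running a single prediction. It assumes the huggingface_hub and coremltools packages and a Mac (coremltools can only run predictions on macOS); the repo id, package file name, input name, and image size are illustrative assumptions rather than confirmed details, so check the actual model card for the real values.

```python
# Minimal sketch: download an Apple-published Core ML image classifier from
# Hugging Face and run one prediction from Python on macOS.
# The repo id, package name, input name, and image size are assumptions;
# consult the model card before relying on them.
from huggingface_hub import snapshot_download
from PIL import Image
import coremltools as ct

# Download the model repository, which contains an .mlpackage bundle.
local_dir = snapshot_download(repo_id="apple/coreml-FastViT-T8")  # assumed repo id

# Load the Core ML package; prediction via coremltools requires macOS.
model = ct.models.MLModel(f"{local_dir}/FastViT-T8.mlpackage")  # assumed file name

# Image classifiers typically expect a fixed-size RGB image.
image = Image.open("cat.jpg").convert("RGB").resize((256, 256))  # assumed size

# Run inference; the input key depends on how the model was exported.
outputs = model.predict({"image": image})  # assumed input name
print(outputs)
```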
The models are not intended for mass-market use; they’re aimed at developers, who can download them, convert them to Core ML format, and then deploy them in their own code. The process was explained in a WWDC 2024 session that details how models can be deployed once converted. It’s also worth noting that, according to Apple, Core ML is much, much faster in iOS 18.
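The conversion step itself is normally done with Apple’s Core ML Tools Python package. As a rough illustration of that workflow (not a transcription of Apple’s WWDC session), here is a sketch that converts an ordinary PyTorch vision model, with MobileNetV3 as a stand-in choice, into an .mlpackage that can be added to an Xcode project:

```python
# Rough sketch: convert a PyTorch model to a Core ML package with coremltools.
# MobileNetV3 is only a placeholder model; substitute your own.
import torch
import torchvision
import coremltools as ct

# Any traceable PyTorch module works as a starting point.
model = torchvision.models.mobilenet_v3_small(weights="DEFAULT").eval()

# coremltools converts a traced (or scripted) module.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Convert to the ML Program format used by modern Core ML.
# (A real conversion would also set scale/bias to match the model's preprocessing.)
mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=(1, 3, 224, 224))],
    convert_to="mlprogram",
    minimum_deployment_target=ct.target.iOS17,  # set the OS baseline you ship to
)

# Save the package and add it to your app target in Xcode.
mlmodel.save("MobileNetV3Small.mlpackage")
```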
The models available on Hugging Face are also ready to run at the edge. In addition to better privacy and security, on-device models should also run far more swiftly than cloud-based alternatives.
Apple is also working with Hugging Face on other AI-related tasks, including via the MLX Community. All in all, the company seems to have become more visibly open to open-source contributions as it seeks to build Apple Intelligence.
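MLX is Apple’s open-source array framework for machine learning on Apple silicon, and the MLX Community organization on Hugging Face hosts models converted into its format. As a flavor of what working with those looks like, here is a tiny sketch using the mlx-lm package; the model id is illustrative rather than a recommendation, and this only runs on Apple silicon Macs.

```python
# Tiny sketch: load an MLX-format model from the Hugging Face MLX Community
# and generate text locally on an Apple silicon Mac.
# Requires: pip install mlx-lm
from mlx_lm import load, generate

# The repo id below is illustrative; browse huggingface.co/mlx-community
# for the models actually on offer.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

response = generate(
    model,
    tokenizer,
    prompt="In one sentence, why does on-device inference help privacy?",
    max_tokens=64,
)
print(response)
```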
Not the first time Apple’s been open
Except that’s not exactly the case. Apple is an active player in open-source development, and while this isn’t always fully understood, a cursory glance through company history shows support for the FreeBSD project and a GitHub repository that offers up source code for operating systems, developer tools, and more. It also plays an active part in multiple standards bodies, such as the Bluetooth SIG.
In other words, some degree of openness already exists at Apple, though the company seems to have gone further for AI.
There’s a reason for this, of course. AI researchers like to collaborate as they explore these new frontiers, and it’s thought Apple’s customary corporate secrecy might have frustrated attempts to put its own work in artificial intelligence on the fast track. This certainly seems to have changed in the last year, as multiple research notes and AI tools have emerged from the company. This latest batch, then, is completely in keeping with Apple’s new approach, or at least its new tactics for this part of the business.
Apple is, therefore, learning from the wider industry.
…And the industry is learning from it
Apple’s stance on privacy leads the industry, and as the potential pitfalls of AI systems become more widely understood, it seems probable that more companies will follow its lead.
That means an eventual multitude of small models capable of being run on edge devices to perform a variety of tasks. While the capabilities of such models will be limited by processor speed, compute, and on-device memory bandwidth, Apple’s approach also includes the strategic use of highly secured private cloud services. That, too, is a signal to others in the space to follow its example, particularly as increasingly authoritarian and ill-conceived legislation threatens to undermine the security of networked intelligence itself.
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.