Apple Intelligence
Apple Intelligence is a generative artificial intelligence system developed by Apple Inc. that relies on a combination of on-device and server-side processing. It was announced on June 10, 2024, at the 2024 Worldwide Developers Conference as a built-in feature of iOS 18, iPadOS 18, and macOS Sequoia. Apple Intelligence is free for all users with supported devices.
History
Background
Apple first implemented artificial intelligence features in its products with the release of Siri on the iPhone 4S in 2011. In the years after its release, Apple worked to keep its artificial intelligence operations covert; according to University of California, Berkeley professor Trevor Darrell, the company's secrecy deterred graduate students. Apple began expanding its artificial intelligence team in 2015, opening up its operations by publishing more scientific papers and joining AI industry research groups. From 2016 to 2020, Apple reportedly acquired more AI companies than any other technology company. In 2017, Apple released the iPhone 8 and the iPhone X with the A11 Bionic processor, which featured the company's first dedicated Neural Engine for accelerating common machine learning tasks. Despite these investments, Siri was criticized both by reviewers and internally at Apple for lagging behind other AI assistants.

The rapid development of generative artificial intelligence and the release of ChatGPT in late 2022 reportedly blindsided Apple executives and forced the company to refocus its efforts on AI. In an interview with Good Morning America, Apple CEO Tim Cook stated that generative AI had "great promise" but also potential dangers, and that Apple was "looking closely" at ChatGPT. It was first reported in July 2023 that Apple was building its own internal large language model, codenamed "Ajax". In October 2023, Apple was reportedly on track to ship new generative AI features in its operating systems by 2024, including a significantly redeveloped Siri. On an earnings call in February 2024, Cook stated that the company was devoting a "tremendous amount of time and effort" to AI features that would be revealed "later that year".
Google deal
In January 2026, Apple and Google announced a multi-year partnership under which Apple's next-generation foundation models are expected to incorporate Google's Gemini models and cloud infrastructure. According to the companies, the collaboration is intended to support future Apple Intelligence features, including enhancements to Siri, while Apple Intelligence will continue to operate on Apple devices and through Apple's Private Cloud Compute system, which Apple states is designed to preserve user privacy.

On an earnings call, Apple told investors that it was integrating an on-device version of Google's Gemini model into Siri, as development of its own model had been beset by setbacks. Apple had previously tested and used other third-party AI models such as ChatGPT, but according to a Bloomberg article by Mark Gurman, Apple pressed ahead with the Google deal; by adopting Gemini, a model with 1.2 trillion parameters, Apple would integrate a much larger and more complex model than any it had previously developed or used. Comparable models from other major companies have also been reported to operate at a similar trillion-parameter scale and to compete with Gemini-class systems on benchmarks.
Models
Apple Intelligence consists of an on-device model as well as a cloud model running on servers that primarily use Apple silicon. Both consist of a generic foundation model plus multiple adapter models specialized for particular tasks, such as text summarization and tone adjustment. Apple Intelligence launched for developers and testers on July 29, 2024, in U.S. English with the developer betas of iOS 18.1, iPadOS 18.1, and macOS 15.1; it became publicly available on October 28, 2024, with a full rollout expected by 2026.

According to a human evaluation conducted by Apple's machine learning division, the on-device foundation model beat or tied equivalent small models from Mistral AI, Microsoft, and Google, while the server foundation model beat the performance of OpenAI's GPT-3.5 and roughly matched that of GPT-4.
Apple's cloud models are built on a Private Cloud Compute platform that Apple states is designed around user privacy and end-to-end encryption. Unlike other generative AI services such as ChatGPT, which run on third-party servers, Apple Intelligence's cloud models run entirely on Apple servers with custom Apple silicon hardware built for end-to-end encryption. The platform is also designed to ensure that the software running on those servers matches independently verifiable software images made accessible to security researchers; if the software does not match, Apple devices refuse to connect to the servers.
On June 10, 2025, Apple announced that its on-device foundation models would be made available to third-party applications through the Foundation Models framework, with support for structured data responses and tool calling.
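Under the framework Apple described at WWDC 2025, a third-party app can ask the on-device model to fill in a typed Swift value rather than return free-form text. The following is a minimal sketch based on Apple's published Foundation Models documentation; the type `TripSuggestion` and the prompt are illustrative, and details of the API may vary by OS version:

```swift
import FoundationModels

// @Generable lets the on-device model produce a typed, structured response
// instead of free-form text; @Guide hints describe each field to the model.
@Generable
struct TripSuggestion {
    @Guide(description: "A short, catchy trip title")
    var title: String
    @Guide(description: "Three suggested activities")
    var activities: [String]
}

func suggestTrip(for destination: String) async throws -> TripSuggestion {
    // A session wraps Apple's on-device foundation model;
    // the request is processed locally rather than on a server.
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest a weekend trip to \(destination).",
        generating: TripSuggestion.self
    )
    return response.content
}
```

Tool calling follows a similar pattern: an app supplies types conforming to the framework's `Tool` protocol, which the model may invoke to fetch live data while generating a response.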
Features
Writing tools
Apple Intelligence features writing tools that are powered by LLMs. Selected text can be proofread or rewritten, and made friendlier, more concise, or more professional, similar to Grammarly's AI writing features. The tools can also generate summaries, key points, tables, and lists from an article or other piece of writing. In iOS 18.2 and macOS 15.2, a ChatGPT integration was added to Writing Tools through "Compose" and "Describe your change" features. Writing Tools has been replicated by Xiaomi, and an open-source PC program brings similar functionality to Windows, Linux, and older Macs.
Real-time Translation
Apple Intelligence enables the real-time translation of messages, photos and videos, and phone calls on Apple's hardware. When conversing with a speaker of another language, users can use the Translate app on iPhone to display subtitles in that language or play back translated audio, while AirPods with Live Translation can relay what the other person is saying in the user's preferred language. If both participants wear headphones, simultaneous interpretation is possible.
Image Playground
Apple Intelligence can be used to generate images on-device with the Image Playground app. Similar to OpenAI's DALL-E, it generates images from phrases and descriptions, with customizable styles such as Animation and Sketch. In Notes, users can access Image Playground on iPad through the Image Wand tool in the Apple Pencil palette without opening the Image Playground app, and rough sketches made with Apple Pencil can be transformed into finished images.

As part of iOS, iPadOS, and macOS 26, Image Playground integrates with the image generation models built into ChatGPT.