Apple Intelligence


Apple Intelligence is a generative artificial intelligence system developed by Apple Inc. It relies on a combination of on-device and server processing and was announced on June 10, 2024, at the 2024 Worldwide Developers Conference as a built-in feature of iOS 18, iPadOS 18, and macOS Sequoia, which were unveiled at the same event. Apple Intelligence is free for all users with supported devices.

History

Background

Apple first implemented artificial intelligence features in its products with the release of Siri on the iPhone 4S in 2011. In the years after its release, Apple worked to keep its artificial intelligence operations covert; according to University of California, Berkeley professor Trevor Darrell, the company's secrecy deterred graduate students from joining. The company started expanding its artificial intelligence team in 2015, opening up its operations by publishing more scientific papers and joining AI industry research groups. Apple reportedly acquired more AI companies than its major technology rivals from 2016 to 2020. In 2017, Apple released the iPhone 8 and iPhone X with the A11 Bionic processor, which featured the company's first dedicated Neural Engine for accelerating common machine learning tasks. Despite these investments in artificial intelligence, Siri was criticized both by reviewers and internally at Apple for lagging behind other AI assistants.
The rapid development of generative artificial intelligence and the release of ChatGPT in late 2022 reportedly blindsided Apple executives and forced the company to refocus its efforts on AI. In an interview with Good Morning America, Apple CEO Tim Cook stated that generative AI had "great promise" but also potential dangers, and that Apple was "looking closely" at ChatGPT. It was first reported in July 2023 that Apple was developing its own internal large language model, codenamed "Ajax". In October 2023, Apple was reportedly on track to add new generative AI features to its operating systems by 2024, including a significantly redeveloped Siri. On an earnings call in February 2024, Cook stated that the company was spending a "tremendous amount of time and effort" on AI features, details of which would be shared "later that year".

Google deal

In January 2026, Apple and Google announced a multi-year partnership under which Apple’s next-generation foundation models are expected to incorporate Google’s Gemini models and cloud infrastructure. According to the companies, the collaboration is intended to support future Apple Intelligence features, including enhancements to Siri, while Apple Intelligence will continue to operate on Apple devices and through Apple’s Private Cloud Compute system, which Apple states is designed to preserve user privacy.
On an earnings call, Apple told investors that it would integrate a version of Google's Gemini model into Siri after development of its own model was beset by setbacks. Apple had previously tested and used other third-party AI models such as ChatGPT, but according to a Bloomberg article by Mark Gurman, Apple went ahead with the proposed Google deal; the Gemini model in question, reported at 1.2 trillion parameters, would be far larger and more complex than any model Apple had previously developed or deployed. Comparable models from other major AI companies have also been reported to operate at a similar trillion-parameter scale and to compete with Gemini-class systems on benchmarks.

Models

Apple Intelligence consists of an on-device model as well as a cloud model running on servers that primarily use Apple silicon. Both consist of a generic foundation model plus multiple adapter models specialized for particular tasks such as text summarization and tone adjustment. Apple Intelligence launched for developers and testers on July 29, 2024, in U.S. English with the developer betas of iOS 18.1, iPadOS 18.1, and macOS 15.1; it was partially released to the public on October 28, 2024, and is expected to fully launch by 2026.
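The adapter arrangement can be illustrated with a short sketch. All names below (AdapterKind, FoundationModelSketch, and the applying and generate functions) are hypothetical and only stand in for the idea of one shared base model specialized by small task-specific adapters; they do not reflect Apple's actual implementation.

```swift
// Illustrative sketch only: these types and functions are hypothetical and do
// not reflect Apple's implementation. The idea is a single shared base model
// that is specialized by small, task-specific adapters.

enum AdapterKind: String {
    case summarization
    case proofreading
    case toneAdjustment
}

struct FoundationModelSketch {
    // Hypothetical: load the shared base weights once.
    static func loadBase() -> FoundationModelSketch {
        FoundationModelSketch()
    }

    // Hypothetical: return the model with a lightweight task adapter applied.
    func applying(_ adapter: AdapterKind) -> FoundationModelSketch {
        print("Applying \(adapter.rawValue) adapter to the base model")
        return self
    }

    func generate(from prompt: String) -> String {
        // A real model would run inference here; this placeholder just echoes the prompt.
        "[adapter output for: \(prompt)]"
    }
}

let base = FoundationModelSketch.loadBase()
let summarizer = base.applying(.summarization)
print(summarizer.generate(from: "Summarize this email thread."))
```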
According to a human evaluation conducted by Apple's machine learning division, the on-device foundation model beat or tied equivalent small models from Mistral AI, Microsoft, and Google, while the server foundation model outperformed OpenAI's GPT-3.5 and roughly matched the performance of GPT-4.
Apple's cloud models are built on a Private Cloud Compute platform, which Apple says is designed with user privacy and end-to-end encryption in mind. Unlike generative AI services such as ChatGPT, which run on third-party servers, Apple Intelligence's cloud models run entirely on Apple servers using custom Apple silicon hardware built for end-to-end encryption. The platform is also designed so that the software running on those servers can be matched against independently verifiable software images made available to security researchers; if the software does not match, Apple devices refuse to connect to the servers.
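The "refuse to connect on mismatch" behavior can be sketched as a simple attestation check. The types and functions below (ServerAttestation, TransparencyLog, shouldConnect) are hypothetical and only illustrate the publicly described design, not Apple's actual protocol.

```swift
// Hypothetical sketch of the publicly described Private Cloud Compute check:
// the device only sends a request to a server whose software measurement
// matches one of the independently verifiable, published releases.

struct ServerAttestation {
    let softwareMeasurement: String   // e.g. a hash of the server's software image
}

struct TransparencyLog {
    // Measurements published for security researchers to verify.
    let publishedMeasurements: Set<String>

    func contains(_ measurement: String) -> Bool {
        publishedMeasurements.contains(measurement)
    }
}

func shouldConnect(to attestation: ServerAttestation, using log: TransparencyLog) -> Bool {
    // On a mismatch, the device refuses to connect and the request is not sent.
    log.contains(attestation.softwareMeasurement)
}

let log = TransparencyLog(publishedMeasurements: ["abc123", "def456"])
let server = ServerAttestation(softwareMeasurement: "abc123")
print(shouldConnect(to: server, using: log))   // true: the request may proceed
```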
On June 10, 2025, Apple announced that its on-device foundation model would be made available to third-party applications through the Foundation Models framework, with support for structured (typed) responses and tool calling.
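Apple's developer documentation describes a Swift FoundationModels framework that exposes the on-device model; the following is a minimal usage sketch. The LanguageModelSession, @Generable, and @Guide names come from that documentation, but the exact signatures shown here should be treated as illustrative rather than authoritative, and the tool-calling portion of the framework is omitted for brevity.

```swift
import FoundationModels  // requires an OS release with Apple Intelligence enabled

// Structured ("typed") output: the model fills in a developer-defined type.
@Generable
struct TripSuggestion {
    @Guide(description: "A short name for the destination")
    var destination: String

    @Guide(description: "Three packing suggestions")
    var packingList: [String]
}

func suggestTrip() async throws {
    // A session wraps the on-device foundation model.
    let session = LanguageModelSession(
        instructions: "You help users plan short weekend trips."
    )

    // Plain-text response.
    let reply = try await session.respond(to: "Suggest a quiet coastal town.")
    print(reply.content)

    // Structured response, decoded directly into TripSuggestion.
    let trip = try await session.respond(
        to: "Plan a two-day trip to the coast.",
        generating: TripSuggestion.self
    )
    print(trip.content.destination, trip.content.packingList)
}
```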

Features

Writing tools

Apple Intelligence features writing tools powered by large language models (LLMs). Selected text can be proofread or rewritten to be friendlier, more concise, or more professional, similar to Grammarly's AI writing features. The tools can also generate summaries, key points, tables, and lists from an article or other piece of writing. In iOS 18.2 and macOS 15.2, ChatGPT integration was added to Writing Tools through the "Compose" and "Describe your change" features. Writing Tools has since been replicated by Xiaomi, and an open-source desktop program brings similar functionality to Windows, Linux, and older Macs.

Real-time Translation

Apple Intelligence enables the real-time translation of messages, photos and videos, and phone calls on Apple hardware. When communicating with someone who speaks another language, the Translate app on iPhone can display subtitles in that language or play back translated audio, and AirPods with Live Translation let users hear what the other person is saying in their preferred language during a conversation. If both participants wear compatible headphones, simultaneous interpretation is possible.

Image Playground

Apple Intelligence can generate images on-device with the Image Playground app. Similar to OpenAI's DALL-E, it produces images from phrases and descriptions, with customizable styles such as Animation and Sketch. In Notes, users can access Image Playground on iPad through the Image Wand tool in the Apple Pencil palette without opening the Image Playground app, and rough sketches made with Apple Pencil can be transformed into finished images.
As part of iOS, iPadOS, and macOS 26, Image Playground now integrates with the image generation models built into ChatGPT.

Genmoji

Using Apple Intelligence text-to-image models, users can generate unique "Genmoji" images by typing descriptions, and can pick people from their photos to have Genmoji generate images that resemble them. Like emoji, Genmoji can be used inline in text messages, as tapbacks, and as stickers, both in Messages and in third-party applications.

Siri overhaul

Siri, Apple's virtual assistant, has been updated into an LLM-based chatbot, with enhanced capabilities made possible by Apple Intelligence. The latest iteration features an updated user interface, improved natural language processing, and the option to interact via text by double-tapping the home bar (without having to enable the feature in the Accessibility menu) or by double-pressing the Command key on macOS. In a later update, Apple Intelligence will add the ability for Siri to use personal context from device activity to answer queries.

Mail

Apple Intelligence adds a Priority Messages feature to the Mail app, which surfaces urgent emails such as same-day invitations or boarding passes along with AI-generated summaries. The Mail app also gains the ability to categorize incoming mail into Primary, Transactions, Updates, and Promotions based on the email's contents, which Apple says is done entirely on-device.

Photos

Apple's Photos app includes a feature for creating custom memory movies as well as enhanced search capabilities. Users can describe a story, and Apple Intelligence selects matching photos, videos, and music and organizes them into a movie with a narrative arc based on identified themes. Users can also remove distractions from images with the Clean Up tool, which identifies background objects and removes them with a tap, brush stroke, or circle. Additionally, users can search for specific photos or videos by description or keyword, and Apple Intelligence can pinpoint particular moments within video clips.

Notifications

Using the Notification Summary feature, Apple Intelligence can summarize individual notifications from messaging apps as well as groups of notifications from other apps, so that users do not have to sift through large numbers of notifications. A new Reduce Interruptions Focus mode silences notifications deemed unimportant while letting important ones through.

Visual Intelligence

On the iPhone 16, 16 Plus, 16 Pro, and 16 Pro Max or later, users can hold down the Camera Control button and take a picture of an item, then either send it to ChatGPT or search for it with Google. The image is not stored on the device, and Apple says it does not have access to the image either. The feature is intended to let users quickly learn more about objects around them. Starting with iOS 18.4, Visual Intelligence also became available on the iPhone 15 Pro, iPhone 15 Pro Max, iPhone 16e, and later devices via the Action Button or Control Center.

ChatGPT integration

As a result of the company's partnership with OpenAI, Apple Intelligence includes a system-wide ChatGPT integration, allowing Siri to determine when to hand certain complex user requests off to ChatGPT. The integration is powered by GPT-4o and is opt-in: users are prompted before any data or photos are sent to ChatGPT, and IP addresses are obscured when requests are sent to OpenAI's servers. ChatGPT features are free for all users without signing in, though non-subscribers receive only a limited number of GPT-4o requests before being switched to a less powerful model. ChatGPT subscribers can sign in to access paid features system-wide, including additional GPT-4o requests. Apple plans to integrate other models, such as Google's Gemini, into the system in the future.
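The opt-in hand-off can be sketched as a simple routing decision. Everything below (SiriRequest, RequestRoute, the route function, and the needsWorldKnowledge flag) is hypothetical and illustrative only, not Apple's implementation.

```swift
// Hypothetical sketch of the opt-in hand-off described above; none of these
// types or heuristics reflect Apple's actual implementation.

enum RequestRoute {
    case onDevice
    case chatGPT
}

struct SiriRequest {
    let text: String
    let needsWorldKnowledge: Bool   // stand-in for "complex request" heuristics
}

func route(_ request: SiriRequest,
           userOptedIn: Bool,
           userConsentsToThisRequest: () -> Bool) -> RequestRoute {
    // A request only leaves the device if the integration is enabled and the
    // user confirms sharing this particular request (and any attached photos).
    guard request.needsWorldKnowledge,
          userOptedIn,
          userConsentsToThisRequest() else {
        return .onDevice
    }
    return .chatGPT
}

let decision = route(
    SiriRequest(text: "Plan a dinner menu around these photos", needsWorldKnowledge: true),
    userOptedIn: true,
    userConsentsToThisRequest: { true }   // e.g. the per-request confirmation prompt
)
print(decision)   // chatGPT; the IP address is obscured when the call is made
```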