After months of speculation, Apple Intelligence took center stage at WWDC 2024 in June. The platform was announced in the wake of a torrent of AI news from companies like Google and OpenAI, raising concern that the notoriously tight-lipped tech giant had missed the boat on the latest tech craze.
Contrary to that speculation, however, Apple had a team working on what proved to be a distinctly Apple take on artificial intelligence. There was still plenty of showmanship in the demos – Apple always likes to put on a show – but Apple Intelligence is ultimately a very pragmatic entry in the category.
Apple Intelligence (yes, AI for short) is not a standalone feature. Instead, it is about integrating AI into existing offerings. While it is a branding exercise in a very real sense, large language model (LLM) technology will be doing the work behind the scenes. As far as the consumer is concerned, the technology will mostly present itself as new features for existing apps.
We learned more during Apple’s iPhone 16 event, held on September 9. During the event, Apple touted a number of AI-powered features coming to its devices, from Translation on the Apple Watch Series 10 and visual search on iPhones to a number of tweaks to Siri’s capabilities. The first wave of Apple Intelligence arrives at the end of October as part of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. The second wave of features is available as part of the developer betas for iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2.
Features first launched in U.S. English. Apple has since added localized English support for Australia, Canada, New Zealand, South Africa, and the U.K.
Support for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese will arrive in 2025. Notably, users in both China and the EU may not be able to access Apple Intelligence features due to regulatory hurdles.
What is Apple Intelligence?
Cupertino marketing executives have described Apple Intelligence as “AI for the rest of us.” The platform is designed to leverage the things generative AI already does well, like generating text and images, to enhance existing features. Like other platforms, including ChatGPT and Google Gemini, Apple Intelligence was trained on large information models. These systems use deep learning to form connections, whether across text, images, video, or music.
The text side, powered by LLMs, presents itself as Writing Tools. The feature is available across various Apple apps, including Mail, Messages, Pages, and Notifications. It can be used to summarize long texts, proofread, and even write messages for you, using content and tone prompts.
Image generation is integrated in a similar fashion, albeit a bit less seamlessly. Users can ask Apple Intelligence to generate custom emojis (Genmojis) in Apple’s house style. Image Playground, meanwhile, is a standalone image-generation app that uses prompts to create visual content that can be used in Messages and Keynote, or shared via social media.
Apple Intelligence also represents a long-awaited facelift for Siri. The smart assistant was early to the game but has been mostly neglected for the past several years. Siri is now more deeply integrated into Apple’s operating systems; for example, instead of the familiar icon, users will see a glowing light around the edge of their iPhone screen while the assistant is doing its thing.
More importantly, the new Siri works across apps. That means, for example, you can ask Siri to edit a photo and then insert it directly into a text message – a frictionless experience the assistant previously lacked. Onscreen awareness means Siri uses the context of the content you’re currently viewing to provide an appropriate answer.
Who gets Apple Intelligence and when?

The first wave of Apple Intelligence arrives in October via updates to iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. That wave includes the integrated Writing Tools, image Clean Up, article summaries, and typing input for the redesigned Siri experience.
The second wave of features arrives as part of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.
All of these features will be free to use, as long as you have one of the following devices:
- All iPhone 16 models
- iPhone 15 Pro Max (A17 Pro)
- iPhone 15 Pro (A17 Pro)
- iPad Pro (M1 and later)
- iPad Air (M1 and later)
- iPad mini (A17 or later)
- MacBook Air (M1 and later)
- MacBook Pro (M1 and later)
- iMac (M1 and later)
- Mac Mini (M1 and later)
- Mac Studio (M1 Max and later)
- Mac Pro (M2 Ultra)
Notably, only the Pro versions of the iPhone 15 get access, owing to shortcomings in the standard model’s chipset. Presumably, however, the entire iPhone 16 line will be able to run Apple Intelligence when it arrives.
Private Cloud Compute

Apple has taken a small-model, bespoke approach to training. Rather than relying on the kitchen-sink approach that fuels platforms like GPT and Gemini, the company compiled data sets in-house for specific tasks like, say, composing an email. The biggest benefit of this approach is that many of these tasks become far less resource-intensive and can be performed on-device.
That doesn’t apply to everything, however. More complex queries will make use of the new Private Cloud Compute offering. The company now operates remote servers running on Apple Silicon, which it claims allows it to offer the same level of privacy as its consumer devices. Whether an action is performed locally or via the cloud will be invisible to the user, unless their device is offline, at which point remote queries will throw an error.
Apple Intelligence with third-party apps

There was a lot of talk about Apple’s pending partnership with OpenAI ahead of WWDC. In the end, it turned out the deal was less about powering Apple Intelligence and more about offering an alternative platform for the things it wasn’t really designed for. It is a tacit acknowledgment that building a small-model system has its limitations.
Apple Intelligence is free. So is access to ChatGPT. However, those with paid accounts for the latter will have access to premium features that free users do not, including unlimited queries.
The ChatGPT integration, which debuted on iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, has two primary roles: to supplement Siri’s knowledge base and add to existing writing tool options.
With the service enabled, certain questions will prompt the new Siri to ask the user to approve access to ChatGPT. Recipes and travel planning are examples of queries that may surface the option. Users can also prompt Siri directly to “ask ChatGPT.”
Content generation is another core ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. It adds the ability to write content based on a prompt, joining existing Writing Tools options such as Style and Summary.
We know for certain that Apple plans to partner with additional generative AI services. The company has said Google Gemini is next on that list.