Apple CEO Tim Cook has a habit of dropping subtle hints about where the company is headed. This time, all the breadcrumbs point to visual intelligence. And they leave the impression of a company about to reshape the way people interact with the world around them.
How Apple Visual Intelligence Can Change Everything
Cook has a track record when it comes to dropping hints. In 2013, he talked about the coming explosion in sensor technology, long before the Apple Watch debuted. And ahead of the launch of the Vision Pro, he spoke at length about the promise of augmented and virtual reality. Now he seems to be doing it again with visual intelligence.
Visual Intelligence is Apple’s AI-based feature that allows devices to “see” the physical environment and respond intelligently to it. Currently available on the iPhone 16 Pro and later (and the iPhone 15 Pro through iOS 18.4), it allows users to point their cameras at objects, text, or places and get contextual information in return — essentially a reverse image search and AI query system with ChatGPT and Google Search under the hood. It can read and summarize text, identify objects, translate languages, and more.
Bloomberg’s Mark Gurman reported in his Power On newsletter that Apple is actively developing its own visual models to reduce its dependence on third-party services. The implications of this shift, and the hardware Apple plans to pair it with, could be significant.
Tim Cook’s not-so-subtle hints
Image: Apple/Cult of Mac
Cook highlighted visual intelligence in two significant places. During Apple’s holiday quarter earnings call, he called it one of Apple Intelligence’s most popular features, describing it as something that “helps users learn and do more with content on the iPhone screen than ever before.” He then brought it up again in an all-hands meeting with employees, touting Apple’s 2.5 billion installed devices as a “huge advantage” in AI.
Gurman sees a clear pattern. He notes that Cook wouldn’t have promoted the feature so publicly if Apple didn’t plan to significantly accelerate work in this area. History backs him up: Cook’s early enthusiasm for sensors predated the Apple Watch, and his AR commentary foreshadowed the Vision Pro.
Visual intelligence seems to be the prelude to something much bigger.
3 new wearables
Apple is now accelerating development of three new wearable devices as part of its push into AI-powered hardware, according to Bloomberg: smart glasses, a pendant that can be clipped to a shirt or worn as a necklace, and AirPods with built-in cameras for enhanced AI capabilities. All three are built around a smarter, more capable version of Apple’s Siri voice assistant that uses visual context to take action.
Smart glasses

AI concept: ChatGPT/Cult of Mac
Smart glasses are the most ambitious of the three. Apple’s smart glasses will include an advanced camera system: a high-definition camera for taking photos and videos, and a second camera that feeds Siri visual information about the environment.
The glasses will support interaction with Siri, making phone calls, listening to music, taking photos and recording videos. Users will be able to look at an object and ask questions about it, or get turn-by-turn navigation instructions while walking. Unlike Meta’s Ray-Ban smart glasses, which use a single camera that switches between functions, Apple will dedicate a camera to each function, a difference that Apple employees see as a key differentiator. Production of the smart glasses could begin as early as the end of this year, with a market launch in 2027.
AI pendant

Illustration: Midjourney/Cult of Mac
The rumored AI pendant is perhaps the wildest entry in the lineup, and also the most interesting. It would act as an always-on camera for the iPhone, with a microphone for Siri input; some Apple employees already call it the “eyes and ears” of the phone.
Unlike the ill-fated Humane Ai Pin – which tried to replace the smartphone entirely – Apple’s pendant will serve as a companion device. It will offload the processing to the iPhone rather than acting as a standalone device.
Gurman explains the difference clearly: The Humane Ai Pin didn’t fail because the form factor was inherently wrong. It failed because its AI was slow and its battery life was bad, and it compounded those flaws by trying to replace a device that people love. Apple doesn’t intend to make the same mistake.
Camera-equipped AirPods

Photo: Apple
The upgraded AirPods complete the trio of Apple AI devices and may bring visual intelligence to the widest audience. A version with cameras should arrive in late 2026.
These cameras would be low-resolution or infrared, designed less for photography and more for giving Apple Intelligence a view of the world. Given how many people already wear AirPods for hours every day, this might be the most practical entry point for persistent visual AI.
What could it actually be doing?
Practical applications of visual intelligence range from the mundane to the genuinely transformative. At the simple end, you could point your glasses or pendant at a plate of food and instantly get a breakdown of ingredients and nutritional content. A step further, and you could receive turn-by-turn walking directions that refer to actual landmarks (“turn left at the cafe on the corner”) rather than abstract distances.
This technology could also enable contextual reminders. You could walk up to your car and be prompted to check your tire pressure, or walk into a grocery store and be reminded of what you need.
For the visually impaired, the possibilities run deeper. Gurman flagged Meta’s plan to add facial recognition to its smart glasses, noting that while privacy concerns are real, properly implemented recognition of people in your own contacts could be a real win for accessibility.
The road ahead is not smooth

Image: ChatGPT/Cult of Mac
Real technical and software challenges stand in the way of Apple achieving its vision of ambient artificial intelligence. Miniaturization remains a constant limitation – fitting cameras and the necessary electronics into AirPods or a lightweight eyeglass frame is no small engineering feat.
And all three devices ultimately depend on the next generation of Siri, which Apple has yet to deliver. A more advanced, chatbot-like version of Siri is reportedly coming in iOS 27 and will rely on AI models developed by Google.
There is also the issue of privacy. Cameras built into everyday wearables — the glasses you wear all day, the earbuds, the pendant around your neck — raise legitimate questions about consent and surveillance that Apple, regulators, and the public will have to grapple with.
A third act takes shape
Of the new hardware categories Apple has launched since the iPhone, the Apple Watch has succeeded spectacularly, evolving over time into a true health device. The Vision Pro, however, is still searching for an audience. The AI wearables now taking shape appear to be yet another new category for which Cook has quietly and methodically laid the groundwork.
It’s impossible to say at this stage whether the smart glasses, camera-enabled AirPods, and artificial intelligence pendant will resonate with consumers the way the Apple Watch eventually did. But the strategic logic is clear. With more than 2.5 billion active Apple devices in the hands of users and a growing set of AI capabilities, the company has both the infrastructure and the motivation to put intelligence into everything people wear.
Visual intelligence — today a relatively modest feature that relies on third-party technology — is the seed of that ambition. What Apple grows out of this is a story worth watching.