Mark Gurman’s latest Power On newsletter has some interesting tidbits about Apple’s upcoming products, but perhaps the most fascinating one concerns Apple’s plans for future AI-powered wearables.
We’ve heard about them before – Apple is working on smart glasses (similar to the Meta Ray-Bans), AirPods Pro with cameras, and some kind of pin/tag. All are in various stages of development, and all are likely to rely heavily on visual intelligence.
That’s Apple’s brand name for applying AI to what your device’s camera sees. It launched with the iPhone 16 lineup and later came to other devices with expanded capabilities. You can take a photo of something around you to get contextual information about it, or do the same with a screenshot.
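Visual Intelligence itself has no public API, but Apple’s Vision framework gives developers a taste of the underlying idea: identifying what an image contains, entirely on device. Here’s a minimal sketch using the real `VNClassifyImageRequest` API; the function name `classifyScene` and the 0.3 confidence cutoff are illustrative choices, not anything Apple prescribes.

```swift
import CoreGraphics
import Vision

// A minimal sketch of on-device image classification with Apple's public
// Vision framework: the same broad idea (identify what the camera sees,
// without leaving the device) that Visual Intelligence builds on.
func classifyScene(_ image: CGImage) throws -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Keep only labels the model is reasonably confident about.
    return (request.results ?? [])
        .filter { $0.confidence > 0.3 }
        .map { "\($0.identifier) (\(Int($0.confidence * 100))%)" }
}
```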
You can also ask ChatGPT about whatever you’re looking at, and the system is smart enough to adjust your options to the context. If you’re looking at an event poster with dates and times, you can add it to your calendar with a tap. If it’s a restaurant, you can pull up reviews, opening hours, or a menu. You can identify plants and animals, or search for similar items online with Google Image Search.
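That poster-to-calendar trick can also be approximated with public frameworks: on-device OCR via Vision, date detection via `NSDataDetector`, and a draft event via EventKit. The sketch below is one way such a pipeline could be built, not Apple’s actual implementation; the function name, the assumption that the headline is the first recognized line, and the one-hour default duration are all invented for illustration.

```swift
import CoreGraphics
import EventKit
import Foundation
import Vision

// A hedged sketch of the "event poster to calendar" flow: on-device OCR,
// then date detection, then a draft calendar event.
func draftEvent(fromPoster poster: CGImage, store: EKEventStore) throws -> EKEvent? {
    // 1. Recognize the poster's text on device.
    let ocr = VNRecognizeTextRequest()
    ocr.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: poster, options: [:]).perform([ocr])
    let lines = (ocr.results ?? []).compactMap { $0.topCandidates(1).first?.string }
    let text = lines.joined(separator: "\n")

    // 2. Find the first date/time mentioned anywhere in that text.
    let detector = try NSDataDetector(types: NSTextCheckingResult.CheckingType.date.rawValue)
    let wholeRange = NSRange(text.startIndex..., in: text)
    guard let match = detector.firstMatch(in: text, options: [], range: wholeRange),
          let start = match.date else { return nil }

    // 3. Draft an event; actually saving it would require calendar permission.
    let event = EKEvent(eventStore: store)
    event.title = lines.first ?? "Event from poster"  // assume the headline is line 1
    event.startDate = start
    event.endDate = start.addingTimeInterval(3600)    // assume a one-hour event
    return event
}
```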
Tim Cook apparently sees this area of AI as key to Apple’s upcoming devices. Apple is building its own visual models and intends to make this technology, contextual awareness based on what the AI “sees,” a central pillar of future products.
For example, you could simply look at your plate of food and get information about its ingredients, portion size, or nutrition. Turn-by-turn directions could use visual landmarks instead of just street names and distances. Reminders could be triggered by walking up to something and seeing it, not just by times and places.
Cook has been touting this feature in recent appearances. He called it out on the company’s most recent earnings call and at an all-hands meeting where he discussed the company’s AI ambitions. It’s a little odd to bring it up so consistently when it isn’t exactly new and hasn’t changed much in the past year or so. Clearly the technology is on his mind, likely because of the company’s upcoming products.
Obviously, privacy is essential for an AI that processes what it sees around you. And here Apple has an advantage: powerful neural processors in hundreds of millions of devices allow more of that processing to happen on-device than most competitors can manage, and the company’s Private Cloud Compute architecture is designed so that anything that does go to the cloud preserves your privacy.