This article is from The Debrief with Mat Honan, MIT Technology Review’s weekly newsletter from its editor in chief. To receive it every Friday, sign up here.

In case you missed the memo, we are barreling toward the next big consumer device category: smart glasses. At its developer conference this week, Meta (née Facebook) introduced a positively mind-blowing new set of augmented reality (AR) glasses dubbed Orion. Snap unveiled its new Snap Spectacles last week. Back in June at Google I/O, that company teased a pair. Apple is rumored to be working on its own model as well. Phew.

Both Meta and Snap have now put their glasses in the hands of (or maybe on the faces of) reporters. And both have proved that after years of promise, AR specs are at last A Thing. But what’s really interesting about all this to me isn’t AR at all. It’s AI.

Take Meta’s new glasses. They are still just a prototype, as the cost to build them—reportedly $10,000—is so high. But the company showed them off anyway this week, awing basically everyone who got to try them out. The holographic functions look very cool. The gesture controls also appear to function really well. And possibly best of all, they look more or less like normal, if chunky, glasses. (Caveat that I may have a different definition of normal-looking glasses than most people.) If you want to learn more about their features, Alex Heath has a great hands-on writeup in The Verge.

But what’s so intriguing to me about all this is the way smart glasses enable you to seamlessly interact with AI as you go about your day. I think that’s going to be a lot more useful than viewing digital objects in physical spaces. Put more simply: it’s not about the visual effects, it’s about the brains.


Today if you want to ask a question of ChatGPT or Google’s Gemini or what have you, you pretty much have to use your phone or laptop to do it. Sure, you can use your voice, but it still needs that device as an anchor. That’s especially true if you have a question about something you see—you’re going to need the smartphone camera for that. Meta has already pulled ahead here by letting people interact with its AI via its Ray-Ban Meta smart glasses. It’s liberating to be freed from the tether of the screen. Frankly, staring at a screen kinda sucks.

That’s why when I tried Snap’s new Spectacles a couple of weeks ago, I was less taken by the ability to simulate a golf green in the living room than I was with the way I could look out on the horizon, ask Snap’s AI agent about the tall ship I saw in the distance, and have it not only identify it but give me a brief description of it. Similarly, Heath notes in The Verge that the most impressive part of Meta’s Orion demo was when he looked at a set of ingredients and the glasses told him what they were and how to make a smoothie out of them.

The killer feature of Orion or other glasses won’t be AR ping-pong games—batting an invisible ball around with the palm of your hand is just goofy. But the ability to use multimodal AI to better understand, interact with, and just get more out of the world around you without getting sucked into a screen? That’s amazing.


And really, that’s always been the appeal. At least to me. Back in 2013, when I was writing about Google Glass, what was most revolutionary about that extremely nascent face computer was its ability to offer up relevant, contextual information using Google Now (at the time the company’s answer to Apple’s Siri) in a way that bypassed my phone.

While I had mixed feelings about Glass overall, I argued, “You are so going to love Google Now for your face.” I still think that’s true.

Assistants that help you accomplish things in the world, without having to be given complicated instructions or interfacing with a screen at all, are going to usher in a new wave of computing. While Google’s demo of Project Astra, a still-unreleased AI agent it showed off this summer, was wild on a phone, it was not until Astra ran on a pair of smart glasses that things really fired up.

Years ago, I had a spokesperson from Magic Leap, an early company working on AR headsets, try to convince me that leaving virtual objects, like a digital bouquet of flowers, around in physical spaces for others to find would be cool. Okay… sure. And yeah, Pokémon Go was hugely popular. But it has taken generative AI, not AR gimmicks, to really make smart glasses make sense.

Multimodal AI that can understand speech, video, images, and text, combined with glasses that let it see what you see and hear what you hear, will redefine the way we interact with the world every bit as much as the smartphone did.


Finally, a weird aside: Orion was the great huntsman of Greek mythology. (And of course, is the constellation you see up in the sky.) There are lots of versions of his story, but a common one is that the king of Chios blinded him after Orion drunkenly raped the king’s daughter. He eventually regained his vision by looking into the rising sun.

It’s a dramatic story, but maybe not the best product name for a pair of glasses.
