Google Gave Me an Exclusive Look at Project Aura and Big Android XR Updates
Here’s What I Learned
Last week I found myself in a nondescript building in the Bay Area, getting hands-on time with hardware that Google hasn’t shown to many people yet. I walked out with a more developed understanding of where XR is headed and why Google thinks they’ve finally cracked the code that eluded them with Google Glass over a decade ago.
In this video, I break down Google’s ambitious Android XR platform strategy, go hands-on with the new Project Aura glasses, and sit down with Juston Payne, Google’s Senior Director of Product Management for XR, for a conversation about why the timing is finally right.
You can watch the full deep-dive on YouTube here. But first, let me walk you through the highlights.
The Platform Play That Changes Everything
Google isn’t trying to win a hardware race. They’re running the same playbook that made Android dominant on phones and applying it to XR. When I put on the Android XR glasses, the apps that already exist on the phone just work. During the demo, interactive YouTube Music notifications appeared in my field of view and Uber pickup information surfaced automatically, all with no special XR version of either app required. In other words, developers can participate in a meaningful way with no extra effort.
This addresses the chicken-and-egg problem that has killed so many platforms before it. Developers don’t need to build for an unproven device with no users. Their apps already work because, quite simply, they were already built for Android. My guess is that once developers see their experiences running on glasses, they’ll be inspired to build on top of them and make those experiences even better.
The promise is simple: if you build for Android, you already built for Android XR.
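To make that promise a bit more concrete, here’s a minimal sketch of what “no special XR version” means in practice: a completely ordinary Android notification built with the standard NotificationCompat API, with nothing XR-specific anywhere in it. The channel name, the ride-update copy, and the idea that this exact notification is what would surface in the glasses are my own illustration based on the demo, not code Google provided.

```kotlin
// An ordinary Android notification, no XR-specific code anywhere.
// Assumption (mine, based on the demo): the same notification that appears
// on the phone is what Android XR surfaces in the glasses' field of view.
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

fun postPickupNotification(context: Context) {
    val channelId = "ride_updates" // hypothetical channel name, for illustration only

    // Standard channel registration, required on Android 8.0+.
    val channel = NotificationChannel(
        channelId,
        "Ride updates",
        NotificationManager.IMPORTANCE_HIGH
    )
    context.getSystemService(NotificationManager::class.java)
        .createNotificationChannel(channel)

    // A plain NotificationCompat builder, exactly as you'd write for a phone.
    val notification = NotificationCompat.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.ic_dialog_info)
        .setContentTitle("Your driver is arriving")
        .setContentText("Gray Prius · 2 minutes away")
        .setPriority(NotificationCompat.PRIORITY_HIGH)
        .build()

    // Assumes the POST_NOTIFICATIONS runtime permission is already granted
    // (required on Android 13+).
    NotificationManagerCompat.from(context).notify(1001, notification)
}
```

The point of the sketch is where the decision lives: whether this renders on a phone screen or floats in your field of view is up to the system, not the app.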
Gemini and XR Were Made for Each Other
I continue to believe that AI and XR are perfect companions. Multimodal AI needs cameras, microphones, and contextual awareness to reach its full potential, and XR hardware provides exactly that, in spades. As we already know, Google didn’t bolt Gemini onto Android XR as a feature; they baked it into the foundation from day one.
I had conversations with Gemini inside the glasses about what I was looking at. I asked for recipe suggestions based on ingredients sitting on a counter in front of me. I watched the system take a photo and edit the background with Nano Banana using a single “compound query.” The translation feature even matched the intonation of the person speaking to me in Spanish. This is what becomes possible when AI and hardware merge at the foundation of the OS.
Project Aura Fills a Gap Nobody Else Is Targeting
Project Aura, the prototype built in collaboration between Google and XREAL, sits in a far less traveled category. It offers more immersion than lightweight AI glasses like Meta’s Ray-Bans or even Google’s own Astra prototypes, but demands less commitment than strapping on a full headset like the Apple Vision Pro or the Samsung Galaxy XR.
Google says Aura is designed for “episodic” use. You aren’t meant to wear the glasses all day. You put them on when you have a reason, like getting work done on a flight, tabletop gaming with friends across different devices, private and immersive media viewing, or focused work across multiple floating windows pinned to the room. The compute, the same Snapdragon XR2+ Gen 2 chip found in the Galaxy XR headset, lives in a separate wired pack, keeping the glasses light at around 90 grams. The field of view hits 70 degrees. And because it runs Android XR, every app and experience from the Galaxy XR headset transfers directly. Even hand tracking is here, thanks to two on-board tracking cameras.
Honestly, the experience largely mirrors what you get on the much larger, and likely more expensive, Galaxy XR headset, but in a far more portable and approachable glasses form factor.
The Three Things Missing from Google Glass
Juston Payne was pretty candid in our sit-down interview about why Google Glass failed. The vision was right, he told me. A product that subtly gives you information so you can have a better experience in the real world was always the goal.
But three things were missing. The display technology wasn’t ready: direct line-of-sight displays matter, and shrinking them into something that looks like normal glasses wasn’t possible in 2012. There was no platform: no app ecosystem, no developer tools, no content story. And there was no AI, at least nothing like what we have today. The intelligence required to make the experience actually useful simply didn’t exist yet.
Thirteen years later, all three pieces have arrived simultaneously, and Google believes they can finally honor that original vision. Whether society at large is ready for that is a whole other can of worms, one we dig into in my interview.
The 10-Year-Old Test
One moment from my conversation with Juston really drove home where we are with immersive, wearable tech like what I saw last week. He told me about a family trip to Rome where he put prototype Android XR glasses on his 10-year-old son. With the promise of gelato at the other end of the journey, his son navigated the entire family through winding streets using nothing but voice directions and turn-by-turn cues in his field of view, walking confidently to their destination with everyone following behind.
“If a 10-year-old can just get it,” Juston said, “then you know you’re onto something.” That’s the bar Google is setting: technology intuitive enough that a child could pick it up and go.
Don’t miss my complete interview with Juston Payne so you can better understand what Google is building with these devices. That’s on the AI Inside channel and the podcast feed.

