To say that the road towards a spatial computing future has been bumpy for Google is an understatement. Google has had a long and challenging history with XR, encompassing products such as Google Glass smart glasses and the Google Daydream VR platform. Despite facing numerous obstacles, the company has continued to invest in XR technologies, recognizing their potential to enable advancements in AI; it has also continued to build spatial features into its products across search, maps, and more.
The recent release of the Gemini 2.0 AI models and Project Astra multi-modal agents exemplifies Google’s leadership in AI and underscores its commitment to integrating XR and AI. I recall when Google first introduced Astra via a phone demo at its I/O event this year: all the interactions had me begging for a headset, and, sure enough, the demo transitioned to one midstream.
Today Google launched Android XR, which is what the industry has long needed from Google: a platform, empowered by AI, for XR developers and device makers to rally around. Meta has built something similar over the years, though with less focus on AI, and hopes to spread it in the form of Horizon OS, which now stands as a direct competitor to Android XR.
My Experiences With Android XR
Having the opportunity to experience Samsung’s Moohan headset, complete with Gemini, affirmed Google’s comprehensive approach to XR for me. Google’s strategy spans from lightweight single-display smart glasses to full AR dual-screen glasses and includes the Moohan prototype MR goggles equipped with high-definition passthrough, eye-tracking and hand-tracking functionalities. While many people will compare Moohan to Apple’s Vision Pro and Meta’s Quest 3 or Quest Pro, it physically felt like a blend of those devices. I was pleased to find that it uses both hand and eye tracking, and that the passthrough quality was extremely high with very low latency. Thanks to Gemini, the interface felt like it borrowed some familiar ideas from Apple while being much more capable.
A big part of Moohan’s performance and appeal comes from the three-way partnership among Google, Samsung and Qualcomm, which delivered the computing power through Qualcomm’s Snapdragon XR2+ Gen 2 platform. This is another way of saying that Google has access to top-tier chips and hardware to power the best AI and XR experiences. Thanks to Google’s partnership with Qualcomm on Android XR, other OEMs, including Lynx, Sony and Xreal, will also ship devices running the OS, ensuring a diverse ecosystem of experiences and capabilities. Google will also absorb Qualcomm’s work with Snapdragon Spaces, which was Qualcomm’s attempt to fill the hole Google left by not launching something like Android XR. With Android XR now in place, there should no longer be a need for Snapdragon Spaces, and the new OS should offer forward compatibility to soften the transition for the developers and OEMs that use it.
Although convincing developers to build for Android XR might be a considerable challenge given Google’s past, its initial low-friction strategy appears to align closely with Apple’s approach for the Vision Pro. Both companies aim to simplify support for 2-D applications by drawing on the existing apps in their stores; however, Google’s approach to spatial XR apps differs because of its embrace of open standards like OpenXR and WebXR, which are already familiar to XR developers. Google’s approach to accessories on Android will also carry over to Android XR, making support for keyboards, mice, controllers and headphones a breeze. Google has already tapped veteran developers, including Resolution Games, Virtual Desktop and Tripp, as early Android XR partners, which indicates that it has already spread its reach across gaming, productivity and health/fitness. That said, I believe that Google needs to add considerable fuel to the developer ecosystem to build excitement and engagement.
When I tried out the Moohan headset, it was evident that the version of Gemini on board was multi-modal, and that it significantly enhanced the user experience by making the headset easier to use and understand. Given that keyboards are unlikely to become a primary interface for XR, Gemini plays a vital role in delivering a seamless, high-quality experience driven by voice, gaze and gesture commands.
Astra on glasses also offered an impressive experience, notably allowing multi-language interactions and the ability to visually recall details the user had overlooked. While these technologies are currently at the prototype stage, Google’s consistent integration of AI across its XR platforms is apparent. Google’s AI capabilities in XR appear even more credible when compared to Meta’s less advanced AI on its Ray-Ban smart glasses and Apple’s seeming reluctance to fully integrate Apple Intelligence into VisionOS.
Android XR’s Future
Google believes that now is the time to launch Android XR because AI tools like Gemini and Astra have matured enough to empower spatial computing in ways that weren’t possible before. I believe that Google’s development environment will be attractive to developers already familiar with XR and Android, and that it should make porting applications easy. I have long believed that AI and XR are complementary technologies, which is why I was truly surprised to see Apple bypass VisionOS with Apple Intelligence. Clearly, Google agrees, because it is infusing AI everywhere within Android XR; I believe this is the right approach and one that will only increase the appetite for AI computing, whether in the cloud on Google’s Trillium silicon or on-device on Snapdragon. I expect Google’s launch of Android XR to be slow-rolled through 2025, with the Samsung Moohan starting the rollout and many other devices arriving throughout the year.
The industry has needed something like Android XR for years, and while I have said some less-than-nice things about Google’s role in XR in the past, I do believe Android XR’s deep integration with Gemini and Astra will be transformational for the industry. It was really powerful to experience the spectrum of XR from smart glasses up to a mixed-reality headset and understand how Android XR bridges all of those platforms in a way few companies could. It’s quite clear that Meta has some real competition from Google, and I’m genuinely glad to see that Google is back in the XR space with real gusto.
XR’s biggest problem is that the install base is too small for many developers to get on board; this was evident with Google’s earlier efforts, as it is today with the Vision Pro. Meta is the only company that has somewhat bucked that trend, but it has done so by spending tens of billions of dollars—far in advance of the XR revenue that would sustain that spending. Now, however, I believe that Android XR has the real potential to finally break the install-base problem with a single unified operating system for the XR ecosystem.
Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has (or has had) a paid business relationship with Google, Meta, Qualcomm, Samsung and Sony.