Samsung has made waves as the first company to officially introduce a mixed reality headset built on Google's newly revealed Android XR platform. Dubbed "Project Moohan," the headset is set to reach consumers in 2025, and I had the opportunity to test an early version firsthand.
Now, a quick disclaimer: Samsung and Google are keeping a tight lid on certain specifics—details like resolution, weight, field of view, and even price haven't been disclosed. During my demo session, snapping photos or recording video was off the table, so for now we're limited to the official imagery.
To picture Project Moohan, imagine a blend of Quest and Vision Pro—it shares a fair bit with both. And that's not just a lazy analogy. From a design standpoint alone, Vision Pro's influence is clearly visible in Project Moohan: the colors, the button layout, even subtle details of the calibration process make it apparent that Samsung is fully aware of what's already out there.
On the software front, Android XR mixes elements of Horizon OS and VisionOS into something new yet familiar. If you were tasked with merging those two operating systems and came back with Android XR, I'd say you got it right.
It’s rather interesting how Project Moohan and Android XR seem to carve their identity from the DNA of the major headset giants in the market.
However, let's not jump to the conclusion that any intellectual theft has occurred. The tech world has always borrowed from and built upon the successes of others. As long as Android XR and Project Moohan adopt the best features and dodge the known pitfalls, that's a boon for developers and users alike.
And indeed, many positive aspects are already present.
Exploring Samsung’s Project Moohan
Let’s take a closer look at Project Moohan’s hardware. It’s a striking piece, unmistakably drawing elements from the Vision Pro’s goggle-inspired design. But unlike the Vision Pro’s soft strap, which can be uncomfortable unless upgraded with third-party solutions, Samsung opts for a sturdier strap complete with a dial for adjusting fit, reminiscent of the Quest Pro’s ergonomic setup. This results in an open-peripheral design ideal for augmented reality experiences. Just like the Quest Pro, it includes clip-on blinders for those seeking a fully immersive journey.
Though Project Moohan borrows several design elements from Vision Pro—particularly button location and shape—it skips the external display showing the user’s eyes. That’s too bad, really, because Vision Pro’s ‘EyeSight’ feature, although divisive, is something I personally find useful. The experience of seeing someone’s eyes through the headset, while they see you, adds a touch of human connection that Moohan currently lacks.
Samsung remains fairly secretive about the technology packed into this prototype. Nevertheless, we do know it’s powered by a Snapdragon XR2+ Gen 2 processor—an improvement over the chips in Quest 3 and Quest 3S.
During my trial, I noticed a few interesting details. For starters, the headset uses pancake lenses with automatic IPD adjustment, driven by the built-in eye-tracking. The field of view seemed slightly narrower than Quest 3 or Vision Pro, though to say so with certainty I'd need to try different forehead pads to bring my eyes closer to the lenses.
In my experience, the field of view felt a bit restricted—still immersive, but with noticeable brightness falloff at the edges of the display, suggesting the lens sweet spot could use refinement. Getting my eyes closer to the lenses might improve this, but as it stands, Quest 3 leads the pack, with Vision Pro behind it and Project Moohan slightly behind that.
While Samsung confirmed dedicated controllers for Project Moohan, I didn’t get to see or use them. It remains undecided whether these will be included with the headset or sold separately.
Thus, my session focused on hand- and eye-tracking input. The input system draws noticeable parallels to both Horizon OS and VisionOS: you can use raycast cursors as on Horizon OS, or eye+pinch inputs akin to VisionOS. Downward-facing cameras allow pinches to register even when your hands rest naturally in your lap.
Once I put the headset on, the first thing that struck me was how clear my hands appeared. From memory, the passthrough sharpness surpassed Quest 3 and showed less motion blur than Vision Pro, though I tested it under near-ideal lighting conditions. The cameras seemed focused at roughly arm's length, given how crisp my hands looked compared to the fuzziness of more distant objects.
Exploring the Depths of Android XR
Android XR itself closely mirrors a fusion of Horizon OS and VisionOS. As on Vision Pro, you encounter a 'home screen' of app icons floating on a translucent background. Glance at an app and pinch to select it, and voila—it opens as a floating panel. Even the gesture for returning to the home screen mimics Vision Pro: you glance at your palm and pinch.
The system windows bear a closer resemblance to Horizon OS than VisionOS, owing to their primarily opaque backgrounds and the freedom to reposition windows by grabbing an invisible frame around the panels.
Beyond flat apps, Android XR is equipped for fully immersive experiences too. I explored a VR version of Google Maps—it mirrored Google Earth VR, allowing exploration of the globe, with 3D models of major cities, Street View imagery, and the newly added depth of volumetric captures of interior spaces.
While Street View remains a monoscopic 360-degree experience, the volumetric interiors are rendered in real time and can be explored with full depth. Google described them as Gaussian splats, though it was unclear whether they are generated from existing interior photos or require a new scan. They aren't as pristine as you'd expect from photogrammetry, but they held up well, and the rendering runs on-device, with Google assuring sharper results in future updates.
Google Photos has leveled up for Android XR, introducing a feature that converts any of your existing 2D photos or videos into 3D. In my brief encounter, the results were impressively on par with Vision Pro's similar feature.
YouTube also steps up on Android XR, going beyond standard flat content. It supports existing 180, 360, and 3D assets, with the library expected to expand as more capable headsets emerge. Google showed me a YouTube video originally shot in 2D that had been converted for 3D viewing; it matched the quality of the 3D photo conversion in Google Photos. It isn't yet clear whether this conversion requires creator permission or happens automatically on YouTube's end.
The Noteworthy Edge (As of Now)
From both hardware and software angles, Android XR and Project Moohan track closely with what's already on the market. But they stand out in one distinctly Google way: conversational AI.
Gemini, Google's AI agent—specifically the 'Project Astra' version—is integrated right from the home screen. Besides listening to you, it continuously sees what you see in both the real and virtual worlds. That combined perception makes Gemini feel smarter, better integrated, and more conversational than the AI agents on existing headsets.
Granted, Vision Pro has Siri, but Siri handles one-off requests rather than coherent conversation. Quest has Meta's experimental AI, but while it's aware of the real world, it's oblivious to virtual content, which creates a disconnect. Meta plans to change that, but for now its AI works on a query-response basis after capturing a single static image.
Gemini, by contrast, receives a streaming view of both your virtual and real-world surroundings, so there are no awkward pauses and no need to hold your gaze steady while it captures a single frame.
On Android XR, Gemini also benefits from a rolling memory for contextual awareness. Google says it retains "key details of past conversations" across a 10-minute span, allowing it to refer back to earlier discussions and previously visible content. In a demonstration, I quizzed it in a room filled with objects, and Gemini handled my trick questions adeptly.
For instance, I asked it to translate a Spanish sign into English, which it swiftly did. Next, I requested a French translation of a sign that was already in French; Gemini recognized this and read the French text back to me with authentic pronunciation. Minutes later, when I asked, "what did that sign say earlier?" it responded precisely, reciting the French sign. Pushing further with "what about the one before that?" it recounted the Spanish sign perfectly.
That's significant: just a few years ago, queries with this kind of nuance would have tripped up most AI systems. Gemini handled them effortlessly, showing impressive context tracking.
Gemini goes beyond general queries, too—it can control the headset to some degree. One demo involved asking it to "take me to the Eiffel Tower," which promptly opened the 3D Google Maps view of the landmark. And because it perceives both the virtual and real environment, follow-up questions like "how tall is it?" and "when was it built?" flow seamlessly.
Gemini can also retrieve relevant YouTube videos as answers. For instance, saying "show a video of the view from the ground" while admiring the virtual Eiffel Tower conjures up just the right footage.
Eventually, Gemini on Android XR should perform standard assistant duties like messaging, email composition, and reminder setups—but its depth in XR-specific actions will be intriguing to observe.
The version of Gemini on Android XR is arguably the best AI agent available on any headset today, surpassing even the AI on Meta's Ray-Ban smartglasses. But the competitive landscape is fluid; Apple and Meta are surely working on similar capabilities, which could narrow Google's lead.
Gemini on Project Moohan adds a substantial edge for spatial productivity, but perhaps its real potential lies in being integrated into smaller, everyday smartglasses, which I had the chance to test—but that’s another story for a future discussion.