/r/augmentedreality


AR News and Community: All about the Evolution ► AI Glasses ► Smart Glasses ► Augmented Reality ► Mixed Reality

XREAL Rokid RayNeo INMO Viture Even Realities HoloLens Niantic 8th Wall ARKit Apple Vision Pro WebXR Vuzix QonoQ HUD Ubiquitous Ambient Spatial Computing Eyewear Frames Optics Computer Vision Snapdragon Spaces Vuforia ARCore Android XR Magic Leap Snap Spectacles Lens Studio Ray-Ban Meta Orion


53,995 Subscribers

3

Can an iPhone handle real-time AR tracking?

I've seen people use an iPad Pro to render objects that fit into the real world. As the camera moves, the object stays in place, which requires accurate position tracking using sensors like LiDAR.

Could an iPhone with LiDAR do these things equally well?

My main concern is that this kind of AR rendering requires a lot of GPU computation over 3D meshes. I wouldn't be surprised that an iPad Pro with an M-series chip can handle that, but I have no idea whether an A-series iPhone is powerful enough.

1 Comment
2024/12/01
03:29 UTC

4

XREAL's approach to success in the Japanese AR market

While XREAL plans to expand in China through a new partnership with the eyewear retail chain Doctorglasses, Yin Zhiqiang of XREAL Japan shared the company's experience and strategies in the Japanese market with 36Kr. Here's a summary.

Understanding the Japanese Market:

Yin emphasizes that the Japanese market is more traditional than China's. Direct advertising and KOL (Key Opinion Leader) marketing, while effective in China, don't yield the same quick results in Japan. Building strong relationships with traditional media, government bodies, and industry institutions is crucial for long-term success. XREAL focused on cultivating these relationships, recognizing their profound impact on market penetration.

Adapting to the Local Culture:

Yin highlights the importance of cultural sensitivity and adaptation. He recounts how he made a conscious effort to integrate into the Japanese work culture, starting with simple gestures like bowing to clients. He also cautions against blindly applying strategies that worked in other markets. Many companies try to "empower" overseas markets with their domestic experience, but this often fails due to differing market dynamics. Yin describes a three-stage process many companies go through in Japan: initial confidence ("this will work"), followed by confusion ("why isn't this working?"), and finally, anxiety ("what do we do now?").

Long-Term Vision and "Growing Downwards":

Instead of short-term, explosive campaigns, XREAL focused on a "grow downwards" strategy, building a strong foundation through consistent, localized efforts. This involved:

  • Localization: Building a local team to connect with Japanese society and establish brand presence.
  • Deepening Relationships: Fostering long-term relationships with various stakeholders, including universities, government agencies, and media outlets, to create a supportive ecosystem.

Content Education as a Core Strategy:

XREAL's core strategy revolves around "content education." Since AR glasses are a new product category, educating consumers about their use cases and value proposition is crucial. This involves:

  • Focusing on familiar scenarios: Demonstrating how XREAL glasses enhance existing experiences like watching movies, gaming, and working, rather than introducing entirely new usage scenarios.
  • Leveraging social media: Utilizing a diverse pool of KOLs and KOCs (Key Opinion Consumers) to create engaging content that showcases the glasses in real-life situations. Interestingly, XREAL found that micro-influencers passionate about technology were more effective than top-tier KOLs with limited tech knowledge.
  • Securing PR endorsements: Collaborating with media outlets, universities, industry associations, and government institutions to build credibility and social proof. XREAL secured significant media coverage, including features on national television, by actively engaging with these stakeholders.
  • Enhancing retail experience: Establishing offline experience zones to allow consumers to try the product firsthand. XREAL set up over 40 experience counters across major Japanese cities.

Results and Future Plans:

XREAL's approach has yielded impressive results:

  • Market share in Japan approaching 90%.
  • 93% growth in 2023.
  • Dominating Amazon's smart glasses category.
  • High online engagement and discussion volume.

In 2024, XREAL plans to further enhance content education by creating more engaging experiences, such as the "glasses fashion show" at Tokyo Tower, which aimed to bridge the gap between the functional and aesthetic aspects of AR glasses.

Key Takeaways:

XREAL's success in Japan highlights the importance of:

  • Cultural sensitivity and adaptation.
  • Long-term vision and relationship building.
  • Content-driven marketing focused on consumer education.
  • Integrating online and offline experiences.

This approach provides valuable insights for any company seeking to enter and thrive in the Japanese market.

1 Comment
2024/11/30
23:48 UTC

25

Bosch has a new module for AR smart glasses. It projects images directly onto the retina with lasers

"It provides key features such as a unique visual experience, delivering bright, always-in-focus content whether you're indoors or outdoors. Our solution ensures high lens transparency and user privacy, with content visible only to the wearer. Additionally, our integrated camera-less eye tracking enables seamless access to contextual information.

Our Light Drive solution enables prescription lenses with a lightweight design of just 40 grams."

For more information about retinal scan displays:

https://en.m.wikipedia.org/wiki/Virtual_retinal_display

"A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye."

11 Comments
2024/11/30
22:55 UTC

4

Last chance to take part! — We're giving away 1 Rokid Station per 50 upvotes

2 Comments
2024/11/30
15:32 UTC

54

Dynamic Gaussian Splatting in Gracia app — on Quest 3 — Apple Vision soon

7 Comments
2024/11/30
03:10 UTC

3

It’s the content library.

Over the years, so many big tech companies have tried and failed with AR/VR devices. Every single time, we are told the hardware specs: FOV, nits, weight, etc. I would think by now we would've learned that success or failure rests on the library of apps at launch. If you are one of these companies, please tell your team to spend money on getting devs on board, with at least as much effort as you spend on hardware R&D engineering. Software design, for games or non-games, can work with any hardware spec.

1 Comment
2024/11/29
20:19 UTC

1

Conversation Starter app custom code advice?

Hi mixed reality devs, I'm Dustin!

I'm new to mixed reality but I know how to code. Before I spend my Christmas money on smart glasses, I want to ask: are they open to devs and custom apps? Excuse me for asking on both the official forums and Reddit.

The goal is a Conversation Starter app. It overlays the real world with a text box/menu in the upper-right corner holding five items, like news (football scores) and prewritten jokes. After navigating to a topic, it shows the topic's content, like yesterday's Chiefs game outcome. Then you can navigate back to the menu.

So my questions: is it possible to make this app? If I buy smart glasses, can I make the app pictured below? Just as important, would it help me conversationally in public places like bars, by helping me remember people's names and strike up conversations with today's news?

What are the requirements and developer tools, and how would you approach this app?

I know C, Python, Web Dev, or you can ELI5. Thanks! - Dustin
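Whether this is buildable depends entirely on the glasses: camera/audio-only models (e.g. Ray-Ban Meta) expose no display for custom overlays, while display glasses are typically driven from a phone companion app through a vendor SDK, so check the specific device's developer docs before buying. The app logic itself is tiny. Here is a minimal sketch in TypeScript, where `sendTextToGlasses` is a hypothetical stand-in for whatever text-push call the real SDK exposes (all names and placeholder strings here are assumptions):

```typescript
// Conversation Starter: five-topic menu -> topic view -> back to menu.
type SendTextToGlasses = (text: string) => void; // placeholder for the vendor SDK call

interface Topic {
  title: string;
  body: string; // in practice, fetched from a news or scores API
}

const topics: Topic[] = [
  { title: "NFL scores", body: "(yesterday's Chiefs result goes here)" },
  { title: "NBA scores", body: "(fetched at launch)" },
  { title: "Local news", body: "(fetched at launch)" },
  { title: "Weather",    body: "(fetched at launch)" },
  { title: "Joke",       body: "(prewritten joke of the day)" },
];

let selected: number | null = null; // null = menu view
let cursor = 0;                     // highlighted menu row

function render(send: SendTextToGlasses): void {
  if (selected === null) {
    // Menu view: one line per topic, cursor marked with ">".
    send(topics.map((t, i) => `${i === cursor ? ">" : " "} ${t.title}`).join("\n"));
  } else {
    // Topic view: title, content, and a back hint.
    send(`${topics[selected].title}\n${topics[selected].body}\n[back]`);
  }
}

// Map whatever input the hardware offers (temple tap, ring, phone button)
// onto three actions, then redraw.
function onInput(action: "next" | "select" | "back", send: SendTextToGlasses): void {
  if (action === "next" && selected === null) cursor = (cursor + 1) % topics.length;
  else if (action === "select" && selected === null) selected = cursor;
  else if (action === "back") selected = null;
  render(send);
}

// Try it in a terminal before binding it to real hardware:
onInput("next", console.log);
onInput("select", console.log);
```

Keeping all device I/O behind two narrow functions (`send` and `onInput`) means the same logic can be tested on a laptop today and later bound to whichever SDK the chosen glasses actually provide.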

1 Comment
2024/11/29
20:15 UTC

2

Even Realities G1 Questions

A few questions before purchasing:

  1. Can you set reminders? I see notifications are a thing, but can I say “set a reminder for tomorrow at 6 am to…”?
  2. How's their software-development roadmap? Are they on top of it, or is what you get now what you get, with not much software change to expect?
  3. Return policy: I see it's 14 days unused or unopened, but what if I find it doesn't work as intended, or is buggy in use, and I need to return it?
  4. Lastly, what was one thing that pleasantly surprised you, and one that unpleasantly surprised you?

Thank you!!

0 Comments
2024/11/29
19:28 UTC

6

Google Trends says it: the 2020s are the time for AR glasses to shine!

4 Comments
2024/11/29
19:28 UTC

11

INMO GO 2 — smart glasses as a universal translator with high-quality on-device translation

INMO GO 2

Offline Translation and Transcription in 8 Languages: Chinese, English, Japanese, Korean, French, Spanish, Russian and German. Online Translation in 40 Languages. Recognition of 90 Accents. Customization for Industry-Specific Technical Jargon.

Another Use Case: Teleprompter. Discreetly Controlled via the INMO RING 2.

microLED and Diffractive Waveguide. Monochrome Green. Waveguide Front Light Leakage Reduction. 15 Degree Downward Tilt.

Dual Batteries: 440mAh. Charged to 80% in 20 Minutes. Battery Life 150 Minutes.

Price: 3999 Yuan ($550). Launch Discount Price: 3299 Yuan ($455)

12 Comments
2024/11/29
18:06 UTC

8

Augmented Reality From 1991 {vintage video}

1 Comment
2024/11/29
17:24 UTC

18

Watch out, a flying rollercoaster approaches! 🎢

3 Comments
2024/11/29
15:15 UTC

12

INMO Air 3 — Smart Glasses with 1080p Displays

22 Comments
2024/11/29
09:56 UTC

31

INMO AIR 3 — Full Color Sony OLED — 62 PPD — 1080p 😲

44 Comments
2024/11/29
08:28 UTC

9

Rokid Spatial Link — Floating Windows for All Devices

1 Comment
2024/11/29
06:18 UTC

6

Have we talked about the Rokid Spatial Link yet? — It enables 3DoF for all devices you connect your 'virtual monitor glasses' to: computers, phones, tablets, consoles

14 Comments
2024/11/29
05:00 UTC

3

【New Product Release】XREAL's Next Revolution: Unveiling the Future with the All-New X1 Chip! Find Out More on December 4th at 10 AM EST

2 Comments
2024/11/29
02:32 UTC

16

Goolton launches its first Smart Glasses — Goolton Star 1S comes with Android and OLED & waveguide displays in an all-in-one package

38 Comments
2024/11/28
21:13 UTC

8

EgoPoser: Real-time pose estimation from headset-based egocentric observations for AR and VR avatars

1 Comment
2024/11/28
19:36 UTC

90

Full body motion capture for AR and VR — with IMUs in phone, watch, ear buds only — we're getting closer!

7 Comments
2024/11/28
18:19 UTC

3

Looking for first pair of XR/AR glasses

I'm fairly new to this space and in need of some advice. I'm looking to buy a pair of AR glasses, primarily for watching movies or playing games while traveling. I have a Quest 3 but feel it takes up too much space, so I want something considerably smaller for use on the plane. I may also use them with my PS5 at home as a second screen when the TV is in use. I'm also confused about battery life, as none of them really say how long they last on, say, a 10,000 mAh battery pack. I was looking at the RayNeo Air 2s, as there's a pretty good deal on them now, but I'm open to any suggestions.

TL;DR: Need suggestions for AR glasses that are:

  • compatible with Pixel 7 Pro, Nintendo Switch, PS5
  • good for watching movies and gaming on a plane

Also, how big of a battery pack do I need to get 5+ hours out of it?

Thanks in advance!
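On the battery question, a rough back-of-envelope, assuming the glasses draw around 2.5 W at moderate brightness (an assumption; check the spec sheet or a USB power meter) and that a power bank delivers roughly 85% of its rated energy after conversion losses:

```typescript
// Rough runtime estimate for display glasses on a USB power bank.
// Every constant here is an assumption to adjust for your hardware.
const ratedMilliampHours = 10_000;                        // pack rating, quoted at 3.7 V cell voltage
const ratedWattHours = (ratedMilliampHours / 1000) * 3.7; // = 37 Wh
const usableWattHours = ratedWattHours * 0.85;            // ≈ 31.5 Wh after losses
const glassesDrawWatts = 2.5;                             // assumed average draw

const hours = usableWattHours / glassesDrawWatts;         // ≈ 12.6 h
console.log(`Estimated runtime: ${hours.toFixed(1)} hours`);
```

By that math, a 10,000 mAh pack clears 5 hours with a wide margin, and even a ~5,000 mAh pack should manage it at typical draw; if the glasses run off your phone's USB-C port instead, the phone's own battery drain is the bigger number to budget for.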

2 Comments
2024/11/28
18:04 UTC

2

Looking for guidance in AR Domain

I am interested in AR and I am looking for people who have knowledge in this domain.

For my initial project I have set a simple goal:
to render AR objects on an e-commerce website (similar to what Amazon does). I can create AR models using my iPhone, but they are exported in USDZ format, and I am not sure whether they play well on Android and the web.

I am looking for guidance.
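One common route: USDZ only plays natively in Apple's AR Quick Look, while Android's Scene Viewer and web browsers want glTF/GLB, so sites usually keep both copies of each model (Blender, for instance, can import USD and export GLB) and let Google's <model-viewer> web component pick the right one per platform. A minimal sketch in TypeScript, with placeholder asset paths; it assumes the <model-viewer> script is loaded on the page:

```typescript
// E-commerce AR viewer via Google's <model-viewer> web component.
// Assumes the component is loaded on the page, e.g.:
//   <script type="module" src="https://unpkg.com/@google/model-viewer"></script>
function addProductViewer(container: HTMLElement, modelName: string): void {
  const viewer = document.createElement("model-viewer");
  viewer.setAttribute("src", `/models/${modelName}.glb`);      // glTF/GLB for Android and web
  viewer.setAttribute("ios-src", `/models/${modelName}.usdz`); // your iPhone-exported USDZ
  viewer.setAttribute("ar", "");                               // show the AR button
  viewer.setAttribute("ar-modes", "webxr scene-viewer quick-look");
  viewer.setAttribute("camera-controls", "");
  viewer.setAttribute("alt", `3D model of ${modelName}`);
  container.appendChild(viewer);
}

// Usage: drop a viewer for a hypothetical "office-chair" model into the page.
addProductViewer(document.body, "office-chair");
```

This keeps the iPhone capture workflow intact: the USDZ you already export serves iOS Quick Look, and the GLB sibling covers Android and desktop browsers.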

2 Comments
2024/11/28
13:43 UTC

3

I have to wonder: what are the minimum hardware requirements for "Inside-Out Tracking of Controllers," and can headsets that don't launch with controllers, like Apple Vision Pro, still add controllers down the line?

This is something I've been curious about. I don't know all the bells and whistles of how VR hardware companies handle inside-out tracking. I believe the Quest 3 has 4 IR cameras for tracking the lights on the controllers.

I believe the Apple Vision Pro has 2 IR cameras and a projector for hand tracking. But hypothetically speaking, could that be enough IR sensors to add a form of inside-out controller tracking?

I've been thinking about this because Samsung's XR headset and a future Meta headset are rumored to follow the Apple Vision Pro's controller-less model. We know there are other methods to implement controllers with self-tracking, like the Quest Pro controllers, but that's a different subject. I want to know how feasible inside-out tracking of controllers would be.

4 Comments
2024/11/28
10:02 UTC

23

XREAL will announce new AR glasses on December 5! — "A core revolution, framing your passions" could be the translation of the teaser phrase, hinting at the new XREAL X1 chip inside

20 Comments
2024/11/28
07:44 UTC

8

Any AR glasses to replace large monitors?

I practice tax law on transactions, so my work primarily involves Word, PowerPoint, Excel, and Outlook. I have a 43-inch monitor at home, which is very helpful, especially when I need multiple windows on the screen at the same time or need to read something in detail. But I find myself unwilling to go to the office, because the monitors there are much smaller, which makes me less productive. Are there any reliable AR glasses that would let me work in the office while still having a large, helpful monitor in front of me? Thanks!

5 Comments
2024/11/28
06:49 UTC

17

Will Smartphones Be Replaced by AR Smart Glasses?

Short Answer: Yes, but under certain conditions.


Long Answer:

To understand if AR smart glasses will replace smartphones, let’s look back at history to see how smartphones became an integral part of our lives.

The Evolution of Smartphones: There was a time when smartphones were considered luxury items due to their high cost, and people still relied on letters for communication. As technology advanced, however, smartphones became increasingly affordable and user-friendly. But was affordability the only factor that made them indispensable? Not entirely.

A significant contributor was the developer community. Companies started providing access to developers, enabling them to build customized functionalities for mobile devices. This led to rapid innovation, attracting early adopters, particularly tech-savvy and affluent users.

Over time, a critical mass of users was reached. For example, certain smartphone functionalities—like emails and phone calls—only made sense if the recipient also had a smartphone or a similar device. This created a network effect: if most of your connections used smartphones for communication, you had to adopt one as well to stay relevant.

As the developer community continued to innovate, apps like WhatsApp, Instagram, and multiplayer games emerged. These applications enhanced the smartphone experience, encouraging even more people to adopt the technology. The momentum snowballed, making smartphones a necessity in modern life.


Will AR Smart Glasses Follow the Same Path? It’s plausible that AR smart glasses could replace smartphones, but they must fulfill similar conditions:

  1. Affordability: AR smart glasses must become cheap enough for mass adoption.

  2. Developer Ecosystem: Companies need to provide deep access to their platform to developers, enabling them to create compelling use cases and applications.

  3. Network Effect: A framework must emerge that makes owning AR smart glasses necessary to stay connected or competitive.

For example, consider real-time translations using AR smart glasses. To enable seamless bi-directional communication, both users would need to have the same app or device. This could create a scenario where the adoption of AR smart glasses becomes essential for effective communication.

In the early stages, tech enthusiasts and developers might adopt AR smart glasses to build apps and games and earn money, much like how people in India are currently buying the Meta Quest to create VR/AR experiences for global markets. As a result, small communities of users will form, indirectly promoting the use of AR smart glasses.

Eventually, new use cases—beyond translation—will emerge, compelling others to adopt the technology. At some point, AR smart glasses could achieve the critical mass needed to replace smartphones altogether.


Conclusion: While it might take time, AR smart glasses have the potential to replace smartphones if the right conditions are met. They need to be affordable, supported by an active developer community, and integrated into our daily lives through innovative applications and network effects.

What do you think? Share your thoughts!

35 Comments
2024/11/28
04:59 UTC

18

Opinion: The current "AI Glasses" are awkward and will inevitably transition to AR + AI Glasses!

Here is another blog post by our guest author Axel Wong. His previous post is one of the most-read posts in r/AugmentedReality, with 143,000 views (Meta Orion AR Glasses: The first DEEP DIVE into the optical architecture). This time he breaks down the current AI Glasses hype: countless companies are investing in this type of glasses, with camera and audio input for multimodal AI models — but without a display, and so without multimodal output. Enjoy!

____

In the past year, the news that Meta’s Ray-Ban glasses sold over one million units has excited many companies. The so-called “AI glasses,” seen as a promising new product category, have been hyped up once again. On the streets of the U.S., it’s not uncommon to spot people wearing these glasses, which are equipped with dual cameras and audio functionality.

A large number of companies, including Baidu and Xiaomi, have rushed into this space, even attracting entrants from unexpected industries like power bank manufacturers. Rumor has it that Apple and Samsung are also eager to join the race. This sudden surge of enthusiasm reminds me of the smart speaker craze from years ago. Back then, over a hundred companies in Shenzhen were making smart speakers — but as we all know, most of them eventually dropped out.

Ray-Ban AI Glasses

At its core, what we call “AI glasses” today are essentially glasses equipped with audio and camera capabilities. Bose was among the first to introduce audio-enabled glasses with its so-called BoseAR, which was essentially a pair of headphones in the form of sunglasses. Around the same time, Snap released its first-generation Spectacles, which allowed users to record short videos. I bought both out of curiosity at the time—but predictably, they’ve long since disappeared into a corner, gathering dust.

Clearly, the concept of adding sensors to eyewear isn’t new. So why do “AI glasses” suddenly seem fresh again? The answer is simple: large language models (LLMs) have entered the picture. The current buzz revolves around the idea of using LLMs on smartphones (usually via an app, like ByteDance’s Doubao) to “empower” these devices. You might think it’s just cameras and speakers, but no—this is AI-powered smart hardware! 👀

OK, now that we’ve laid the groundwork, let’s get to the conclusion: In my opinion, today’s so-called “AI glasses” will inevitably transition (in the short term) to AR glasses. That means evolving from “audio + cameras” to “audio + cameras + near-eye displays.”

____

The Moment You’re Forced to Pull Out Your Phone, AI Glasses Lose the Game

This isn’t a criticism stemming from years of working in XR, nor is it a forced negative view of AI glasses. The issue lies in product logic: AI glasses without a display are fundamentally awkward and lack coherence. (For an analysis of why Ray-Ban glasses sell well, see the end of this article.)

For any product, the three most critical aspects are scenarios, scenarios, and scenarios.

AI Glasses with on-device AI processing

Let’s examine the scenarios for “AI glasses.” Take Baidu’s Xiaodu AI Glasses as an example. According to reports, they offer:

  • First-person video recording,
  • On-the-go Q&A,
  • Calorie recognition,
  • Object identification,
  • Visual translation,
  • Intelligent reminders.

When summarized, these features boil down to two core functionalities:

  1. Recognition + audio prompts for information (on-the-go Q&A, calorie recognition, object identification, visual translation, intelligent reminders).
  2. First-person video recording.

Let’s step back for a moment. How do we typically interact with AI today? Most of the time, it’s through a smartphone. The truth is, all the functions mentioned above can already be fully achieved with a smartphone screen and camera. AI glasses merely relocate the phone’s audio and camera capabilities to your head. Their biggest advantage is that you don’t need to take your phone out, which can be convenient in certain scenarios—such as when your hands are occupied (e.g., cycling or driving) and you need navigation or recording.

Now let’s consider the typical interaction flow between a user and AI on a phone. For example, when you want to know something, you ask the AI, and it responds with a long block of text, like this:

This is from Doubao; the Q&A itself is unrelated to this article, and only half the response is shown.

As you can see, the response is full of text. Most of the time, we don’t have the patience to listen to the AI read the entire thing aloud. That’s because the brain processes text or visual information far more efficiently than audio. Often, we just skim through the text, grasp the key points, and immediately move on to the next question.

Now, if we translate this scenario to AI glasses, problems arise. Imagine you’re walking down the street wearing AI glasses. You ask a question, and the AI responds with a long-winded explanation. You may not remember or even care to listen to the entire response. By the time the AI finishes speaking, your attention or location may have shifted. Frustrated, you’ll end up pulling out your phone to read the full text instead.

Moreover, there’s the issue of interaction itself: audio is inherently a “laggy” form of interaction. Anyone familiar with real-time interpretation, smart speakers, or in-car voice assistants will know this. You have to finish an entire sentence for the AI to process it and respond. The response might often be incorrect or irrelevant—like answering a completely different question.

(For more on this issue, see my earlier article: “The Media and Big Thinkers Are Hyping a New AI+AR ‘Unicorn,’ But I Think It’s Better Suited for Street Fortune-Telling.”)

This means there’s a high likelihood that:

  • You spend a long time talking to the AI, and it doesn’t understand you.
  • You find the response too slow, so you pull out your phone to type the command yourself.
  • You feel the AI is rambling, so you take out your phone to skim the full text.
  • Privacy concerns arise—you wouldn’t want to use voice commands to ask the AI to send a flirty message to your girlfriend in a public place.

In the end, the moment you’re forced to pull out your phone, the significance of AI glasses drops to almost zero.

After Audio, Let’s Talk About Cameras

A person holding a phone up in the air to take a picture

Admittedly, having a camera on your head provides a more elegant option for taking photos. Personally, I’m not a fan of taking pictures, for two main reasons: first, pulling out a phone to take a picture feels awkward and inelegant to me; second, it often seems disrespectful to the person speaking to you (for example, even if you’re using your phone to record what they’re saying, it can still come across as rude).

But I wonder how many people who use glasses for photography are genuinely taking photos in their daily lives. When photographing people, objects, or scenery, you typically need to rely on the framing guidelines provided by a phone’s viewfinder. Often, you might need to crouch or adjust the angle to capture the perfect shot—something that AI glasses in their current form are almost incapable of doing. And let’s not forget that the camera quality of AI glasses is inevitably far inferior to that of a smartphone.

Of course, many might argue that these glasses are mainly designed for first-person video recording or quick snapshots. To that, I can only say: if you have absolutely no expectations for the quality of your footage and just want to casually capture something, then yes, AI glasses could be somewhat useful. However, the discomfort of "not being able to see what you’re recording while you’re recording it" is likely to bother most people. And in the vast majority of cases, these functions can be completely replaced by a smartphone.

It all comes back to the same point: the moment you’re forced to pull out your phone, the significance of AI glasses drops to almost zero.

AI + AR Will Streamline the Entire Product Logic

Why do I say that AI glasses will inevitably transition to AR glasses in the near future?

To make it easier to understand, let’s stop calling them “AR glasses” for now. Instead, think of them as “Siri with near-eye displays for text and images” (I’ll call this "Piri"). This term captures the core concept better.

Let’s go back to Baidu’s AI glasses as an example. Looking at their own promotional materials—take a close look at these images—anyone unfamiliar with the product might think these are advertisements for AR glasses. (They even include thoughtfully designed AR-style UI elements. 👀)

Frames from Baidu's promo video for the Xiaodu AI Glasses


From these images alone, it’s clear that once near-eye display functionality allows AI-provided information to be presented directly—even if it’s just monochrome text—the entire product logic suddenly makes sense.

Let’s revisit the scenarios we discussed earlier:

  1. Recognition + audio prompts for information: With near-eye displays, text information can now appear directly in the user’s view, making it instantly readable. What used to take minutes to listen to can now be grasped in seconds. Additionally, AI could automatically generate memos that float in your field of view, ready to be checked at any time (ideally disappearing after a short period).

Translation functionality also becomes more convenient for the wearer. While it’s not perfect (you can’t guarantee the other person is also wearing similar glasses), the vision of widespread AR adoption is precisely what the industry is striving for, right? 😎

  2. Photography: A simple viewfinder on the side could let users see what they’re capturing. This provides guidance and resolves the issue of blindly taking photos or videos.

This type of product doesn’t have to stick to the traditional shape of ordinary glasses. Monochrome waveguides could easily handle the basic functionality of Baidu’s AI glasses. Moreover, combining them with traditional optical systems (such as BB/BM/BP geometrical optics) could open up entirely new scenarios—like virtual companions (imagine a virtual Xiao Zhan accompanying you to watch a movie) or interactive training (a virtual tutor practicing a foreign language with you face-to-face). These are scenarios that display-limited waveguides struggle to achieve effectively.

AI Powers AR But Can’t Solve All Optical Challenges

While AI’s capabilities enhance the potential of AR glasses, they can’t unify the variety of optical solutions in AR glasses. For instance, AI cannot improve the display quality of certain optical designs, like waveguides. However, it can add more functionality to existing AR products:

  • For waveguide-based glasses, AI could resolve the lack of compelling use cases, turning them into more practical tools.
  • For BB-style large-screen AR glasses, AI might not only enrich their features but also address their current dilemma: difficulty justifying a high price tag (it’s almost like selling at a loss just to gain attention).

Additionally, this combination might spur the development of entirely new optical systems, potentially leading to innovative product categories.

Here’s an old concept model from 2018 (apologies for the rough design). 👀

From this image, you can see how this type of product fundamentally differs from today’s large-screen AR glasses. The latter, positioned as “portable large screens,” are more akin to plug-and-play ‘glasses-shaped monitors.’ In contrast, AI + AR glasses would emphasize the practicality and usability of the app ecosystem. These two types of devices have completely different design and development philosophies.

This is also why current waveguide + microLED glasses haven’t gained widespread acceptance. Most of them are simply following the design philosophy of large-screen glasses, stacking hardware to achieve near-eye displays without thoroughly refining the app ecosystem. Some even fail to deliver decent hardware performance.

The Path Forward: AI Glasses Transitioning to AI + AR Glasses

Looking ahead, we can predict that companies making AI glasses today will face mixed market feedback:

  • Those that entered the space blindly, without understanding the product’s core value, will likely abandon it altogether.
  • Companies serious about developing a viable product will eventually incorporate display functionality, transitioning to AI + AR glasses.

Blindly following trends is meaningless and often leads to dead ends. But for those willing to innovate, AI + AR is the natural evolution of AI glasses.


That brings us back to the question: Why have Ray-Ban’s smart glasses sold so well?

Ray-Ban AI Glasses

In my opinion, the success of Ray-Ban’s smart glasses lies in a pragmatic commercial strategy. Let’s break it down:

  1. The strong brand appeal of Ray-Ban: Ray-Ban is a well-established mid-to-high-end eyewear brand with strong recognition in the consumer market, especially in the United States.
  2. Extensive offline retail channels: AI glasses are hardware products and a new category, which makes them hard to sell online alone. Ray-Ban’s robust offline retail network allows users to try the glasses in-store, significantly increasing the likelihood of a purchase.
  3. Reasonable pricing: The price of Meta’s smart glasses is comparable to that of regular Ray-Ban sunglasses. For consumers who were already planning to spend this much on sunglasses, adding a few trendy features makes it an easy upgrade.
  4. Practical applications for certain users: Some users genuinely benefit from first-person video recording, such as livestreamers who wear the glasses for hands-free filming or visually impaired individuals who use apps like Be My Eyes.
  5. Most importantly: Even if users stop using the AI or camera features after a month or two, the glasses remain a stylish and functional product. They are something you can confidently wear out in public—or even enjoy wearing daily—purely as eyewear.

In summary, the success of Meta’s Ray-Ban smart glasses has little, if anything, to do with AI or AR. It may not even have much to do with their functionality. Instead, it’s a combination of brand strength and a well-thought-out product positioning. It’s also worth noting that Meta only achieved this after two product iterations; the first-generation Ray-Ban Stories had lackluster sales.

Be My Eyes app, now available on Ray-Ban glasses

For example, the Be My Eyes app, now available on Ray-Ban glasses, allows visually impaired individuals to connect with a network of 8.1 million volunteers. These volunteers can use the glasses’ camera feed to view the wearer’s surroundings and provide instructions via audio.

Lessons for AI Glasses in the Chinese Market

It’s clear that some Chinese companies are trying to replicate this model by partnering with eyewear brands like Boshi or Bolon. However, this approach may not be enough, because the Chinese consumer market is vastly different from the U.S. market. How many people in China are willing to spend over a thousand yuan on a pair of sunglasses? Not many. Personally, I wouldn’t. 👀

If companies want to make the AI features compelling enough for consumers to buy, the next logical step is to transition to AI + AR.

_____

Meta’s Next Steps: Toward AI + AR Glasses

In my article “Meta AR Glasses Optics Breakdown: Where Did $10,000 Go?”, I mentioned rumors that Meta plans to release glasses with waveguide technology in 2025. The optical design is said to use 2D reflective (array) waveguides paired with LCoS projectors.

While this optical design is likely a transitional step, the evolution from “audio + cameras” to “audio + cameras + near-eye displays” is a sound and logical progression for AI glasses.

_____

A Final Note: The Risks of Blindly Following Trends

The consequences of blindly copying others are often dire. Take Apple’s Vision Pro as an example. When it was first released last year, I predicted it would fail (see my article “Vision Pro Is Not a Savior but Apple’s Cry for Help”).

The core question that every product must answer remains the same: What are people going to use this for?

Vision Pro’s biggest issue isn’t its hardware—it’s the severe lack of content. VR has always been heavily reliant on PC/console gaming ecosystems. Even with Vision Pro’s impressive hardware specifications, it’s essentially useless without content. For the companies that are still copying Vision Pro (I know of several), what’s the point if you don’t have a robust content ecosystem? 👀

12 Comments
2024/11/27
18:12 UTC
