Whenever I write about Meta’s Ray-Ban smart glasses, the same predictable comments appear: “Cool hardware, but hard pass on anything Meta makes; I’ll wait for someone else to come along.” That sentiment is unlikely to change anytime soon, especially after The New York Times reported that Meta considered launching facial recognition software during a politically dynamic time, possibly to avoid scrutiny from privacy advocates.
Smart glasses proponents argue these fears are overblown. Phones have cameras too, governments already use facial recognition, and CCTV is everywhere. Anyone who has watched a true-crime documentary or an episode of Law & Order knows that public surveillance is common; the recent Guthrie case, in which law enforcement accessed Nest Doorbell footage, underscores the point. What makes smart glasses worrisome is that their cameras are small, their privacy LEDs are faint, and the overall design is inconspicuous. That invisibility is intentional: the glasses are meant to look like normal eyewear.
This creates a dilemma: Meta’s glasses are excellent precisely because of their discreet design, and that same design makes them unsettling monitoring tools. Wearing modern smart glasses can feel like spying, privacy indicator light or not. In public, people usually don’t notice them. Even so, wearing them feels uncomfortable, and seeing someone else in a pair can be uneasy too. Meta asserts the glasses can’t record if the light is tampered with, but reports, such as one from 404 Media, show that a $60 modification can disable the light entirely. Anecdotally, my spouse’s privacy light once malfunctioned while recording worked just fine.
This would be troubling even without Meta’s involvement. Reflecting on Meta’s history, from the Cambridge Analytica scandal to Mark Zuckerberg’s interactions with Donald Trump to the smart glasses privacy policy changes made to expand AI training, only deepens the concern. Zuckerberg’s past comments about Facebook users, and his suggestion that people who opt out of smart glasses will be at a cognitive disadvantage, don’t inspire confidence. The prospect of Meta exploiting the political climate to introduce facial recognition is alarming.
Unsurprisingly, then, Meta reportedly explored letting the glasses identify people who have a public account on a Meta site such as Instagram. It’s a feature some have asked for, whether to assist blind and low-vision users or to help people remember names in social situations, but deploying it universally opens a Pandora’s box.
In my Meta Ray-Ban Display review, I extensively discussed privacy. Smart glasses makers haven’t resolved the glasshole issue that led to Google Glass’s downfall.
Handing powerful tools to people who may use them irresponsibly and merely asking for good behavior isn’t sufficient, yet that is largely what Meta’s smart glasses privacy policy boils down to. There are already reports of “manfluencers” using the glasses to record women without their consent, and Meta hasn’t pushed back forcefully. In response to reports of misuse, the company has pointed to its terms of service and the LED light and urged people to use the glasses safely. Even when two college students used the glasses to dox strangers, a Meta official cited the LED light as a deterrent.
In a recent column, I noted that there’s no consensus on what to even call this tech, with suggestions ranging from spy glasses to e-waste and more. Some of the imagery is extreme, such as a GIF of glasses being smashed with a hammer. Many people claim they’d confront anyone wearing them, even though most wouldn’t notice the glasses in the first place. Even so, one woman was widely praised for snapping an influencer’s Ray-Ban Meta glasses.
Smart glasses aren’t inherently bad. Conversations with blind and low-vision users have shown me how much positive change Meta’s glasses have brought to their lives. Other accessibility advocates are excited about the potential benefits for people who are deaf or hard of hearing and those with limb differences.
Yet trust in Meta is in short supply. Some bristled at Meta framing facial recognition as an accessibility feature in the NYT report. Fans of Supernatural, the VR game Meta discontinued, argued that the company abandoned its users, including veterans and people with limited mobility who relied on it for fitness.
Smart glasses currently sit on a precarious edge, and Meta’s tarnished privacy image is a major barrier to its ambitions. Plenty of people will trade privacy for convenience, but public perception still matters. Oura’s collaboration with Palantir drew backlash and forced CEO Tom Hale to publicly address data privacy concerns. Ring and Amazon similarly retracted a controversial video doorbell feature after criticism. If Meta were wise, it would overhaul its policies to put consumer privacy first.
Google Glass failed for multiple reasons: bizarre design, high cost, and the behavior of its users. Consumers have consistently rejected anything that feels like surveillance, sometimes going so far as to destroy the devices. Meta has ushered in a new wave of smart glasses and made some right moves, but its reputation follows it everywhere, especially as other major companies enter the market. Without trust, smart glasses may end up back where they started: as science fiction.
