Memories AI is building the visual memory layer for wearables and robotics

Shawn Shen argues that AI must be able to remember what it sees to thrive in the physical world. His company, Memories.ai, is using Nvidia AI tools to build infrastructure that lets wearables and robotics recall visual memories.

Memories.ai announced a collaboration with Nvidia at the chipmaker's GTC conference. The company is using Nvidia's Cosmos-Reason 2 model and Nvidia Metropolis to develop its visual memory technology.

Shen and CTO Ben Zhou came up with the idea for the company while working on AI for Meta's Ray-Ban glasses, where they realized how important it was for the device to recall the video data it recorded.

After finding no existing solutions for visual memory in AI, they left Meta to build one themselves. Shen argues that AI wearables and robotics must have visual memories to succeed in the physical world.

AI memory is a young field: OpenAI's ChatGPT, xAI's Grok, and Google Gemini have all introduced text-based memory, but Shen points to a gap in visual memory, which is crucial for AI that interacts with the physical world.

Memories.ai, founded in 2024, has raised $16 million, with Susa Ventures leading the investment. Shen notes that building a visual memory layer requires embedding and indexing video data effectively, as well as acquiring training data.
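Memories.ai's actual models and infrastructure are not public, but the general pattern behind a visual memory layer can be sketched: embed each observation (here, a stand-in text description of a video frame) into a vector, index those vectors, and retrieve the nearest memory for a query. Everything below — the toy `embed()` function and the `VisualMemoryIndex` class — is hypothetical illustration, not the company's system.

```python
import math

def embed(frame_description: str, dim: int = 8) -> list[float]:
    """Toy stand-in for a visual embedding model: buckets words into a
    fixed-size vector by character sum, then L2-normalizes. A real system
    would use a learned video/image encoder instead."""
    vec = [0.0] * dim
    for word in frame_description.lower().split():
        vec[sum(ord(c) for c in word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class VisualMemoryIndex:
    """Minimal in-memory vector index: store embeddings, retrieve the
    entry with the highest cosine similarity to a query."""
    def __init__(self) -> None:
        self.entries: list[tuple[str, list[float]]] = []

    def add(self, frame_description: str) -> None:
        self.entries.append((frame_description, embed(frame_description)))

    def query(self, text: str) -> str:
        q = embed(text)
        best = max(self.entries, key=lambda e: sum(a * b for a, b in zip(q, e[1])))
        return best[0]

index = VisualMemoryIndex()
index.add("user places keys on the kitchen counter")
index.add("dog runs across the park lawn")
print(index.query("where are my keys"))
# → user places keys on the kitchen counter
```

The point of the sketch is the division of labor Shen describes: the embedding step turns raw video into searchable vectors, while the index makes those memories cheap to recall later.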

Its large visual memory model (LVMM), launched in July 2025, is described as akin to a smaller version of Gemini Embedding 2. To collect data, the company built LUCI, a recording device used by its data collectors; Shen emphasizes that Memories.ai doesn't aim to sell hardware but needed specialized recording tools.

A second generation of the LVMM is on the way, and a partnership with Qualcomm will soon integrate the technology into Qualcomm's processors. Shen mentioned ongoing collaborations with major wearable companies and sees bigger opportunities ahead in wearables and robotics, though the company's focus remains on model and infrastructure development.
