Why we invested in Memories.ai, pioneering the world’s first Large Visual Memory Model
Enterprise video analysis is fundamentally broken. Organizations generate petabytes of visual data but can analyze only fragments in real time before that context disappears forever. Security teams can’t track suspicious behavior across weeks, content creators struggle to find relevant footage in vast archives, and marketing teams miss emerging trends buried in social video streams.
Memories.ai represents a paradigm shift from reactive video analysis to persistent visual intelligence. Their Large Visual Memory Model (LVMM) builds cumulative understanding that compounds over time. Where competitors offer expensive, narrow-window analysis, Memories.ai delivers cost-effective, unlimited-context intelligence that gets smarter with every frame.
The technical breakthrough centers on a memory consolidation architecture that reduces clips to key visual signatures while preserving contextual relationships. This delivers 100x more video memory capacity than existing solutions while maintaining real-time performance. Crucially, the architecture enables efficient on-device processing, ensuring sensitive visual data never leaves the user’s control while reducing latency and bandwidth costs. The 1 million hours of video already indexed demonstrate both technical scalability and genuine market pull.
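To make the consolidation idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the VisualMemory class, its observe and query methods, and the novelty threshold are illustrative names under our own assumptions, not Memories.ai’s actual LVMM implementation. The sketch keeps a frame embedding only when it differs enough from the last stored signature, so long stretches of redundant footage collapse into a few time-stamped signatures that can still be searched later.

```python
import numpy as np

class VisualMemory:
    """Toy consolidation store: keeps only 'novel' frame signatures,
    linked in temporal order. Purely illustrative, not Memories.ai's API."""

    def __init__(self, novelty_threshold: float = 0.9):
        self.novelty_threshold = novelty_threshold  # cosine-similarity cutoff
        self.signatures: list[np.ndarray] = []      # consolidated embeddings
        self.timestamps: list[float] = []           # temporal links

    def observe(self, frame_embedding: np.ndarray, t: float) -> bool:
        """Store the embedding only if it differs enough from the most
        recent signature; returns True if a new signature was stored."""
        emb = frame_embedding / np.linalg.norm(frame_embedding)
        if self.signatures:
            sim = float(self.signatures[-1] @ emb)
            if sim >= self.novelty_threshold:  # near-duplicate: consolidate away
                return False
        self.signatures.append(emb)
        self.timestamps.append(t)
        return True

    def query(self, query_embedding: np.ndarray, k: int = 3):
        """Return the k most similar stored signatures with their timestamps."""
        q = query_embedding / np.linalg.norm(query_embedding)
        sims = np.array([s @ q for s in self.signatures])
        top = np.argsort(sims)[::-1][:k]
        return [(self.timestamps[i], float(sims[i])) for i in top]

# Usage: a static scene compresses to one signature; a scene change adds one.
memory = VisualMemory()
rng = np.random.default_rng(0)
scene = rng.normal(size=512)
for t in range(10):                          # ten near-identical frames
    memory.observe(scene + 0.01 * rng.normal(size=512), float(t))
memory.observe(rng.normal(size=512), 10.0)   # abrupt scene change
print(len(memory.signatures))                # 2: far fewer signatures than frames
```

In this toy version, comparing each frame only against the most recent signature keeps consolidation at constant cost per frame, which is the kind of property that makes on-device, real-time operation plausible.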
We joined Memories.ai’s $8 million seed round alongside Susa Ventures, Fusion Fund, Crane Venture Partners, Seedcamp, and Creator Ventures because this technology directly enables a multi-device AI strategy. Visual memory creates powerful synergies across an ecosystem of devices both in your home and in your pocket.
The founding team’s Meta Reality Labs background provides unique insight into computer vision, memory systems, and real-world deployment challenges. Dr. Shawn Shen’s research in temporal computing and Enmin (Ben) Zhou’s experience with foundational models on consumer hardware offer exactly the right foundation for scaling visual memory to billions of devices.
What excites us most is the platform effect. Visual memory becomes more valuable as it processes more content, creating natural competitive moats. Early partnerships with companies like Aosu, PixVerse, and Viggle demonstrate broad appeal across industries handling significant video workflows.
Visual memory addresses a critical gap in multimodal AI development. While language models excel at text reasoning, they struggle with temporal visual understanding. Memories.ai’s LVMM creates the visual cognition layer needed for truly intelligent AI assistants that understand and learn from the visual world over time.
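As a hedged illustration of that cognition layer, the sketch below assumes a hypothetical embed_text encoder that maps natural-language questions into the same embedding space as stored frame signatures; an assistant can then answer temporal questions by retrieving time-stamped visual memories instead of re-processing raw video. None of these names come from Memories.ai’s API.

```python
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Hypothetical stand-in for a multimodal text encoder that shares
    an embedding space with frame signatures (deterministic per string)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

# Time-stamped visual memories (in practice, output of the consolidation store).
memories = [(t, embed_text(desc)) for t, desc in
            [(12.0, "a delivery van stops at the gate"),
             (340.5, "a person climbs the fence"),
             (901.2, "headlights sweep the driveway")]]

def answer_when(question: str) -> float:
    """Return the timestamp of the visual memory most similar to the question."""
    q = embed_text(question)
    return max(memories, key=lambda m: float(m[1] @ q))[0]

print(answer_when("a person climbs the fence"))  # -> 340.5
```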
We’re backing Memories.ai because they’re solving the right problem at the right time with the right approach. As video becomes the dominant data format and AI moves from narrow tasks to broad understanding, visual memory will be as fundamental as databases are today.
Sam Campbell is an investor at Samsung Next. Samsung Next’s investment strategy is limited to its own views and does not reflect the vision or strategy of any other Samsung business unit, including, but not limited to, Samsung Electronics.