Modal: SOTA in long-term multimodal AI memory

We’ve been working on a new multimodal memory system called Modal, designed to serve as a personalization layer for AI models. It handles multimodal ingestion and retrieval (text, images, audio, video), can be queried in real time, and currently achieves state-of-the-art (SOTA) results on the LoCoMo personalization benchmark, outperforming long-context in-context learning (ICL) even with a 29k-token context.
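To make the ingest-and-query shape concrete, here is a minimal, purely illustrative sketch of a multimodal memory store. Everything in it (the `MemoryStore` class, `ingest`/`query` names, keyword matching) is hypothetical and invented for illustration; it is not the Modal API, and a real system would retrieve by embedding similarity rather than substring match.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only -- not the actual Modal API.

@dataclass
class MemoryItem:
    modality: str              # "text", "image", "audio", or "video"
    content: str               # raw text, or a reference/URI for non-text media
    tags: set = field(default_factory=set)

class MemoryStore:
    """Toy in-memory store showing the shape of multimodal ingest + query."""

    def __init__(self):
        self.items = []

    def ingest(self, modality, content, tags=()):
        self.items.append(MemoryItem(modality, content, set(tags)))

    def query(self, keyword, modalities=None):
        # A production system would rank by embedding similarity; plain
        # keyword matching keeps this sketch self-contained and runnable.
        keyword = keyword.lower()
        return [
            item for item in self.items
            if (modalities is None or item.modality in modalities)
            and (keyword in item.content.lower()
                 or any(keyword in t.lower() for t in item.tags))
        ]

store = MemoryStore()
store.ingest("text", "User prefers dark mode", tags={"preferences"})
store.ingest("image", "photo_2024_trip.jpg", tags={"travel", "vacation"})
hits = store.query("prefer")
```

The point of the sketch is the interface: memories of any modality land in one store, and a single query surfaces them for the model at inference time.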

Early access API is open: https://www.elicitlabs.ai/early-access
Benchmark details: https://elicitlabs.ai/blog/memory
