One of the main challenges in AI agent design is that large language models are stateless and have limited context windows. This demands careful engineering to keep sequential LLM interactions consistent and reliable. To perform well, agents need efficient systems for storing and retrieving short-term conversation data, summaries, and long-term information.
Redis is a popular open-source, in-memory data store, commonly used for high-performance caching, analytics, and message brokering. Recent additions such as vector search and semantic caching have expanded Redis's role in the agentic application stack.
Andrew Brookins, a Principal Applied AI Engineer at Redis, joins Sean Falconer to discuss the challenges of building AI agents, the role of memory, hybrid search versus vector-only search, the concept of world models, and more.
This episode is sponsored by Redis.

Sean has a background as an academic, startup founder, and Google engineer. He has published works on topics ranging from AI to quantum computing. Currently, Sean is an AI Entrepreneur in Residence at Confluent, where he focuses on AI strategy and thought leadership. You can connect with Sean on LinkedIn.
Click here to read the transcript of this episode.
For sponsorship inquiries, contact: [email protected]