LangChain memory primitives for long-horizon agent workflows.
Memory building blocks for AI agents — letting your AI remember important facts and context across long conversations.
LangMem is LangChain's native memory library for building long-horizon agent workflows that need to remember information across sessions. Unlike standalone memory products, LangMem is designed to integrate deeply with the LangGraph ecosystem, providing memory primitives that work as nodes in LangGraph state machines.
The core abstraction in LangMem is the memory manager — a component that processes conversation transcripts and extracts memories using configurable strategies. LangMem supports three memory formation approaches: extracting semantic memories (facts and preferences), forming episodic memories (event recollections), and creating procedural memories (learned instructions that modify the agent's system prompt). This three-type model is more theoretically grounded than the approach of most memory tools, drawing on cognitive science research on human memory systems.
Semantic memories capture facts ('the user works at Company X', 'the user prefers Python over JavaScript'). Episodic memories capture events ('the user had a frustrating experience debugging auth last Tuesday'). Procedural memories are the most interesting — they modify the agent's behavior by adding or updating system prompt instructions based on learned patterns ('when the user asks about deployment, always check which cloud provider they use first').
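The three memory types can be pictured as different record shapes. The sketch below is illustrative only — the class and field names are assumptions for this example, not LangMem's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative record shapes for the three memory types.
# These dataclasses are assumptions for this sketch, not LangMem's API.

@dataclass
class SemanticMemory:      # a fact or preference
    content: str

@dataclass
class EpisodicMemory:      # an event, with temporal and emotional context
    content: str
    occurred_on: date
    valence: str           # e.g. "frustrating", "positive"

@dataclass
class ProceduralMemory:    # a learned instruction merged into the system prompt
    instruction: str

memories = [
    SemanticMemory("The user prefers Python over JavaScript."),
    EpisodicMemory("Debugging auth was frustrating.", date(2025, 1, 7), "frustrating"),
    ProceduralMemory("When the user asks about deployment, ask which cloud provider they use first."),
]
```

The key distinction: semantic records answer "what is true," episodic records answer "what happened," and procedural records change how the agent behaves next time.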
LangMem integrates with LangGraph through memory functions that can be called as graph nodes. Memories are stored in a LangGraph store (persistent key-value storage) and retrieved based on namespace, semantic similarity, or explicit keys. This means memory operations participate in LangGraph's state management, checkpointing, and human-in-the-loop workflows.
The library is open-source and relatively new, reflecting LangChain's evolving approach to memory. Earlier LangChain memory classes (ConversationBufferMemory, ConversationSummaryMemory) were simple but limited. LangMem represents a more sophisticated take, but it's still maturing. Documentation is sparse, APIs may change, and the examples are primarily focused on LangGraph integration.
LangMem is the right choice if you're already invested in the LangGraph ecosystem and want memory that's native to your graph architecture. If you're using a different framework or want a standalone memory service, Mem0 or Zep are more mature and framework-agnostic alternatives.
LangMem brings memory management directly into the LangGraph ecosystem as a library rather than a separate service. For LangGraph users, this tight integration is valuable — memory operations become graph nodes rather than external API calls. The semantic and episodic memory abstractions are well-designed. However, LangMem is tightly coupled to LangGraph, limiting its usefulness for teams using other frameworks. And because the library is newer, its community and documentation are still developing.
Extracts factual information and user preferences from conversations. Facts are stored as discrete memories with metadata and can be updated or superseded by newer information.
Use Case:
An agent that remembers a user's tech stack, communication preferences, and project context across multiple sessions.
Captures event-based memories from conversations — what happened, when, and the user's reaction. Episodic memories include temporal context and emotional valence.
Use Case:
Remembering that a customer had a frustrating deployment failure last week and bringing up that context when they ask about deployment again.
Extracts behavioral patterns from interactions and creates system prompt modifications. The agent literally learns how to behave better over time by updating its own instructions.
Use Case:
An agent that learns to always ask about the user's Python version when they report library errors, after discovering this is frequently the root cause.
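Mechanically, procedural memory amounts to merging learned instructions into the system prompt before each turn. The sketch below shows that merge step only; how instructions are discovered (the pattern-detection side) is left out, and the formatting convention is an assumption:

```python
BASE_PROMPT = "You are a helpful coding assistant."

def apply_procedural_memories(base_prompt: str, instructions: list[str]) -> str:
    """Append learned instructions to the system prompt so they shape every future turn."""
    if not instructions:
        return base_prompt
    rules = "\n".join(f"- {i}" for i in instructions)
    return f"{base_prompt}\n\nLearned instructions:\n{rules}"

learned = ["When the user reports a library error, ask for their Python version first."]
print(apply_procedural_memories(BASE_PROMPT, learned))
```

Because the prompt itself changes, procedural memories affect every subsequent interaction — which is also why they typically warrant review or monitoring before being applied.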
Memories are stored in LangGraph's persistent key-value store with namespace-based organization. Memory operations are LangGraph nodes that participate in graph state management and checkpointing.
Use Case:
Building a customer support graph where memory retrieval and update are explicit nodes that can be modified, monitored, and replayed.
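To see what "memory operations as explicit nodes" means in practice, here is a toy pipeline in the shape of LangGraph nodes — plain functions over shared state, run in sequence. The node names and state keys are assumptions for this sketch, not LangMem's API:

```python
# Toy pipeline in which memory retrieval and update are explicit,
# inspectable steps, mimicking the shape of graph nodes.

def retrieve_memories(state: dict) -> dict:
    """Load prior memories for this user before answering."""
    state["memories"] = state["store"].get(state["user_id"], [])
    return state

def respond(state: dict) -> dict:
    """Answer using whatever context retrieval produced."""
    context = "; ".join(state["memories"]) or "no prior context"
    state["reply"] = f"(answer informed by: {context})"
    return state

def update_memories(state: dict) -> dict:
    """Persist what was learned from this turn."""
    state["store"].setdefault(state["user_id"], []).append(state["message"])
    return state

# Run the nodes in order, as a graph runtime would.
state = {"store": {}, "user_id": "u1", "message": "I deploy on AWS."}
for node in (retrieve_memories, respond, update_memories):
    state = node(state)
print(state["store"]["u1"])  # → ['I deploy on AWS.']
```

Because each memory step is its own node, it can be monitored, modified, or replayed independently — which is the point of the integration.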
Choose between different memory formation strategies: background processing (asynchronous extraction after conversations), inline processing (real-time extraction during conversations), or batch processing (periodic extraction from accumulated transcripts).
Use Case:
Using background processing for a high-throughput chatbot where memory extraction latency would hurt user experience.
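The trade-off behind the background strategy is easy to show: the chat path only enqueues the transcript, and extraction happens later, off the latency-critical path. In this sketch `extract_facts` is a hypothetical stand-in for a real LLM-backed extractor:

```python
from collections import deque

pending = deque()            # transcripts awaiting extraction
memory_store: list[str] = []

def handle_turn(transcript: str) -> str:
    """The hot path: enqueue only, no extraction in-line."""
    pending.append(transcript)
    return "reply sent immediately"

def extract_facts(transcript: str) -> list[str]:
    # Hypothetical stand-in: a real extractor would call an LLM here.
    return [s.strip() for s in transcript.split(".") if "prefers" in s]

def run_background_pass() -> None:
    """Run later (e.g. on a worker), draining the queue."""
    while pending:
        memory_store.extend(extract_facts(pending.popleft()))

handle_turn("The user prefers dark mode. Thanks for the help.")
run_background_pass()
print(memory_store)  # → ['The user prefers dark mode']
```

Inline processing would call the extractor inside `handle_turn` (fresher memories, higher latency); batch processing would let `pending` accumulate and drain it on a schedule.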
Memories are organized in hierarchical namespaces (e.g., user/preferences, user/projects, global/procedures). Retrieval can scope to specific namespaces for precise context loading.
Use Case:
Retrieving only project-related memories when the user asks about a specific project, without loading unrelated personal preferences.
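Namespace scoping boils down to prefix matching over hierarchical paths. The tuple-path convention below mirrors the shape of LangGraph-style stores, but the store contents and `search` helper are illustrative assumptions:

```python
# Memories organized under hierarchical namespace paths.
store = {
    ("user", "preferences"): ["prefers concise answers"],
    ("user", "projects", "billing-api"): ["uses FastAPI", "deploys to GCP"],
    ("user", "projects", "mobile-app"): ["uses Kotlin"],
    ("global", "procedures"): ["always confirm destructive actions"],
}

def search(namespace_prefix: tuple) -> list[str]:
    """Return all memories whose namespace starts with the given prefix."""
    n = len(namespace_prefix)
    return [m for ns, items in store.items()
            if ns[:n] == namespace_prefix
            for m in items]

print(search(("user", "projects", "billing-api")))  # → ['uses FastAPI', 'deploys to GCP']
```

A broader prefix like `("user",)` loads everything about the user, while a deep prefix loads only one project's context — which is how retrieval stays precise.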
Pricing: Free forever.
LangMem is a good fit for:
LangGraph-based agent systems that need persistent memory integrated directly into the graph state machine
Applications that benefit from procedural memory — agents that learn and improve their behavior based on interaction patterns
Multi-session agents built on LangGraph that need to maintain user context, preferences, and history across conversations
Teams already invested in the LangChain/LangGraph ecosystem who want native memory without external service dependencies
How is LangMem different from LangChain's older memory classes?
LangChain's older memory (ConversationBufferMemory, etc.) was simple session-level context management. LangMem is a full memory formation system with extraction, classification, and cross-session persistence. It's designed for LangGraph and supports semantic, episodic, and procedural memory types.
Can LangMem be used without LangGraph?
Technically, the memory extraction functions can be used standalone, but the storage and retrieval system is designed around LangGraph's store. Without LangGraph, you lose the native integration benefits and would need to provide your own storage backend.
How does LangMem compare to Mem0?
Mem0 is a standalone memory service with its own storage and API. LangMem is a library that integrates with LangGraph's architecture. Mem0 is more mature and framework-agnostic; LangMem is the better fit if you're building with LangGraph and want memory as a native part of your graph.
Is LangMem production-ready?
It's usable but still maturing. APIs may change between versions, documentation is evolving, and production case studies are limited. It works in production LangGraph applications, but plan for potential migration effort as the library stabilizes.
See how LangMem compares to CrewAI and other alternatives
AI Agent Builders
CrewAI is an open-source Python framework for orchestrating autonomous AI agents that collaborate as a team to accomplish complex tasks. You define agents with specific roles, goals, and tools, then organize them into crews with defined workflows. Agents can delegate work to each other, share context, and execute multi-step processes like market research, content creation, or data analysis. CrewAI supports sequential and parallel task execution, integrates with popular LLMs, and provides memory systems for agent learning. It's one of the most popular multi-agent frameworks with a large community and extensive documentation.
Open-source multi-agent framework from Microsoft Research with asynchronous architecture, AutoGen Studio GUI, and OpenTelemetry observability. Now part of the unified Microsoft Agent Framework alongside Semantic Kernel.
Graph-based stateful orchestration runtime for agent loops.
SDK for building AI agents with planners, memory, and connectors.