LangChain memory primitives for long-horizon agent workflows.
Memory building blocks for AI agents that let them remember important facts and context across long conversations.
LangMem is LangChain's native memory library for building long-horizon agent workflows that need to remember information across sessions. Unlike standalone memory products, LangMem is designed to integrate deeply with the LangGraph ecosystem, providing memory primitives that work as nodes in LangGraph state machines.
The core abstraction in LangMem is the memory manager — a component that processes conversation transcripts and extracts memories using configurable strategies. LangMem supports three memory formation approaches: extracting semantic memories (facts and preferences), forming episodic memories (event recollections), and creating procedural memories (learned instructions that modify the agent's system prompt). This three-type model is more theoretically grounded than the approaches of most memory tools, drawing on cognitive science research into human memory systems.
Semantic memories capture facts ('the user works at Company X', 'the user prefers Python over JavaScript'). Episodic memories capture events ('the user had a frustrating experience debugging auth last Tuesday'). Procedural memories are the most interesting — they modify the agent's behavior by adding or updating system prompt instructions based on learned patterns ('when the user asks about deployment, always check which cloud provider they use first').
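The distinction between the three types can be sketched as a simple record shape. This is an illustrative model only — the field names here are hypothetical, not LangMem's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Literal

@dataclass
class Memory:
    """Toy record modeling the three memory types LangMem distinguishes.

    Field names are hypothetical, for illustration only.
    """
    kind: Literal["semantic", "episodic", "procedural"]
    content: str
    created_at: datetime = field(default_factory=datetime.now)
    metadata: dict = field(default_factory=dict)

# A fact, an event with temporal/emotional context, and a learned rule.
fact = Memory("semantic", "User prefers Python over JavaScript")
event = Memory("episodic", "User hit a frustrating auth bug",
               metadata={"when": "last Tuesday", "valence": "negative"})
rule = Memory("procedural",
              "When the user asks about deployment, first check which cloud provider they use")
```

The key design point is that all three share one storage shape but differ in how they are formed and consumed: facts feed retrieval, episodes feed context, and rules feed the system prompt.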
LangMem integrates with LangGraph through memory functions that can be called as graph nodes. Memories are stored in a LangGraph store (persistent key-value storage) and retrieved based on namespace, semantic similarity, or explicit keys. This means memory operations participate in LangGraph's state management, checkpointing, and human-in-the-loop workflows.
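The namespace-plus-key retrieval model can be illustrated with a toy stand-in. This is not LangGraph's actual store implementation (the real store also supports semantic-similarity search over embedded values), just a sketch of the put/get/search-by-namespace behavior:

```python
class ToyStore:
    """Minimal stand-in for a namespaced key-value store (illustrative only)."""

    def __init__(self):
        self._items = {}

    def put(self, namespace: tuple, key: str, value: dict):
        self._items[(namespace, key)] = value

    def get(self, namespace: tuple, key: str):
        return self._items.get((namespace, key))

    def search(self, prefix: tuple):
        # Return every value whose namespace starts with the given prefix.
        return [v for (ns, _), v in self._items.items()
                if ns[: len(prefix)] == prefix]

store = ToyStore()
store.put(("users", "u1"), "prefs", {"language": "python"})
store.put(("users", "u1", "projects"), "atlas", {"stack": "kubernetes"})
```

Because tuples nest naturally, a prefix search over `("users", "u1")` returns both items above, while `("global",)` returns nothing.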
The library is open-source and relatively new, reflecting LangChain's evolving approach to memory. Earlier LangChain memory classes (ConversationBufferMemory, ConversationSummaryMemory) were simple but limited. LangMem represents a more sophisticated take, but it's still maturing. Documentation is sparse, APIs may change, and the examples are primarily focused on LangGraph integration.
LangMem is the right choice if you're already invested in the LangGraph ecosystem and want memory that's native to your graph architecture. If you're using a different framework or want a standalone memory service, Mem0 or Zep are more mature and framework-agnostic alternatives.
LangMem brings memory management directly into the LangGraph ecosystem as a library rather than a separate service. For LangGraph users, this tight integration is valuable — memory operations become graph nodes rather than external API calls. The semantic and episodic memory abstractions are well-designed. However, it's tightly coupled to LangGraph, limiting its usefulness for teams using other frameworks. Being newer, the community and documentation are still developing.
Extracts factual information and user preferences from conversations. Facts are stored as discrete memories with metadata and can be updated or superseded by newer information.
Use Case:
An agent that remembers a user's tech stack, communication preferences, and project context across multiple sessions.
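The supersede behavior is the important part: a semantic memory is keyed by what it is about, so a newer statement replaces the stale one rather than accumulating alongside it. A minimal sketch (illustrative, not LangMem's API):

```python
def remember_fact(facts: dict, subject: str, statement: str) -> dict:
    # A newer statement about the same subject supersedes the old one;
    # statements about new subjects are simply added.
    facts[subject] = statement
    return facts

facts = {}
remember_fact(facts, "employer", "User works at Company X")
remember_fact(facts, "preferred_language", "User prefers Python over JavaScript")
remember_fact(facts, "employer", "User now works at Company Y")
```

After the third call, only the Company Y fact survives under `employer` — the store holds two facts, not three.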
Captures event-based memories from conversations — what happened, when, and the user's reaction. Episodic memories include temporal context and emotional valence.
Use Case:
Remembering that a customer had a frustrating deployment failure last week and bringing up that context when they ask about deployment again.
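Temporal context is what makes episodic recall useful: the agent surfaces recent, relevant episodes rather than the whole history. A stdlib-only sketch of topic-plus-recency filtering (the `recall` helper is hypothetical, not a LangMem function):

```python
from datetime import datetime, timedelta

episodes = [
    {"event": "deployment to staging failed",
     "when": datetime.now() - timedelta(days=7), "valence": "negative"},
    {"event": "demo went smoothly",
     "when": datetime.now() - timedelta(days=30), "valence": "positive"},
]

def recall(episodes: list, topic: str, within_days: int = 14) -> list:
    # Surface recent episodes about a topic so the agent can
    # proactively bring up that context.
    cutoff = datetime.now() - timedelta(days=within_days)
    return [e for e in episodes if topic in e["event"] and e["when"] >= cutoff]

recent = recall(episodes, "deployment")
```

Here the week-old deployment failure is returned (with its negative valence attached), while the month-old demo falls outside the recency window.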
Extracts behavioral patterns from interactions and creates system prompt modifications. The agent literally learns how to behave better over time by updating its own instructions.
Use Case:
An agent that learns to always ask about the user's Python version when they report library errors, after discovering this is frequently the root cause.
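Mechanically, procedural memory amounts to folding learned rules back into the system prompt before each session. A hedged sketch of that step — LangMem performs the learning and the prompt update through its own tooling; this just illustrates the shape of the result:

```python
BASE_PROMPT = "You are a helpful coding assistant."

def apply_procedural_memories(base_prompt: str, learned_rules: list) -> str:
    # Fold learned behavioral rules into the system prompt. With no
    # rules learned yet, the prompt is unchanged.
    if not learned_rules:
        return base_prompt
    rules = "\n".join(f"- {rule}" for rule in learned_rules)
    return f"{base_prompt}\n\nLearned instructions:\n{rules}"

prompt = apply_procedural_memories(BASE_PROMPT, [
    "When the user reports a library error, ask for their Python version first.",
])
```

The agent's behavior changes without any fine-tuning: the next conversation simply starts from the augmented prompt.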
Memories are stored in LangGraph's persistent key-value store with namespace-based organization. Memory operations are LangGraph nodes that participate in graph state management and checkpointing.
Use Case:
Building a customer support graph where memory retrieval and update are explicit nodes that can be modified, monitored, and replayed.
Choose between different memory formation strategies: background processing (asynchronous extraction after conversations), inline processing (real-time extraction during conversations), or batch processing (periodic extraction from accumulated transcripts).
Use Case:
Using background processing for a high-throughput chatbot where memory extraction latency would hurt user experience.
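The background strategy is a classic producer-consumer pattern: the chat loop enqueues finished transcripts and returns immediately, while a worker extracts memories off the hot path. A stdlib sketch of that pattern (the extraction step is a stand-in for an LLM call):

```python
import queue
import threading

transcripts: "queue.Queue[str | None]" = queue.Queue()
extracted = []

def extractor():
    # Background worker: pulls finished transcripts and extracts memories
    # asynchronously, so chat responses never block on extraction.
    while True:
        transcript = transcripts.get()
        if transcript is None:  # sentinel: shut down
            break
        # Stand-in for an LLM extraction call.
        extracted.append({"fact": transcript.upper()})
        transcripts.task_done()

worker = threading.Thread(target=extractor, daemon=True)
worker.start()
transcripts.put("user prefers dark mode")  # enqueued by the chat loop
transcripts.put(None)
worker.join()
```

The inline and batch strategies differ only in when the extraction call runs: during the turn, or periodically over accumulated transcripts.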
Memories are organized in hierarchical namespaces (e.g., user/preferences, user/projects, global/procedures). Retrieval can scope to specific namespaces for precise context loading.
Use Case:
Retrieving only project-related memories when the user asks about a specific project, without loading unrelated personal preferences.
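Scoped retrieval is a prefix filter over the namespace hierarchy. A toy illustration using path-style keys (the real store uses namespace tuples; the `scoped` helper is hypothetical):

```python
memories = {
    "user/preferences/editor": "Uses VS Code",
    "user/projects/atlas": "Atlas ships on Kubernetes",
    "user/projects/beacon": "Beacon is a legacy Django app",
    "global/procedures/deploys": "Always confirm the cloud provider first",
}

def scoped(memories: dict, prefix: str) -> dict:
    # Load only memories under one namespace, keeping unrelated
    # memories out of the context window.
    return {k: v for k, v in memories.items() if k.startswith(prefix + "/")}

project_context = scoped(memories, "user/projects")
```

A question about a specific project then loads only the two project entries — editor preferences and global procedures stay out of the prompt.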
AI Agent Builders
Open-source Python framework that orchestrates autonomous AI agents collaborating as teams to accomplish complex workflows. Define agents with specific roles and goals, then organize them into crews that execute sequential or parallel tasks. Agents delegate work, share context, and complete multi-step processes like market research, content creation, and data analysis. Supports 100+ LLM providers through LiteLLM integration and includes memory systems for agent learning. Has 48K+ GitHub stars and an active community.
Multi-Agent Builders
Microsoft's open-source framework for building multi-agent AI systems with asynchronous, event-driven architecture.
AI Agent Builders
Graph-based workflow orchestration framework for building reliable, production-ready AI agents with deterministic state machines, human-in-the-loop capabilities, and comprehensive observability through LangSmith integration.
AI Agent Builders
SDK for building AI agents with planners, memory, and connectors.