Open-source observability platform for AI agents with trace capture, step-restart debugging, browser session recording, and natural language pattern detection. Self-host free or use managed cloud from $30/month.
Open-source monitoring for AI agents. Trace every step, debug failures by restarting from any point, record browser sessions, and catch problems with natural language pattern matching.
Laminar is an open-source observability tool built specifically for AI agents. If you're running agents that chain LLM calls with tool use, retrieval, and browser interactions, Laminar captures every step so you can figure out why things broke.
The setup is minimal. Add two lines of code (import and init), and Laminar auto-instruments LangChain, LlamaIndex, CrewAI, OpenAI, Anthropic's Claude Agent SDK, AI SDK, and LiteLLM. Every LLM call, tool invocation, and retrieval operation gets traced with inputs, outputs, token counts, latency, and cost. No manual span creation needed for supported frameworks.
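To make that concrete, here is a minimal sketch of the two-line setup using the `lmnr` Python SDK with OpenAI as the instrumented client. The `initialize` call and the `LMNR_PROJECT_API_KEY` environment variable follow the SDK's documented usage as of this writing, but treat exact names as assumptions if your version differs:

```python
import os

from lmnr import Laminar
from openai import OpenAI

# Line 1 is the import above; line 2 is the init below. Once initialized,
# supported libraries (OpenAI, Anthropic, LangChain, ...) are patched
# automatically, so no manual span creation is needed.
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is traced automatically: inputs, outputs, token counts,
# latency, and cost all land in the Laminar dashboard.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize why tracing matters."}],
)
print(response.choices[0].message.content)
```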
The standout feature is the Agent Debugger. When an agent fails 40 minutes into a complex task, you don't have to rerun everything from scratch. The debugger lets you restart from any specific step with full context: LLM calls replay from cached responses, and external state (browser sessions, sandboxes) gets restored. For agents that run long or fail in hard-to-reproduce ways, this saves hours of debugging time.
Signals is the other feature worth highlighting. Describe a pattern in plain English ("agent retried the same action more than 3 times" or "user expressed frustration") and Laminar automatically finds matching instances across your production traces. No custom queries or log parsing required. It runs continuously against new traces too.
For browser agent developers, Laminar captures screen recordings and syncs them to trace timelines. You can watch exactly what your agent saw and did at each step, with integrations for Browser Use, Stagehand, Playwright, and Browserbase.
Pricing is transparent. Self-host everything for free via Docker with no feature restrictions. The managed cloud starts with a free tier (1 GB data, 100 signal runs, 15-day retention, 1 project). The Hobby plan at $30/month includes 3 GB data and 1,000 signal runs with 30-day retention. Pro at $150/month gives 10 GB and 10,000 signal runs with 90-day retention. Overage charges are $2/GB on Hobby and $1.50/GB on Pro, so a Hobby team ingesting 5 GB in a month would pay $30 plus $4 in overage. Enterprise pricing is custom with on-premise deployment.
Laminar is Y Combinator backed (S24 batch) with $3M in seed funding raised in March 2026. Current customers include Browser Use, OpenHands, and Rye.com.
The limitations are straightforward. It's a young platform with a smaller community than Langfuse or established tools like Datadog. Overage rates and plan limits may change, so verify current figures on the pricing page. Documentation is still catching up with the pace of feature releases. If you're building a simple single-call LLM wrapper, Laminar's agent-focused tooling is more than you need. And the dashboard is desktop-first with no mobile-optimized view.
Laminar is the best debugging tool for complex AI agents. The step-restart debugger and browser session recordings solve problems no other observability platform addresses. Self-host for free or use managed cloud starting at $30/month. Young platform with a growing ecosystem, best suited for teams building agents that chain multiple LLM calls with tools and browser interactions.
Agent Debugger: Restart a failed agent run from any step with full context. LLM calls replay from cached responses, external state (browser sessions, sandboxes) is restored. No full rerun needed.
Use Case: An agent fails 40 minutes into a multi-step research task. Instead of rerunning the entire thing, restart from the exact decision point that went wrong and iterate on the fix.
Auto-instrumentation: Two lines of code instrument LangChain, LlamaIndex, CrewAI, OpenAI, Claude Agent SDK, AI SDK, and LiteLLM. Captures inputs, outputs, token counts, latency, and cost for every call.
Use Case: Get full production visibility into an agent's behavior and cost by adding a single import and init call. No manual span creation.
Browser session recording: Captures screen recordings from browser agents and syncs them with trace timelines. Integrates with Browser Use, Stagehand, Playwright, and Browserbase.
Use Case: Debug why a browser automation agent clicked the wrong button by watching the recording alongside the agent's decision trace.
Signals: Describe a failure pattern in plain English and Laminar automatically finds matching instances across thousands of production traces. Runs continuously against new data.
Use Case: Find every instance where an agent entered a retry loop or a user expressed frustration, without writing custom log queries.
Evaluations: Run LLM-as-judge, deterministic, or custom Python evaluation functions against traces or curated datasets. Results are tracked over time for regression detection.
Use Case: Nightly evaluations against a golden dataset catch quality drops in a customer support agent before users report problems (see the sketch below).
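A minimal sketch of what such a nightly run could look like with the `lmnr` SDK's `evaluate` helper, using a toy golden dataset and a deterministic keyword check. The `run_agent` and `correctness` functions are hypothetical stand-ins, and the exact argument shapes may differ across SDK versions:

```python
from lmnr import evaluate

def run_agent(data: dict) -> str:
    # Hypothetical stand-in for the real support agent; in practice this
    # would invoke your LLM pipeline with data["question"].
    return "Please restart the router and check the cable."

def correctness(output: str, target: dict) -> int:
    # Deterministic evaluator: 1 if the expected keyword appears, else 0.
    return int(target["keyword"].lower() in output.lower())

# One nightly run against a tiny golden dataset. Scores are tracked over
# time in Laminar, so a regression shows up as a drop in the trend.
evaluate(
    data=[
        {
            "data": {"question": "My internet is down. What should I do?"},
            "target": {"keyword": "restart"},
        },
    ],
    executor=run_agent,
    evaluators={"correctness": correctness},
)
```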
SQL access: Query all platform data with SQL. Feed evaluation inputs from SQL queries and pull data into external applications via the SQL API.
Use Case: Build custom analytics correlating token usage with user satisfaction across different agent versions and prompt configurations (see the sketch below).
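As an illustration only, here is a hedged sketch of that kind of query. The endpoint, payload shape, and the `traces` table and column names below are hypothetical assumptions, not Laminar's documented schema:

```python
import os

import requests

# Hypothetical illustration: the table and column names, the payload
# shape, and the endpoint are assumptions, not Laminar's documented
# schema. The point is the kind of analytics a SQL API enables:
# correlating token usage with user feedback across agent versions.
QUERY = """
SELECT agent_version,
       AVG(total_tokens) AS avg_tokens,
       AVG(user_rating)  AS avg_rating
FROM traces
GROUP BY agent_version
ORDER BY agent_version
"""

resp = requests.post(
    os.environ["LMNR_SQL_ENDPOINT"],  # set to your deployment's SQL API URL
    headers={"Authorization": f"Bearer {os.environ['LMNR_PROJECT_API_KEY']}"},
    json={"query": QUERY},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json():  # assumed: a JSON array of row objects
    print(row)
```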
Plans:
Free: $0/month (1 GB data, 100 signal runs, 15-day retention, 1 project)
Hobby: $30.00/month (3 GB data, 1,000 signal runs, 30-day retention)
Pro: $150.00/month (10 GB data, 10,000 signal runs, 90-day retention)
Enterprise: contact sales for pricing (custom, with on-premise deployment)
Self-hosted: free forever
Analytics & Monitoring
Leading open-source LLM observability platform for production AI applications. Comprehensive tracing, prompt management, evaluation frameworks, and cost optimization with enterprise security (SOC2, ISO27001, HIPAA). Self-hostable with full feature parity.
Analytics & Monitoring
LangSmith lets you trace, analyze, and evaluate LLM applications and agents with deep observability into every model call, chain step, and tool invocation.
Analytics & Monitoring
Open-source LLM observability platform and API gateway that provides cost analytics, request logging, caching, and rate limiting through a simple proxy-based integration requiring only a base URL change.
Analytics & Monitoring
Open-source LLM observability and evaluation platform built on OpenTelemetry. Self-host for free with comprehensive tracing, experimentation, and quality assessment for AI applications.