Master Laminar (LMNR) with our step-by-step tutorial, detailed feature walkthrough, and expert tips.
Explore the key features that make Laminar (LMNR) powerful for analytics & monitoring workflows.
Restart a failed agent run from any step with full context. LLM calls replay from cached responses, and external state (browser sessions, sandboxes) is restored. No full rerun needed.
An agent fails 40 minutes into a multi-step research task. Instead of rerunning the entire thing, restart from the exact decision point that went wrong and iterate on the fix.
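Conceptually, step-restart works like a replay cache: results from completed steps are keyed and served from cache on rerun, so only the step you're iterating on executes fresh. Here's a minimal stdlib sketch of that idea — the `ReplayCache` class and step names are illustrative, not Laminar's actual API:

```python
import hashlib
import json

class ReplayCache:
    """Caches step results so a restarted run replays completed steps
    instead of re-executing them. Illustrative only, not Laminar's API."""

    def __init__(self):
        self._store = {}

    def key(self, step_name, inputs):
        payload = json.dumps({"step": step_name, "inputs": inputs}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def run_step(self, step_name, inputs, fn):
        k = self.key(step_name, inputs)
        if k in self._store:          # replay: no re-execution
            return self._store[k]
        result = fn(inputs)           # fresh execution
        self._store[k] = result
        return result

cache = ReplayCache()
calls = []

def expensive_llm_call(inputs):
    calls.append(inputs)              # track real executions
    return f"summary of {inputs['topic']}"

# First run executes the step; the "restart" replays it from cache.
first = cache.run_step("research", {"topic": "LLM tracing"}, expensive_llm_call)
second = cache.run_step("research", {"topic": "LLM tracing"}, expensive_llm_call)
```

The 40-minute rerun disappears because every step before the failure point hits the cache.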
Two lines of code instrument LangChain, LlamaIndex, CrewAI, OpenAI, Claude Agent SDK, AI SDK, and LiteLLM. Captures inputs, outputs, token counts, latency, and cost for every call.
Get full production visibility into an agent's behavior and cost by adding a single import and init call. No manual span creation.
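Under the hood, auto-instrumentation works by wrapping the provider client's methods so every call records its inputs, output, and latency without any changes to your agent code. A stdlib sketch of the wrapping technique — the `FakeLLMClient` and `spans` list are stand-ins, not Laminar internals:

```python
import functools
import time

spans = []  # captured telemetry (stands in for the trace exporter)

def instrument(fn):
    """Wrap a function so each call records inputs, output, and latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        spans.append({
            "name": fn.__name__,
            "inputs": kwargs,
            "output": result,
            "latency_ms": (time.perf_counter() - start) * 1000,
        })
        return result
    return wrapper

class FakeLLMClient:
    """Hypothetical provider client; real SDKs get patched the same way."""
    def complete(self, prompt):
        return f"echo: {prompt}"

client = FakeLLMClient()
# Monkey-patch the method, as auto-instrumentation libraries do at init time.
client.complete = instrument(client.complete)

client.complete(prompt="hello")
```

This is why a single init call is enough: the SDK patches the known client libraries once, and every subsequent call is captured.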
Captures screen recordings from browser agents and syncs them with trace timelines. Integrates with Browser Use, Stagehand, Playwright, and Browserbase.
Debug why a browser automation agent clicked the wrong button by watching the recording alongside the agent's decision trace.
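Syncing a recording with a trace timeline boils down to timestamp alignment: each span's start time maps to a seek offset in the video. A small sketch of that mapping, with a hypothetical trace — the `Span` shape here is made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Span:
    name: str
    start_ms: int  # trace-relative timestamp of the action

def align_spans(spans, recording_start_ms):
    """Map each trace span to a seek offset in the screen recording,
    so clicking a span in the timeline jumps the video to that moment."""
    return [
        (s.name, s.start_ms - recording_start_ms)
        for s in spans
        if s.start_ms >= recording_start_ms
    ]

# Hypothetical run: recording starts at 1_000 ms into the trace.
trace = [Span("navigate", 1_200), Span("click_button", 4_500)]
offsets = align_spans(trace, recording_start_ms=1_000)
```

Seeking the video to the `click_button` offset shows exactly what the page looked like when the agent made its decision.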
Describe a failure pattern in plain English and Laminar automatically finds matching instances across thousands of production traces. Runs continuously against new data.
Find every instance where an agent entered a retry loop or a user expressed frustration, without writing custom log queries.
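Signals matches patterns from a natural-language description, but it helps to see what you'd otherwise have to hand-write. A deterministic retry-loop detector — the kind of custom query Signals replaces — might look like this (the tool-call tuples are illustrative):

```python
def has_retry_loop(tool_calls, threshold=3):
    """Flag a trace in which the agent repeated the same tool call
    (same tool, same arguments) at least `threshold` times in a row."""
    streak = 1
    for prev, cur in zip(tool_calls, tool_calls[1:]):
        streak = streak + 1 if cur == prev else 1
        if streak >= threshold:
            return True
    return False

looping = [("search", "llm tracing")] * 4
healthy = [("search", "llm tracing"), ("fetch", "page1"), ("summarize", "page1")]
```

Hand-written rules like this catch only the patterns you anticipated; the point of a natural-language matcher is covering the variants you didn't.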
Run LLM-as-judge, deterministic, or custom Python evaluation functions against traces or curated datasets. Results tracked over time for regression detection.
Nightly evaluations against a golden dataset catch quality drops in a customer support agent before users report problems.
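The core loop of a nightly evaluation is simple: score the agent on a golden dataset and compare against the previous run. A stdlib sketch with a deterministic exact-match evaluator — the dataset, agent, and `run_eval` helper are all hypothetical:

```python
def exact_match(output, expected):
    """Deterministic evaluator: 1.0 on exact match, else 0.0."""
    return 1.0 if output.strip() == expected.strip() else 0.0

def run_eval(agent, dataset, baseline_score, tolerance=0.05):
    """Score the agent on a golden dataset and flag a regression
    if the score drops more than `tolerance` below the baseline."""
    scores = [exact_match(agent(item["input"]), item["expected"]) for item in dataset]
    score = sum(scores) / len(scores)
    regressed = score < baseline_score - tolerance
    return score, regressed

# Hypothetical golden dataset and a trivial lookup "agent".
golden = [
    {"input": "refund policy?", "expected": "30 days"},
    {"input": "support hours?", "expected": "9-5 weekdays"},
]
agent = {"refund policy?": "30 days", "support hours?": "9-5 weekdays"}.get
score, regressed = run_eval(agent, golden, baseline_score=1.0)
```

Tracking `score` per run over time is what turns a one-off check into regression detection.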
Query all platform data with SQL. Feed evaluation inputs from SQL queries and pull data into external applications via the SQL API.
Build custom analytics correlating token usage with user satisfaction across different agent versions and prompt configurations.
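The kind of query this enables is ordinary analytical SQL over span data. Here's a self-contained demo against an in-memory SQLite mirror — the `spans` schema is invented for illustration and won't match Laminar's actual tables:

```python
import sqlite3

# Hypothetical local mirror of trace data; Laminar's real schema differs.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE spans (
        agent_version TEXT,
        total_tokens  INTEGER,
        satisfied     INTEGER  -- 1 if the user rated the answer helpful
    )
""")
con.executemany(
    "INSERT INTO spans VALUES (?, ?, ?)",
    [("v1", 1200, 0), ("v1", 900, 1), ("v2", 700, 1), ("v2", 650, 1)],
)

# Correlate token spend with satisfaction per agent version.
rows = con.execute("""
    SELECT agent_version,
           AVG(total_tokens)      AS avg_tokens,
           AVG(satisfied) * 100.0 AS satisfaction_pct
    FROM spans
    GROUP BY agent_version
    ORDER BY agent_version
""").fetchall()
```

In this toy data, v2 spends fewer tokens and satisfies more users — exactly the version-over-version comparison the paragraph describes.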
Both are open-source LLM observability tools with self-hosting options. Laminar's differentiators are the Agent Debugger (step-restart for failed runs), browser session recording, and Signals (natural language pattern detection). Langfuse has a larger community and more third-party integrations. Pick Laminar if you're building complex, long-running agents. Pick Langfuse if you want broader ecosystem support.
Laminar auto-instruments LangChain, LlamaIndex, CrewAI, OpenAI, Anthropic Claude Agent SDK, AI SDK, LiteLLM, Browser Use, Stagehand, and OpenHands. For anything else, add custom spans using the Python or TypeScript SDK.
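For the "anything else" case, custom spans in most tracing SDKs follow the same shape: a named scope with attributes and automatic timing. A stdlib sketch of that pattern — the `span` context manager here is illustrative, not the Laminar SDK's actual interface:

```python
import time
from contextlib import contextmanager

spans = []  # stand-in for the trace exporter

@contextmanager
def span(name, **attributes):
    """Minimal custom-span context manager: named scope, arbitrary
    attributes, duration recorded even if the body raises."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append({
            "name": name,
            "attributes": attributes,
            "duration_ms": (time.perf_counter() - start) * 1000,
        })

with span("rank_documents", query="llm tracing", top_k=5):
    ranked = sorted(["b", "a", "c"])  # the work being traced
```

Whatever framework you're on, wrapping its hot paths in scopes like this is all "custom spans" means.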
The SDK sends traces asynchronously without blocking agent execution. Typical overhead is under 5ms per span, which is negligible for most agent workloads.
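Non-blocking export is typically a producer-consumer queue: the hot path enqueues a span in O(1) and a background thread handles the network send. A sketch of the pattern (the `AsyncExporter` class is illustrative; the real SDK's internals may differ):

```python
import queue
import threading

class AsyncExporter:
    """Spans are enqueued on the hot path and shipped by a background
    thread, so agent execution never blocks on network I/O."""

    def __init__(self):
        self._q = queue.Queue()
        self.sent = []
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def export(self, span):          # called on the agent's thread; O(1)
        self._q.put(span)

    def _drain(self):                # runs off the hot path
        while True:
            span = self._q.get()
            if span is None:         # shutdown sentinel
                break
            self.sent.append(span)   # stand-in for the network send

    def shutdown(self):              # flush remaining spans, then stop
        self._q.put(None)
        self._worker.join()

exporter = AsyncExporter()
for i in range(3):
    exporter.export({"span_id": i})
exporter.shutdown()
```

Because `export` only touches an in-memory queue, per-span overhead stays in the microsecond-to-millisecond range regardless of network latency.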
Yes. The self-hosted version includes all core features: tracing, evaluation, datasets, and dashboards. Many teams run it in production via Docker. The managed cloud adds team collaboration, higher retention, and support SLAs.
It depends on trace verbosity and call frequency. A moderately active agent making 100 LLM calls/day generates roughly 50-100 MB/month. The free cloud tier's 1 GB handles that comfortably. High-volume production deployments with thousands of daily runs will need Hobby or Pro plans.
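The estimate checks out with back-of-envelope arithmetic, assuming an average span payload of roughly 25 KB (prompt, completion, and metadata — that figure is an assumption, not a Laminar number):

```python
# Back-of-envelope check of the 50-100 MB/month storage estimate.
calls_per_day = 100
kb_per_call = 25          # assumed average span payload
days = 30

mb_per_month = calls_per_day * kb_per_call * days / 1024  # ~73 MB
```

Scale `calls_per_day` to thousands and the result lands in the multi-GB range, which is why high-volume deployments outgrow the free tier.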
Now that you know how to use Laminar (LMNR), it's time to put this knowledge into practice.
Tutorial updated March 2026