Master TruLens with our step-by-step tutorial, detailed feature walkthrough, and expert tips.
Explore the key features that make TruLens powerful for testing & quality workflows.
TruLens can evaluate a wide range of LLM-powered applications, including AI agents, retrieval-augmented generation (RAG) pipelines, summarization systems, and custom agentic workflows. It is designed to assess critical components of an app's execution flow, such as retrieved-context quality, tool-call accuracy, planning steps, and final output quality. This makes it versatile enough for both simple chatbot evaluations and complex multi-step agent assessments.
TruLens uses feedback functions (automated evaluation routines) to measure metrics like groundedness and context relevance. Groundedness checks whether the LLM's generated response is supported by the retrieved source material, flagging hallucinated or unsupported claims. Context relevance evaluates whether the retrieved documents are actually pertinent to the user's query. These metrics are computed using LLM-based evaluators or custom scoring functions that you can configure to match your quality standards.
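Conceptually, a feedback function is just a callable that maps app inputs and outputs to a score between 0 and 1. TruLens's built-in evaluators are LLM-based, but a rough, self-contained sketch of custom scoring functions in the groundedness and context-relevance style might look like this (all names here are illustrative, not the TruLens API):

```python
# Illustrative custom feedback functions (hypothetical names, not the
# TruLens API): each maps text inputs to a score in [0, 1].

def _tokens(text: str) -> set[str]:
    """Lowercased word set: a crude stand-in for semantic comparison."""
    return set(text.lower().split())

def groundedness(response: str, source: str) -> float:
    """Fraction of response tokens that appear in the retrieved source.
    Low scores flag potentially hallucinated or unsupported claims."""
    resp = _tokens(response)
    return len(resp & _tokens(source)) / len(resp) if resp else 0.0

def context_relevance(query: str, context: str) -> float:
    """Fraction of query tokens covered by a retrieved document."""
    q = _tokens(query)
    return len(q & _tokens(context)) / len(q) if q else 0.0

source = "the eiffel tower is in paris and opened in 1889"
print(groundedness("the eiffel tower opened in 1889", source))       # 1.0
print(context_relevance("when did the eiffel tower open", source))   # 0.5
```

A production version would replace the token-overlap heuristic with an LLM-based judge, which is what TruLens's configurable evaluators do under the hood.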
TruLens now supports OpenTelemetry (OTel), an open standard for distributed tracing and observability. This means traces generated by TruLens can be exported to any OTel-compatible backend such as Jaeger, Grafana Tempo, or Datadog. For teams that already have observability infrastructure in place, this eliminates the need for a separate monitoring stack and allows LLM application traces to live alongside traditional service traces for unified debugging and performance analysis.
TruLens is designed to be framework-agnostic and integrates with popular LLM frameworks and providers. It works with applications built using LangChain, LlamaIndex, and custom implementations, and can evaluate outputs from various LLM providers including OpenAI, Anthropic, and open-source models. The instrumentation is lightweight and typically requires only a few lines of code to wrap your existing application for evaluation and tracing.
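To make "lightweight instrumentation" concrete, here is a plain-Python sketch of the wrapping idea (a hypothetical decorator, not TruLens's actual wrapper classes): each call's inputs, output, and latency are recorded into a trace that an evaluator could later score.

```python
# Hypothetical instrumentation sketch, not the TruLens API: a decorator
# that records each call for later evaluation and tracing.
import functools
import time

TRACE: list[dict] = []  # collected call records, consumed by evaluators

def instrument(fn):
    """Wrap a function so every call is logged with its result and latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "name": fn.__name__,
            "args": args,
            "result": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@instrument
def answer(question: str) -> str:
    # Stand-in for an existing LLM app entry point (LangChain chain,
    # LlamaIndex query engine, or a custom implementation).
    return f"Echo: {question}"

answer("What is TruLens?")
print(TRACE[0]["name"])  # answer
```

The framework-agnostic part is exactly this shape: the wrapper never inspects the app's internals, so any callable entry point can be instrumented the same way.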
TruLens provides a leaderboard view where you can compare different versions or configurations of your LLM application across multiple evaluation metrics simultaneously. Each app variant is scored on metrics like groundedness, relevance, coherence, and any custom metrics you define. This allows you to objectively identify which combination of prompts, models, retrieval strategies, or hyperparameters produces the best results, replacing manual review with data-driven decision-making at scale.
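Under the hood, a leaderboard like this is an aggregation of per-record metric scores grouped by app variant. A minimal sketch with hypothetical data and helper names:

```python
# Hypothetical leaderboard aggregation: average each metric per app
# variant and rank variants by overall mean score.
from statistics import mean

# Illustrative per-record evaluation results for two app variants.
records = [
    {"app": "rag-v1", "groundedness": 0.72, "relevance": 0.80},
    {"app": "rag-v1", "groundedness": 0.68, "relevance": 0.75},
    {"app": "rag-v2", "groundedness": 0.91, "relevance": 0.88},
    {"app": "rag-v2", "groundedness": 0.85, "relevance": 0.90},
]

def leaderboard(records, metrics=("groundedness", "relevance")):
    """Return one row per app, averaged per metric, best overall first."""
    by_app: dict[str, list[dict]] = {}
    for r in records:
        by_app.setdefault(r["app"], []).append(r)
    rows = [
        {"app": app, **{m: mean(r[m] for r in rs) for m in metrics}}
        for app, rs in by_app.items()
    ]
    return sorted(rows, key=lambda row: mean(row[m] for m in metrics),
                  reverse=True)

for row in leaderboard(records):
    print(row)  # best-scoring variant printed first
```

Ranking on averaged metrics like this is what lets you compare prompt, model, or retrieval changes objectively instead of eyeballing individual outputs.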
Now that you know how to use TruLens, it's time to put this knowledge into practice.
Sign up and follow the tutorial steps
Check pros, cons, and user feedback
See how it stacks up against alternatives
Follow our tutorial and master this powerful testing & quality tool in minutes.
Tutorial updated March 2026