No free plan. The cheapest entry point is LLM Observability (Trace + Evaluations) at $2.50 per 1M indexed LLM spans for tracing and $1.50 per 1K evaluations executed, and it requires an existing Datadog APM or Infrastructure subscription (from $15/host/month). Consider free alternatives in the analytics & monitoring category if budget is tight.
Free plan includes: full feature parity with the cloud version; unlimited traces, users, and data retention; complete control over data and infrastructure
Free plan includes: 10,000 requests per month, full dashboard access, cost analytics & request logging
Free plan includes: basic features
LangSmith and Langfuse are purpose-built LLM platforms focused on prompt engineering, dataset management, and developer-centric evaluation workflows. Datadog LLM Observability is built for production operations: it stitches LLM spans into the same distributed traces as your infrastructure, APM, and logs, and reuses Datadog's monitoring, alerting, RBAC, and security detection systems. It is stronger for SRE and platform teams running AI in production, weaker for prompt iteration during development.
Datadog supports OpenAI, Anthropic, Amazon Bedrock, Azure OpenAI, Google Vertex AI, and other major providers, plus orchestration frameworks including LangChain, LlamaIndex, and OpenAI Assistants. Custom instrumentation is available through Datadog's SDKs for Python, Node.js, and other supported runtimes.
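To make the SDK path concrete, here is a minimal sketch of custom instrumentation with the `ddtrace` Python package in agentless mode. Treat it as an assumption-laden outline, not a definitive setup: the `ml_app` name is invented, `my_model_call` is a stand-in for a real provider call, and exact parameters can vary by `ddtrace` version. It is not runnable without the `ddtrace` package and a valid `DD_API_KEY`.

```python
import os

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow

# Agentless setup: ships spans directly to Datadog using an API key
# (requires DD_API_KEY in the environment; "chat-service" is an assumed app name).
LLMObs.enable(
    ml_app="chat-service",
    agentless_enabled=True,
    api_key=os.environ["DD_API_KEY"],
)

def my_model_call(question: str) -> str:
    # Stand-in for a real provider call (OpenAI, Anthropic, Bedrock, ...);
    # supported SDKs are auto-instrumented, so this stub is purely illustrative.
    return "stub reply"

@workflow  # records this function as an LLM Observability workflow span
def answer(question: str) -> str:
    reply = my_model_call(question)
    LLMObs.annotate(input_data=question, output_data=reply)
    return reply
```

Auto-instrumentation covers the supported provider SDKs and frameworks; the decorator and `annotate` calls are only needed for spans Datadog cannot infer on its own.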
No. Datadog is a SaaS product and does not offer a self-hosted or on-prem version of LLM Observability. Teams with strict data residency requirements can choose between US, EU, and other regional Datadog sites, and sensitive data scrubbing can be applied client-side before telemetry is shipped.
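Client-side scrubbing can be as simple as redacting known patterns before a span leaves your process. A minimal sketch of the idea, using generic regexes rather than Datadog's Sensitive Data Scanner rules (the pattern names and placeholders here are illustrative):

```python
import re

# Illustrative patterns only; a real deployment would reuse vetted rule sets.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace sensitive matches with a typed placeholder before telemetry ships."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789"))
# → Contact [REDACTED_EMAIL], SSN [REDACTED_SSN]
```

The key property is that scrubbing runs inside your own process, so raw prompts and responses never reach Datadog's backend in the first place.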
Datadog offers built-in LLM-as-judge evaluations for quality, faithfulness, topic relevance, and toxicity, plus custom rule-based and code-based evaluators. Evaluations can run on sampled production traffic or on curated datasets, and results are stored alongside the trace so regressions are visible in the same UI as latency or cost spikes.
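The shape of a custom rule-based evaluator running on sampled traffic can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Datadog's evaluator API; the guardrail rule, sample rate, and trace format are all invented for the example:

```python
import random

def length_guardrail(response: str, max_chars: int = 500) -> dict:
    """Toy rule-based evaluator: flag over-long completions."""
    return {
        "metric": "response_length",
        "pass": len(response) <= max_chars,
        "value": len(response),
    }

def evaluate_sampled(traces, evaluator, sample_rate=0.1, rng=None):
    """Run an evaluator on a random fraction of traces, keyed by trace id."""
    rng = rng or random.Random()
    results = {}
    for trace_id, response in traces:
        if rng.random() < sample_rate:
            results[trace_id] = evaluator(response)
    return results

# Hypothetical traffic: trace ids paired with completions of growing length.
traces = [(i, "x" * (i * 120)) for i in range(1, 11)]
results = evaluate_sampled(traces, length_guardrail, sample_rate=0.3,
                           rng=random.Random(0))
```

In Datadog the analogous results land on the trace itself, which is what lets a failing evaluation show up next to the latency or cost spike that accompanied it.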
Yes. LLM Observability integrates with Datadog's Sensitive Data Scanner and detection rules engine to flag prompt injection attempts, jailbreaks, and PII or secrets that appear in prompts or responses. Findings can route to Datadog Cloud SIEM workflows for security teams to triage.
See Datadog LLM Observability plans and find the right tier for your needs.
See Pricing Plans →
Still not sure? Read our full verdict →
Last verified March 2026