Phoenix is free and fully featured when you self-host; paid tiers add managed hosting and enterprise extras, not core functionality. Most solo builders can start free.
Yes. Phoenix's source code is fully open under the Elastic License 2.0, and it is free to self-host with no feature restrictions, user limits, or trace volume caps. The only restriction is that you cannot offer Phoenix itself as a competing managed observability service. Arize monetizes through its commercial Arize AX enterprise platform, which adds SSO, RBAC, audit logs, SLAs, and dedicated support on top of the Phoenix core. The open-source version receives the same core tracing, evaluation, and experimentation features; there is no intentional feature gating to push users toward paid tiers.
All three provide LLM tracing and evaluation, but Phoenix is built on OpenTelemetry and OpenInference standards, making traces portable across any OTel-compatible backend (Jaeger, Grafana Tempo, Datadog). LangSmith is tightly coupled to the LangChain ecosystem and uses a proprietary tracing format, making it the fastest path for LangChain-only teams but creating vendor lock-in. Langfuse is also open source and shares Phoenix's philosophy of openness, but Phoenix offers stronger evaluation and experiment management features, deeper embedding analysis with UMAP visualizations, and benefits from Arize's sustained engineering investment. Phoenix's auto-instrumentation covers the broadest range of frameworks, while LangSmith offers the most polished UX for LangChain-specific workflows.
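To make the portability claim concrete, here is a minimal sketch using only the vanilla OpenTelemetry Python SDK. Nothing in it is Phoenix-specific except the endpoint URL (assumed here to be a local Phoenix instance on its default port 6006); pointing the same exporter at a Jaeger or Tempo collector would work unchanged.

```python
# Sketch: vanilla OpenTelemetry SDK sending spans to Phoenix.
# The endpoint is the only Phoenix-specific detail; any OTLP-compatible
# backend (Jaeger, Grafana Tempo, Datadog) accepts the same spans.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-llm-app")
with tracer.start_as_current_span("llm-call") as span:
    # OpenInference-style attribute so Phoenix can render it as an LLM span.
    span.set_attribute("llm.model_name", "gpt-4o")
```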
Phoenix auto-instruments LangChain, LlamaIndex, CrewAI, Haystack, DSPy, AutoGen, Semantic Kernel, and LiteLLM, plus direct SDKs for OpenAI, Anthropic, Google Vertex and Gemini, AWS Bedrock, Mistral, Cohere, and Ollama. Because Phoenix is built on OpenTelemetry, any application that emits OTel-compatible spans can send data to Phoenix, even if a dedicated auto-instrumentation library does not yet exist for that specific framework or provider. New framework integrations are added regularly as the ecosystem evolves.
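For illustration, enabling auto-instrumentation for the OpenAI SDK looks roughly like this. This is a sketch based on the arize-phoenix-otel and openinference-instrumentation-openai packages; exact module paths and defaults may vary by version.

```python
# Sketch: auto-instrumenting OpenAI calls so they appear as Phoenix traces.
# Assumes Phoenix is running at its default endpoint (http://localhost:6006).
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Point the OpenTelemetry tracer provider at Phoenix.
tracer_provider = register(project_name="my-app")

# From here on, every OpenAI client call emits OpenInference spans.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)  # Traced automatically; no further code changes needed.
```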
Phoenix is designed for both development and production use. Many teams run it locally during development for rapid debugging, then deploy it via Docker or Kubernetes for production observability. For high-volume production workloads, Arize recommends PostgreSQL-backed persistent storage, appropriate data retention policies, and deployment via the Kubernetes Helm charts for reliability and scalability. The managed Phoenix Cloud service is also available for teams that prefer not to manage their own infrastructure. Production deployments should plan for storage growth based on trace volume and configure cleanup policies accordingly.
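The development side of that workflow can be as simple as launching Phoenix in-process, as in the sketch below; in production you would instead run the server separately (Docker or Helm) with its database environment variable pointed at PostgreSQL.

```python
# Sketch: running Phoenix in-process during development.
# In production, run the server via Docker or Helm with
# PHOENIX_SQL_DATABASE_URL set to a PostgreSQL connection string.
import phoenix as px

session = px.launch_app()  # starts the Phoenix UI at http://localhost:6006
print(session.url)         # open this in a browser to inspect traces
```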
Yes. Phoenix includes comprehensive workflows for annotating traces with human feedback, building and versioning datasets from production data, running experiments against those datasets, and comparing results across prompt or model variations. Annotators can label traces directly in the UI, and these annotations feed into golden datasets used for regression testing and evaluator calibration. This creates a complete feedback loop where production issues are captured, annotated, added to evaluation datasets, and then used to validate that future changes don't reintroduce the same problems. Teams can also use the annotation API to integrate human review workflows with external labeling tools.
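Here is a sketch of that loop in code: uploading a small golden dataset and running an experiment with a simple evaluator. The names follow the current arize-phoenix documentation, but treat exact signatures and parameter binding as version-dependent.

```python
# Sketch: building a dataset and running an experiment against it.
import pandas as pd
import phoenix as px
from phoenix.experiments import run_experiment

client = px.Client()

# A golden dataset, e.g. assembled from annotated production traces.
df = pd.DataFrame(
    {
        "question": ["What is Phoenix?"],
        "expected": ["An open-source LLM observability platform."],
    }
)
dataset = client.upload_dataset(
    dataframe=df,
    dataset_name="golden-qa",
    input_keys=["question"],
    output_keys=["expected"],
)

# Phoenix binds task/evaluator arguments by parameter name
# ("input", "output", "expected", ...).
def task(input):
    # Call your application here; hard-coded for illustration.
    return "An open-source LLM observability platform."

def exact_match(output, expected) -> bool:
    # Regression test: flag any drift from the golden answer.
    return output == expected["expected"]

run_experiment(dataset, task, evaluators=[exact_match])
```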
Last verified March 2026