Open-source LLM observability platform that helps debug AI applications through detailed tracing, evaluation, and prompt experimentation with notebook-first design.
An open-source tool that helps you see inside your AI's thinking — debug and improve AI performance with visual tracing.
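The tracing such platforms provide comes down to recording a structured span around each model call and tool invocation: a name, timing, and attributes like the prompt and output. As a rough illustration only (plain stdlib Python, not Phoenix's actual API; `span` and `TRACE` are made-up names):

```python
import time
import uuid
from contextlib import contextmanager

# Hypothetical minimal tracer: collects the kind of span data an LLM
# observability tool records. This is NOT Phoenix's API, just a sketch.
TRACE = []

@contextmanager
def span(name, **attributes):
    record = {"id": str(uuid.uuid4()), "name": name, "attributes": attributes}
    start = time.perf_counter()
    try:
        yield record
    finally:
        record["latency_ms"] = (time.perf_counter() - start) * 1000
        TRACE.append(record)

# Simulate a traced LLM call followed by a tool invocation.
with span("llm.completion", model="example-model", prompt_tokens=12) as s:
    s["attributes"]["output"] = "The capital of France is Paris."

with span("tool.search", query="capital of France"):
    pass

for rec in TRACE:
    print(rec["name"], f'{rec["latency_ms"]:.2f}ms')
```

A real platform would export spans like these to a collector and render them as a timeline, which is what makes it possible to see exactly where an agent spent its time or went wrong.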
Related tools:
- LangSmith (Analytics & Monitoring): trace, analyze, and evaluate LLM applications and agents, with deep observability into every model call, chain step, and tool invocation.
- Experiment tracking and model evaluation for agent development (Analytics & Monitoring).
- DeepEval (Testing & Quality): open-source LLM evaluation framework with 50+ research-backed metrics, including hallucination detection, tool-use correctness, and conversational quality. Offers pytest-style testing for AI agents with CI/CD integration.
- Open-source LLM observability platform for production AI applications (Analytics & Monitoring): comprehensive tracing, prompt management, evaluation frameworks, and cost optimization, with enterprise security certifications (SOC 2, ISO 27001, HIPAA). Self-hostable with full feature parity.