Master Mirascope with our step-by-step tutorial, detailed feature walkthrough, and expert tips.
Explore the key features that make Mirascope powerful for AI agent building workflows.
Define LLM interactions as decorated Python functions using @llm.call('provider/model'). The function's return value becomes the prompt, and the decorator handles API calls, response parsing, and error handling.
Creating a reusable, testable librarian function that can be called like any Python function but executes an LLM query with structured tool access.
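To make the decorator pattern concrete, here is a toy sketch of the mechanism described above: the wrapped function's return value becomes the prompt, and the decorator owns the provider call. This is an illustration of the pattern, not Mirascope's actual implementation; the `fake_client` stub stands in for a real provider so the example runs without network access.

```python
from functools import wraps

def call(model: str, client=None):
    """Toy decorator: the wrapped function builds the prompt; the
    decorator sends it to `client` (a stand-in for a provider call)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            prompt = fn(*args, **kwargs)  # return value becomes the prompt
            return client(model, prompt)  # API call, parsing, errors live here
        return wrapper
    return decorator

# Stub provider so the sketch is self-contained.
def fake_client(model: str, prompt: str) -> str:
    return f"[{model}] answered: {prompt}"

@call("openai/gpt-4o-mini", client=fake_client)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book."

print(recommend_book("fantasy"))
```

Because the decorated function is still just a Python function, it can be imported, mocked, and unit-tested like any other code.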
Tools defined as decorated functions with @llm.tool, using typed parameters and docstrings that auto-generate the tool schema. Pydantic validation ensures tool inputs are correct before execution.
Building a search tool with validated query parameters that the LLM can call, with full IDE autocompletion and type checking on both inputs and outputs.
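The schema auto-generation idea can be sketched in plain Python: read the function's type annotations and docstring and emit a minimal tool schema. This is an illustrative reimplementation of the mechanism, not Mirascope's own code, and the `search_library` tool is a made-up example.

```python
import inspect

def search_library(query: str, limit: int = 5) -> list[str]:
    """Search the library catalog for matching titles."""
    return [f"Result {i} for {query!r}" for i in range(1, limit + 1)]

def tool_schema(fn) -> dict:
    """Derive a minimal tool schema from the signature and docstring."""
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    sig = inspect.signature(fn)
    props = {
        name: {"type": type_names.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {"type": "object", "properties": props},
    }

schema = tool_schema(search_library)
print(schema["name"], schema["parameters"]["properties"]["limit"]["type"])
```

Because the schema is derived from the signature, the same annotations that power IDE autocompletion and type checking also describe the tool to the model.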
Extract typed data from LLM responses by passing a Pydantic model to the format parameter. Mirascope handles schema generation, response parsing, and validation automatically.
Extracting structured product information from customer reviews with guaranteed schema compliance and automatic retry on validation failures.
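The validation half of this flow can be shown with Pydantic alone (assuming `pydantic` v2 is installed): define the target model, then validate a JSON payload the way a parsed LLM response would be. The `ProductReview` fields here are illustrative, not part of Mirascope's API.

```python
from pydantic import BaseModel, Field, ValidationError

class ProductReview(BaseModel):
    product: str
    rating: int = Field(ge=1, le=5)  # out-of-range ratings fail validation
    sentiment: str

raw = '{"product": "Noise-cancelling headphones", "rating": 4, "sentiment": "positive"}'
review = ProductReview.model_validate_json(raw)
print(review.product, review.rating)

# A bad payload is rejected before it reaches application code; a
# validation failure like this is what triggers the automatic retry.
try:
    ProductReview.model_validate_json('{"product": "X", "rating": 9, "sentiment": "ok"}')
except ValidationError:
    print("validation failed")
```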
The @ops.version() decorator automatically versions prompts, traces LLM calls, and tracks token usage and costs. Changes to decorated functions are detected and versioned automatically.
Tracking which version of a prompt performs best in production and monitoring LLM costs per function across your entire application.
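The change-detection idea behind automatic versioning can be sketched by fingerprinting a function's compiled body, so any edit yields a new version id. This illustrates the concept only; Mirascope's `@ops.version()` machinery additionally handles tracing and token/cost tracking.

```python
import hashlib

def version_id(fn) -> str:
    """Hash the function's bytecode and constants, so editing the
    prompt text or logic produces a different version id."""
    code = fn.__code__
    payload = code.co_code + repr(code.co_consts).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book."

print(version_id(recommend_book))
```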
Build agent behaviors using standard Python while loops: call the LLM, check for tool calls, execute tools, resume with outputs. No framework-specific agent class needed — just Python control flow.
Creating a custom agent with specific error handling, fallback logic, and conditional tool execution that wouldn't fit into a rigid agent framework.
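The loop described above can be written in a few lines of plain Python. This is a minimal, self-contained sketch: `fake_llm` stands in for a real provider call and `search_tool` for a registered tool; the control flow (call the model, execute any requested tool, feed the result back, stop on a final answer) is the whole agent.

```python
def search_tool(query: str) -> str:
    return f"3 results found for {query!r}"

TOOLS = {"search": search_tool}

def fake_llm(messages: list[dict]) -> dict:
    # Stub model: first turn requests a tool call,
    # second turn answers using the tool's output.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": {"query": "python agents"}}
    return {"answer": f"Based on the search: {messages[-1]['content']}"}

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:                        # the while loop IS the agent
        reply = fake_llm(messages)
        if "answer" in reply:          # no tool call requested: done
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})

print(run_agent("What are good Python agent libraries?"))
```

Because the loop is ordinary code, error handling, fallbacks, and conditional tool execution are just `try`/`except` and `if` statements where you need them.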
Unified interface across OpenAI, Anthropic, Google, Mistral, DeepSeek, and others using provider/model strings. Supports provider-specific features like thinking mode ({"include_thoughts": True}) without losing portability.
Testing the same agent across Claude with thinking mode, GPT-4o, and Gemini to compare quality and cost while using provider-specific optimizations.
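One way to picture how a single `provider/model` string can coexist with provider-specific features is a small routing sketch like the one below. It is illustrative only, not Mirascope internals: the string is split into provider and model, and a provider-specific flag (here, thinking mode) is applied only where the provider supports it.

```python
def parse_target(target: str) -> tuple[str, str]:
    provider, _, model = target.partition("/")
    return provider, model

def call_params(target: str, thinking: bool = False) -> dict:
    provider, model = parse_target(target)
    params: dict = {"provider": provider, "model": model}
    if thinking and provider == "anthropic":
        params["include_thoughts"] = True  # provider-specific feature
    return params

print(call_params("anthropic/claude-sonnet-4", thinking=True))
print(call_params("openai/gpt-4o", thinking=True))  # flag dropped: unsupported
```

The calling code stays identical across providers; only the target string changes, which is what makes side-by-side quality and cost comparisons cheap to run.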
Mirascope calls itself 'The LLM Anti-Framework' — it provides building blocks (calls, tools, structured output) that you compose into agents using plain Python. The agent loop is just a while loop, not a framework class. This gives more control but requires writing the loop yourself.
Mirascope is simpler and more Pythonic with better type safety. LangChain provides more pre-built chains, integrations, and RAG utilities but with more abstraction and complexity. Choose Mirascope when you want control and type safety; LangChain when you want batteries-included with extensive integrations.
Yes, through Ollama, vLLM, and any OpenAI-compatible endpoint. Use the provider/model string format (e.g., 'ollama/llama3') to target local models with the same API as cloud providers.
It automatically versions your prompt functions (detecting changes to the decorated function), traces each LLM call with inputs/outputs/latency, and tracks token usage and cost. It integrates with Langfuse and other OpenTelemetry-compatible observability tools.
Now that you know how to use Mirascope, it's time to put this knowledge into practice.
Sign up and follow the tutorial steps
Check pros, cons, and user feedback
See how it stacks against alternatives
Follow our tutorial and master this powerful AI agent building tool in minutes.
Tutorial updated March 2026