Master CrewAI with our step-by-step tutorial, detailed feature walkthrough, and expert tips.
1. Install CrewAI via 'pip install crewai' and create a new Python project directory.
2. Set up your LLM API keys (OpenAI, Anthropic, etc.) in environment variables or a .env file.
3. Create your first agent by defining its role, goal, backstory, and available tools in a Python script.
4. Define a task with a clear expected output and assign it to your agent using the Task class.
5. Initialize a Crew with your agents and tasks, then call crew.kickoff() to execute the workflow.
💡 Quick Start: Follow these 5 steps in order to get up and running with CrewAI quickly.
Explore the key features that make CrewAI powerful for AI agent workflows.
Each agent is configured with a role, goal, backstory, allowed tools, max iterations, and an LLM of your choice. The role-and-backstory pattern measurably improves reasoning quality versus generic system prompts and makes crew composition readable.
A Crew bundles agents with an ordered (or hierarchical) list of Tasks. Each task defines its description, expected output, assigned agent, and optional context dependencies on other tasks, enabling automatic context passing between steps.
Flows complement Crews by providing event-driven, code-first orchestration with explicit state, conditional branching, and the ability to embed Crews as steps. Use Flows when you need predictable control flow and Crews where you need agentic reasoning.
Assign different models to different agents — for example, a cheap model for classification and a frontier model for synthesis — and switch providers with a single config change. Supports streaming, function calling, and structured outputs across providers.
Built-in short-term memory for in-run context, long-term memory persisted across runs, entity memory for tracking people and concepts, and contextual memory that combines them. Backed by vector stores like Chroma to keep recall fast.
Ships with web search, scraping, file I/O, code execution, RAG, SQL, and integration tools. Developers can wrap any Python function as a tool with a description and arg schema, and the LLM will invoke it when reasoning suggests it is needed.
In addition to sequential execution, crews can run in a hierarchical mode where a manager agent (powered by a stronger LLM) plans, delegates, and validates sub-tasks across worker agents — useful for open-ended problems.
Hosted control plane for deploying crews as APIs, viewing execution traces, managing versions, monitoring cost and latency, and granting role-based access. Targets teams running multiple crews in production.
Yes. The CrewAI Python framework is open source under the MIT license and free to use commercially. You only pay for the LLM API calls your agents make to providers like OpenAI or Anthropic. The hosted CrewAI AMP platform has a free tier plus paid Business and Enterprise plans available through sales.
CrewAI uses a role-based mental model (agents with roles, goals, and backstories grouped into crews), which many developers find more intuitive than LangGraph's explicit state-graph approach or AutoGen's conversational multi-agent chat. CrewAI is also independent of LangChain, ships its own tools and memory layers, and supports both freeform Crews and deterministic Flows in one framework.
CrewAI integrates with 100+ LLM providers through LiteLLM, including OpenAI (GPT-4o, GPT-4.1), Anthropic Claude, Google Gemini, Azure OpenAI, AWS Bedrock, Mistral, Groq, Cohere, and local models served via Ollama, vLLM, or LM Studio. You can assign different models to different agents within the same crew.
Yes. Many companies run CrewAI in production either by self-hosting the open-source library inside their own services or by deploying through CrewAI AMP for managed observability, versioning, and scaling. For production you should add tracing (e.g., AgentOps, LangSmith, or AMP's built-in tracing), retry logic, and cost guardrails on top of the core framework.
No. CrewAI is built independently of LangChain and has its own agent, task, tool, and memory abstractions. You can import LangChain tools if you want, but it is not required. A working knowledge of Python, async programming, and prompt engineering is enough to get started.
Now that you know how to use CrewAI, it's time to put this knowledge into practice.
Sign up and follow the tutorial steps
Check pros, cons, and user feedback
See how it stacks against alternatives
Follow our tutorial and master this powerful AI agent builder in minutes.
Tutorial updated March 2026