AutoGen allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks.
Microsoft AutoGen is an open-source programming framework developed by Microsoft Research that enables developers to build sophisticated LLM-powered applications using a multi-agent conversation paradigm. Rather than treating a large language model as a single monolithic assistant, AutoGen lets you define multiple specialized agents, each with its own role, system prompt, tools, and capabilities, and have them collaborate through structured conversations to accomplish complex tasks. This approach mirrors how human teams operate, where specialists with distinct expertise coordinate to solve problems that no single member could tackle alone.
The framework originated at Microsoft Research as part of a broader effort to simplify the orchestration, optimization, and automation of LLM workflows. At its core, AutoGen provides customizable and conversable agents that can integrate LLMs, human inputs, and external tools in flexible combinations. Developers can construct simple two-agent chats (for example, an AssistantAgent paired with a UserProxyAgent that executes code) or elaborate group chats where a manager agent routes messages among a team of specialists such as planners, coders, critics, and reviewers. Agents can write and execute Python code, call functions, browse the web, query databases, and hand off work to humans when needed.
AutoGen supports diverse conversation patterns, including fully autonomous agent-to-agent dialogue, human-in-the-loop workflows where a person can intervene or approve steps, and hierarchical structures where one agent supervises others. The framework is model-agnostic, working with OpenAI models, Azure OpenAI, local open-source models via Ollama or LM Studio, and other providers through a unified client interface. It also includes built-in support for code execution in Docker containers or local environments, retrieval-augmented generation, and integration with external APIs.
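The provider-agnostic setup described above is typically expressed as a list of model configurations. The sketch below assumes the classic pyautogen (v0.2-style) `config_list` convention; the model names, API key, and local endpoint URL are illustrative placeholders, not values from this document.

```python
# Hedged sketch of a provider-agnostic model configuration using the
# classic pyautogen "config_list" convention. All concrete values
# (model names, key, base_url) are placeholders.
config_list = [
    {
        # Hosted OpenAI model
        "model": "gpt-4o",
        "api_key": "sk-placeholder",  # read from an env var in practice
    },
    {
        # Local open-source model behind an OpenAI-compatible server
        # (e.g. Ollama or LM Studio); this base_url is an assumption
        "model": "llama3",
        "base_url": "http://localhost:11434/v1",
        "api_key": "not-needed",
    },
]

# The same llm_config dict can then be passed to any agent, so swapping
# providers means editing configuration rather than agent code.
llm_config = {"config_list": config_list, "temperature": 0}
```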
The project has evolved significantly since its initial release. The modern AutoGen (v0.4 and beyond) introduces a layered architecture with AutoGen Core for event-driven agent runtimes, AutoGen AgentChat for high-level conversation patterns, and AutoGen Extensions for integrations. Alongside the Python library, Microsoft released AutoGen Studio, a low-code interface that lets users prototype multi-agent workflows visually without writing code. AutoGen has become one of the most widely adopted agentic frameworks in the open-source ecosystem, with tens of thousands of GitHub stars and an active research community publishing papers on topics like automated agent design, cost optimization, and evaluation benchmarks such as GAIA.
Agents are defined as Python objects with configurable system prompts, LLM backends, tools, and message-handling logic. The AssistantAgent and UserProxyAgent base classes cover the most common patterns, and developers can subclass them to create specialized roles such as planners, critics, or domain experts.
The GroupChat and GroupChatManager classes allow multiple agents to participate in a shared conversation, with the manager selecting the next speaker based on rules, round-robin, or LLM-based routing. This enables team dynamics such as brainstorming, debate, and hierarchical review.
Agents can write and execute Python code in local processes or isolated Docker containers. The framework handles code extraction from LLM outputs, runs it safely, captures stdout/stderr, and returns results to the conversation for iterative refinement.
UserProxyAgent supports three human input modes (ALWAYS, TERMINATE, and NEVER), letting developers control when a human can intervene, approve actions, or supply missing information during an agent conversation.
AutoGen Studio's web-based interface lets users configure agents, skills, and workflows through forms and drag-and-drop, then run them against real LLMs. It is ideal for prototyping, demos, and enabling non-programmers to experiment with multi-agent patterns.
Agents can be equipped with arbitrary Python functions or OpenAI-compatible tool schemas, letting them call APIs, query databases, invoke external services, and compose results within the conversation loop.
Pricing: the framework itself is free and open source; LLM usage is billed pay-per-token by the model provider.
Through late 2025 and into 2026, AutoGen has continued its v0.4+ architectural direction with a layered design separating AutoGen Core (event-driven runtime), AutoGen AgentChat (high-level patterns), and AutoGen Extensions (integrations). Microsoft has been aligning AutoGen more closely with its broader agentic stack, including Semantic Kernel and the Azure AI Agent Service, while preserving the open-source framework's independence. Recent releases have expanded support for async streaming, improved tool-calling reliability with newer frontier models, and added tighter integration with observability tools for tracing multi-agent conversations. AutoGen Studio has received updates to its workflow editor, and the research community continues to publish new benchmarks and reference patterns for agentic evaluation.