AWS open-source SDK for building AI agents in Python and TypeScript with model-driven tool orchestration, multi-provider LLM support, and native AWS deployment options.
Strands Agents is an open-source AI agent SDK developed by Amazon Web Services that provides a model-driven approach to building AI agents. Released in May 2025, the SDK has been downloaded over 14 million times and is available for both Python and TypeScript. Unlike rigid framework-based approaches, Strands lets the underlying language model dynamically decide which tools to use and in what order, making agent behavior more natural and adaptive.
The SDK supports multiple LLM providers including Amazon Bedrock, Anthropic, OpenAI, Ollama, LiteLLM, and any OpenAI-compatible endpoint, giving developers flexibility to switch providers without code changes. Strands ships with built-in tools for file operations, shell commands, HTTP requests, code execution, RAG retrieval, and AWS service interactions. Custom tools are created with a simple Python decorator pattern.
Strands includes native conversation memory management, OpenTelemetry observability integration, and supports multi-agent orchestration patterns including hierarchical delegation, parallel execution, swarm coordination, and graph-based workflows with Agent-to-Agent (A2A) communication. The Agent-as-Tool pattern enables building hierarchical architectures where agents can delegate subtasks to other agents.
For production deployment, Strands integrates seamlessly with AWS services: Bedrock AgentCore for managed agent hosting, Lambda for serverless execution, EKS for containerized deployment, and EC2 for VM-based hosting. Enterprise security features include Bedrock Guardrails and AWS IAM integration. The SDK also supports Model Context Protocol (MCP) for connecting to external tool servers. Enterprise adopters include Smartsheet, Swisscom, and Eightcap; Eightcap reports cutting investigation time from 30 minutes to 45 seconds and saving $5M in operational costs.
Strands Labs, announced in February 2026, introduced experimental features including AI Functions, which let developers define agents using natural language specifications instead of code: developers write pre- and post-conditions in Python that validate behavior, and the system generates working implementations from the specification.
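To make the pre-/post-condition idea concrete, here is a minimal sketch of contract-style validation in plain Python. The `precondition`/`postcondition` decorator names and the stubbed function body are illustrative assumptions, not the actual Strands Labs API.

```python
# Illustrative pre-/post-condition decorators; NOT the Strands Labs API.
from functools import wraps

def precondition(check):
    """Reject inputs that fail `check` before the function runs."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if not check(*args, **kwargs):
                raise ValueError(f"precondition failed for {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return deco

def postcondition(check):
    """Reject results that fail `check` after the function returns."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not check(result):
                raise ValueError(f"postcondition failed for {fn.__name__}")
            return result
        return wrapper
    return deco

@precondition(lambda text: isinstance(text, str) and text.strip() != "")
@postcondition(lambda summary: len(summary) <= 100)
def summarize(text: str) -> str:
    # In AI Functions, this body would be generated from the
    # natural-language spec; here it is stubbed for illustration.
    return text[:100]
```

The conditions bracket the generated implementation: bad inputs are rejected before the call and out-of-contract outputs after it, which is what lets a natural-language spec be checked against executable behavior.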
Strands Agents fills a gap for teams wanting AWS-native agent development with provider flexibility. The model-driven approach produces more adaptive agents than rigid workflow frameworks, while the 14M+ downloads signal strong adoption. Best for Python/TypeScript teams already on AWS who want a lightweight, composable agent SDK. LangChain offers more community resources; CrewAI is more opinionated and easier for non-developers.
The LLM dynamically selects and sequences tools based on the task rather than following hardcoded workflows, enabling more natural and adaptive agent behavior that adjusts its approach based on intermediate results.
Use Case:
A customer support agent dynamically decides whether to search a knowledge base, query a database, or escalate to a human based on the conversation context, without rigid if/then workflow rules.
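The routing described above can be sketched as a toy dispatch loop. The "model" is stubbed here as a keyword heuristic, and the tool names are hypothetical; in a real agent, the LLM itself chooses the tool at each step.

```python
# Toy illustration of model-driven tool selection: the model picks the
# next tool at runtime instead of following a hardcoded if/then workflow.
def search_kb(query):  return f"KB articles about {query!r}"
def query_db(query):   return f"DB records for {query!r}"
def escalate(query):   return f"Escalated to a human: {query!r}"

TOOLS = {"search_kb": search_kb, "query_db": query_db, "escalate": escalate}

def model_decide(query):
    """Stand-in for the LLM's tool choice; a real agent asks the model."""
    if "refund" in query:
        return "escalate"
    if "order" in query:
        return "query_db"
    return "search_kb"

def handle(query):
    tool_name = model_decide(query)   # the model picks the tool...
    return TOOLS[tool_name](query)    # ...the runtime just dispatches
```

The workflow logic lives in the model's decision, not in the dispatch code, which is why the agent can adjust its approach based on intermediate results.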
Works with Amazon Bedrock, Anthropic, OpenAI, Ollama, LiteLLM, and any OpenAI-compatible API. Switch providers by changing a single configuration without modifying agent logic or tool definitions.
Use Case:
A company develops agents on local Ollama models during development, deploys to Bedrock for production, and can switch to Anthropic if pricing or performance changes with zero code modifications.
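A minimal sketch of that configuration-only switch, using stubbed clients: the provider names follow the article, but the factory below is an illustration, not the Strands model-provider API.

```python
# Provider selection driven purely by configuration; the agent logic and
# tool definitions never change. Clients are stubbed as strings here.
import os

def make_model(provider: str, model_id: str):
    """Return a (stubbed) model client for the configured provider."""
    factories = {
        "bedrock":   lambda: f"bedrock:{model_id}",
        "anthropic": lambda: f"anthropic:{model_id}",
        "openai":    lambda: f"openai:{model_id}",
        "ollama":    lambda: f"ollama:{model_id}",
    }
    return factories[provider]()

# Local dev defaults to Ollama; production sets MODEL_PROVIDER=bedrock.
provider = os.environ.get("MODEL_PROVIDER", "ollama")
model = make_model(provider, "llama3")
```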
Ships with 20+ ready-to-use tools for file I/O, shell commands, HTTP requests, code execution, RAG retrieval, and AWS service interactions. Extend with custom tools using a simple @tool Python decorator.
Use Case:
A data pipeline agent uses built-in file and HTTP tools to fetch data, a custom @tool-decorated function to transform it, and the built-in code executor to validate results, all in one agent.
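A minimal sketch of decorator-based tool registration, echoing the `@tool` pattern the article describes; this registry is an illustration of the idea, not the Strands implementation.

```python
# Illustrative @tool registry: the decorator exposes a function's name and
# docstring so the model can discover and call it.
TOOL_REGISTRY = {}

def tool(fn):
    """Register `fn` as a tool the agent's model can invoke."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "description": (fn.__doc__ or "").strip(),
    }
    return fn

@tool
def transform(rows: list) -> list:
    """Uppercase every string field in the fetched rows."""
    return [r.upper() for r in rows]

# The agent runtime hands the registry's names/descriptions to the model,
# then dispatches whatever call the model chooses:
result = TOOL_REGISTRY["transform"]["fn"](["a", "b"])
```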
Supports hierarchical delegation, parallel execution, swarm coordination, and graph-based workflows. The Agent-as-Tool pattern lets agents delegate subtasks to specialized sub-agents with A2A communication.
Use Case:
A research agent delegates web scraping to a browser agent, data analysis to a Python agent, and report writing to a content agent, coordinating all three in parallel and merging results.
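The Agent-as-Tool pattern with parallel fan-out can be sketched as below: each sub-agent is just a callable the coordinator treats as a tool. The agent names are hypothetical, and a real Strands agent would invoke an LLM inside each one.

```python
# Sketch of a coordinator delegating to three sub-agents in parallel and
# merging their results. Sub-agents are stubbed as plain functions.
from concurrent.futures import ThreadPoolExecutor

def browser_agent(task):  return f"scraped: {task}"
def python_agent(task):   return f"analyzed: {task}"
def content_agent(task):  return f"report on: {task}"

def research_coordinator(topic):
    subtasks = [(browser_agent, topic), (python_agent, topic),
                (content_agent, topic)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda t: t[0](t[1]), subtasks))
    return " | ".join(results)   # merge the three sub-agent outputs
```

Because each sub-agent exposes the same callable interface as an ordinary tool, the coordinator can compose them hierarchically without special-casing "agent" versus "tool".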
Deep integration with Bedrock AgentCore for managed hosting, Lambda for serverless execution, EKS for containers, and EC2 for VMs. Includes Bedrock Guardrails for content safety and IAM for access control.
Use Case:
Deploy a customer-facing agent to Bedrock AgentCore with auto-scaling, content guardrails to prevent inappropriate responses, and IAM policies restricting which AWS resources the agent can access.
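For the serverless path, a Lambda-hosted agent boils down to a thin handler around the agent call. The event/response shape below is standard Lambda-behind-API-Gateway; `run_agent` is a stub standing in for an actual Strands agent invocation.

```python
# Hedged sketch of hosting an agent behind AWS Lambda. `run_agent` is a
# placeholder for the real agent call.
import json

def run_agent(prompt: str) -> str:
    return f"agent reply to: {prompt}"   # stub for the real agent

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")
    return {
        "statusCode": 200,
        "body": json.dumps({"reply": run_agent(prompt)}),
    }
```

IAM policies attached to the Lambda execution role then bound which AWS resources the agent's tools can touch, independent of the agent code itself.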
Built-in Model Context Protocol client support for connecting to thousands of external tool servers. Native OpenTelemetry integration provides tracing, logging, and metrics for debugging agent behavior in production.
Use Case:
Connect an agent to an MCP-compatible database tool server while monitoring every tool call, LLM invocation, and error through CloudWatch dashboards with full request tracing.
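The per-call tracing idea can be sketched with a wrapper that records a span for each tool invocation. In production, this role is played by OpenTelemetry, which the article says Strands integrates with natively; the in-memory `SPANS` list here is a stand-in for a real exporter.

```python
# Illustrative tracing wrapper: one recorded span per tool call, with
# name and duration. An OTel exporter would replace the SPANS list.
import time
from functools import wraps

SPANS = []

def traced(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append({"name": fn.__name__,
                          "duration_s": time.monotonic() - start})
    return wrapper

@traced
def query_database(sql):
    return f"rows for: {sql}"

query_database("SELECT 1")
```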
$0 — Developers and teams building AI agents with full control over deployment and infrastructure
Pay-per-use — Production deployments needing managed infrastructure, auto-scaling, and enterprise support through AWS
Strands Labs launched in February 2026 with experimental AI Functions that let developers define agents using natural language specifications instead of code. The SDK passed 14 million downloads, gained enhanced MCP client support for connecting to thousands of external tool servers, and improved its multi-agent orchestration patterns and A2A communication.
AI Agent Builders
Open-source Python framework that orchestrates autonomous AI agents collaborating as teams to accomplish complex workflows. Define agents with specific roles and goals, then organize them into crews that execute sequential or parallel tasks. Agents delegate work, share context, and complete multi-step processes like market research, content creation, and data analysis. Supports 100+ LLM providers through LiteLLM integration and includes memory systems for agent learning. The project has 48K+ GitHub stars and an active community.
AI Agent Builders
Graph-based workflow orchestration framework for building reliable, production-ready AI agents with deterministic state machines, human-in-the-loop capabilities, and comprehensive observability through LangSmith integration.
Multi-Agent Builders
Microsoft's open-source framework for building multi-agent AI systems with asynchronous, event-driven architecture.
AI Agent Builders
OpenAI's official open-source framework for building agentic AI applications with minimal abstractions. Production-ready successor to Swarm, providing agents, handoffs, guardrails, and tracing primitives that work with Python and TypeScript.
AI Agent Builders
Production-grade Python agent framework that brings FastAPI-level developer experience to AI agent development. Built by the Pydantic team, it provides type-safe agent creation with automatic validation, structured outputs, and seamless integration with Python's ecosystem. Supports all major LLM providers through a unified interface while maintaining full type safety from development through deployment.