No free plan. The cheapest way in is the Pay-As-You-Go tier: usage-based billing with no upfront cost. Consider free alternatives in the AI memory & search category if budget is tight.
Yes. The hosted agents feature supports Microsoft Agent Framework, LangGraph, Semantic Kernel, or custom code deployment on the same managed runtime. You can bring your existing agent codebase and run it on Azure's managed infrastructure without rewriting for Azure-specific orchestration. This dual-path support (no-code in the portal, code-first through hosted agents) is unusual among cloud agent platforms: you are not locked into a single framework or forced to rewrite existing agent logic to gain Azure's managed infrastructure, security, and memory capabilities.
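The claim above is that agents written in any of these frameworks, or in plain custom code, run on one shared runtime. A conceptual sketch of the framework-agnostic contract that implies — note the `Agent` protocol, adapter class, and `run_on_runtime` function below are hypothetical illustrations, not the actual Foundry SDK:

```python
from typing import Protocol

class Agent(Protocol):
    """Minimal contract a managed runtime could require of any framework."""
    def invoke(self, message: str) -> str: ...

class LangGraphAdapter:
    """Hypothetical wrapper: delegates to an existing graph-style agent unchanged."""
    def __init__(self, graph):
        self.graph = graph
    def invoke(self, message: str) -> str:
        return self.graph(message)  # existing agent logic, no rewrite needed

class EchoAgent:
    """Custom-code path: any class exposing invoke() plugs into the same runtime."""
    def invoke(self, message: str) -> str:
        return f"echo: {message}"

def run_on_runtime(agent: Agent, message: str) -> str:
    # Stand-in for the managed runtime: it depends only on the shared contract,
    # not on which framework produced the agent.
    return agent.invoke(message)

print(run_on_runtime(EchoAgent(), "hi"))
print(run_on_runtime(LangGraphAdapter(lambda m: m.upper()), "hi"))
```

The design point is structural typing: the runtime never imports a framework, so swapping LangGraph for Semantic Kernel or custom code changes nothing on the hosting side.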
Create prompt-based agents directly in the Microsoft Foundry portal by configuring tools, knowledge sources, and workflows through the UI. You can attach SharePoint sites, Azure AI Search indexes, Bing grounding, OpenAPI tools, and Logic Apps actions without writing code. Deploy with a single click. This path is best suited for simple agents, rapid prototyping, and business teams that need conversational AI without engineering resources. When requirements grow more complex, agents can be transitioned to code-based deployment on the same managed runtime without starting over.
Foundry's managed memory, available in public preview, automatically extracts key information from conversations, consolidates it across agent sessions, and retrieves it based on the current request context. Agents remember customer preferences, previous requests, and ongoing project state without you building a vector store, summarization pipeline, or custom retrieval logic. This can save the weeks of infrastructure work that teams using LangGraph or CrewAI typically spend building and maintaining their own memory systems. The service handles storage, indexing, and context-aware recall natively within the Azure platform.
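To make the extract/consolidate/retrieve loop concrete, here is a deliberately naive sketch of the DIY version such teams otherwise build. Everything here is hypothetical: keyword heuristics stand in for LLM-based extraction, and word overlap stands in for vector search.

```python
class NaiveAgentMemory:
    """Toy stand-in for the extract/consolidate/retrieve pipeline that a
    managed memory service handles natively."""

    def __init__(self):
        self.facts = []  # consolidated facts persisted across sessions

    def extract(self, utterance: str) -> None:
        # Extraction step: real systems use an LLM; here we keep sentences
        # that look like durable facts via a crude keyword heuristic.
        for sentence in utterance.split("."):
            s = sentence.strip()
            if s and any(kw in s.lower() for kw in ("prefer", "project", "always")):
                self.facts.append(s)

    def retrieve(self, query: str, k: int = 2) -> list:
        # Retrieval step: rank stored facts by word overlap with the query
        # (a vector store would rank by embedding similarity instead).
        q = set(query.lower().split())
        ranked = sorted(self.facts,
                        key=lambda f: len(q & set(f.lower().split())),
                        reverse=True)
        return ranked[:k]

mem = NaiveAgentMemory()
mem.extract("The customer prefers email updates. The weather is nice.")
mem.extract("Project Atlas ships in Q3.")
print(mem.retrieve("what does the customer prefer"))
```

Even this toy version needs storage, an extraction policy, and a ranking function; production versions add embeddings, summarization, and TTL/consolidation logic, which is the infrastructure work the managed service absorbs.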
Both charge for model tokens with no separate orchestration fee. Azure AI Agent Service's GPT-4o pricing runs $2.50/1M input tokens and $10/1M output tokens at pay-as-you-go rates, comparable to Bedrock's Claude and Llama pricing tiers. Azure adds unique value through Agent Commit Units — pre-purchase volume discounts for committed monthly spend starting at $5,000/month. AWS counters with a broader model marketplace. For high-volume enterprise workloads, ACU discounts can meaningfully reduce total cost versus Bedrock's strictly pay-as-you-go model.
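A back-of-the-envelope calculator using the GPT-4o rates quoted above makes the comparison tangible. The ACU discount rate below is a hypothetical placeholder for illustration, not a published figure; actual committed-spend discounts depend on your agreement.

```python
# Pay-as-you-go GPT-4o rates quoted above: $2.50/1M input, $10/1M output tokens.
INPUT_RATE = 2.50 / 1_000_000    # dollars per input token
OUTPUT_RATE = 10.00 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens: int, output_tokens: int, discount: float = 0.0) -> float:
    """Token cost in dollars, optionally reduced by a committed-spend discount."""
    base = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return base * (1 - discount)

# Example workload: 1B input + 200M output tokens per month.
payg = monthly_cost(1_000_000_000, 200_000_000)
# Hypothetical 15% ACU-style discount for illustration only.
committed = monthly_cost(1_000_000_000, 200_000_000, discount=0.15)
print(payg)       # 4500.0
print(committed)  # 3825.0
```

At this volume the workload clears the $5,000/month ACU entry point only at higher usage, which is the scenario where committed pricing starts to beat a strictly pay-as-you-go model like Bedrock's.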
Foundry Agent Service primarily supports models available through Azure OpenAI Service (GPT-4, GPT-4o, GPT-5 family) plus a curated Foundry Models catalog that includes select Meta Llama, Mistral, and partner models. Model availability is narrower than AWS Bedrock's marketplace. Verify your preferred models are available in your target Azure region before committing, as catalog availability varies by region and new models are added on a rolling basis through the Foundry Models program.
See Azure AI Agent Service plans and find the right tier for your needs.
See Pricing Plans →
Still not sure? Read our full verdict →
Last verified March 2026