Open-source RPC framework (Apache 2.0) that lets AI agents call functions behind firewalls and across private networks without opening ports. SDKs for TypeScript, Go, and Python use long polling to support long-running agent tasks, with MCP compatibility built in.
AI agents need to call functions. Sometimes those functions live behind a firewall, inside a Kubernetes cluster, or on a private VPC that the agent can't reach. AgentRPC solves this by providing a hosted RPC layer that bridges network boundaries. Your agent calls a function through AgentRPC's server; AgentRPC routes it to whatever machine registered that function. No open ports. No VPN configuration. No firewall rules.
Most agent frameworks assume your tools run on the same machine or are accessible via public APIs. Reality is different. Your database sits in a private VPC. Your internal APIs require VPN access. Your ML models run on a GPU server with no public endpoint. AgentRPC lets you register functions from any of these locations and expose them to your agents through a single hosted server.
The long-polling approach matters for AI workflows. Standard HTTP requests time out after 30-60 seconds. An agent task might take 5 minutes: query a database, process results, generate a report. AgentRPC's SDKs use long polling to keep connections alive beyond HTTP timeout limits. Your function runs as long as it needs, and the agent gets the result when it's ready.
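The general pattern can be sketched in a few lines. This is an illustrative sketch of long polling in the abstract, not AgentRPC's actual wire protocol or SDK API: the client repeatedly asks for a result, each request blocks only for a bounded window, and no single HTTP request has to outlive its timeout.

```typescript
// Illustrative sketch of the long-polling pattern (not AgentRPC's real
// protocol). Each poll returns either the finished result or "not yet";
// the client re-polls until the job completes.
type PollResult<T> = { done: true; value: T } | { done: false };

async function longPoll<T>(
  poll: () => Promise<PollResult<T>>,
  intervalMs = 100,
): Promise<T> {
  while (true) {
    const res = await poll(); // server holds this request open up to its window
    if (res.done) return res.value;
    await new Promise((r) => setTimeout(r, intervalMs)); // brief pause, then re-poll
  }
}

// Simulated server: the "job" finishes on the third poll.
let calls = 0;
const fakeServer = async (): Promise<PollResult<string>> =>
  ++calls < 3 ? { done: false } : { done: true, value: "report ready" };

longPoll(fakeServer, 10).then((v) => console.log(v)); // prints "report ready"
```

Because each individual request is short, the pattern survives proxies and gateways that cut off long connections, while the logical call stays open for minutes.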
Setup takes two steps. First, register your function with AgentRPC from whatever machine hosts it:
```typescript
import { AgentRPC } from '@agentrpc/sdk';

const rpc = new AgentRPC({ apiSecret: 'your-key' });

rpc.register({
  name: 'queryDatabase',
  handler: async (params) => {
    /* your logic */
  },
});

await rpc.listen();
```
Second, your agent calls that function through AgentRPC's API or MCP server. The SDK handles routing, health monitoring, and retries. Your function never needs a public endpoint.
SDKs ship for TypeScript, Go, and Python, with .NET in development. You register functions in whichever language your backend uses. A Python data processing pipeline, a Go microservice, and a TypeScript API handler all become available to your agents through the same RPC layer.
This matters for teams with polyglot architectures. Your ML team writes Python. Your backend team writes Go. Your frontend team writes TypeScript. AgentRPC lets AI agents call functions in all three without any team changing their stack.
AgentRPC works with MCP (Model Context Protocol) and OpenAI SDK-compatible agents out of the box. The TypeScript SDK includes a built-in MCP server that exposes registered tools to any MCP-compatible host, including Claude Desktop and Cursor. OpenAI-compatible tool definitions also work with Anthropic, LiteLLM, and OpenRouter.
This makes AgentRPC a complement to MCP, not a competitor. MCP defines how agents discover and call tools. AgentRPC handles the network plumbing to make those calls work across private networks.
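For agents on the OpenAI-compatible side, a registered function surfaces as an ordinary tool definition. The sketch below shows what an OpenAI-style function-calling entry for the `queryDatabase` function from the earlier example could look like; the field names follow the OpenAI tools schema, but the description and parameters are invented for illustration rather than taken from AgentRPC's docs.

```typescript
// Hypothetical OpenAI-style tool definition for the registered
// `queryDatabase` function (illustrative shape; the description and
// parameter schema here are assumptions, not AgentRPC output).
const queryDatabaseTool = {
  type: "function" as const,
  function: {
    name: "queryDatabase",
    description: "Run a read-only query against the internal database.",
    parameters: {
      type: "object",
      properties: {
        sql: { type: "string", description: "SQL statement to execute" },
      },
      required: ["sql"],
    },
  },
};

// An OpenAI-compatible client would pass this in its `tools` array;
// the actual invocation is then routed through AgentRPC to whichever
// machine registered the function.
console.log(queryDatabaseTool.function.name); // prints "queryDatabase"
```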
The hosted platform provides tracing, metrics, and event logging for every function call. You can see which functions are being called, how long they take, which ones fail, and why. Health tracking monitors registered functions and automatically fails over to healthy instances if one goes down.
AgentRPC's SDKs and core components are open-source under the Apache 2.0 license, hosted on GitHub. The hosted RPC server at api.agentrpc.com is a managed service that handles function registration, health monitoring, and routing.
AgentRPC is a relatively new project with a small user community. Public discussion of production deployments is limited, and real-world case studies are scarce. The documentation covers basics well but lacks depth on advanced deployment patterns, security hardening, and scaling.
For simpler scenarios where your functions are publicly accessible, AgentRPC adds unnecessary complexity. Direct HTTP calls or standard function calling works fine. AgentRPC earns its keep only when network boundaries are the actual problem.
The managed server adds a network hop. For functions that take milliseconds, this latency overhead matters. For agent tasks that take seconds or minutes, it's negligible.
AgentRPC solves a specific problem: letting AI agents call functions across private networks without VPNs or open ports. Valuable for teams with polyglot architectures and private infrastructure. Open-source under Apache 2.0 with MCP compatibility. Still early-stage with a small community and limited production track record.
Register functions running behind firewalls, in private VPCs, or inside Kubernetes clusters and make them callable by AI agents through a hosted RPC server — no port forwarding, VPN, or firewall rules required
Use Case: A data science team registers ML model inference functions running on private GPU servers, making them available to customer-facing AI agents without exposing internal infrastructure
SDKs use long polling to keep connections alive beyond standard HTTP timeout limits, supporting agent tasks that take minutes rather than seconds
Use Case: An agent triggers a database migration function that takes 8 minutes to complete — AgentRPC maintains the connection and returns results when ready, unlike standard HTTP, which would time out
TypeScript SDK includes a ready-to-use MCP server that automatically exposes all registered functions to any MCP-compatible AI host like Claude Desktop or Cursor
Use Case: A developer registers internal API functions with AgentRPC and immediately uses them from Claude Desktop through MCP without writing any additional integration code
Register functions in TypeScript, Go, or Python through idiomatic SDKs, allowing polyglot teams to expose backend services in whatever language they're written in
Use Case: A team exposes Python data processing, Go microservices, and TypeScript API handlers through a single RPC layer without rewriting any service code
Hosted platform provides tracing, metrics, and event logging for every function call, with automatic health tracking that detects failures and routes to healthy instances
Use Case: A DevOps team monitors which agent-called functions are failing and how long calls take, and gets automatic failover when a registered service goes down
Pricing: free tier available; custom enterprise pricing (contact sales)
Teams with databases, ML models, or internal APIs in private VPCs who need AI agents to call those functions without opening ports or configuring VPN tunnels
Organizations running services across AWS, GCP, and Azure who need a single RPC layer for agents to reach functions in any cloud environment
Engineering teams using Python, Go, and TypeScript who want to expose functions from all languages to AI agents through one unified interface
AI workflows involving database queries, report generation, or data processing that exceed standard HTTP timeout limits and need persistent connections
We believe in transparent reviews. Here's what AgentRPC doesn't handle well:
How is AgentRPC different from gRPC? AgentRPC is designed for AI agent workflows. It handles long-running functions beyond HTTP timeout limits, integrates natively with MCP and OpenAI-compatible SDKs, and works across private networks without port configuration. gRPC requires both endpoints to be network-accessible to each other, which doesn't work for agents calling functions behind firewalls.
Do I need AgentRPC if my tools are already reachable? No. AgentRPC adds value only when network boundaries prevent direct function calls. If your tools are publicly accessible, standard HTTP calls or local MCP servers work fine without the extra layer.
Does the hosted server add latency? Yes — it adds a network hop. For functions that complete in milliseconds, this is noticeable (tens of milliseconds added). For typical agent tasks that take seconds or minutes — database queries, report generation, API chains — the overhead is negligible relative to function execution time.
Is AgentRPC open source? Yes. The SDKs and core components are open-source under the Apache 2.0 license on GitHub. The hosted RPC server at api.agentrpc.com is a managed service for routing, health monitoring, and observability. You can self-host the SDKs and point to your own server if needed.
How does AgentRPC compare to Temporal? Temporal is a general-purpose workflow orchestration engine with state management, retries, and complex workflow graphs. AgentRPC is simpler and purpose-built for the specific problem of AI agents calling functions across network boundaries. If you need full workflow orchestration, use Temporal. If you just need agents to reach private functions, AgentRPC is lighter weight.
Stable multi-language SDK support (TypeScript, Go, Python) with .NET in development. Built-in MCP server in TypeScript SDK for Claude Desktop and Cursor. OpenAI-compatible tool definitions supporting Anthropic, LiteLLM, and OpenRouter. Observability features with tracing and metrics on hosted platform.
How AgentRPC compares to alternatives:
Workflow orchestration (Temporal): an enterprise durable execution platform designed for AI agent orchestration with guaranteed reliability, state management, and human-in-the-loop workflows.
Deployment and hosting: serverless compute platforms for model inference, jobs, and agent tools.