Master OpenHands with our step-by-step tutorial, detailed feature walkthrough, and expert tips.
Explore the key features that make OpenHands powerful for AI coding workflows.
Works with any LLM provider, including Anthropic, OpenAI, Google, open-source models via Ollama, and custom API endpoints, letting teams choose models based on cost, privacy, or task requirements.
A security-conscious enterprise runs OpenHands with a self-hosted Llama model for proprietary code, while using Claude for open-source contributions where privacy is less critical.
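As a sketch of how that model choice is made in practice: OpenHands reads LiteLLM-style model settings, which can be supplied as environment variables when launching the app. The image tag, port, and variable values below are illustrative, not authoritative; check the docs for your version.

```shell
# Point OpenHands at a self-hosted Ollama model (illustrative values;
# LLM_MODEL uses LiteLLM naming, e.g. "ollama/llama3" or "anthropic/...").
docker run -it --rm \
  -e LLM_MODEL="ollama/llama3" \
  -e LLM_BASE_URL="http://host.docker.internal:11434" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  --add-host host.docker.internal:host-gateway \
  docker.all-hands.dev/all-hands-ai/openhands:latest
```

Switching to a hosted provider is then a matter of changing `LLM_MODEL` and supplying an `LLM_API_KEY`, with no other changes to the setup.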
Agents execute in isolated Docker or Kubernetes containers with full terminal access, file system manipulation, web browsing, and API interaction — all auditable and controllable.
Running autonomous dependency upgrades in an isolated environment where the agent can test changes, run the full test suite, and verify nothing breaks before pushing to the repository.
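A dependency-upgrade run like the one above can be expressed as a single headless task. This is a sketch assuming OpenHands' headless entrypoint (`python -m openhands.core.main`) and environment-variable configuration; the model id is a placeholder.

```shell
# Headless, single-task run: the agent edits files inside its sandbox,
# runs the tests, and stops. Nothing reaches the repository unless the
# task asks for a push.
export LLM_MODEL="anthropic/claude-3-5-sonnet-20241022"  # any LiteLLM model id
export LLM_API_KEY="sk-..."
python -m openhands.core.main \
  -t "Upgrade all minor-version dependencies in requirements.txt, run the test suite, and revert any upgrade that breaks a test."
```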
Native integration with GitHub and GitLab for issue-driven agent workflows, plus Slack and Jira connectivity for triggering agent tasks from existing team communication channels.
A team labels a GitHub issue as 'openhands' and the agent automatically picks it up, creates a branch, implements the fix, and opens a PR with a description of changes made.
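Triggering that flow from the command line is just a label operation. The sketch below assumes the GitHub CLI (`gh`) is installed and that the repository's OpenHands resolver is configured to watch the `openhands` label; the issue number and repo name are hypothetical.

```shell
# Hand issue #123 to the agent by applying the trigger label.
gh issue edit 123 --repo your-org/your-repo --add-label openhands

# Later, look for the branch and PR the agent opened in response.
gh pr list --repo your-org/your-repo
```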
Cloud SDK and APIs enable running thousands of parallel agent instances for batch operations like codebase-wide refactoring, mass dependency updates, or large-scale test generation.
An engineering leader kicks off parallel OpenHands agents across 50 microservice repositories to update a shared library version and fix any resulting test failures.
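At the shell level, a fan-out like that can be approximated by launching one headless run per repository, with the cloud SDK being the managed equivalent. This sketch assumes a `repos.txt` file of clone URLs and a library called `shared-lib`, both hypothetical.

```shell
# Launch up to 8 concurrent agent runs, one per repository.
# repos.txt: one git clone URL per line (hypothetical input file).
xargs -P 8 -I {} sh -c '
  dir=$(basename {} .git)
  git clone --depth 1 {} "$dir" &&
  cd "$dir" &&
  python -m openhands.core.main \
    -t "Update shared-lib to the latest release and fix any failing tests"
' < repos.txt
```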
Agents scan dependencies for known vulnerabilities, propose fixes, run tests to verify compatibility, and open reviewable pull requests — automating the security maintenance loop.
A DevSecOps team configures OpenHands to automatically create PRs for critical CVE patches across all repositories within hours of vulnerability disclosure.
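One way to wire that loop together for a single Python repository, assuming `pip-audit` as the scanner and the headless entrypoint (both illustrative choices, not the only way to configure this):

```shell
# Scan, then hand the findings to the agent as a concrete task.
pip-audit --format json --output audit.json || true  # non-zero exit means findings
python -m openhands.core.main \
  -t "Read audit.json, bump each vulnerable package to its fixed version, run the tests, and open a pull request summarizing the CVEs addressed."
```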
The open-source version is completely free under the MIT license. The hosted cloud's Individual tier is also free, with bring-your-own-key or at-cost model access, so you pay only for LLM inference. Enterprise deployments with VPC, SSO, and support require custom pricing.
OpenHands is model-agnostic and open-source, meaning you can use any LLM and self-host the entire platform. Copilot's agent is tightly integrated with GitHub's ecosystem but locked to GitHub's infrastructure and model choices. OpenHands offers more flexibility; Copilot offers deeper GitHub integration.
OpenHands is language-agnostic — it works with any language your chosen LLM can handle. The agent has terminal access and can install language-specific toolchains, run compilers, and execute test suites for any language.
Yes. Self-hosted OpenHands runs entirely in your infrastructure with no external data transmission. The cloud version supports GitHub and GitLab authentication with standard OAuth scopes. Enterprise tier adds VPC deployment for complete data isolation.
Now that you know how to use OpenHands, it's time to put this knowledge into practice.
Sign up and follow the tutorial steps
Check pros, cons, and user feedback
See how it stacks against alternatives
Follow our tutorial and master this powerful AI coding tool in minutes.
Tutorial updated March 2026