
FDM-1 Pricing & Plans 2026

Complete pricing guide for FDM-1. Compare all plans, analyze costs, and find the perfect tier for your needs.

Try FDM-1 Free → · Compare Plans ↓

Not sure if free is enough? See our Free vs Paid comparison →
Still deciding? Read our full verdict on whether FDM-1 is worth it →

💎 1 Paid Plan
⚡ No Setup Fees

Choose Your Plan

Enterprise

Custom (contact sales)


  • ✓ Full access to FDM-1 foundation model for computer use
  • ✓ 30 FPS native video inference for long-horizon tasks
  • ✓ CAD modeling, website automation, and multi-step workflow capabilities
  • ✓ OS checkpoint and forking VM infrastructure for test-time compute
  • ✓ Custom deployment and integration support
  • ✓ Research partnership and co-development options
Contact Sales →

Pricing sourced from FDM-1 · Last verified March 2026

Is FDM-1 Worth It?

✅ Why Choose FDM-1

  • First computer-use foundation model trained on internet-scale video (11M hours), versus the largest open computer-use dataset of under 20 hours of 30 FPS video
  • Native 30 FPS video processing enables continuous control such as smooth mouse movement and CAD operations, rather than discrete screenshot-by-screenshot reasoning
  • Highly efficient video encoder compresses nearly 2 hours of footage into just 1M tokens, unlocking minute-scale context windows
  • Unsupervised training via the inverse dynamics model removes the bottleneck of expensive contractor-labeled screenshots
  • Test-time compute via OS checkpoints and forking VMs lets the model retry from validated intermediate states on long-horizon tasks
  • Demonstrably general: the same model performs CAD modeling, website fuzzing, and real-world driving without task-specific RL environments

โš ๏ธ Consider This

  • โ€ข No public API, pricing page, or self-serve access โ€” gated to enterprise and research partners
  • โ€ข Capabilities are demonstrated through curated video clips rather than peer-reviewed benchmarks against established computer-use leaderboards
  • โ€ข Released February 23, 2026, so production track record, reliability, and safety guardrails are unproven at scale
  • โ€ข Inference at 30 FPS on minute-long video contexts implies significant GPU cost not disclosed publicly
  • โ€ข No documentation of supported operating systems, integrations, or developer tooling beyond the research blog post


Pricing FAQ

What is FDM-1 and who built it?

FDM-1 is a foundation model for general computer use built by Standard Intelligence (standard intelligence pbc), announced February 23, 2026. Unlike prior computer-use agents that fine-tune a vision-language model on screenshots, FDM-1 trains and infers directly on video at 30 FPS. It was trained on a portion of an 11-million-hour screen recording dataset labeled by a custom inverse dynamics model. The team positions it as the first fully general computer action model.

How is FDM-1 different from screenshot-based agents like Claude Computer Use or OpenAI's Operator?

Traditional computer-use agents fine-tune a VLM on contractor-annotated screenshots, which limits them to a few seconds of context, low framerates, and short-horizon tasks. FDM-1 instead trains directly on 30 FPS video and uses a video encoder that compresses ~2 hours into 1M tokens, giving it minute-scale context. It also avoids per-task reinforcement learning environments, learning unsupervised from the open internet's video corpus. Based on our analysis of 870+ AI tools, this is the only Automation entry that trains a custom video foundation model end-to-end for computer use.
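
To put the compression figures in perspective, here is a quick back-of-the-envelope check. Only the roughly 2 hours, 30 FPS, and 1M-token numbers come from the launch post; the per-frame rate and the screenshot-agent comparison (which assumes about 1,000 tokens per screenshot, a figure of our own choosing) are derived arithmetic, not published specs.

```python
# Back-of-the-envelope arithmetic for the compression claim above.
# Source figures: ~2 hours of 30 FPS video compressed into ~1M tokens.
FPS = 30
HOURS = 2
CONTEXT_TOKENS = 1_000_000

frames = FPS * HOURS * 3600                      # 216,000 frames in the window
tokens_per_frame = CONTEXT_TOKENS / frames       # ~4.6 tokens per frame

# For contrast: a screenshot-based agent spending ~1,000 tokens per image
# (an assumed, typical VLM image budget) fills the same context after
# ~1,000 screenshots, i.e. about 33 seconds of 30 FPS footage.
SCREENSHOT_TOKENS = 1_000
screenshot_window_s = CONTEXT_TOKENS / SCREENSHOT_TOKENS / FPS

print(f"frames per context window: {frames:,}")
print(f"tokens per frame:          {tokens_per_frame:.1f}")
print(f"screenshot-agent window:   {screenshot_window_s:.0f} s")
```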

What can FDM-1 actually do today?

Standard Intelligence demonstrated FDM-1 performing multi-action CAD sequences in Blender (including extruding faces on an n-gon to make a gear), exploring and fuzzing complex websites, and driving a car in the real world, all at 30 FPS. The CAD demo uses OS checkpoints created at successful operations (extrude, select, etc.) to enable test-time compute via a forking VM. The blog post emphasizes that capabilities consistently improve with scale, and the team frames the current model as the first step toward CAD, finance, engineering, and ML-research coworker agents.
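
The checkpoint-and-retry behaviour described in the CAD demo is easiest to picture as a small control loop. The sketch below is our own illustration of that pattern, not Standard Intelligence's code; every name in it (Snapshot, vm.snapshot, vm.restore, attempt_step, validate) is hypothetical.

```python
# Illustrative sketch of the checkpoint-and-retry pattern described above.
# All names and interfaces here are hypothetical, not an FDM-1 API.
from dataclasses import dataclass

@dataclass
class Snapshot:
    vm_state: bytes     # serialized VM/OS state captured at a validated point
    next_step: int      # index of the step to run from this snapshot

def run_long_horizon_task(vm, steps, attempt_step, validate, max_retries=3):
    """Run a multi-step task (e.g. a CAD sequence), forking the VM back to
    the last validated snapshot whenever a step fails its check."""
    checkpoint = Snapshot(vm.snapshot(), 0)
    for i, step in enumerate(steps):
        for _ in range(max_retries):
            attempt_step(vm, step)                  # model acts on live video
            if validate(vm, step):                  # e.g. "the extrude happened"
                checkpoint = Snapshot(vm.snapshot(), i + 1)
                break
            vm.restore(checkpoint.vm_state)         # roll back, spend more compute
        else:
            raise RuntimeError(f"step {i} failed after {max_retries} retries")
    return vm
```

Validated snapshots turn one long, fragile trajectory into a series of short, recoverable segments, which is what makes extra test-time compute pay off on long-horizon tasks.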

How much does FDM-1 cost and how do I access it?

FDM-1 has no published pricing or self-serve access as of the February 23, 2026 announcement. Standard Intelligence describes it as a research milestone in a blog post at si.inc/posts/fdm1/, and access appears to be limited to enterprise or research partnerships. Compared to other Automation tools in our directory that publish $20–$200/month tiers, FDM-1 sits firmly in the enterprise / contact-sales segment with no free or developer tier announced.

What are the technical components of FDM-1's training recipe?

The training recipe has three core components, all described in the launch post. First, a video encoder that compresses approximately 2 hours of 30 FPS video into 1 million tokens, enabling long-context training. Second, an inverse dynamics model that labels raw screen recordings with the actions that produced them, removing the need for contractor annotation. Third, a forward dynamics model that predicts future frames conditioned on actions, which is the component used to drive the agent at inference time.
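
A rough schematic of how those three pieces fit together is sketched below. The class names, method signatures, and the build_training_example helper are our own illustrative assumptions; no reference code or API has been published.

```python
# Schematic of the three-component recipe described above. Names and
# signatures are assumptions for illustration only.
from typing import Protocol, Sequence

Frame = bytes    # one raw 30 FPS video frame (placeholder type)
Token = int      # one encoder token
Action = dict    # e.g. {"type": "mouse_move", "x": 412, "y": 96}

class VideoEncoder(Protocol):
    def encode(self, frames: Sequence[Frame]) -> Sequence[Token]:
        """Compress ~2 hours of 30 FPS video into ~1M tokens."""

class InverseDynamicsModel(Protocol):
    def label(self, frames: Sequence[Frame]) -> Sequence[Action]:
        """Infer the actions that produced a raw screen recording,
        replacing contractor-labeled screenshots."""

class ForwardDynamicsModel(Protocol):
    def step(self, context: Sequence[Token], action: Action) -> Sequence[Token]:
        """Predict future frames (as tokens) conditioned on an action;
        per the launch post, this component drives the agent at inference."""

def build_training_example(encoder: VideoEncoder,
                           idm: InverseDynamicsModel,
                           recording: Sequence[Frame]):
    """Unsupervised pipeline: encode a raw recording and auto-label it,
    yielding (tokens, actions) pairs to train the forward dynamics model."""
    return encoder.encode(recording), idm.label(recording)
```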

Ready to Get Started?

AI builders and operators are tracking FDM-1 for long-horizon computer-use automation; access is currently limited to enterprise and research partnerships.

Try FDM-1 Now →

More about FDM-1

Review · Alternatives · Free vs Paid · Pros & Cons · Worth It? · Tutorial