© 2026 aitoolsatlas.ai. All rights reserved.

📚 Complete Guide

FDM-1 Tutorial: Get Started in 5 Minutes [2026]

Master FDM-1 with our step-by-step tutorial, detailed feature walkthrough, and expert tips.

Get Started with FDM-1 →
Full Review ↗

🔍 FDM-1 Features Deep Dive

Explore the key features that make FDM-1 powerful for automation workflows.

11-Million-Hour Video Training Corpus

What it does: Supplies the raw training data: an 11-million-hour screen recording dataset, a portion of which was used to train FDM-1, with actions labeled by a custom inverse dynamics model instead of human contractors.

Use case: Lets the model learn general computer use unsupervised from open-internet video, avoiding per-task reinforcement learning environments.

Video Encoder with 1M-Token / 2-Hour Compression

What it does: Compresses roughly 2 hours of 30 FPS video into 1 million tokens, enabling long-context training and inference.

Use case: Gives the agent minute-scale context for long-horizon tasks, where screenshot-based agents are limited to a few seconds of history.

Inverse Dynamics Model for Unsupervised Labeling

What it does: Labels raw screen recordings with the actions that produced them, removing the need for contractor annotation.

Use case: Turns unlabeled internet-scale video into supervised (frame, action) training data at scale.

Forward Dynamics Model at 30 FPS

What it does: Predicts future frames conditioned on actions; this is the component that drives the agent at inference time.

Use case: Executes multi-step tasks at full 30 FPS, such as CAD sequences in Blender or exploring and fuzzing complex websites.

OS Checkpointing and Forking VMs for Test-Time Compute

What it does: Snapshots the OS at successful operations and forks VMs so the agent can explore multiple continuations and roll back failed attempts.

Use case: Test-time compute for long action sequences; in the Blender gear demo, checkpoints are created after each successful operation such as extrude or select.

❓ Frequently Asked Questions

What is FDM-1 and who built it?

FDM-1 is a foundation model for general computer use built by Standard Intelligence (standard intelligence pbc), announced February 23, 2026. Unlike prior computer-use agents that fine-tune a vision-language model on screenshots, FDM-1 trains and infers directly on video at 30 FPS. It was trained on a portion of an 11-million-hour screen recording dataset labeled by a custom inverse dynamics model. The team positions it as the first fully general computer action model.

How is FDM-1 different from screenshot-based agents like Claude Computer Use or OpenAI's Operator?

Traditional computer-use agents fine-tune a VLM on contractor-annotated screenshots, which limits them to a few seconds of context, low framerates, and short-horizon tasks. FDM-1 instead trains directly on 30 FPS video and uses a video encoder that compresses ~2 hours into 1M tokens, giving it minute-scale context. It also avoids per-task reinforcement learning environments, learning unsupervised from the open internet's video corpus. Based on our analysis of 870+ AI tools, this is the only Automation entry that trains a custom video foundation model end-to-end for computer use.
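The headline compression figure implies a very aggressive token budget per frame. A back-of-the-envelope check (using only the numbers quoted above; the real encoder's internals are not public):

```python
# Rough arithmetic for the claimed compression: ~2 hours of 30 FPS
# video into ~1M tokens. Figures come from the announcement's headline
# numbers, not a published FDM-1 spec.
FPS = 30
HOURS = 2
TOKEN_BUDGET = 1_000_000

frames = FPS * HOURS * 3600               # 216,000 frames in 2 hours
tokens_per_frame = TOKEN_BUDGET / frames  # ~4.6 tokens per frame

print(f"{frames:,} frames -> {tokens_per_frame:.1f} tokens/frame")
```

For comparison, a typical vision-transformer tokenizer spending a few hundred tokens per image would need tens of millions of tokens for the same clip, which is why aggressive temporal compression is the enabler for minute-scale context.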

What can FDM-1 actually do today?

Standard Intelligence demonstrated FDM-1 performing multi-action CAD sequences in Blender (including extruding faces on an n-gon to make a gear), exploring and fuzzing complex websites, and driving a car in the real world — all at 30 FPS. The CAD demo uses OS checkpoints created at successful operations (extrude, select, etc.) to enable test-time compute via a forking VM. The blog post emphasizes that capabilities consistently improve with scale, and the team frames the current model as the first step toward CAD, finance, engineering, and ML-research coworker agents.
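The checkpoint-and-fork idea can be sketched as a tiny search loop: snapshot state before each candidate action, keep the action if a verifier accepts the result, otherwise roll back and try the next candidate. This is a toy illustration with an invented `VM` class and `verify` callback; the real FDM-1 infrastructure forks full OS virtual machines and is not publicly documented.

```python
import copy

class VM:
    """Toy stand-in for a checkpointable OS VM (hypothetical API)."""
    def __init__(self):
        self.state = []           # sequence of committed actions

    def snapshot(self):
        return copy.deepcopy(self.state)

    def restore(self, snap):
        self.state = copy.deepcopy(snap)

    def apply(self, action):
        self.state.append(action)

def attempt(vm, action, verify):
    """Fork: snapshot, try an action, keep it only if verification passes."""
    snap = vm.snapshot()
    vm.apply(action)
    if verify(vm.state):
        return True
    vm.restore(snap)              # roll back to the last good checkpoint
    return False

# Try candidate actions per step; commit the first that verifies.
vm = VM()
plan = ["select_face", "extrude", "bevel"]
for step in plan:
    for candidate in (step, step + "_retry"):
        if attempt(vm, candidate, verify=lambda s: s[-1] in plan):
            break
print(vm.state)
```

The design point is that verification happens against concrete OS state after each operation, so a failed branch costs only the forked VM, not the whole task.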

How much does FDM-1 cost and how do I access it?

FDM-1 has no published pricing or self-serve access as of the February 23, 2026 announcement. Standard Intelligence describes it as a research milestone in a blog post at si.inc/posts/fdm1/, and access appears to be limited to enterprise or research partnerships. Compared to other Automation tools in our directory that publish $20–$200/month tiers, FDM-1 sits firmly in the enterprise / contact-sales segment with no free or developer tier announced.

What are the technical components of FDM-1's training recipe?

The training recipe has three core components, all described in the launch post. First, a video encoder that compresses approximately 2 hours of 30 FPS video into 1 million tokens, enabling long-context training. Second, an inverse dynamics model that labels raw screen recordings with the actions that produced them, removing the need for contractor annotation. Third, a forward dynamics model that predicts future frames conditioned on actions, which is the component used to drive the agent at inference time.
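The three stages above compose into a simple data pipeline: the inverse dynamics model (IDM) labels consecutive frame pairs with actions, and the encoded frames plus those labels become prediction targets for the forward dynamics model (FDM). A minimal sketch with stub models (all function names here are illustrative placeholders, not FDM-1 APIs):

```python
def label_with_idm(idm, frames):
    """Stage 2: infer the action between each consecutive frame pair."""
    return [idm(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]

def fdm_training_pairs(encode, frames, actions):
    """Stages 1 + 3: encode frames to tokens, then form
    (current tokens, action) -> next tokens prediction targets."""
    tokens = [encode(f) for f in frames]
    return [((tokens[i], actions[i]), tokens[i + 1])
            for i in range(len(actions))]

# Toy run on a 4-frame clip with stub models.
frames = ["f0", "f1", "f2", "f3"]
idm = lambda a, b: f"action({a}->{b})"
encode = lambda f: f"tok({f})"

actions = label_with_idm(idm, frames)
pairs = fdm_training_pairs(encode, frames, actions)
print(len(actions), len(pairs))  # 4 frames -> 3 labeled actions, 3 pairs
```

Note the asymmetry: the IDM only has to be good enough to label data offline, while the FDM is the model that actually runs at 30 FPS at inference time.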

đŸŽ¯

Ready to Get Started?

Now that you know how to use FDM-1, it's time to put this knowledge into practice.

✅

Try It Out

Sign up and follow the tutorial steps

📖

Read Reviews

Check pros, cons, and user feedback

⚖️

Compare Options

See how it stacks against alternatives

Start Using FDM-1 Today

Follow our tutorial and master this powerful automation tool in minutes.

Get Started with FDM-1 →
Read Pros & Cons

Tutorial updated March 2026