
The EU AI Act, for U.S. businesses

The EU AI Act is real, broad, and extraterritorial, but it doesn't apply to every U.S. business. Here's how to tell whether you're in scope and what to do if you are.

Reviewed by Level Up Automate. This is general information, not legal advice. Confirm specifics with your own counsel.
TL;DR
  • If you put AI products into the EU market, or your AI's output is used in the EU, you may be in scope — even if your company is U.S.-only.

  • The Act sorts AI uses into four risk tiers: prohibited, high-risk, limited-risk, minimal-risk. Most small-business uses sit in the bottom two tiers.

  • Penalties are real (up to 7% of global turnover at the high end) but enforcement targets the riskiest uses first.

Does this even apply to us?

The EU AI Act applies if any of the following are true: you place an AI system on the EU market, your AI's output is used in the EU, or you are an EU-based deployer of an AI system. There are exceptions for purely personal/non-professional use and some research.

For most U.S. small and mid-size businesses with no EU customers and no EU users of their AI features, the direct impact is low today. But if your SaaS has European customers, or your customers use your AI tooling on European data, this is worth a careful read.

The four risk tiers

The Act sorts AI uses by how risky they are to people. Most small-business uses sit in the bottom two tiers; obligations rise sharply as you move up.

  • Prohibited — social scoring, certain emotion recognition in workplaces and schools, untargeted scraping of facial images to build recognition databases. Don't do these. Period.
  • High-risk — AI in employment, education, essential services, law enforcement, migration, justice, and AI used as a safety component in certain regulated products. Significant obligations: risk-management systems, data governance, technical documentation, human oversight, transparency, accuracy and robustness, and registration in an EU database.
  • Limited-risk — AI that interacts with people, generates synthetic content, or uses emotion recognition. Disclosure obligations: tell users they're interacting with AI, label deepfakes.
  • Minimal-risk — most everyday AI uses (spam filters, AI-enabled video games, simple recommendations). No specific obligations under the Act.
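
Whatever your tier mix, a practical first step is an inventory: list each AI feature, who it affects, whether its output reaches the EU, and a working tier. A minimal sketch in TypeScript — the features, fields, and tier assignments below are illustrative examples, not legal determinations:

```typescript
// Illustrative AI-use inventory. Tier assignments here are examples,
// not legal conclusions; confirm each one with counsel.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface AiUse {
  feature: string;      // what the AI does
  audience: string;     // who it affects
  euExposure: boolean;  // is the output used in the EU?
  tier: RiskTier;       // your working classification
}

const inventory: AiUse[] = [
  { feature: "Support chatbot", audience: "customers", euExposure: true, tier: "limited" },
  { feature: "Spam filtering", audience: "internal", euExposure: false, tier: "minimal" },
  { feature: "Resume screening", audience: "job applicants", euExposure: false, tier: "high" },
];

// Surface anything that needs attention: EU-exposed uses above minimal risk.
const needsReview = inventory.filter((u) => u.euExposure && u.tier !== "minimal");
console.log(needsReview.map((u) => u.feature)); // ["Support chatbot"]
```

Even a list this rough makes the conversation with counsel faster and cheaper.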

What you might owe if you're in scope

For limited-risk uses, the practical impact is usually a disclosure obligation: a clear notice that users are interacting with AI, or that media is AI-generated.
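
In a chat product, that can be as small as a notice shown before the first AI-generated reply. A minimal sketch — `ChatMessage`, `startSession`, and the wording are our own illustrative choices, not anything the Act prescribes:

```typescript
// Minimal sketch: show a one-time AI disclosure before any AI reply.
// Names and wording here are hypothetical, for illustration only.
interface ChatMessage {
  role: "system" | "assistant" | "user";
  text: string;
}

const AI_DISCLOSURE =
  "You're chatting with an AI assistant. A human can take over on request.";

function startSession(): ChatMessage[] {
  // Surfacing the notice up front, before the first AI-generated reply,
  // is the simplest way to satisfy a "tell users it's AI" obligation.
  return [{ role: "system", text: AI_DISCLOSURE }];
}

console.log(startSession()[0].text);
```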

For high-risk uses, the obligations are heavy: risk management, technical documentation, EU registration, conformity assessment, human-oversight mechanisms, ongoing monitoring. If you might be in this tier, get counsel involved early — the documentation alone takes months.

What about general-purpose AI (foundation models)?

The Act has separate obligations for providers of general-purpose AI models (think OpenAI, Anthropic, Google). If you deploy these models in your products, you are typically a deployer, not a provider — so the heaviest obligations fall on the model maker. But you may still owe transparency and documentation depending on your use.
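
If you do end up needing to label model output, one lightweight pattern is to tag generated artifacts with provenance metadata in your own API responses, so your UI (and your customers') can label them as AI-generated. A sketch under that assumption — the field names are ours; the Act doesn't prescribe a schema:

```typescript
// Sketch: wrap model output with provenance metadata so downstream
// consumers can label it as AI-generated. Field names are illustrative.
interface GeneratedArtifact {
  content: string;
  aiGenerated: true;   // machine-readable flag for labeling
  model: string;       // which upstream model produced it
  generatedAt: string; // ISO timestamp, useful for audit trails
}

function wrapOutput(content: string, model: string): GeneratedArtifact {
  return {
    content,
    aiGenerated: true,
    model,
    generatedAt: new Date().toISOString(),
  };
}

const artifact = wrapOutput("Draft reply text...", "example-model-v1");
console.log(JSON.stringify(artifact, null, 2));
```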

Common questions

Plain-English answers

We sell SaaS in the U.S. only. Are we in scope?
Probably not, unless an EU-based customer uses your AI features and the output is used in the EU. The trigger is reach into the EU market, not just U.S. operations.
What are the penalties?
Up to €35M or 7% of global annual turnover, whichever is higher, for prohibited-use violations, with lower caps for other violations. Enforcement is staggered as the Act phases in.
Should we just stop doing business in the EU?
Almost never the right answer. Most uses are minimal- or limited-risk and the obligations are manageable. Map your AI uses against the four tiers first, then decide.
Next step

Want a hand getting this right?

A 30-minute conversation often saves weeks of guessing. We'll talk through your team, your data, and what to do first — no slide deck required.