AI Governance / U.S. Laws Tracker

U.S. AI laws by state, in plain English

Most state AI laws are aimed at high-risk uses — hiring, lending, insurance — not at every business that uses ChatGPT or Claude. Here's a calm reading of where you actually have obligations.

Reviewed by Level Up Automate. This is general information, not legal advice. Confirm specifics with your own counsel.
TL;DR
  • Most state laws hit a narrow band of high-risk AI uses: employment decisions, consumer-finance decisions, health decisions.

  • If you don't make those kinds of decisions with AI, your obligations are usually disclosure and reasonable governance.

  • This page is updated quarterly — confirm specifics with counsel before relying on it for compliance.

How to read state AI laws without getting overwhelmed

Almost every state AI law, enacted or proposed, asks the same handful of questions: Are you using AI to make a 'consequential decision' about a person? Do you tell the person? Can they appeal? Have you done a risk assessment?

The practical effect for most small and mid-size businesses: if you use AI to draft a contract or summarize meetings, you're outside scope. If you use AI to screen resumes, set credit limits, or make health recommendations, you're squarely in scope and need to read the statute carefully.

Colorado AI Act (effective 2026)

The Colorado AI Act applies to 'high-risk AI systems' that make consequential decisions about consumers in employment, education, financial services, healthcare, housing, insurance, legal services, and essential government services.

If you operate in those areas with AI, expect requirements around: risk management programs, impact assessments, consumer disclosures, and the ability for consumers to appeal decisions. Read our [Colorado AI Act page](/ai-governance/frameworks/colorado-ai-act) for the operational summary.

Illinois (employment focus)

Illinois has been an early mover on AI in employment, starting with the Artificial Intelligence Video Interview Act's rules on AI analysis of video interviews. If you use AI in hiring decisions in Illinois, expect notice-and-consent obligations and limits on certain biometric uses.

New York City Local Law 144 (employment)

Not a state law, but worth flagging because it affects so many companies hiring in New York City. Employers using automated decision tools in hiring must conduct an annual independent bias audit and disclose use to candidates.

California (general consumer rights + automated decisions)

California's existing privacy law (CCPA / CPRA) gives consumers rights around automated decisions. Expect disclosure obligations and the ability for consumers to opt out or get human review for certain decisions. State-level AI-specific legislation is active and changes often.

Other states with movement

Multiple states have introduced bills addressing AI in employment, deepfakes, election content, generative AI watermarking, and consumer-facing automated decisions. Bill text varies dramatically. The safest stance is a written governance program (policy, risk assessments, audit log); that keeps you in good shape for most reasonable enacted requirements.

  • Connecticut, Texas, Tennessee, Utah, Virginia: active proposals or narrow enacted laws.
  • If you operate in regulated industries (insurance, finance, healthcare), your sector regulator is often the tighter constraint.
  • Subscribe to your state's labor and consumer-protection agency notices for hiring and consumer-finance changes.

Common questions

Plain-English answers

We're a small business in Rhode Island. Do any of these apply to us?
Not directly today, in most cases. But if you sell into Colorado or hire candidates in NYC, you may be in scope. Where your customers and candidates are often matters more than where you're headquartered.
How often is this page updated?
Quarterly. Confirm specifics with counsel before acting — laws move faster than any tracker.
Next step

Want a hand getting this right?

A 30-minute conversation often saves weeks of guessing. We'll talk through your team, your data, and what to do first — no slide deck required.