
When AI goes wrong, here's what to do

Almost every AI incident at a small business is recoverable if you act in the first 24 hours. This is the playbook your team can follow without a war room.

Reviewed by Level Up Automate.
TL;DR
  • Stop the bleed first: pause the tool, contain the data, document what happened.

  • Tell affected customers in plain English. Speed and honesty matter more than legal polish.

  • Within 30 days, fix the underlying gap so the same incident can't recur.

The first 60 minutes

Whoever first notices the problem is the temporary owner. Their job is not to fix everything — it's to stop the bleed.

  • Pause or disable the tool that caused the problem.

  • If client data has been exposed, do not delete anything yet — preserve what's there for later review.

  • Send a calm internal note to the owner or COO and the person responsible for IT or security.

  • Do not email customers yet — collect facts first.

The first 24 hours

In the first day, you want answers to four questions: what went wrong, who was affected, what data or output was involved, and whether it's still happening.

Assign one person to each question and have them report back in writing. Even a few sentences are enough. The point is a written record while memories are fresh — months from now, you'll need it.

Communicating with customers

If a customer was given wrong information or had data exposed, tell them — quickly. The companies that recover well share three traits: they call before the customer notices, they describe the issue in plain English, and they tell the customer what's been done so it can't happen again.

Lawyer language ('we regret to inform you that an incident may have occurred') signals you're hiding something. Plain English ('our AI tool gave you the wrong delivery date — here's what really happened, and here's what we're doing') signals you're a serious partner.

The 30-day fix

Within 30 days, every incident gets a written 'never again' note. One page: what happened, why, and what changed so it can't recur. File it with your AI policy.

This is the single most valuable artifact governance produces. Six months from now, when an insurer or a customer asks how you handle AI risk, you hand them the policy plus the never-again file. That's a more compelling story than any framework certification.

When to call a lawyer

Most small AI incidents do not need legal counsel. Call one if: regulated personal data (PHI, financial account numbers, EU/UK personal data) was exposed; a customer is threatening litigation; or the incident involved a vendor breach where contractual liability is in play. For everything else, your insurance broker is the more useful first call.

Common questions

Plain-English answers

What counts as an incident?
Any time AI causes harm — wrong information sent to a customer, data exposed to the wrong place, an embarrassing output that reached anyone outside the company. Near-misses count too; document them.
Do I have to disclose this publicly?
Usually no — most disclosure obligations are tied to specific regulated data types in specific jurisdictions. Your insurance broker and (if needed) counsel can sort this out in an hour.
How do we run a tabletop exercise on this?
Take a hypothetical from the news, walk it through this playbook with your leadership team, and time how long each step takes. Most companies do this once a year. We're happy to run one for you.
Next step

Want a hand getting this right?

A 30-minute conversation often saves weeks of guessing. We'll talk through your team, your data, and what to do first — no slide deck required.