AI Governance / Acceptable Use

The AI rules every employee needs to know

Your staff are smart adults who want to do their jobs well. Give them the rules in plain English and they'll follow them. Bury the rules in legalese and they'll guess.

Reviewed by Level Up Automate.
TL;DR
  • Three rules cover most situations: approved tools, off-limits data, and human review for anything client-facing.

  • Tell employees what they can do, not just what they can't. Lists of prohibitions don't change behavior.

  • Make 'when in doubt, ask' a default — and make sure asking is never penalized.

Why this exists

AI tools can save your team hours every week. They can also accidentally leak client data, produce confidently wrong information, or create work that no one has actually checked. This page is the rules of the road so people can use AI without worrying every time they hit Enter.

What you can do (the green list)

These uses are encouraged. Don't ask permission; just go.

  • Drafting internal documents — meeting notes, memos, proposals — that you will edit before sending.
  • Summarizing long internal documents to save reading time.
  • Brainstorming ideas, outlines, or alternative phrasings.
  • Looking up general information you would have Googled.
  • Translating internal content between languages.
  • Generating example code, scripts, or formulas — to be tested before use.
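That last item deserves a concrete picture. Here is a hypothetical sketch of what "tested before use" means in practice: the function below stands in for something an AI assistant might draft for you, and the checks underneath verify it against answers you can work out by hand before it touches real work.

```python
# Hypothetical example: an AI assistant drafted this helper to
# compute the percentage change between two numbers.
def pct_change(old: float, new: float) -> float:
    if old == 0:
        raise ValueError("old value must be non-zero")
    return (new - old) / old * 100

# Before using it anywhere, check it against cases you can verify by hand.
assert pct_change(100, 110) == 10.0   # 100 -> 110 is a 10% increase
assert pct_change(200, 150) == -25.0  # 200 -> 150 is a 25% decrease
```

A few hand-checked cases like these take a minute and catch the most common AI mistakes: wrong sign, off-by-one, or a silently swapped argument.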

What requires a person to check it (the yellow list)

These uses are fine, but the AI's output must be reviewed by a human before it goes anywhere.

  • Anything sent to a client, prospect, or partner — emails, proposals, decks.
  • Anything that becomes a public statement — website copy, social posts, press materials.
  • Anything that goes into a contract, an offer letter, or an HR document.
  • Customer support replies that recommend a product, an action, or a financial decision.

What is not allowed (the red list)

If you're tempted to do one of these, stop and ask first. The list is short on purpose.

  • Pasting client names, contact info, or any PII into a tool that isn't on the approved list.
  • Pasting financial information — invoices, account numbers, banking details — into any AI tool.
  • Pasting source code from internal systems into consumer AI tools (use the approved coding assistant).
  • Using AI to impersonate a person, a customer, or a colleague.
  • Using AI to generate synthetic media (images, audio, video) that could be mistaken for real.

Approved tools

These are the tools your company has cleared. Using anything outside this list — even a free version of a tool you use at home — requires approval from [owner role]. We're not trying to block creativity; we just need to know where company information is going.

  • [Tool 1: e.g. ChatGPT Team or Anthropic's Claude (Team/Pro) — for general drafting and summarization]
  • [Tool 2: e.g. Microsoft Copilot — for email, Word, Excel inside our tenant]
  • [Tool 3: e.g. an approved coding assistant]
  • Other tools require a 5-minute conversation. Email [contact] and you'll get an answer within one business day.

When AI gets it wrong

AI tools are confident in a way that can be misleading. They will sometimes invent facts, miscount, or confuse two similar topics. If you spot something wrong, do not pass it on — fix it, or escalate to your manager.

If wrong AI output reached a client, that is not a 'gotcha' moment for the employee — it is a fix-it moment for the company. Tell your manager. We will work the issue, and we may update this page so others don't hit the same trap.

Personal AI tools

What you do with AI on your own time, on your own devices, is your business. The line we draw: anything related to your work at this company should run through approved tools. That includes meeting notes you take on your phone, drafts you write at home, and ideas for a presentation that you bounce off ChatGPT, Claude, or any other consumer AI.

Common questions

Plain-English answers

Can I use ChatGPT, Claude, or Copilot on my personal account for work?
Not for content involving client information, financial data, or anything confidential — regardless of which assistant. For general brainstorming with no sensitive content, ask first — the answer might be yes, but it depends on what you're working on and which tool. Personal accounts on any of these (ChatGPT Free/Plus, Claude Free/Pro, Copilot consumer) leave your work outside the contracts our company has signed.
I think a coworker is breaking these rules. What should I do?
Talk to your manager. We are not looking to punish people; we want to catch issues before a customer does. People who report in good faith are protected from retaliation under our standard policies.
Can I run a meeting through an AI note-taker?
Only if the note-taker is on the approved tools list. Many of these tools record audio and store it in third-party clouds — we need to know which ones are touching our conversations.
What if AI gives me information about a customer I shouldn't have?
Stop, don't act on it, and tell your manager. This is rare but it happens. We treat it as a near-miss and update our controls.
Next step

Want a hand getting this right?

A 30-minute conversation often saves weeks of guessing. We'll talk through your team, your data, and what to do first — no slide deck required.