
The questions every owner is asking about AI

The questions we hear most often from owners, operators, and HR leaders. Plain-English answers, updated quarterly. If your question isn't here, ask us.

Reviewed by Level Up Automate.

Getting started

Do we really need an AI policy?
If anyone at your company is using AI tools — and they are — yes. A policy is the difference between governed use and shadow use. It does not need to be long; a one-pager beats a 40-pager that no one reads.
Where should we start?
Find out what's already in use, write a one-page policy, hold a 30-minute team meeting, and put a quarterly review on the calendar. The first 30 days of work close 80% of the most common risks.
How long does this take?
For a small business with no policy today, expect two weeks of light work to get from zero to 'we have a policy and the team has read it.' Larger or regulated businesses take longer.
What does it cost to do this with help?
Most clients hire us for a fixed-fee engagement that runs $2,000–$10,000 depending on size and complexity. We can usually scope on a 30-minute call.

Policy and rules

Can my staff use ChatGPT, Claude, or Copilot for client work?
It depends on which version, which client data, and what the output is used for. ChatGPT's Free and Plus consumer tiers may use your prompts to train OpenAI's models unless you opt out — usually a no-go for client work. ChatGPT Team and Enterprise carry data-protection commitments. Anthropic's Claude consumer tiers have their own data-use settings worth checking, and Claude for Work / Team / Enterprise add formal contractual commitments. Microsoft Copilot inside your M365 tenant inherits your existing Microsoft data agreements. The safest default across any of these is a paid business tier plus a rule against pasting client identifiers.
Should employees use their personal ChatGPT or Claude account at work?
We recommend against it for anything involving company information. Either provide a paid Team account or restrict personal accounts to non-confidential brainstorming. Personal accounts are also a leak path that won't show up in a compliance audit.
Can we block AI tools at the firewall?
You can, but it usually backfires — staff move to personal devices and you lose visibility. Better: be clear about which tools are approved and make those tools easy to use.
How often should we review the policy?
Quarterly for the first year, annually after that. Re-review any time a major new tool enters your stack.

Vendors and data

How do we know if a vendor is safe?
Send our [12 vendor questions](/ai-governance/vendor-questions). A serious vendor answers in writing within a week. Lack of clear answers is a red flag.
Will our data be used to train AI models?
Sometimes yes, by default, in consumer tools. Most business tiers commit not to. The contract is what matters — not the marketing page. Get the commitment in writing.
What data should never go into AI tools?
Default no-go list: client identifiers, financial account numbers, salary information, source code from internal systems, and anything labeled Confidential. Adjust for your industry and regulators.
What about HIPAA, SOC 2, and other compliance regimes?
Compliance does not block AI use, but it does shape it. For HIPAA: only use AI vendors that will sign a Business Associate Agreement, and never paste PHI into a tool that hasn't signed one. For SOC 2 environments: document AI use in your risk register and treat it like any other vendor.

Staff and training

What if our team doesn't want to use AI?
Don't force it. Start with the curious employees, prove the value with two or three concrete wins, and the rest will come. Mandates create resentment; demonstrations create adoption.
What if our team is using AI behind our back?
They are. Treat the discovery as an opportunity, not a betrayal. Hold a non-punitive meeting, learn what they're doing, and channel it into your approved-tools list.
Do we need to retrain existing staff or only new hires?
Both, ideally. A 60-minute primer for the whole team plus a recorded version for onboarding new hires is the minimum sensible posture.

Risk and incidents

What's the worst that could happen?
Realistically: a customer gets wrong information from an AI-drafted email, a contract has a fabricated clause that goes unnoticed, or staff data leaks via a free meeting note-taker. None are fatal. All are recoverable if you respond well in the first 24 hours.
Do we need cyber insurance for AI?
Most cyber policies cover AI-related incidents the same way they cover traditional incidents. Talk to your broker about coverage for first-party errors as well as third-party data exposure.
What if a customer sues us over AI-generated output?
Rare but possible. Your defenses are stronger when you can show a written policy, staff training, vendor due diligence, and human review of customer-facing output. The point of governance is to make the lawsuit unnecessary.

Compliance and regulation

Does any law actually apply to small U.S. businesses?
Today: usually only if you're hiring in NYC, operating in regulated industries (insurance, healthcare, finance), or selling AI products into Colorado or the EU. Tomorrow: more states are following Colorado, so it pays to be ready.
Are we required to disclose AI use to customers?
Sometimes. The EU AI Act requires it for AI interactions and synthetic media. Several U.S. states require disclosure for AI in hiring and consumer-finance decisions. As a default best practice: disclose when AI is materially affecting a person's experience.

Working with Level Up Automate

Do you only work in Rhode Island and Eastern Connecticut?
Most of our in-person work is regional. Training, governance, and remote engagements are nationwide.
Are you a law firm?
No. We are an automation and operations partner. We translate regulations into practical operating decisions and bring in counsel when something genuinely needs a lawyer.
How do we get started?
Book a free 30-minute call. We'll listen first, scope second, and quote a fixed fee third.