Protecting client data when staff use AI
The simple framing: client data goes into approved tools only, and only the data needed for the task. Both halves matter.
Approved tools have a contract that says your data isn't used to train models. Free consumer tools usually don't.
Even in approved tools, paste the minimum needed. Names and account numbers rarely need to be in the prompt.
Make 'paste less' a habit, not a slogan.
The two-part rule
Approved tools first, minimum data second. Each protects against a different failure: approved tools defeat the 'we're training on your data' problem, and minimum data defeats the 'we leaked something embarrassing' problem. You need both.
Practical patterns staff can use
These are concrete techniques to teach in training.
- Replace names with [Client A], [Client B] when asking AI to draft something.
- Strip identifiers before pasting (account numbers, addresses, dates of birth); a small scrubbing script can handle the routine cases (see the sketch after this list).
- Summarize before sharing: ask AI to summarize a doc, then share the summary, not the doc.
- Use vendor-cleared upload paths (e.g., Microsoft Copilot in Office, ChatGPT Team, or Claude for Work) before consumer ones.
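For teams that want to turn 'strip identifiers' into more than a manual habit, here is a minimal sketch of what a pre-paste scrubber could look like. Everything in it is an assumption for illustration: the `scrub` function, the placeholder format, and the regex patterns are hypothetical, and a real version would need patterns matched to the identifiers your firm actually handles.

```python
import re

# Hypothetical pre-paste scrubber. The patterns below are illustrative,
# not exhaustive; adapt them to the identifiers your firm actually handles.

def scrub(text: str, client_names: list) -> str:
    """Return a copy of `text` with client names and common identifiers masked."""
    # Replace each known client name with a placeholder like [Client A].
    for i, name in enumerate(client_names):
        placeholder = f"[Client {chr(ord('A') + i)}]"
        text = re.sub(re.escape(name), placeholder, text, flags=re.IGNORECASE)

    # Mask long digit runs (account or card numbers): 8 or more digits.
    text = re.sub(r"\b\d{8,}\b", "[account number]", text)

    # Mask common date formats such as 04/12/1986 or 1986-12-04.
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{4}\b", "[date]", text)
    text = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[date]", text)

    return text


if __name__ == "__main__":
    draft = ("Please draft a follow-up letter to Jane Smith "
             "about account 0012345678, DOB 04/12/1986.")
    print(scrub(draft, client_names=["Jane Smith"]))
    # Prints: Please draft a follow-up letter to [Client A]
    #         about account [account number], DOB [date].
```

A script like this catches the routine slips, but it is a backstop, not a substitute for the habit: staff should still read what they are about to paste.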
What to actually monitor
You don't need a data loss prevention (DLP) system. You need open conversations and a clear escalation path. The strongest signal of safe behavior is whether staff feel comfortable asking 'can I paste this?' without fear of looking dumb.
Plain-English answers
Should we block consumer AI tools at the network level?
You can, but blocking on its own rarely changes behavior; it tends to push use onto personal devices where you have even less visibility. It works best as a backstop once staff have an approved tool that is genuinely convenient and know the two-part rule.
Want a hand getting this right?
A 30-minute conversation often saves weeks of guessing. We'll talk through your team, your data, and what to do first — no slide deck required.