AI in plain English: a glossary for business owners

Every AI term a business owner is likely to hear in 2026, defined in 1–2 sentences. We update this quarterly. If a term you need is missing, email us and we'll add it.

Reviewed by Level Up Automate.

A

Acceptable Use Policy
A document telling staff how they can and cannot use AI tools at work. Usually one page; lives next to your employee handbook.
Agentic AI (also: AI agents)
AI systems that take multi-step actions on a user's behalf — for example, booking travel or processing a refund — rather than just answering a question.
AI Act
Usually refers to the European Union's AI Act, the first comprehensive AI law in a major economy. Sorts AI uses into four risk tiers and applies to anyone whose AI affects EU residents.
AI Governance
The combination of policies, processes, and oversight that lets a company use AI safely. The smaller the company, the simpler this can be.
AI Incident
Any time AI causes harm — wrong information sent to a customer, data leaked, an embarrassing output — or comes close to doing so. Worth documenting even if no real damage was done.
AI Policy
A short document setting the rules for AI use at your company. The best ones are one page long, written in plain English, and reviewed quarterly.
AI Readiness
How prepared your business is to use AI safely and effectively today. Most small businesses overestimate this.
Algorithmic Discrimination
When an AI system's decisions consistently disadvantage a protected group. A legal risk in hiring, lending, housing, and similar 'consequential decision' uses.
Anthropic
The AI safety company behind the Claude family of AI assistants and the Claude Code coding tool. Headquartered in San Francisco. Considered one of the two leading consumer-AI vendors alongside OpenAI.

B

Bias (in AI)
Patterns in AI output that reflect skewed training data or flawed model design — often along lines of gender, race, geography, or age. Real but manageable in most small-business uses.

C

Chatbot
A software application that converses in text. Modern chatbots are usually built on large language models (LLMs).
ChatGPT (also: GPT chat)
The most widely used consumer AI assistant, made by OpenAI. Plans: Free and Plus (individual), Team and Enterprise (business, with data protection). Its closest direct competitor is Anthropic's Claude.
Claude (also: Anthropic Claude, Claude AI)
Anthropic's family of AI assistants. Plans: Free, Pro (individual), Team, and Enterprise. Widely considered the strongest assistant for long-form writing, analysis, and conversational tasks. Claude Free and Pro do not train on user conversations by default — a more conservative posture than the equivalent ChatGPT consumer tiers.
Claude Code (also: Anthropic Claude Code)
Anthropic's coding assistant — a terminal- and IDE-integrated tool that lets developers delegate code-writing, refactoring, and review tasks to Claude. Peer to GitHub Copilot and Cursor. Typically covered by the same approved-tools list as a company's other AI assistants.
Colorado AI Act
A Colorado state law, passed in 2024 and taking effect in 2026, requiring risk management and consumer notice for 'high-risk' AI systems making consequential decisions. The first broad AI law in the United States.
Confidence Threshold
A score above which AI output is allowed to act automatically; below it, a human reviews first. Setting these thresholds well is most of the work in building safe AI workflows.
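In code, the gate itself is tiny — the hard part is picking the number. A minimal sketch (the threshold value and all names here are illustrative, not from any specific product):

```python
# Minimal sketch of a confidence-threshold gate. The threshold value and
# function names are made up for illustration.
REVIEW_THRESHOLD = 0.90  # tune per workflow; start conservative

def route(ai_output: str, confidence: float) -> str:
    """High-confidence output acts automatically; the rest goes to a person."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto-send"
    return "human-review"

print(route("Your refund has been processed.", 0.97))    # auto-send
print(route("Your warranty covers water damage.", 0.62)) # human-review
```

In practice you would start with a high threshold, watch what gets auto-sent, and only lower it as the workflow earns trust.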
Copilot
Microsoft's AI assistant, integrated into Office, Windows, and Edge. Many versions exist; the one in your business depends on your Microsoft 365 plan. Note: GitHub Copilot is a different (and older) Microsoft product aimed at developers — peer to Anthropic's Claude Code and Cursor.

D

Data Leak
Confidential information ending up somewhere it shouldn't. AI tools create new leak paths because employees paste data into them, often without thinking.
Deepfake
Synthetic audio, video, or images that convincingly impersonate a real person. Increasingly used in scams targeting finance and HR teams.

E

Embedding
A way of representing text, images, or other data as numbers so AI can compare and search them. Powers most AI search and recommendation features.
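To make "representing text as numbers" concrete, here is a toy illustration. Real embeddings come from a model and have hundreds of dimensions; the three-number vectors below are invented for the example:

```python
import math

# Toy illustration of comparing embeddings. Real embeddings are produced
# by a model; these short vectors are made up to show the idea.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

invoice  = [0.9, 0.1, 0.0]  # pretend embedding of "unpaid invoice"
reminder = [0.8, 0.2, 0.1]  # pretend embedding of "payment reminder"
recipe   = [0.0, 0.2, 0.9]  # pretend embedding of "banana bread recipe"

print(cosine_similarity(invoice, reminder))  # high score: similar meaning
print(cosine_similarity(invoice, recipe))    # low score: unrelated
```

Search and recommendation features work by embedding everything once, then returning the items whose numbers score closest to the query's.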
EU AI Act
The European Union's comprehensive AI law. Applies to any AI system whose output is used in the EU, even if the company is outside Europe.

F

Fine-tuning
Adjusting an AI model with additional training data so it performs better at a specific task. Usually done by vendors; rarely something a small business does directly.
Foundation Model (also: Base model, General-purpose AI)
A large, general-purpose AI model that other AI products are built on. Examples: GPT-4, Claude, Gemini.

G

Generative AI (also: GenAI)
AI that produces new content — text, images, audio, video, code — rather than only classifying or scoring existing content.
GPT
Generative Pre-trained Transformer — the family of language models from OpenAI. Numbers (GPT-3.5, GPT-4, GPT-5) refer to successive generations. Anthropic's equivalent is the Claude family of models; Google's is Gemini.
Guardrails
Rules and constraints built around an AI system so it stays within safe behavior. Often a combination of model rules, software checks, and human review.

H

Hallucination
When AI confidently produces information that is wrong or invented. The single biggest reason every customer-facing AI output needs human review.
High-Risk AI System
Term used in the Colorado AI Act and EU AI Act for AI that makes consequential decisions about people in regulated areas like employment, lending, healthcare, and housing.
Human-in-the-Loop (also: HITL)
Workflow design where a human reviews or approves AI output before it acts. Standard practice for any high-stakes AI use.

I

Impact Assessment
A written analysis of how an AI system will affect users — covering risks, mitigations, and benefits. Required by several state laws for high-risk AI.
ISO/IEC 42001
An international standard for AI management systems. You can be formally certified against it. Useful mainly for companies with enterprise customers asking about it.

J

Jailbreak
A prompt or trick that gets an AI tool to ignore its safety rules. A risk to be aware of, especially for tools used by the public.

L

LLM (Large Language Model)
An AI model trained on huge amounts of text that can read, write, summarize, and answer questions. ChatGPT, Claude, Gemini, and Copilot are all built on LLMs.

M

Machine Learning (also: ML)
A broad category of AI in which systems learn patterns from data rather than following hand-written rules. AI tools you already use every day (spam filters, recommendations) are built with machine learning.
Model Drift
When an AI model's performance changes over time as inputs or the world shifts. A reason to monitor AI use even after it's working well.

N

NIST AI RMF
The U.S. National Institute of Standards and Technology's voluntary framework for managing AI risk. Increasingly the de facto baseline for U.S. business AI governance.

O

Open-source AI
AI models whose weights are published openly so anyone can run them. Different from proprietary models like GPT or Claude. Important for businesses needing on-premise control.

P

Prompt
The instruction you give to an AI tool. Better prompts produce better output.
Prompt Engineering
The practice of writing prompts that consistently get good results. Less of a specialized job than it sounds; most staff can learn the basics in an hour.
Prompt Injection
A security attack where hidden instructions in a document, image, or webpage trick an AI into doing something unintended. Affects AI tools that read external content.

R

RAG (Retrieval-Augmented Generation)
An AI design pattern where the model is given relevant documents to read before answering a question. Reduces hallucinations and lets AI 'know' your business's documents.
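The shape of the pattern fits in a few lines. This sketch is entirely hypothetical: a naive keyword match stands in for the embedding search a real system would use, and the final string is just the prompt that would be sent to an LLM:

```python
# Minimal sketch of the RAG pattern. All names and documents are invented;
# a real system retrieves via embeddings and sends the prompt to an LLM.
DOCS = {
    "returns-policy.txt": "Customers may return items within 30 days with a receipt.",
    "shipping.txt": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    """Naive keyword lookup standing in for an embedding search."""
    for name, text in DOCS.items():
        if any(word in text.lower() for word in question.lower().split()):
            return text
    return ""

def build_prompt(question: str) -> str:
    context = retrieve(question)
    # In a real system this prompt goes to an LLM; here we only show its shape.
    return f"Answer using only this context: {context}\nQuestion: {question}"

print(build_prompt("How long do customers have to return items?"))
```

Because the model answers from the retrieved text instead of memory, it can cite your actual policy rather than guessing at one.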
Responsible AI
An umbrella term for the practices that keep AI use ethical, safe, and aligned with business values. Includes governance, fairness, transparency, and accountability.
Risk Assessment
A written analysis of the risks of using a specific AI tool or workflow. Covers data, accuracy, dependence, and mitigation.

S

Shadow AI
AI use inside a company that was never authorized. Almost always present at companies that haven't yet communicated a policy.
SOC 2
A third-party audit of a vendor's security practices. Common ask when evaluating AI vendors. Type II is more rigorous than Type I.
Synthetic Media
Audio, video, or images generated by AI rather than recorded from reality. Includes deepfakes but also legitimate uses like AI voiceovers and product imagery.

T

Tokens
The chunks of text AI models read and produce. Roughly four characters or three-quarters of a word per token. Most pricing is per token.
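The rule of thumb above is enough for back-of-envelope cost estimates. A minimal sketch (the price is a placeholder — check your vendor's current per-token rates, and note that real tokenizers vary by model):

```python
# Back-of-envelope token and cost estimator using the "~4 characters per
# token" rule of thumb. Real tokenizers vary by model and language.
def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

def estimate_cost(text: str, dollars_per_million_tokens: float) -> float:
    # The price you pass in is a placeholder; use your vendor's actual rate.
    return estimate_tokens(text) * dollars_per_million_tokens / 1_000_000

email = "Hi, just checking in on the invoice we sent last week." * 20
print(estimate_tokens(email))
```

The takeaway for budgeting: token costs are usually tiny per message, and the real spend driver is volume.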
Training Data
The data an AI model learned from. Whether your business's data becomes training data for a vendor's model is a critical contract question.
Transparency
Telling people when they're interacting with AI, what data it uses, and how decisions are made. Required in some jurisdictions for certain AI uses.

V

Vendor Due Diligence
The process of evaluating an AI vendor's security, data handling, and reliability before signing a contract. Worth one to four hours per vendor.

W

Watermarking
Embedding a subtle, machine-detectable signal into AI-generated content so it can be identified later. Increasingly a regulatory expectation for synthetic media.

Z

Zero-shot
When an AI handles a task without being given examples first. Most everyday AI use is zero-shot. Adding even one example often improves results dramatically.