If you run a small business and you use AI — chatbots, automated hiring tools, customer scoring, content generation, anything — you now have compliance obligations. Not theoretically. Not "someday." Right now, with hard deadlines in 2026 that carry real fines.

The EU AI Act is the biggest one. Its high-risk system requirements take full effect on August 2, 2026. Fines are tiered: up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices, and up to €15 million or 3% for violating the high-risk requirements. But the EU isn't alone. US states are passing their own AI laws at a pace that's hard to track — Colorado, Illinois, New York City, and others already have enforceable rules on the books.

The problem for small businesses isn't that the rules are complicated. It's that the guidance is written for enterprises with legal departments and compliance teams. If you're a 5-person shop using ChatGPT for customer emails and an AI tool to screen resumes, nobody's telling you what specifically applies to you and what doesn't.

This guide fixes that. We'll walk through the regulations that actually matter for SMBs, how to figure out if your AI usage falls into a regulated category, and the concrete steps to get compliant before the deadlines hit.

01 The Regulatory Landscape Right Now

There isn't one AI law. There are dozens, across multiple jurisdictions, at different stages of enforcement. Here's what's live and what's coming.

EU AI Act — The Big One

The EU AI Act is the world's first comprehensive AI regulation. It uses a risk-based framework: the higher the risk your AI system poses to people, the stricter the rules. The rollout is phased:

- February 2, 2025: bans on prohibited AI practices and the AI literacy requirement took effect
- August 2, 2025: obligations for general-purpose AI models and the penalty regime kicked in
- August 2, 2026: the bulk of the Act applies, including the high-risk system requirements
- August 2, 2027: rules for high-risk AI embedded in regulated products (medical devices, machinery) follow

If you sell to European customers or have European employees, the EU AI Act applies to you regardless of where your company is based. This isn't optional. The Act has extraterritorial reach — same as GDPR.

US State Laws

The US has no federal AI law yet. Instead, you're dealing with a patchwork of state-level regulations:

- Colorado: the Colorado AI Act requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination and to run impact assessments
- Illinois: the Artificial Intelligence Video Interview Act requires notice and consent before AI analyzes video job interviews
- New York City: Local Law 144 requires an annual independent bias audit of automated employment decision tools, with results made public

More states are introducing bills every legislative session. The direction is clear: AI disclosure and impact assessment requirements are becoming standard, not exceptional.

Industry-Specific Rules

On top of general AI laws, existing industry regulators are publishing AI-specific guidance:

- Employment: the EEOC has made clear that AI hiring tools can violate Title VII and the ADA, and that employers remain liable for vendor tools they use
- Credit: the CFPB requires specific, accurate adverse action notices even when an algorithm made the call
- Consumer protection: the FTC treats deceptive claims about AI, and undisclosed AI use, as ordinary unfair or deceptive practices

If you're in a regulated industry, your regulator's guidance applies on top of the general AI laws.

02 Does This Apply to My Business?

This is the first question every small business owner asks, and the answer depends entirely on how you're using AI. Not whether you use it — how.

The Risk Classification Framework

The EU AI Act sorts AI systems into four risk tiers. Most SMB use cases fall into the bottom two, but you need to verify — not assume.

- Unacceptable risk: banned outright (social scoring, manipulative techniques, most real-time biometric identification in public spaces)
- High risk: permitted but heavily regulated (hiring, credit, insurance, education, essential services)
- Limited risk: transparency obligations (chatbots, AI-generated content)
- Minimal risk: no specific obligations (spam filters, internal drafting aids)

Common SMB Use Cases — Where They Land

Here's a reality check on tools small businesses actually use:

- Customer service chatbots: limited risk; disclose that users are talking to AI
- AI content generation (marketing copy, emails, blog posts): minimal to limited risk; disclose when output could be mistaken for human-created
- Resume screening and candidate ranking: high risk; full deployer obligations apply
- Credit scoring or insurance underwriting: high risk; full deployer obligations apply

The key question: Is your AI system making or substantially influencing decisions that significantly affect people's access to employment, credit, insurance, education, or essential services? If yes, you're probably in high-risk territory. If no, you likely have transparency obligations at most.

03 What You Actually Need to Do

Let's get practical. Here are the concrete steps, organized by what most SMBs need versus what only high-risk users need.

Everybody: The Baseline

Regardless of your risk tier, every business using AI should do the following. Think of it as basic hygiene — like having a privacy policy on your website.

1. Inventory your AI systems. List every AI tool your business uses. Include the vendor, what it does, what data it processes, and who it affects. You can't assess risk on tools you don't know about. This sounds obvious, but most businesses have AI tools scattered across departments that nobody's tracking centrally.

2. Classify each system by risk level. Use the framework above. For each tool, ask: does it make or influence decisions that materially affect individuals? Map each to unacceptable, high, limited, or minimal risk.

3. Implement AI literacy training. The EU AI Act requires that staff using AI systems have "sufficient AI literacy." This doesn't mean everyone needs a machine learning degree. It means the people operating AI tools understand what the tool does, its limitations, and when to override it. Document that you've provided this training.

4. Add AI disclosures where required. If you use chatbots, tell users they're talking to AI. If you generate content with AI that could be mistaken for human-created, disclose it. If you use AI in hiring, tell candidates. When in doubt, disclose.

5. Review your data practices. AI compliance sits on top of data protection. If your AI tools process personal data, you need a lawful basis under GDPR (EU) or applicable state privacy laws (US). Make sure your privacy policy mentions AI-assisted processing.
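Steps 1 and 2 above can live in one small script or a spreadsheet. Here's a minimal sketch in Python; the tool names, vendors, fields, and the classification logic are illustrative simplifications, not a substitute for reading the Act's actual risk categories:

```python
import csv

HIGH_RISK_DOMAINS = {"employment", "credit", "insurance", "education", "essential_services"}

def classify(domain: str, influences_decisions: bool, interacts_with_people: bool) -> str:
    """Simplified mapping to the Act's risk tiers. The 'unacceptable' tier is
    omitted because those practices are banned outright, not managed."""
    if influences_decisions and domain in HIGH_RISK_DOMAINS:
        return "high"
    if interacts_with_people:
        return "limited"   # transparency obligations, e.g. chatbot disclosure
    return "minimal"

# Illustrative inventory entries; replace with the tools your business actually uses.
INVENTORY = [
    {"tool": "Support chatbot", "vendor": "ExampleVendor",
     "purpose": "answers customer questions", "data": "customer messages",
     "affects": "customers", "domain": "customer_service",
     "influences_decisions": False, "interacts_with_people": True},
    {"tool": "Resume screener", "vendor": "ExampleHRTech",
     "purpose": "ranks job applications", "data": "resumes",
     "affects": "job candidates", "domain": "employment",
     "influences_decisions": True, "interacts_with_people": True},
]

# Write the inventory, with a computed risk tier, to a CSV anyone can review.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(INVENTORY[0]) + ["risk_tier"])
    writer.writeheader()
    for entry in INVENTORY:
        tier = classify(entry["domain"], entry["influences_decisions"],
                        entry["interacts_with_people"])
        writer.writerow({**entry, "risk_tier": tier})
```

The point isn't the code — it's that the inventory and the classification live in one reviewable artifact instead of in someone's head.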

Need the complete walkthrough?

This post covers the essentials, but compliance details matter. The AI Compliance Checklist for SMBs includes a step-by-step assessment framework, a quick-reference card for risk classification, and a customizable AI use policy template you can adapt for your business.

Get the checklist — $24

High-Risk Users: The Full Requirements

If any of your AI systems landed in the "high risk" category — hiring tools, credit decisioning, insurance underwriting — you have additional obligations. These aren't suggestions. They're legally required, with enforcement starting August 2026.

1. Conduct an AI impact assessment. Before deploying (or continuing to use) a high-risk AI system, you need a documented assessment covering:

- what the system does and how you intend to use it
- the categories of people it affects
- the risks of harm or discrimination it could create
- the human oversight measures in place
- what you will do if those risks materialize

This is similar to a Data Protection Impact Assessment (DPIA) under GDPR. If you've done DPIAs before, the format will feel familiar.

2. Ensure human oversight. Someone qualified must be able to review, override, or stop the AI system's decisions. "The algorithm decided" is not an acceptable final answer. For hiring tools, this means a human reviews every AI-generated candidate ranking before decisions are made. For credit scoring, it means a human can override the AI's recommendation.

3. Test for bias. High-risk AI systems must be evaluated for discriminatory outcomes across protected characteristics — race, gender, age, disability, religion. If you're using a vendor's AI tool, ask them for their bias testing documentation. Under NYC Local Law 144, automated employment decision tools require an annual independent bias audit, with a summary of the results published on your website.

4. Maintain technical documentation. You need records of:

- what each system does and how your business uses it
- the vendor's documentation and conformity information
- your risk classifications and impact assessments
- bias testing results
- the AI literacy training provided to staff who operate the system

5. Implement logging. High-risk AI systems must log their inputs and outputs in a way that enables traceability. If a decision is challenged, you need to be able to show what data went in and what recommendation came out. Retention period: at minimum, the duration of the system's intended purpose and any applicable statute of limitations.
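Requirements 2 and 5 reinforce each other: the human review record is exactly what the log should capture. Here's a minimal sketch of both together, assuming a simple append-only JSON-lines file; the class, field names, and log path are illustrative:

```python
import json
from datetime import datetime, timezone
from dataclasses import dataclass
from typing import Optional

LOG_PATH = "ai_decision_log.jsonl"   # illustrative path

@dataclass
class Decision:
    system: str
    subject: str                     # e.g. a candidate or applicant ID
    inputs: dict
    ai_output: str
    reviewed_by: Optional[str] = None
    override: Optional[str] = None   # set when the human disagrees with the AI

def finalize(d: Decision) -> str:
    """Block any decision that lacks a recorded human review, then log it."""
    if d.reviewed_by is None:
        raise RuntimeError("No human review recorded; decision cannot be finalized.")
    final = d.override or d.ai_output
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": d.system, "subject": d.subject, "inputs": d.inputs,
        "ai_output": d.ai_output, "reviewed_by": d.reviewed_by, "final": final,
    }
    with open(LOG_PATH, "a") as f:        # append-only for traceability
        f.write(json.dumps(record) + "\n")
    return final

d = Decision(system="resume-screener", subject="candidate-042",
             inputs={"role": "bookkeeper"}, ai_output="reject")
d.reviewed_by = "hiring manager"
d.override = "advance"                    # the human overrides the AI's ranking
print(finalize(d))                        # prints "advance"
```

Structured this way, "the algorithm decided" can't happen by accident: unreviewed output raises an error instead of becoming a decision, and every finalized decision leaves a traceable record.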

04 Working with AI Vendors

Most SMBs don't build their own AI — they buy it. That doesn't eliminate your compliance obligations, but it shifts the dynamic. You need to know what to ask your vendors and what to document.

Questions to Ask Every AI Vendor

- How is the system classified under the EU AI Act, and can you share the conformity assessment?
- Can you provide the technical documentation and instructions for use?
- What bias testing have you done, and can you share the results?
- Where is our data processed and stored, and is it used to train your models?
- How will you tell us about model updates that change the system's behavior?

Get the answers in writing. A vendor that can't answer them is itself a risk signal.

Shared Responsibility

Under the EU AI Act, both providers (the company that built the AI) and deployers (you, the business using it) have obligations. The provider handles the conformity assessment and technical documentation. You handle implementation-side responsibilities: human oversight, impact assessments for your specific use case, transparency to affected individuals, and logging.

Don't assume the vendor has you covered. Their compliance addresses the system in general. Your compliance addresses how you specifically use it, with your data, affecting your customers or employees.

05 Building Your AI Use Policy

Every business using AI needs an internal policy document. It doesn't have to be a 50-page legal tome. It needs to be clear, actionable, and actually followed.

What to Include

Approved AI tools. List every AI tool authorized for business use. Include what it's approved for and what it's not. "We use ChatGPT for drafting marketing copy" is different from "we use ChatGPT for everything, including handling customer complaints and evaluating employee performance."

Data handling rules. Specify what data can and cannot be entered into AI tools. Customer PII, financial data, health records, trade secrets — each needs clear guidelines. Many AI tools send data to external servers for processing. Your employees need to know what's off-limits.

Human review requirements. Define which AI outputs require human review before action. At minimum: anything affecting employment decisions, customer eligibility, pricing, and public-facing communications.

Disclosure requirements. When and how do you tell customers, candidates, or employees that AI is being used? Spell it out. Include template language people can copy-paste.

Incident response. What happens when the AI makes a mistake that harms someone? Who investigates? How is the affected person notified? What's the escalation path? Having this documented before an incident occurs is the difference between a managed situation and a crisis.

Review cadence. AI regulation is moving fast. Commit to reviewing your AI policy quarterly. New laws, new tools, new use cases — your policy needs to evolve.

06 The August 2026 Deadline — Your Action Plan

Five months from now, the EU AI Act's high-risk requirements take full effect. Here's a realistic timeline for a small business to get compliant.

Month 1: Inventory and Classify

List every AI tool in use, assign each a risk tier, and flag anything that looks high-risk for closer review.

Month 2: Vendor Assessment

Ask your high-risk vendors, in writing, for conformity documentation, bias testing results, and details on how your data is processed.

Month 3: Documentation and Policy

Draft your AI use policy, complete impact assessments for high-risk systems, and update your privacy policy to cover AI-assisted processing.

Month 4: Training and Implementation

Run AI literacy training, stand up human review workflows and decision logging, and add the required disclosures to chatbots and hiring processes.

Month 5: Review and Audit

Walk each high-risk system end to end, close any gaps, and confirm every assessment, training record, and log exists on paper.

Realistic note: Most SMBs don't need to hire a compliance officer for this. If your AI use is mostly limited-risk (chatbots, content generation), the baseline steps take a day or two. If you have high-risk systems (hiring, credit), budget a week of focused effort plus ongoing quarterly reviews.

07 Common Mistakes to Avoid

These are the pitfalls we see small businesses fall into repeatedly:

Assuming "we're too small to matter." Size doesn't determine applicability — usage does. A 3-person startup using AI to screen candidates in the EU has the same obligations as a 3,000-person enterprise. Regulators have historically started enforcement with high-profile cases, but the rules apply equally.

Relying on vendor compliance alone. Your vendor can be fully compliant with the AI Act's provider obligations while you're completely non-compliant with your deployer obligations. These are separate sets of requirements. The vendor builds a safe car. You still need to drive it responsibly.

Treating AI compliance as a one-time project. Compliance is ongoing. Your AI tools update. Regulations evolve. Your business changes how it uses AI. A quarterly review cycle is the minimum to stay current.

Not documenting decisions. The absence of documentation is treated as the absence of compliance. If you conducted a risk assessment but didn't write it down, you didn't conduct a risk assessment. This is one area where paper trails protect you.

Ignoring the state-level patchwork. If you operate in or hire from multiple US states, you may have overlapping obligations. NYC's bias audit requirements, Colorado's impact assessments, and Illinois's video interview rules can all apply to the same hiring process. Map your obligations by jurisdiction, not just by tool.
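Mapping obligations by jurisdiction can be as simple as a lookup table maintained alongside your AI inventory. A sketch using the examples above; the entries are illustrative one-line summaries, not complete statements of any law:

```python
# Illustrative obligation summaries keyed by jurisdiction.
OBLIGATIONS = {
    "NYC": ["annual independent bias audit for automated employment decision tools"],
    "Colorado": ["impact assessments for high-risk AI systems"],
    "Illinois": ["notice and consent before AI analysis of video interviews"],
}

def obligations_for(jurisdictions: list[str]) -> list[str]:
    """Collect every obligation that applies across the places you hire or operate."""
    found = []
    for j in jurisdictions:
        found.extend(OBLIGATIONS.get(j, []))
    return found

# A business hiring in both NYC and Illinois inherits both sets of rules:
for item in obligations_for(["NYC", "Illinois"]):
    print("-", item)
```

The value is the habit, not the code: when you enter a new state or adopt a new tool, you update the table and re-check what applies.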

What We Didn't Cover

This guide gives you the framework and the action items. But compliance lives in the details, and those details vary by industry, jurisdiction, and how your business specifically uses AI. Areas we touched on but didn't fully unpack:

- risk classification for tools that don't map cleanly to a single tier
- overlapping obligations when you operate or hire across multiple jurisdictions
- what a complete impact assessment looks like on paper
- the contract terms worth negotiating with AI vendors

Get the Complete AI Compliance Checklist for SMBs

Everything in this guide, plus detailed assessment templates, a ready-to-use AI policy document, risk classification quick-reference cards, and vendor evaluation checklists. Built for businesses that need to get compliant without hiring a compliance department.

Download the checklist — $24

Ongoing Maintenance

AI compliance isn't something you finish. Build these into your operating rhythm:

- Quarterly: review your AI use policy and re-check each tool's risk classification
- On adopting a new tool: add it to the inventory and classify it before rollout
- On significant vendor updates: confirm the documentation still matches how the system behaves
- Ongoing: watch for new state laws and EU guidance that touch your use cases

The businesses that handle AI compliance well treat it the same as cybersecurity or data privacy — not as a burden, but as a baseline practice that protects the business and its customers. The regulations aren't going away. They're going to expand. Getting ahead of the curve now is cheaper and less disruptive than scrambling after an enforcement action.