US AI rules for SMEs: the patchwork of federal action and state laws

TL;DR

The US has no comprehensive federal AI statute. What it has is a patchwork of executive orders, federal agency guidance from the FTC, EEOC, CFPB and NIST, plus state-level laws in Colorado, New York City, Illinois, Texas and California. Most state laws apply based on the residency of the data subject, which means a UK or Canadian SME serving even a handful of US customers can fall inside one or more state regimes. The practical question is which states your customers live in and what each one expects.

Key takeaways

- The US does not have a comprehensive federal AI law. President Trump's Executive Order 14179 revoked Biden's Executive Order 14110 in January 2025, leaving federal AI governance to agency guidance from NIST, the FTC, the EEOC and the CFPB, plus a growing set of state laws.
- State AI laws apply based on the residency of the data subject or where the AI is used, not where the company is incorporated. A UK SME with customers in Colorado, New York City, Illinois, Texas or California is potentially in scope for those states' laws regardless of having no US office.
- Colorado's AI Act (SB24-205, effective 1 February 2026) is the most comprehensive state regime, covering high-risk AI in housing, employment, credit, education and healthcare. Texas's Responsible AI Governance Act (HB 149, effective 1 January 2026) applies more broadly and carries a private right of action.
- Five expectations are converging across states: pre-deployment impact assessments, disclosure that AI is involved, the right to opt out or request human review, bias auditing for consequential decisions, and audit trails of AI outputs. A compliance baseline that satisfies Colorado goes a long way toward satisfying the others.
- This is not US legal advice and not a state-by-state guide. The right move for cross-border situations is to map your customer base by state, document a baseline impact assessment and disclosure approach, and bring in a US AI specialist before deploying anything consequential.

The owner of a 14-person UK consultancy has a handful of US clients spread across four states. She has been quietly assuming that since her company is registered in Manchester, US AI rules do not yet apply to her. A client in Denver asks her, in a polite end-of-call aside, whether her AI-assisted screening tool has been audited for bias under Colorado law. She has no immediate answer. She closes her laptop and writes a single line on her notepad: find out which states actually matter, and what each one expects.

This is where the US picture catches many non-US SMEs serving American customers. The headline owners typically read is that the US has no AI Act. The underlying reality is that the US has something messier and, in some respects, more demanding: a patchwork of federal agency enforcement, executive action, and state laws that mostly apply based on where your customer lives, not where your company is incorporated.

What are the US AI rules right now?

The US has no comprehensive federal AI law as of May 2026. Executive Order 14179, signed by President Trump in January 2025, revoked the Biden-era Executive Order 14110. What remains at federal level is agency guidance and enforcement. The substantive AI rules live at state level, primarily in Colorado, New York City, Illinois, Texas, and a set of targeted California laws.

Federal coverage runs through existing law and existing regulators. NIST publishes the AI Risk Management Framework, which sets the reference standard used by federal agencies and many private buyers. The Federal Trade Commission applies the FTC Act to AI accuracy and capability claims through its Operation AI Comply initiative. The EEOC applies Title VII to AI used in hiring, promotion and termination. The CFPB applies fair lending law to AI used in credit. There is no AI-specific statute behind any of this. The pattern is to apply existing consumer protection, employment and lending law to AI, rather than to write something new.

Why does it matter for your business?

It matters because US state AI laws apply based on where the customer lives, not where the company is incorporated. A UK or Canadian SME with a handful of customers in Colorado or Texas can fall inside those state regimes without ever opening a US office. Colorado’s AI Act captures any entity deploying a high-risk system for use in Colorado, and Texas’s TRAIGA carries a private right of action.

The practical implication is that the firm’s geographic footprint and the customer’s geographic footprint are now different compliance objects. The US compliance question is not “where are we incorporated”, it is “which states do our customers live in, and what does each one expect from an AI that touches them”. For a small services firm with a handful of US clients, that mapping is often a one-evening job, but it has to be done before the AI is deployed, not after a complaint reaches the Colorado Attorney General or a Texas plaintiff’s lawyer.

Where will you actually meet it?

You will meet it through whichever of the five state regimes touches your customer base, plus federal enforcement through the FTC, EEOC and CFPB. The Colorado AI Act, effective 1 February 2026, is the most comprehensive state regime. It covers high-risk AI in housing, employment, credit, education and healthcare, and requires impact assessments, customer disclosure, opt-out rights and ongoing risk mitigation.

New York City Local Law 144 has been in force since July 2023 and applies to any vendor or employer using an automated employment decision tool for NYC employees or candidates. The law requires an annual bias audit, candidate notice that an AEDT is in use, and disclosure of the tool’s performance characteristics broken down by protected class. A small UK recruitment-tech vendor with even one NYC client is in scope, and the audit is not a small line item.

Illinois carries two layers. The Biometric Information Privacy Act, in force since 2008, requires informed written consent for any collection of biometric identifiers and gives individuals a private right of action that has produced significant class action litigation. House Bill 3773, effective 1 January 2026, amends the Illinois Human Rights Act to regulate AI in employment decisions.

The Texas Responsible AI Governance Act, also effective 1 January 2026, applies to any AI system used in Texas and requires that systems meet an “appropriate operation” standard, with a private right of action behind it.

California has not passed a single comprehensive AI law. SB 1047 was vetoed in September 2024. What it has instead is a set of targeted laws, including AB 2013 on training-data disclosure for generative AI developers, watermarking and provenance requirements under SB 942, and SB 53 on transparency obligations for frontier AI developers, plus the standing CCPA and CPRA privacy regimes.

When to ask, and which states should you check first?

Ask the moment your AI is being used to make or significantly influence a decision about a US-resident individual, and ask before deployment rather than after. The practical first move is to map your active customer list by state, then check whether any of them sit in Colorado, New York City, Illinois, Texas or California. If none of them do, the immediate exposure is narrower, but FTC and EEOC rules still apply.

If one or more do, the convergence across states gives you a workable baseline. Five expectations recur:

- A pre-deployment impact assessment that documents the AI’s purpose, the data used to train it, the decisions it influences and the risks of discrimination or inaccuracy.
- A clear customer-facing disclosure that AI is involved in the decision, written in plain English.
- An opt-out or human-review pathway for individuals who do not want an AI deciding for them.
- Bias auditing for any system used in employment, housing, credit, education or healthcare decisions.
- Audit trails of inputs, outputs and any human overrides, retained long enough to support a regulator request or a private claim.

A baseline that satisfies the Colorado AI Act covers much of what New York City, Illinois and Texas expect, with a few state-specific extras like the New York City annual bias audit and the Texas appropriate-operation standard.

This post is a map of the territory, not US legal advice on a specific deployment. The point at which a US AI specialist becomes worth the fee is the point at which a consequential AI decision is being made about a US-resident individual. For cross-border situations, that specialist call is now part of the cost of doing AI-supported work for American customers, and it should be priced in before the deployment, not after the first complaint.

The neighbouring topics inside this cluster are worth holding together. Start with the EU AI Act explainer if any of the firm’s customers or staff sit in the EU, and the UK pro-innovation regulation explainer for the UK anchor. Read the pillar on AI risk and governance for owner-operated businesses for the proportionate frame, and the minimum viable AI policy for a small business for the written response.

For a non-US SME with American customers the practical sequence is short:

- Build a single spreadsheet listing every US customer by state and by the type of decision your AI touches in their workflow.
- Identify which of the five state regimes apply on that map.
- Document a one-page impact assessment template that you will run before any consequential AI deployment, and a one-page customer disclosure template you will publish on your site or in your service terms.
- Set up a human-review fallback for any AI making decisions about people.
- Watch federal agency enforcement, especially the FTC under Operation AI Comply and the EEOC on AI in hiring, since both have shown they will act under existing law.
- When the customer footprint reaches a consequential US state, bring in a US AI specialist for that jurisdiction before deployment, not after.

If you want to talk through what proportionate US-facing AI governance looks like at your scale, book a conversation.
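The customer-mapping step above is simple enough to sketch in code. The snippet below is a minimal, illustrative triage script: the customer list is hypothetical, and the one-line regime summaries are simplified reminders for triage only, not legal determinations.

```python
# Sketch: map a customer list to the US state AI regimes discussed above.
# Regime notes are simplified triage reminders, not legal determinations.

STATE_REGIMES = {
    "CO": "Colorado AI Act (SB24-205): impact assessments, disclosure, opt-out",
    "NY": "NYC Local Law 144: annual bias audit and candidate notice for AEDTs",
    "IL": "BIPA consent plus HB 3773 on AI in employment decisions",
    "TX": "TRAIGA (HB 149): appropriate-operation standard, private right of action",
    "CA": "Targeted laws (AB 2013, SB 942) plus CCPA/CPRA",
}

def triage(customers):
    """Group customers by state and flag states with an AI-specific regime."""
    exposure = {}
    for name, state, decision in customers:
        exposure.setdefault(state, []).append((name, decision))
    return {
        state: {"customers": rows, "regime": STATE_REGIMES[state]}
        for state, rows in exposure.items()
        if state in STATE_REGIMES
    }

# Hypothetical customer list: (name, state, decision the AI touches)
customers = [
    ("Acme Recruiting", "CO", "candidate screening"),
    ("Lone Star Credit", "TX", "credit pre-qualification"),
    ("Beacon Media", "OR", "content drafting"),  # no AI-specific state regime flagged
]

for state, info in sorted(triage(customers).items()):
    print(state, "->", info["regime"])
    for name, decision in info["customers"]:
        print("   ", name, "|", decision)
```

The output of a script like this is exactly the spreadsheet view described above: which states are flagged, which customers sit in them, and what decision the AI touches for each.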

Sources

- White House (2025). Executive Order 14179, revoking Executive Order 14110 on the safe, secure and trustworthy development of AI, the federal pivot away from the Biden-era AI governance framework in January 2025. https://www.whitehouse.gov/briefing-room/presidential-actions/2025/01/24/executive-order-revocation-of-biden-executive-order-on-ai-governance/
- National Institute of Standards and Technology (2023). AI Risk Management Framework 1.0, the foundational US federal methodology for governing, mapping, measuring and managing AI risk, used as a reference standard by federal agencies and many private buyers. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
- Federal Trade Commission (2023). FTC announces Operation AI Comply, the enforcement initiative applying the FTC Act's prohibition on unfair or deceptive practices to AI products and claims, including the Rite Aid, Rytr and DoNotPay enforcement actions. https://www.ftc.gov/news-events/news/news-releases/2023/09/ftc-announces-ai-comply-initiative
- Equal Employment Opportunity Commission (2023). Guidance on AI and employment discrimination, the EEOC position that Title VII applies to AI-driven hiring, promotion and termination decisions, including the iTutorGroup age discrimination case. https://www.eeoc.gov/guidance/ai-and-employment-discrimination
- Consumer Financial Protection Bureau (2024). Guidance on artificial intelligence in lending, the CFPB position that AI-driven credit decisions remain subject to fair lending obligations and require adequate adverse action explanations. https://www.consumerfinance.gov/about-us/newsroom/cfpb-issues-guidance-on-artificial-intelligence-in-lending/
- Colorado General Assembly (2024). Senate Bill 24-205, the Colorado AI Act, the comprehensive state regime governing high-risk AI in housing, employment, credit, education and healthcare, effective 1 February 2026. https://leg.colorado.gov/bills/sb24-205
- New York City Department of Consumer and Worker Protection (2023). Automated employment decision tools, the operational guide to Local Law 144, in force from 5 July 2023, including the annual bias audit requirement and the candidate notice rule. https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page
- Illinois General Assembly (2024). House Bill 3773, the amendment to the Illinois Human Rights Act regulating AI in employment decisions, effective 1 January 2026, alongside the long-standing Biometric Information Privacy Act. https://www.ilga.gov/legislation/BillStatus.asp?DocNum=3773&GAID=17&DocTypeID=HB&SessionID=112&GA=103
- Texas Legislature (2025). House Bill 149, the Texas Responsible AI Governance Act (TRAIGA), the broad state regime covering any AI system used in Texas, with a private right of action, effective 1 January 2026. https://capitol.texas.gov/BillLookup/History.aspx?LegSess=89R&Bill=HB149
- California Legislature (2024). Assembly Bill 2013, the training-data transparency disclosure requirement for generative AI developers serving California residents, signed into law in October 2024. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2013

Frequently asked questions

I am a UK or Canadian SME with a small US customer base. Do US state AI laws really apply to me?

Yes, in many cases. Most US state AI laws (Colorado, New York City, Illinois, Texas, and the California measures) apply based on where the data subject lives or where the AI is used, not where the company is incorporated. If a Colorado resident is on the receiving end of a consequential AI decision, the Colorado AI Act applies, even with no Colorado office. The practical test is your customer base, mapped by state. This position is accurate as of May 2026, and the picture is moving.

There is no federal AI Act, so does that mean federal rules can be ignored?

No. The Federal Trade Commission applies the FTC Act to AI claims under its Operation AI Comply initiative, the EEOC applies Title VII to AI used in hiring, the CFPB applies fair lending law to AI used in credit, and the NIST AI Risk Management Framework is the reference standard for federal agencies and many private buyers. Federal enforcement on AI runs through existing consumer protection, employment and lending laws, not a single AI statute. The position is current as of May 2026.

How should a small services firm sequence this without overspending on US lawyers?

Map your customers by state first. If you have nobody in Colorado, NYC, Illinois, Texas or California you have a narrower problem. For any state that does land in your customer base, document a one-page impact assessment for each consequential AI use, write a short customer-facing disclosure, and set up a human-review fallback. Then, before deploying anything that makes decisions about people, hire a US AI specialist for the jurisdictions that apply. The position is current as of May 2026 and the picture is moving quickly.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
