The owner of a 14-person UK consultancy has a handful of US clients spread across four states. She has been quietly assuming that since her company is registered in Manchester, US AI rules do not yet apply to her. Then a client in Denver asks her, in a polite end-of-call aside, whether her AI-assisted screening tool has been audited for bias under Colorado law. She has no immediate answer. She closes her laptop and writes a single line on her notepad: find out which states actually matter, and what each one expects.
This is where the US picture catches many non-US SMEs serving American customers. The headline owners typically read is that the US has no AI Act. The underlying reality is that the US has something messier and, in some respects, more demanding: a patchwork of federal agency enforcement, executive action, and state laws that mostly apply based on where your customer lives, not where your company is incorporated.
What are the US AI rules right now?
The US has no comprehensive federal AI law as of May 2026. Executive Order 14179, signed by President Trump in January 2025, revoked the Biden-era Executive Order 14110. What remains at federal level is agency guidance and enforcement. The substantive AI rules live at state level, primarily in Colorado, New York City, Illinois and Texas, plus a set of targeted California laws.
Federal coverage runs through existing law and existing regulators. NIST publishes the AI Risk Management Framework, which sets the reference standard used by federal agencies and many private buyers. The Federal Trade Commission applies the FTC Act to AI accuracy and capability claims through its Operation AI Comply initiative. The EEOC applies Title VII to AI used in hiring, promotion and termination. The CFPB applies fair lending law to AI used in credit. There is no AI-specific statute behind any of this. The pattern is to apply existing consumer protection, employment and lending law to AI, rather than to write something new.
Why does it matter for your business?
It matters because US state AI laws apply based on where the customer lives, not where the company is incorporated. A UK or Canadian SME with a handful of customers in Colorado or Texas can fall inside those state regimes without ever opening a US office. Colorado’s AI Act captures any entity deploying a high-risk system for use in Colorado, and Texas’s TRAIGA reaches any AI system used in the state, with the Texas Attorney General enforcing it.
The practical implication is that the firm’s geographic footprint and the customer’s geographic footprint are now different compliance objects. The US compliance question is not “where are we incorporated”, it is “which states do our customers live in, and what does each one expect from an AI that touches them”. For a small services firm with a handful of US clients, that mapping is often a one-evening job, but it has to be done before the AI is deployed, not after a complaint reaches the Colorado or Texas Attorney General.
Where will you actually meet it?
You will meet it through whichever of the five state regimes touches your customer base, plus federal enforcement through the FTC, EEOC and CFPB. The Colorado AI Act, now due to take effect on 30 June 2026 after the legislature delayed its original 1 February 2026 start date, is the most comprehensive state regime. It covers high-risk AI in housing, employment, credit, education and healthcare, and requires impact assessments, customer disclosure, opt-out rights and ongoing risk mitigation.
New York City Local Law 144 has been in force since July 2023 and applies to any vendor or employer using an automated employment decision tool for NYC employees or candidates. The law requires an annual bias audit, candidate notice that an AEDT is in use, and disclosure of the tool’s performance characteristics broken down by protected class. A small UK recruitment-tech vendor with even one NYC client is in scope, and the audit is not a small line item.
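The arithmetic at the heart of that audit is small enough to sketch. An LL144-style bias audit reports, for each demographic category, the rate at which the tool selects candidates and that rate divided by the highest category’s rate. A minimal sketch follows, with invented category labels and numbers; a real audit has to use the categories and methodology the law’s rules require, carried out by an independent auditor.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Selection rate per category, and each rate divided by the
    highest category's rate. `outcomes` is a list of
    (category, selected) pairs; `selected` is True when the tool
    advanced the candidate."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for category, was_selected in outcomes:
        total[category] += 1
        selected[category] += was_selected
    rates = {c: selected[c] / total[c] for c in total}
    top = max(rates.values())
    return {c: (rates[c], rates[c] / top) for c in rates}

# Illustrative data only: labels and counts are invented.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
for cat, (rate, ratio) in impact_ratios(sample).items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```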
Illinois carries two layers. The Biometric Information Privacy Act, in force since 2008, requires informed written consent for any collection of biometric identifiers and gives individuals a private right of action that has produced significant class action litigation. House Bill 3773, effective 1 January 2026, regulates AI in employment decisions under the Illinois Human Rights Act. The Texas Responsible AI Governance Act, also effective 1 January 2026, applies to any AI system used in Texas and requires that systems meet an “appropriate operation” standard, enforced by the Texas Attorney General with a cure period before penalties. California has not passed a single comprehensive AI law. SB 1047 was vetoed in September 2024. What it has instead is a set of targeted laws, including AB 2013 on training-data disclosure for generative AI developers, a uniform statutory definition of AI under AB 2885, content-provenance and watermarking requirements under SB 942, and SB 53 on transparency and safety for frontier models, plus the standing CCPA and CPRA privacy regimes.
When should you ask, and which states should you check first?
Ask the moment your AI is being used to make or significantly influence a decision about a US-resident individual, and ask before deployment rather than after. The practical first move is to map your active customer list by state, then check whether any of your customers sit in Colorado, New York City, Illinois, Texas or California. If none of them do, the immediate exposure is narrower, but federal enforcement through the FTC and the EEOC still applies.
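That first move is small enough to sketch in a few lines of Python. Everything below is illustrative: the customer records are invented, and the regime notes are a rough summary of the five regimes described in this post, not a compliance determination.

```python
# Hypothetical customer records: (name, US state, what the AI touches).
customers = [
    ("Acme Recruiting", "NY", "employment screening"),
    ("Front Range Health", "CO", "healthcare triage"),
    ("Gulf Logistics", "TX", "pricing"),
]

# Rough regime notes keyed by state, summarising the regimes above.
# NY is flagged so the New York City check gets made; LL144 turns on
# NYC candidates and employees, not the state as a whole.
state_regimes = {
    "CO": "Colorado AI Act: high-risk AI in housing, employment, credit, education, healthcare",
    "NY": "NYC Local Law 144: automated employment decision tools for NYC candidates and employees",
    "IL": "BIPA on biometrics, plus HB 3773 on AI in employment decisions",
    "TX": "TRAIGA: AI systems used in Texas",
    "CA": "Targeted laws (AB 2013, SB 942, SB 53) plus CCPA and CPRA",
}

for name, state, use in customers:
    note = state_regimes.get(
        state, "no state AI regime flagged; federal FTC and EEOC enforcement still applies")
    print(f"{name} ({state}, {use}): {note}")
```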
If one or more do, the convergence across states gives you a workable baseline. Five expectations recur. A pre-deployment impact assessment that documents the AI’s purpose, the data used to train it, the decisions it influences and the risks of discrimination or inaccuracy. A clear customer-facing disclosure that AI is involved in the decision, written in plain English. An opt-out or human-review pathway for individuals who do not want an AI deciding for them. Bias auditing for any system used in employment, housing, credit, education or healthcare decisions. Audit trails of inputs, outputs and any human overrides, retained long enough to support a regulator request or a private claim. A baseline that satisfies the Colorado AI Act covers much of what New York City, Illinois and Texas expect, with a few state-specific extras like the New York City annual bias audit and the Texas appropriate-operation standard.
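Two of those expectations, the audit trail and the human-review pathway, are easiest to see wired together. A minimal sketch, assuming a small firm’s setup: the record fields, the confidence threshold and the `model` and `review_queue` placeholders are all illustrative, not a statutory schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One retained record per consequential AI decision: inputs,
    output, confidence, and whether a human reviewed or overrode it."""
    subject_state: str
    inputs: dict
    ai_output: str
    ai_confidence: float
    human_reviewed: bool
    human_override: str | None
    timestamp: str

def decide(inputs, subject_state, model, review_queue, threshold=0.85):
    """Route low-confidence or opted-out decisions to a human reviewer,
    and log every decision either way. `model` returns (output,
    confidence); `review_queue` collects records awaiting human review."""
    output, confidence = model(inputs)
    needs_human = confidence < threshold or inputs.get("opted_out", False)
    record = DecisionRecord(
        subject_state=subject_state,
        inputs=inputs,
        ai_output=output,
        ai_confidence=confidence,
        human_reviewed=needs_human,
        human_override=None,  # filled in if a reviewer changes the outcome
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    if needs_human:
        review_queue.append(record)
    with open("audit_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

The append-only JSONL file is the point: one line per decision, written at decision time, is the kind of trail a regulator request or a private claim can actually be answered from.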
This post is a map of the territory, not US legal advice on a specific deployment. The point at which a US AI specialist becomes worth the fee is the point at which a consequential AI decision is being made about a US-resident individual. For cross-border situations, that specialist call is now part of the cost of doing AI-supported work for American customers, and it should be priced in before the deployment, not after the first complaint.
Related concepts and what to do next
The neighbouring topics inside this cluster are worth holding together. Start with the EU AI Act explainer if any of the firm’s customers or staff sit in the EU, and the UK pro-innovation regulation explainer for the UK anchor. Read the pillar on AI risk and governance for owner-operated businesses for the proportionate frame, and the minimum viable AI policy for a small business for the written response.
For a non-US SME with American customers the practical sequence is short. Build a single spreadsheet listing every US customer by state and by the type of decision your AI touches in their workflow. Identify which of the five state regimes apply on that map. Document a one-page impact assessment template that you will run before any consequential AI deployment (a sketch of one way to structure it closes this post), and a one-page customer disclosure template you will publish on your site or in your service terms. Set up a human-review fallback for any AI making decisions about people. Watch federal agency enforcement, especially the FTC under Operation AI Comply and the EEOC on AI in hiring, since both have shown they will act under existing law. When the customer footprint reaches a consequential US state, bring in a US AI specialist for that jurisdiction before deployment, not after. If you want to talk through what proportionate US-facing AI governance looks like at your scale, book a conversation.
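To close, one way to hold that one-page impact assessment as structured data, so every deployment produces a comparable, retainable record. The field names and values are illustrative assumptions drawn from the five recurring expectations above, not any statute’s required form.

```python
# Hypothetical one-page impact assessment. Every field name and value
# here is illustrative, not a statutory requirement.
impact_assessment = {
    "system_name": "candidate-screening-v2",
    "purpose": "rank inbound applications for recruiter review",
    "decision_influenced": "who advances to a first interview",
    "training_data": "historical application outcomes, 2019-2024",
    "us_states_touched": ["CO", "NY"],
    "discrimination_risks": "proxy features correlated with protected class",
    "inaccuracy_risks": "sparse data for career changers",
    "mitigations": ["annual bias audit", "human review of all rejections"],
    "customer_disclosure_published": True,
    "opt_out_pathway": "emailed review request; human re-decision within 5 days",
    "assessed_by": "owner",
    "assessment_date": "2026-05-01",
}
```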