The owner of an eight-person consultancy has been quietly using AI to draft chunks of paid client deliverables for the past four months. The reports go out under her firm’s letterhead, signed off by her, billed at her usual day rate. She has not told the clients. She has not put anything about AI in the engagement letters. Tonight, reading a piece on the Thaler ruling in the United States, she finds herself wondering for the first time whether she actually owns what she has been delivering.
It is a question many owner-led firms are circling without quite asking out loud. The honest answer is that the law is not as settled as either the AI optimists or the AI pessimists like to claim. The headline position varies by jurisdiction and depends heavily on how much of the work the human actually shaped. None of what follows is legal advice on a specific contract. It is a clear-eyed read of where the law stood in May 2026 for an SME selling work that AI helped produce.
What does “who owns the work” actually mean when AI wrote it?
Ownership of AI-generated work breaks into two distinct questions. The first is contractual: who owns the output as between you and the AI vendor. The second is statutory: whether that output is protected by copyright at all. Major vendors assign output rights to the customer. Statutory protection is the harder question, because copyright in the US, EU, and increasingly the UK requires human authorship that the AI cannot provide.
The contractual side is straightforward. Under OpenAI’s May 2025 Business Terms, the customer owns all output, and OpenAI assigns whatever interest it holds in that output to the customer. Google, Microsoft, and Anthropic take similar positions. The statutory side is where the difficulty sits. A US federal appeals court ruled in Thaler v Perlmutter in March 2025 that human authorship is a bedrock requirement of US copyright. The Supreme Court declined to review the ruling in March 2026. Wholly AI-generated output is not copyrightable in the US, regardless of what your vendor terms say.
Why does it matter for your business?
It matters because the things you do with copyright in a commercial relationship all rely on the work being protected: licensing a deliverable, restricting a competitor from copying it, registering a brand asset, defending an infringement claim. If the work is not copyrightable, those tools do not apply to the AI-generated portion. The contractual right to the output is not the same as the legal right to stop someone else using a near copy.
The Anthropic settlement in September 2025 sharpens a second exposure: infringement risk inherited from your vendor’s training data. Judge Alsup ruled that training AI on lawfully acquired material can be fair use, but training on pirated material is not. Anthropic paid $1.5 billion to authors whose books had been downloaded from pirate sites, split per title across roughly 500,000 eligible works. For an SME, this means the vendor you use becomes part of your supply chain risk. If the vendor cannot represent that its training data was lawfully acquired, the indemnity clause in the vendor’s terms is the only thing standing between your client work and a third-party copyright claim. Google’s two-pronged indemnification (training data and output) sits at one end of the spectrum. Anthropic’s silence on a blanket output indemnity sits at the other.
Where is the line between AI-assisted and AI-generated?
The line is fact-specific and turns on whether the human or the AI made the expressive choices. A graphic novel, Zarya of the Dawn, where the author wrote the text and used Midjourney for illustrations, was registered by the US Copyright Office for the text and the selection and arrangement of images, but not for the images themselves. The same logic applies under EU originality doctrine. Substantial human creative input is protectable; pure prompt-and-output is not.
The US Copyright Office articulated this in January 2025. Inputting a prompt is not authorship, because the same prompt produces different outputs and the human did not control the expressive elements. Selecting from several AI outputs is also not authorship, because selection of a single option is not itself a creative act. Material modification, editing, arrangement, and curation can be authorship if the human contribution meets the originality threshold. The pending Allen v Perlmutter case, over the Midjourney “Théâtre d’opéra Spatial” work, will test whether 600 iterative prompts and post-generation edits cross the line. As of writing it remains pending in Colorado federal court. The closer parallel for an SME is not the contested art piece but the everyday client report, where the human direction is heavier, the iterations fewer, and the expressive elements clearly shaped by the consultant rather than the model.
When should you ask, and when can you ignore it?
Ask when the work has commercial value you need to defend, when a client contract warrants originality, when a deliverable will be registered as a trademark or design, or when a sector regulator requires disclosure. Ignore the temptation to wait for the law to settle; inaction is its own risk. For ordinary SME work, a disclosure clause and a record of human input are sufficient. Specific situations call for a qualified IP solicitor.
The practical checklist is short. First, keep a working record of the human contribution for any deliverable where AI did substantial drafting: briefs, iterations, edits, the trail that shows the expressive choices were yours. Second, read your AI vendor’s terms for output ownership, indemnification, and training data warranties, and prefer commercial tiers where those terms are stronger. See the free versus paid AI tiers post for the parallel privacy lens. Third, add a clean disclosure clause to client engagement letters where AI is part of the work. Fourth, where the work is commercially material, treat copyright as one tool among trade secret, trademark, and contract, rather than the only one.
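The first checklist item can be as light as one log entry per deliverable. As a sketch of what that record might hold, here is a minimal Python version; the field names are hypothetical, and a spreadsheet row capturing the same information works just as well:

```python
import json
from datetime import date

def log_contribution(deliverable, brief, ai_tool, human_edits,
                     logfile="provenance.jsonl"):
    """Append one JSON line recording the human contribution
    to an AI-assisted deliverable."""
    record = {
        "date": date.today().isoformat(),
        "deliverable": deliverable,
        "brief": brief,              # the human-written brief or outline
        "ai_tool": ai_tool,          # vendor, model, and tier used for drafting
        "human_edits": human_edits,  # summary of the expressive choices made
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_contribution(
    deliverable="Q2 market entry report",
    brief="Client brief, section outline, and thesis written by consultant",
    ai_tool="vendor LLM, commercial tier",
    human_edits="Restructured argument, rewrote executive summary, cut two sections",
)
```

The point is not the tooling but the habit: a dated, per-deliverable trail showing the expressive choices were yours is exactly the evidence the authorship question turns on.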
Related concepts
This post sits in a 21-post cluster on AI risk, trust, and governance for SMEs. For the proportionate frame, read AI risk and governance for owner-operated businesses and what AI governance actually means without a compliance team. For the daily data exposure that often surfaces the IP question, read where your data goes when you paste it into a chatbot.
For the adjacent training-data question, the next post in the section, copyright and training data, what business owners need to know, addresses the supply-chain side of the Anthropic settlement. For the disclosure question, disclosing AI use to customers sits one place further along in the cluster. None of these replace specific commercial legal advice on your contracts. They map the territory so the conversation with a qualified IP solicitor lands faster and cheaper.
If you are looking at the work your firm has been delivering and the ownership question has started to feel unsettled, book a conversation.