Who owns the work when AI wrote it

TL;DR

Ownership of AI-generated work is unsettled and varies by jurisdiction. The US and EU require human authorship for copyright to attach, and the UK is moving the same way. Section 9(3) of the UK's Copyright, Designs and Patents Act 1988 theoretically allows protection for wholly computer-generated work, but it has been tested only once and the government is consulting on reform. AI-assisted work with substantial human creative input can be protected. Wholly AI-generated output usually cannot. For an SME, the practical move is to document the human contribution, treat the AI portion as unprotectable by default, and tighten the warranty and disclosure clauses in client contracts.

Key takeaways

- Copyright across the US, EU, and increasingly the UK requires human authorship. Wholly AI-generated output, where the AI made the expressive choices, is unlikely to be copyrightable.
- The UK's Copyright, Designs and Patents Act 1988 Section 9(3) is the only major-jurisdiction provision that explicitly contemplates computer-generated works. It has been litigated once, in 2007, and its application to modern generative AI is untested.
- AI-assisted work, where a human directs, modifies, and arranges substantively, can be protected. The protection covers the human contribution, not the AI-generated portions.
- SMEs selling AI-assisted client work should keep a record of the human input, disclose AI use where the contract or sector regulator requires it, and check vendor terms for output ownership and indemnification.
- Trade secret, trademark, and contract law often carry more weight than copyright for AI-assisted commercial work. For specific situations, a qualified IP solicitor remains the right call.

The owner of an eight-person consultancy has been quietly using AI to draft chunks of paid client deliverables for the past four months. The reports go out under her firm’s letterhead, signed off by her, billed at her usual day rate. She has not told the clients. She has not put anything about AI in the engagement letters. Tonight, reading a piece on the Thaler ruling in the United States, she finds herself wondering for the first time whether she actually owns what she has been delivering.

It is a question many owner-led firms are circling without quite asking out loud. The honest answer is that the law is not as settled as either the AI optimists or the AI pessimists like to claim. The headline position varies by jurisdiction and depends heavily on how much of the work the human actually shaped. None of what follows is legal advice on a specific contract. It is a clear-eyed read of where the law stood in May 2026 for an SME selling work that AI helped produce.

What does “who owns the work” actually mean when AI wrote it?

Ownership of AI-generated work breaks into two distinct questions. The first is contractual: who owns the output as between you and the AI vendor. The second is statutory: whether that output is protected by copyright at all. Major vendors assign output rights to the customer. Statutory protection is the harder question, because copyright in the US, EU, and increasingly the UK requires human authorship that the AI cannot provide.

The contractual side is straightforward. Under OpenAI’s May 2025 Business Terms, the customer owns all output and OpenAI assigns its interest. Google, Microsoft, and Anthropic take similar positions. The statutory side is where the difficulty sits. A US federal appeals court ruled in Thaler v Perlmutter in March 2025 that human authorship is a bedrock requirement of US copyright. The Supreme Court declined to review the ruling in March 2026. Wholly AI-generated output is not copyrightable in the US, regardless of what your vendor terms say.

Why does it matter for your business?

It matters because the things you do with copyright in a commercial relationship rely on the work being protected. Licensing a deliverable, restricting a competitor from copying it, registering a brand asset, defending an infringement claim. If the work is not copyrightable, those tools do not apply to the AI-generated portion. The contractual right to the output is not the same as the legal right to stop someone else using a near copy.

The Anthropic settlement in September 2025 sharpens the second exposure. Judge Alsup ruled that training AI on lawfully acquired material can be fair use, but training on pirated material is not. Anthropic paid $1.5 billion to authors whose books had been downloaded from pirate sites, split per title across roughly 500,000 eligible works. For an SME, this means the vendor you use becomes part of your supply chain risk. If the vendor cannot represent that its training data was lawfully acquired, the indemnity clause in the vendor’s terms is the only thing standing between your client work and a third-party copyright claim. Google’s two-pronged indemnification, training data and output, sits at one end of the spectrum. Anthropic’s silence on a blanket output indemnity sits at the other.

Where is the line between AI-assisted and AI-generated?

The line is fact-specific and turns on whether the human or the AI made the expressive choices. A graphic novel where the author wrote the text and used Midjourney for illustrations was registered by the US Copyright Office for the text and the selection and arrangement of images, but not for the images themselves. The same logic applies under EU originality doctrine. Substantial human creative input, protectable. Pure prompt-and-output, not.

The US Copyright Office articulated this in January 2025. Inputting a prompt is not authorship, because the same prompt produces different outputs and the human did not control the expressive elements. Selecting from several AI outputs is also not authorship, because picking a single option is not itself a creative act. Material modification, editing, arrangement, and curation can be authorship if the human contribution meets the originality threshold. The pending Allen v Perlmutter case, the Midjourney “Théâtre d’opéra Spatial” work, will test whether 600 iterative prompts and post-generation edits cross the line. As of writing it remains pending in Colorado federal court. The closer parallel for an SME is not the contested art piece but the everyday client report, where the human direction is heavier, the iterations fewer, and the expressive elements clearly shaped by the consultant rather than the model.

When should you ask, and when can you ignore it?

Ask when the work has commercial value you need to defend, when a client contract warrants originality, when a deliverable will be registered as a trademark or design, or when a sector regulator requires disclosure. Ignore the temptation to wait for the law to settle; inaction is its own risk. For ordinary SME work, a disclosure clause and a record of human input is sufficient. Specific situations call for a qualified IP solicitor.

The practical checklist is short. First, keep a working record of the human contribution for any deliverable where AI did substantial drafting: briefs, iterations, edits, the trail that shows the expressive choices were yours. Second, read your AI vendor’s terms for output ownership, indemnification, and training data warranties, and prefer commercial tiers where those terms are stronger. See the free versus paid AI tiers post for the parallel privacy lens. Third, add a clean disclosure clause to client engagement letters where AI is part of the work. Fourth, where the work is commercially material, treat copyright as one tool among trade secret, trademark, and contract, rather than the only one.

This post sits in a 21-post cluster on AI risk, trust, and governance for SMEs. For the proportionate frame, read AI risk and governance for owner-operated businesses and what AI governance actually means without a compliance team. For the daily data exposure that often surfaces the IP question, read where your data goes when you paste it into a chatbot.

For the adjacent training-data question, the next post in the section, copyright and training data: what business owners need to know, addresses the supply-chain side of the Anthropic settlement. For the disclosure question, disclosing AI use to customers sits one post along. None of these replace specific commercial legal advice on your contracts. They map the territory so the conversation with a qualified IP solicitor lands faster and cheaper.

If you are looking at the work your firm has been delivering and the ownership question has started to feel unsettled, book a conversation.

Sources

- UK Government (2026). Report on Copyright and Artificial Intelligence, published March 2026 under Section 136 of the Data (Use and Access) Act 2025, the most recent official UK position. https://www.gov.uk/government/publications/report-and-impact-assessment-on-copyright-and-artificial-intelligence/report-on-copyright-and-artificial-intelligence
- Copyright, Designs and Patents Act 1988, Section 9(3), the UK statutory provision on authorship of computer-generated works. https://www.legislation.gov.uk/ukpga/1988/48/section/9
- US Copyright Office (2025). Copyright and Artificial Intelligence Part 2 Report on copyrightability of AI-generated outputs, January 2025. https://www.copyright.gov/ai/
- US Court of Appeals, DC Circuit (2025). Thaler v Perlmutter, the ruling that human authorship is a bedrock requirement of US copyright. https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf
- US Copyright Office (2023). Zarya of the Dawn registration decision, the Midjourney graphic novel case that established the AI-assisted versus AI-generated line. https://www.copyright.gov/docs/zarya-of-the-dawn.pdf
- European Parliament (2026). Resolution on Copyright and Generative Artificial Intelligence, adopted 10 March 2026. https://www.jonesday.com/en/insights/2026/05/navigating-copyright-in-the-age-of-generative-ai-eu-french-and-uk-developments-and-approaches
- Authors Alliance (2025). The UK's curious case of copyright for AI-generated works, analysis of Section 9(3) in the generative AI context. https://www.authorsalliance.org/2025/05/19/the-uks-curious-case-of-copyright-for-ai-generated-works-what-section-93-means-today/
- Jones Walker (2025). Bartz v Anthropic settlement analysis, the $1.5bn ruling on AI training data acquisition. https://www.joneswalker.com/en/insights/blogs/ai-law-blog/why-anthropics-copyright-settlement-changes-the-rules-for-ai-training.html
- OpenAI (2025). Business Terms, May 2025 version, the contractual position on output ownership for commercial customers. https://openai.com/policies/may-2025-business-terms/
- Google Cloud (2024). Generative AI indemnification for training data and output, the two-pronged vendor warranty model. https://cloud.google.com/blog/products/ai-machine-learning/protecting-customers-with-generative-ai-indemnification

Frequently asked questions

If AI drafted most of the deliverable, do I still own it?

As between you and the AI vendor, yes. Major providers including OpenAI, Microsoft, Google, and Anthropic assign output ownership to the customer under their business terms. The harder question is whether that output is copyrightable. Where the AI made the expressive choices, the answer in the US, EU, and most likely the UK is that the AI-generated portion is not protected by copyright. Your contract with the client still governs delivery and use rights, but you cannot prevent a competitor copying the AI-generated parts in the way you could with human-authored work.

Does the UK's Section 9(3) protect computer-generated work or not?

Theoretically it does, but the question is unsettled. Section 9(3) of the Copyright, Designs and Patents Act 1988 assigns authorship of wholly computer-generated work to "the person by whom the arrangements necessary for the creation of the work are undertaken". It has been considered in one full court ruling, in 2007, decades before generative AI. The UK government published a report in March 2026 confirming it is monitoring whether the provision remains fit for purpose, with no commitment to reform. As of writing, an SME relying on Section 9(3) is relying on untested ground.

Do I need to disclose AI use to my clients?

It depends on the client, the contract, and the sector. The US Copyright Office now requires disclosure of more than de minimis AI content when registering work. The EU AI Act requires clear labelling of AI-generated content where it could be mistaken for human work. UK regulated sectors set their own rules. For most commercial SME work, a short disclosure clause in the engagement letter handles the question cleanly, and is materially cheaper than a contract renegotiation after the fact.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
