The owner of a six-person consultancy is doing the firm’s quarterly client review on a Wednesday afternoon. Three of the five active engagement notebooks have been drafted, summarised, or restructured using free ChatGPT. One has had its client contact list pasted into Claude Free for clean-up. Another has had a draft management report run through Gemini for a tone check. Total firm AI spend over the quarter: zero. What she has not yet realised is that the price gap between what she’s using and the paid version is around £20 per person per month, and the privacy gap is on a different scale entirely.
This is the calibration many owner-led firms are running on. The cost of the free tier feels accurate to the work (“it’s just first drafts”), so the free tier is what gets used. The paid tier feels excessive for the kind of work being done. Both feelings are off, because price isn’t what separates the two tiers. Four other things do.
What actually changes between free and paid AI tiers?
Four levers change. Training data opt-out defaults, retention windows, regional data residency, and audit logs. On the free tiers of ChatGPT and Claude, your inputs default to being used for model training unless you actively toggle that off. On the paid commercial tiers, training is excluded by default under enforceable contract terms. Retention shrinks from months or years to thirty days or less, and admin audit logs and regional residency become available.
The training default is the most consequential of the four. An employee on free ChatGPT who pastes a client’s financial summary in for tidying has implicitly authorised OpenAI to retain that information and potentially feed portions of it into future training data, unless they have disabled “Improve the model for everyone” in settings. On Claude Free, the equivalent toggle is “Help improve Claude”, and the retention window if it stays on is up to five years. Both can be turned off, but the default is on.
Google and Microsoft sit slightly differently. Google does not train Gemini on consumer free-tier conversations by default, and Microsoft 365 Copilot inherits Enterprise Data Protection automatically for commercial customers, which means no foundation-model training, encrypted storage, and the EU Data Boundary if you’re on a Microsoft 365 plan in Europe. That is a different posture from OpenAI and Anthropic on the consumer side, and it matters for owner-led firms already living inside Microsoft 365 or Google Workspace.
Why does the privacy difference matter for your business?
Because the work owner-led firms put into free AI tiers is exactly the work that carries data protection duty. Client correspondence, draft contracts, employee records, financial summaries, supplier lists. Under UK GDPR, an organisation processing personal data must ensure the processing has a lawful basis and that processors handle the data only as instructed. The ICO has been explicit that those principles apply in full to AI systems, free or paid.
The Italian Garante’s 2023 emergency suspension of ChatGPT for Italian users, and the regulator’s subsequent finding of three GDPR breaches, established that this is not a theoretical concern. Cyberhaven’s 2026 enterprise data report found that 39.7 percent of AI interactions expose sensitive data and that around 70 percent of ChatGPT usage in surveyed organisations is happening through personal accounts rather than corporate ones. That is the shadow AI pattern, and free-tier reliance is what creates it.
The cost side is where the calibration breaks. ChatGPT Business sits at around £16-20 per seat per month. Claude Team at around £20-24. Microsoft 365 Copilot at around £14-15 if you’re already on a Microsoft 365 plan. Against the UK GDPR exposure (fines of up to £17.5 million or 4 percent of annual worldwide turnover, whichever is higher) and the regulatory direction of travel under the EU AI Act, the price is trivial. Against the cost of one client confidentiality breach, it is trivial. The cost is not the obstacle. The default position is.
Where will you actually meet this in practice?
You meet it the first time someone in the team uses a free AI tool to do something they would have done in a paid tool if they’d thought about it. The classic shape is a junior staff member pasting a client document into free ChatGPT to summarise, on a personal account, because the firm hasn’t bought any AI seats yet. The risk only becomes visible if the client asks or a regulator gets curious.
You also meet it when the firm decides to procure AI tools and discovers there are three or four free accounts already in active use, often on different vendors, with no record of what’s been put through them. This is the audit-trail gap that the paid LLM tier decision post addresses from the cost angle. The privacy angle adds a separate point: even after you’ve upgraded to paid, you have no defensible record of what went through the free tier in the months before.
The third place you meet it is in client conversations. Larger clients are starting to ask about AI usage in vendor onboarding questionnaires, and the question is rarely “do you use AI” any more. It is “what AI tools are approved in your firm, what data has been processed through them, and under what commercial terms”. Answering that question with “free ChatGPT” is becoming a procurement disqualifier in regulated industries.
When is free genuinely fine, and when is it not?
Free is fine when the input would be safe to paste into a public Slack channel or send by unencrypted email to an unknown third party. Personal learning, generic templates, public information, throwaway exploration, running a tone check on a piece of marketing copy you’ve already published. The risk is low because the data is either public or generic enough that training-data ingestion creates no real exposure.
Free is not fine for anything covered by a confidentiality obligation, a data protection duty, or a client engagement letter. Client correspondence, draft contracts, employee records, payroll or financial information, supplier lists, customer revenue forecasts. The simple test: would you encrypt this file if you were storing it on a USB stick? If yes, it does not belong in a free AI tier without redaction. Under UK GDPR Article 5, the storage limitation and purpose limitation principles alone make free-tier handling of client data difficult to defend.
The hybrid pattern is what owner-led firms actually settle into. Three to five paid commercial seats for the staff who handle client and internal work, free tier access for the rest. For a ten-person firm, the monthly spend sits at around £100-150. The paid seats carry the contractual position, the training exclusion, the retention floor, and the admin visibility. The free seats do the personal learning and the exploration. The boundary lives in a one-page AI usage policy, with examples that explain which kind of input goes into which kind of tool.
What are the related concepts you’ll meet alongside this?
Three sit very close. The first is the data classification rule, which maps your data into tiers (public, internal, confidential, regulated) and pairs each tier with the AI tool it’s allowed to flow into. The free-versus-paid question is the simplest expression of that rule. The second is the AI usage policy, the one-page document that names the approved tools and the prohibited practices.
The third concept is the audit trail. UK GDPR Article 5(2), the accountability principle, requires that an organisation can demonstrate compliance through documentation. Paid commercial tiers provide the admin dashboards and activity logs that make that demonstration straightforward. Free tiers do not. For a firm in a regulated industry (financial services, healthcare, legal services, accountancy), the audit-trail gap on free tiers is structural rather than incidental.
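The data classification rule lends itself to a small lookup sketch. The tier names and tool assignments below are illustrative assumptions for a hypothetical firm policy, not a recommendation about any specific vendor; the point is only that each data tier maps to an explicit list of approved tools.

```python
# Hypothetical firm policy: which AI tools each data tier may flow into.
# Tool assignments here are illustrative assumptions, not vendor guidance.
APPROVED_TOOLS = {
    "public":       {"free ChatGPT", "Claude Free", "ChatGPT Business", "Claude Team"},
    "internal":     {"ChatGPT Business", "Claude Team"},
    "confidential": {"ChatGPT Business", "Claude Team"},
    "regulated":    {"Microsoft 365 Copilot"},  # e.g. where residency terms are required
}

def is_permitted(data_tier: str, tool: str) -> bool:
    """Return True if this firm's policy allows this data tier in this tool."""
    if data_tier not in APPROVED_TOOLS:
        raise ValueError(f"unknown data tier: {data_tier}")
    return tool in APPROVED_TOOLS[data_tier]
```

Under this sketch, `is_permitted("confidential", "free ChatGPT")` comes back False, which is the free-versus-paid line expressed as a rule rather than a judgment call made at paste time.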
If you’re trying to work out which tier your firm should be on and where the right line sits for the kind of work you do, book a conversation.