Free versus paid AI tiers, the privacy difference owners need to understand

[Image: A founder at a kitchen table comparing two laptop screens, with a printed terms sheet and a notebook of handwritten figures beside her]
TL;DR

Free and paid AI tiers cost roughly £0 versus £15-25 per seat per month, but the privacy gap is large. Four levers explain it: training opt-out defaults, retention windows, regional data residency, and audit logs. Free tiers default to training on inputs and retain data for months or years; paid commercial tiers exclude training by contract. The hybrid pattern, paid seats for client work and free seats for personal exploration, fits many owner-led firms.

Key takeaways

- Four levers separate free from paid AI tiers: training opt-out defaults, retention windows, regional data residency, and audit logs. The cost gap is small; the privacy gap is large.
- Free tiers from OpenAI and Anthropic default to using inputs for model training unless the user actively opts out. Claude Free retains data for up to five years if "Help improve Claude" is enabled.
- Paid commercial tiers, ChatGPT Business at around £16-20 per seat and Claude Team at around £20-24 per seat, exclude training by default and operate under enforceable commercial terms.
- Google Gemini and Microsoft Copilot behave differently: both decline to train on consumer data by default, and Microsoft 365 Copilot inherits the EU Data Boundary automatically for commercial customers.
- The hybrid pattern that fits many owner-led firms is paid seats for client and internal work, free seats for personal learning. Total spend for a ten-person firm sits at around £100-150 a month.

It is a Wednesday afternoon and the owner of a six-person consultancy is working through the firm’s quarterly client review. Three of the five active engagement notebooks have been drafted, summarised, or restructured using free ChatGPT. One has had its client contact list pasted into Claude Free to clean up. Another has had a draft management report run through Gemini for a tone check. Total firm AI spend over the quarter: zero. The thing she has not yet realised is that the price gap between what she’s using and the paid version is around £20 per person per month, and the privacy gap is on a different scale entirely.

This is the calibration many owner-led firms are running off. The cost of the free tier feels accurate to the work (“it’s just first drafts”), so the free tier is what gets used. The paid tier feels excessive for the kind of work that’s being done. Both feelings are off, because the price isn’t what separates the two tiers. Four other things do.

What actually changes between free and paid AI tiers?

Four levers change: training data opt-out defaults, retention windows, regional data residency, and audit logs. On the free tiers of ChatGPT and Claude, your inputs default to being used for model training unless you actively toggle that off. On the paid commercial tiers, training is excluded by default under enforceable contract terms. Retention shrinks from months or years to thirty days or less, and admin audit logs and regional residency become available.

The training default is the most consequential of the four. An employee on free ChatGPT who pastes a client’s financial summary in for tidying has implicitly authorised OpenAI to retain that information and potentially feed portions of it into future training data, unless they have disabled “Improve the model for everyone” in settings. On Claude Free, the equivalent toggle is “Help improve Claude”, and the retention window if it stays on is up to five years. Both can be turned off, but the default is on.

Google and Microsoft sit slightly differently. Google does not train Gemini on consumer free-tier conversations by default, and Microsoft 365 Copilot inherits Enterprise Data Protection automatically for commercial customers, which means no foundation-model training, encrypted storage, and the EU Data Boundary if you’re on a Microsoft 365 plan in Europe. That is a different posture from OpenAI and Anthropic on the consumer side, and it matters for owner-led firms already living inside Microsoft 365 or Google Workspace.

Why does the privacy difference matter for your business?

Because the work owner-led firms put into free AI tiers is exactly the work that carries data protection duty. Client correspondence, draft contracts, employee records, financial summaries, supplier lists. Under UK GDPR, an organisation processing personal data must ensure the processing has a lawful basis and that processors handle the data only as instructed. The ICO has been explicit that those principles apply in full to AI systems, free or paid.

The Italian Garante’s 2023 emergency suspension of ChatGPT for Italian users, and the regulator’s subsequent finding of three GDPR breaches, established that this is not a theoretical concern. Cyberhaven’s 2026 enterprise data report found that 39.7 percent of AI interactions expose sensitive data and that around 70 percent of ChatGPT usage in surveyed organisations is happening through personal accounts rather than corporate ones. That is the shadow AI pattern, and free-tier reliance is what creates it.

The cost side is where the calibration breaks. ChatGPT Business sits at around £16-20 per seat per month. Claude Team at around £20-24. Microsoft 365 Copilot at around £14-15 if you’re already on a Microsoft 365 plan. Against the UK GDPR exposure (fines up to 4 percent of annual turnover) and the regulatory direction of travel under the EU AI Act, the price is trivial. Against the cost of one client confidentiality breach, it is trivial. The cost is not the obstacle. The default position is.

Where will you actually meet this in practice?

You meet it the first time someone in the team uses a free AI tool to do something they would have done in a paid tool if they’d thought about it. The classic shape is a junior staff member pasting a client document into free ChatGPT to summarise, on a personal account, because the firm hasn’t bought any AI seats yet. The risk only becomes visible if the client asks or a regulator gets curious.

You also meet it when the firm decides to procure AI tools and discovers there are three or four free accounts already in active use, often on different vendors, with no record of what’s been put through them. This is the audit-trail gap that the paid LLM tier decision post addresses from the cost angle. The privacy angle adds a separate point: even after you’ve upgraded to paid, you have no defensible record of what went through the free tier in the months before.

The third place you meet it is in client conversations. Larger clients are starting to ask about AI usage in vendor onboarding questionnaires, and the question is rarely “do you use AI” any more. It is “what AI tools are approved in your firm, what data has been processed through them, and under what commercial terms”. Answering that question with “free ChatGPT” is becoming a procurement disqualifier in regulated industries.

When is free genuinely fine, and when is it not?

Free is fine when the input would be safe to paste into a public Slack channel or send by unencrypted email to an unknown third party. Personal learning, generic templates, public information, throwaway exploration, drafting a tone check on a piece of marketing copy you’ve already published. The risk is low because the data is either public or generic enough that training-data ingestion creates no real exposure.

Free is not fine for anything covered by a confidentiality obligation, a data protection duty, or a client engagement letter. Client correspondence, draft contracts, employee records, payroll or financial information, supplier lists, customer revenue forecasts. The simple test: would you encrypt this file if you were storing it on a USB stick? If yes, it does not belong in a free AI tier without redaction. Under UK GDPR Article 5, storage limitation and purpose limitation alone make free-tier handling of client data difficult to defend.

The hybrid pattern is what owner-led firms actually settle into. Three to five paid commercial seats for the staff who handle client and internal work, free tier access for the rest. For a ten-person firm, the monthly spend sits at around £100-150. The paid seats carry the contractual position, the training exclusion, the retention floor, and the admin visibility. The free seats do the personal learning and the exploration. The boundary lives in a one-page AI usage policy, with examples that explain which kind of input goes into which kind of tool.
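The seat arithmetic above can be sketched as a quick check. The per-seat price and seat split are illustrative assumptions drawn from the ranges quoted in this post; actual pricing varies by vendor and billing term:

```python
# Illustrative hybrid-pattern cost check for a ten-person firm.
# The per-seat price is an assumption: mid-range of the ChatGPT Business
# (£16-20) and Claude Team (£20-24) figures quoted above.
PAID_SEAT_PRICE_GBP = 20
FREE_SEAT_PRICE_GBP = 0

def monthly_spend(paid_seats: int, free_seats: int) -> int:
    """Total monthly AI spend under the hybrid pattern, in GBP."""
    return paid_seats * PAID_SEAT_PRICE_GBP + free_seats * FREE_SEAT_PRICE_GBP

# Five paid seats for client-facing staff, five free seats for exploration.
monthly = monthly_spend(paid_seats=5, free_seats=5)
annual = monthly * 12
print(monthly, annual)  # 100 1200
```

At five paid seats this lands at the bottom of the £100-150 monthly range; four seats of Claude Team at £24 would land near the top.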

Three related concepts sit very close to this decision. The first is the data classification rule, which maps your data into tiers (public, internal, confidential, regulated) and pairs each tier with the AI tool it’s allowed to flow into. The free-versus-paid question is the simplest expression of that rule. The second is the AI usage policy, the one-page document that names the approved tools and the prohibited practices.
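A minimal sketch of how such a classification rule might be encoded. The tier names follow the post, but the tool mapping and function names are illustrative assumptions, not a prescribed policy; each firm would set its own mapping in its usage policy:

```python
# Hypothetical data-classification rule: which AI tool tier each data tier
# may flow into. The mapping below is an illustrative assumption.
ALLOWED_TOOLS = {
    "public":       {"free", "paid"},  # published copy, public information
    "internal":     {"paid"},          # internal drafts, non-client notes
    "confidential": {"paid"},          # client correspondence, financials
    "regulated":    set(),             # payroll, health, legal records:
                                       # no AI tool without specific review
}

def is_allowed(data_tier: str, tool_tier: str) -> bool:
    """True if data in `data_tier` may be pasted into a `tool_tier` tool."""
    return tool_tier in ALLOWED_TOOLS.get(data_tier, set())

print(is_allowed("public", "free"))        # True
print(is_allowed("confidential", "free"))  # False
```

The point of writing it down this way, even informally, is that the rule becomes checkable: anyone in the firm can answer "which tool is this allowed in" without judgement calls.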

The third concept is the audit trail. UK GDPR Article 5(2), the accountability principle, requires that an organisation can demonstrate compliance through documentation. Paid commercial tiers provide the admin dashboards and activity logs that make that demonstration straightforward. Free tiers do not. For a firm in a regulated industry (financial services, healthcare, legal services, accountancy), the audit-trail gap on free tiers is structural rather than incidental.

If you’re trying to work out which tier your firm should be on and where the right line sits for the kind of work you do, book a conversation.

Sources

- OpenAI (2025). Data Controls FAQ, free tier training and retention defaults. https://help.openai.com/en/articles/7730893-data-controls-faq
- OpenAI (2025). ChatGPT Business and Enterprise pricing and privacy terms. https://openai.com/business/chatgpt-pricing/
- OpenAI (2025). Data residency and inference residency for ChatGPT Enterprise. https://help.openai.com/en/articles/9903489-data-residency-and-inference-residency-for-chatgpt
- Anthropic (2025). How long do you store my data, retention windows for Free, Pro, Team. https://privacy.claude.com/en/articles/10023548-how-long-do-you-store-my-data
- Anthropic (2025). Is my data used for model training, consumer versus commercial tier handling. https://privacy.claude.com/en/articles/10023580-is-my-data-used-for-model-training
- Microsoft (2025). Enterprise data protection in Microsoft 365 Copilot, EDP commitments and EU Data Boundary. https://learn.microsoft.com/en-us/microsoft-365/copilot/enterprise-data-protection
- Google Workspace (2025). Generative AI in Google Workspace privacy hub, Workspace customer data not used for training. https://knowledge.workspace.google.com/admin/gemini/generative-ai-in-google-workspace-privacy-hub
- UK Information Commissioner's Office (2025). Artificial intelligence guidance and resources, UK GDPR principles applied to AI. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
- Cyberhaven (2026). Sensitive enterprise data flowing into AI tools, 39.7% of AI interactions expose sensitive data. https://www.cyberhaven.com/blog/sensitive-data-flowing-into-ai-tools
- IBM (2025). Cost of a Data Breach Report 2025. https://www.ibm.com/reports/data-breach

Frequently asked questions

How is the privacy difference between free and paid AI tiers actually structured?

It runs along four levers. First, training opt-out defaults: free tiers from OpenAI and Anthropic default to using inputs for model training; paid tiers exclude this by default. Second, retention windows: free tiers retain conversation data for months or years; paid tiers retain it for days under contract. Third, regional data residency: enterprise paid tiers offer UK or EU storage; free tiers do not. Fourth, audit logs: paid team and enterprise tiers provide admin dashboards and activity logs; free tiers do not.

Is Claude Free safer than ChatGPT Free for client work?

Not by default. Claude Free retains conversations for up to five years if the "Help improve Claude" toggle is enabled, which is the default position. Disabling it shortens retention to thirty days. ChatGPT Free retains chat history unless deleted and may use inputs for training unless the user opts out. Both consumer tiers are difficult to reconcile with UK GDPR for client data. The paid commercial tiers, Claude Team and ChatGPT Business, change the legal position because data is excluded from training under contract.

What does the hybrid model actually cost for a ten-person firm?

Around £100-150 a month, against a free-tier baseline of £0. The pattern is three to five paid seats on a commercial tier, ChatGPT Business at £16-20 per seat or Claude Team at £20-24 per seat, for staff who handle client work, financial data, or employee records. The remaining seats stay on the free tier for personal learning, public information, and generic exploration. Annual cost sits at £1,200-1,800, a minor line in most firm budgets and trivial against the regulatory exposure it removes.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
