"Private AI" has become a marketing term, which means it has almost stopped meaning anything. Every vendor now claims their product is "private", "enterprise-grade" and "your data stays yours" — yet the definitions, the storage locations, and the training-data usage vary wildly between OpenAI, Microsoft, Anthropic, Google, and the growing set of self-hosted options.
If you run a UK SMB — especially in law, financial services, healthcare or professional services — "where does my data actually go?" is not a theoretical question. It's the one your clients, your insurers, your regulators, and eventually your auditors will ask. This article gives you an honest comparison so you can answer it.
Strip away the marketing and there are essentially four delivery models for AI in a UK SMB. Each has a defensible use case; the trap is assuming they're interchangeable.
| Option | Where data lives | Training on your data? | Typical cost (25 users) | Best for |
|---|---|---|---|---|
| Copilot for M365 | Microsoft UK/EU data boundary, same tenant as your M365 | No — contractual commitment | ~£7,800/yr + base M365 | Firms already on M365 Business Premium / E3, document-heavy work |
| ChatGPT Enterprise / Team | OpenAI (US, with EU residency option) | No — contractual | ~£5,500–7,800/yr | Teams needing GPT-4/5-class reasoning outside M365 |
| Claude for Work | Anthropic (US, EU region available) | No — contractual | ~£6,000–7,800/yr | Long-document analysis, careful drafting, complex reasoning |
| Self-hosted private AI | Your infrastructure (Azure/UK DC/on-prem) | No — you control it | £15k–£60k build + £400–£2k/mo run | Privileged data, regulated firms, IP-sensitive work |
Two things jump out. First, all four options say they don't train on your data, but the mechanism differs: Microsoft, OpenAI and Anthropic rely on contractual commitments plus architectural separation, while self-hosted removes the question entirely. Second, self-hosted is dramatically more expensive up-front, but its long-run cost curve looks very different if your team is large or your usage is heavy.
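That long-run cost point is easy to sanity-check with a toy break-even model. The figures below are illustrative assumptions loosely derived from the table above (roughly £312 per user per year for a commercial tool, a £30k build plus £1k/month running cost for self-hosted) — not vendor quotes:

```python
# Toy break-even model. All figures are illustrative assumptions,
# loosely derived from the comparison table, not quotes.

def saas_total(years, users, per_user_per_year=312):
    """Cumulative cost of a per-seat commercial tool (~£7,800/yr at 25 users)."""
    return per_user_per_year * users * years

def self_hosted_total(years, build=30_000, monthly_run=1_000):
    """Cumulative cost of a self-hosted deployment: one-off build + running cost."""
    return build + monthly_run * 12 * years

def break_even_year(users, horizon=30):
    """First year the self-hosted total drops below the SaaS total, if ever."""
    return next(
        (y for y in range(1, horizon + 1)
         if self_hosted_total(y) < saas_total(y, users)),
        None,
    )

print(break_even_year(users=100))  # 2 — a 100-seat firm breaks even in year two
print(break_even_year(users=25))   # None — at 25 seats, run cost alone exceeds the subscription
```

At 25 seats the assumed running cost alone exceeds the subscription, which is why self-hosting is best justified by regulatory or IP pressure rather than as a cost play.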
When a vendor says your data isn't used to train their models, they typically mean four things — and you should confirm all four in writing before signing:
If any of those four commitments feel vague in a vendor's own documentation, assume the commercial protection is weaker than marketing suggests. Press for plain-English clarification from your account manager, and get it in the MSA — not a blog post.
Most UK SMBs — including most of the firms I work with in law and finance — are genuinely well served by Copilot, Claude for Work, or ChatGPT Enterprise. Here's when a commercial tool is the right answer:
In this zone, Copilot's integration with Outlook, Word, Excel and SharePoint is usually worth more than the marginal sovereignty gain from going private. The operational win beats the architectural purity.
Self-hosted or dedicated-tenant AI becomes the right call when one or more of the following applies:
In those scenarios, a self-hosted deployment isn't over-engineering — it's the minimum viable answer to a client, insurer or regulator question.
When I deploy a private AI for a London SMB, it's usually one of three architectures:
Lowest friction if you're already on Microsoft. You get GPT-4/5-class models running inside your own Azure subscription, in the UK South or UK West region. Contractually, Microsoft doesn't train on your prompts. Practically, this is "dedicated tenancy" rather than "self-hosted", but it passes the sovereignty test for the vast majority of UK regulators.
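As a sketch of what "inside your Azure subscription" means in practice, here is the shape of a chat-completions call to an Azure OpenAI deployment at the HTTP level. The resource name, deployment name and key are placeholders; the point is that the request goes only to your own `*.openai.azure.com` hostname, in the region that resource was created in:

```python
import json
import urllib.request

def build_azure_openai_request(resource, deployment, api_key, prompt,
                               api_version="2024-02-01"):
    """Build (without sending) a chat-completions request to an Azure OpenAI
    resource. The hostname is your own resource, so prompts are processed in
    the region it was created in (e.g. UK South)."""
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return urllib.request.Request(
        url,
        data=body.encode(),
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

req = build_azure_openai_request(
    "acme-uksouth", "gpt-4o", "YOUR-KEY", "Summarise this clause."
)
print(req.full_url)
```

The request never touches `api.openai.com`; auditing that your network only talks to your own endpoint is straightforward.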
Models like Llama 3, Mistral, or Qwen running on GPU VMs in UK-regional Azure or AWS. You operate it. You patch it. No vendor sees the prompts. This is "private" in the strictest sense, and the right answer when a client contract explicitly forbids third-party model providers.
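In the self-hosted case, clients make the same kind of call against a host inside your own network — vLLM and llama.cpp's server both expose an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal sketch, assuming a hypothetical internal host and model name, that also produces the kind of audit record regulated firms want (who, when, and a hash of the prompt rather than the prompt itself):

```python
import hashlib
import json
import time
import urllib.request

BASE_URL = "http://ai.internal.example:8000"  # hypothetical internal host

def chat_request_with_audit(user, prompt, model="llama-3-70b-instruct"):
    """Build a request to an OpenAI-compatible server you operate, plus an
    audit record. The URL resolves inside your network, so no third party
    ever sees the prompt."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    audit = {
        "ts": int(time.time()),
        "user": user,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    return req, audit

req, audit = chat_request_with_audit("j.smith", "Review this NDA.")
print(req.full_url, audit["user"])
```

Logging the hash rather than the prompt keeps the audit trail itself out of scope for privilege and confidentiality questions.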
For firms that already have a server room and a compliance reason to avoid public cloud entirely, a dedicated GPU box running an open-weight model is now commercially viable. Build cost is higher (£25k–£60k typical); running cost is effectively electricity plus maintenance.
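Sizing the GPU box is mostly arithmetic. A common rule of thumb (an assumption, not a spec) is weight memory at parameters × bytes-per-parameter, plus roughly 20% headroom for the KV cache and activations:

```python
def vram_gb(params_billions, bits_per_param, overhead=0.20):
    """Rough VRAM estimate: weight bytes plus ~20% for KV cache/activations."""
    weight_gb = params_billions * (bits_per_param / 8)
    return weight_gb * (1 + overhead)

# A 70B open-weight model quantised to 4 bits:
print(round(vram_gb(70, 4)))   # 42 (GB) — fits across two 24 GB GPUs

# The same model at 16-bit precision:
print(round(vram_gb(70, 16)))  # 168 (GB) — data-centre territory
```

That 4x gap is why quantisation is what makes a £25k–£60k appliance commercially viable at all.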
If a single one of your largest client contracts prohibits "third-party AI processing", a private deployment probably pays for itself in the first year — not in compute, but in the client revenue it preserves.
Private AI sales pitches rarely discuss what you give up. Let's be direct:
The worst decision is the one driven by whoever shouted loudest at the last management meeting. The second-worst is signing an enterprise contract because your account manager offered a discount. Do the five-question version above; most firms reach a defensible answer in an afternoon.
No. OpenAI contractually commits that Enterprise and Team prompts and outputs are not used to train their models. Retention is typically 30 days; zero-retention is available on request. Protection is contractual, not architectural — important if a contract or regulator requires stronger isolation.
Yes, if configured. Set your tenant region to the United Kingdom and enable the EU Data Boundary. Copilot processing then stays within that boundary, with UK South/West as the primary regions for UK tenants.
When a client contract forbids third-party AI processing, when a regulator demands physical-location evidence, when handling legally privileged material, or when your IP is the asset. Most SMBs don't need it; some can't operate without it.
Typical build for a 25–100 user private AI on Azure UK is £15,000–£40,000; monthly run costs £400–£2,000. On-premise appliance builds run £25,000–£60,000 plus electricity and maintenance.
Most 5–50 fee-earner firms start with Copilot for M365 for general productivity, plus Azure OpenAI in their own tenant for privileged contract review. Client data stays within UK/EU, ring-fenced from training, and fully audited.
Free 20-minute strategy call. I'll walk you through the decision flow with your actual contracts and client mix — no slide deck, no sales pitch.
Free 20-min Strategy Call