Artificial intelligence is everywhere right now, and every small business owner is asking the same question: how do we use this? It's a legitimate question. AI can genuinely save time on invoicing, customer emails, data entry, and dozens of other repetitive tasks. The efficiency gains are real.
But there's a risk hiding underneath the convenience that most small business owners haven't thought through carefully. And by the time you realise it, your data is already gone.
The real cost of AI tools
When you upload information to a public cloud AI platform, you're not just getting an answer back. Your proprietary information goes with it: client names, pricing structures, project details, internal processes, employee records. All of it lands on servers you don't control, managed by companies whose terms of service grant them broad licences to use what you feed them.
Read the fine print. Most major AI platforms include language that gives the company rights to your inputs for purposes including improving their models. That's not a conspiracy theory — it's in the terms you agreed to when you signed up.
The compliance and liability angle
If your business handles client data, medical information, financial records, or anything else that falls under a regulatory framework — HIPAA, PCI-DSS, state privacy laws — uploading that data to a public AI tool may put you in violation of your own compliance obligations. Most small businesses don't realise they've just created a liability for themselves and their clients by doing something that felt like a productivity shortcut.
If a client ever asks where their data went, "I pasted it into ChatGPT" is not an answer that instils confidence.
The hidden risk: you can't actually delete it
Here's the part that should concern you most: data doesn't reliably disappear from cloud systems once it's there. Retention policies are opaque. Backups exist. Data gets copied across infrastructure. Even if a platform offers a "delete your data" option, you have no way to verify what that actually means in practice.
Once your proprietary information leaves your network, you've lost control of it. That's not a hypothetical risk — it's the nature of cloud systems.
If you don't believe us, look at what's happened to the AI companies themselves
These aren't small startups with weak security teams. They are some of the best-funded technology companies in the world. And even they can't keep their own data secure.
- Amazon mandated that its engineers use an AI coding tool called Kiro, with an 80% weekly usage target. Within weeks, Amazon suffered multiple outages. On March 5, 2026, a six-hour meltdown knocked out checkout, account access, and product pricing, wiping out an estimated 6.3 million orders. An AI agent also deleted an entire production environment. (Business Insider)
- OpenAI's ChatGPT had a bug in March 2023 that exposed chat history titles and payment information from ChatGPT Plus subscribers for approximately nine hours. Then in November 2025, user data including names and email addresses leaked through a third-party analytics vendor called Mixpanel. (Mashable) (Forbes)
- Anthropic accidentally shipped 512,000 lines of Claude Code's source code inside a public software package on March 31, 2026. It was a human error — someone included an internal file in a routine update. Within hours, thousands of copies had been uploaded to GitHub and couldn't be recalled. (The Guardian) (Axios)
- Google's Gemini Enterprise had a critical vulnerability called GeminiJack that allowed silent, zero-click exfiltration of Gmail, Calendar entries, and corporate documents. An attacker could embed instructions inside a document that caused Gemini to quietly collect and send out sensitive files — without the user ever knowing it happened. (Dark Reading)
- Microsoft 365 Copilot had a code bug that caused it to read and summarise confidential emails marked with confidentiality labels, bypassing the company's own data loss prevention controls. A separate attack technique called Reprompt was discovered that silently siphons data from Copilot after the chat window closes, undetectable by standard monitoring tools. (TechCrunch) (Varonis)
- Meta's LLaMA model weights leaked to the public within a week of the model's initial release in 2023, spreading across AI communities and torrent sites before Meta could respond. A separate critical vulnerability in the Llama Stack framework in 2024 allowed remote code execution on AI inference servers. (The Verge)
The point here isn't that AI companies are careless. The point is that no system is impenetrable, and even the most well-resourced technology companies in the world experience breaches, leaks, and failures. If you hand your proprietary data to any of these platforms, you are accepting that risk.
What you should actually do
You don't have to avoid AI entirely. But you do need to be deliberate about it. Here's what we recommend:
Audit what data you're feeding to AI tools right now. Walk through your workflows and ask honestly: what information is going into these tools? Client lists? Contracts? Financial data? You can't address the exposure until you've identified it.
Check your vendor contracts for data handling clauses. If you've signed a business associate agreement, a confidentiality agreement, or a service contract with your clients, review whether using public AI tools with their data creates a violation. When in doubt, ask your attorney.
Keep sensitive work offline or in local tools. AI software exists that runs entirely on your own hardware, so no data ever leaves your network. It's worth knowing that option exists.
Establish a written policy for your office. Decide which AI tools are approved for which purposes, and what data is off-limits. "We can use AI for drafting marketing copy, but not for anything involving client data" is a clear, enforceable rule. Unwritten policies don't protect you.
Ask your IT partner what their data policies are. If you're using managed IT services, your provider should be able to tell you exactly how they handle your data and what guardrails they have in place around AI tools.
The bottom line
AI is a powerful tool, and it belongs in your business — in the right places, under the right conditions. But using it indiscriminately, without understanding where your data goes and what happens to it, is a risk you're taking with your clients' trust, your competitive advantage, and potentially your legal standing.
Make informed choices. Your data is worth protecting.
Relentless IT provides managed IT services for small businesses in northwest Ohio. This article is intended as general guidance. For advice specific to your business and compliance situation, consult your IT provider and legal counsel.