AI isn’t just knocking on your business’s door — your team has already let it in.
From Grammarly to ChatGPT, free and freemium AI apps are now everyday work tools. But here’s the catch: many of these platforms harvest the data your employees enter. Emails, contracts, product strategies — all could be siphoned off into external AI models without you even realising.
The risk isn’t theoretical. It’s happening, right now, inside businesses just like yours.
To show what we mean, here’s a quick snapshot of popular free AI services that collect and retain user input. (You’ll find a much more comprehensive list at the bottom of this article.)
ChatGPT Free (OpenAI): User prompts are stored and can be used to train future models, unless settings are manually adjusted.
Grammarly Free: Captures text as you type, transmitting sensitive content to their servers — and their policies allow analysis for service improvements.
Canva Magic Write: AI content generation inside Canva stores prompts and outputs for internal analysis and model training.
Otter.ai Free: Records and transcribes meetings, retaining transcripts on their servers; free users have limited control over storage duration.
Notion AI Free: Inputs and generated content may be reviewed for service improvements.
Critically, in many cases, the ability to opt out of data collection only becomes available through paid subscriptions — and even then, requires manual configuration.
Out of the box, your data is vulnerable.
The risk isn’t just in the apps your team knowingly uses — it’s also in the plugins they quietly install.
Browser extensions, AI writing helpers, meeting recorders, CRM add-ons — many are downloaded in seconds, bypassing IT approval entirely.
These plugins often have direct access to your emails, documents, and customer databases. And because they’re so easy to install and so rarely audited, they could be the biggest unmonitored leaks in your business right now.
Without a clear strategy to govern AI plugins and extensions, you’re not just at risk — you’re flying blind.
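You don’t need enterprise tooling to get a first look. As a starting point, here’s a minimal sketch in Python (an illustration, not a vetted tool) that walks a Chrome profile’s Extensions folder and flags anything requesting broad access to pages and browsing data. The profile path and the set of “risky” permissions are assumptions you’d adapt to your own fleet.

```python
# Minimal sketch: flag locally installed Chrome extensions that request
# broad permissions. The profile path and the RISKY set are illustrative
# assumptions; adapt both to your environment.
import json
from pathlib import Path

# Typical Chrome profile location on Windows; macOS and Linux differ.
EXT_DIR = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"

# Permissions that give an extension reach into page content and browsing.
RISKY = {"<all_urls>", "tabs", "history", "webRequest", "clipboardRead"}

for manifest_path in EXT_DIR.glob("*/*/manifest.json"):
    try:
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        continue  # skip partial or unreadable installs
    perms = set(manifest.get("permissions", []))
    perms.update(manifest.get("host_permissions", []))  # Manifest V3 hosts
    flagged = (perms & RISKY) | {p for p in perms if p.endswith("://*/*")}
    if flagged:
        # Names may appear as "__MSG_..." placeholders for localised extensions.
        name = manifest.get("name", manifest_path.parent.parent.name)
        print(f"{name}: {sorted(flagged)}")
```

On a managed fleet you’d run this kind of check centrally through your MDM or browser management console rather than machine by machine, but even a rough pass like this tends to surface surprises.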
When your team uses free AI tools and plugins without governance:
Confidential data leaks externally without a single phishing email or cyberattack
Intellectual Property (IP) — your ideas, product designs, client strategies — can be absorbed into public AI models
Privacy compliance risks soar, especially under the NZ Privacy Act 2020, the Australian Privacy Principles (APPs), and upcoming regulatory updates
Trust with your clients can erode if sensitive information is compromised
You may have the best cybersecurity tools in place — but if your staff are feeding sensitive information into AI platforms, it’s like locking your front door while leaving the windows wide open.
It’s tempting to focus only on the exciting parts of AI adoption: faster work, smarter tools, happy teams.
But without a governance strategy, you’re building on quicksand.
An effective AI governance strategy includes:
Clear AI usage policies: what’s allowed, what’s forbidden, and which approved tools to use
Training and awareness: ensuring staff understand the risks of freemium and shadow AI apps
Approved tools list: providing safe, secured AI platforms that meet your compliance standards
Plugin and extension controls: restricting or managing browser extension use across devices
Monitoring and auditing: tracking AI app and plugin usage across your environment
Ongoing review: updating governance as the AI landscape evolves
Governance doesn’t mean saying “no” to AI — it means enabling your people to use AI safely and smartly.
At Optimus Systems, we help businesses like yours embrace AI without putting your IP, data, and reputation at risk.
Our AI Governance & Security services cover:
Risk assessments of current AI, plugin, and app usage
Tailored AI usage policies and staff education
Implementation of secured, compliant AI platforms within Microsoft 365 and beyond
Configuration of plugin management tools to control extension risks
Ongoing governance frameworks to evolve with your business
AI can be your greatest enabler — but only if it’s secured and steered in the right direction.
If you’re unsure what AI tools, plugins, or extensions your team are already using — or how exposed your business might be — let’s talk.
We’ll help you turn AI risk into strategic advantage.
The comprehensive list: how popular AI tools handle your data

ChatGPT (OpenAI)
Free / Plus – Prompts are stored for 30 days and used to train the model unless you toggle it off.
Enterprise / API – Never used for training; logs are retained only for abuse monitoring.
Opt-out toggle? Yes → Settings ▸ Data controls ▸ “Improve the model for everyone”
What this means – Drop in sensitive client data and it could shape future answers if you forget the toggle.
Risk level → Manageable
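The gap between the consumer tier and the API is worth making concrete. As a minimal sketch, assuming the official openai Python SDK and an OPENAI_API_KEY environment variable, routing prompts through the API keeps them under OpenAI’s no-training-by-default terms rather than depending on each user’s toggle:

```python
# Minimal sketch, assuming the official `openai` Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
# Prompts sent via the API are not used for model training by default.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Summarise these meeting notes: ..."}],
)
print(response.choices[0].message.content)
```

Building internal tooling on the API, or standardising on ChatGPT Enterprise, gives your business that default instead of relying on every employee remembering the setting.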
Microsoft Copilot
Consumer Bing Copilot – Searches can feed Microsoft’s consumer models.
Business / Enterprise – Your documents stay inside your tenant; models never learn from them.
Opt-out toggle? Enterprise: N/A (training already off).
What this means – Safe for internal docs, but don’t paste secrets into the consumer-grade chat.
Risk level → Low
Google Gemini
Free – Prompts are saved to “Gemini Apps Activity” and used to improve Google models.
Workspace Enterprise – Admins can disable logging.
Opt-out toggle? Yes → Google Account ▸ Data & privacy ▸ Gemini Apps Activity
What this means – Leave the toggle on and your creative brief may resurface in the model’s memory.
Risk level → Manageable
Claude (Anthropic)
Free – Inputs may be sampled for research but are stripped of IDs.
Claude Pro / Enterprise – No training on customer data unless you explicitly opt in.
Opt-out toggle? Enterprise: already off.
What this means – Safe for IP, but remember Anthropic still stores logs briefly for abuse checks.
Risk level → Low
Grammarly
Free / Premium – Uses your text to refine suggestions unless you disable “Product Improvement”.
Business (formerly “Enterprise”) – The improvement setting is off by default, and the toggle is hidden.
Opt-out toggle? Yes → Account ▸ Privacy
What this means – Great for quick emails; don’t use it for board minutes unless the toggle is off.
Risk level → Manageable
Notion AI
Free / Plus – Data routed through sub-processors under zero-training contracts.
Enterprise – Extra zero-retention logs.
Opt-out toggle? Not required.
What this means – Your workspace content feeds the answer but doesn’t feed the model.
Risk level → Low
Adobe Creative Cloud
All plans – Personal files are analysed for feature improvement only if you leave “Content analysis” on.
Opt-out toggle? Yes → Account ▸ Content analysis
What this means – Turn it off before uploading NDA artwork.
Risk level → Manageable
Midjourney
Free / Standard – Prompts & images are public and train the model.
Pro / Mega with “Stealth” add-on – Private room; prompts excluded from training.
Opt-out toggle? Only by paying for Stealth.
What this means – Anything you type in the free Discord is instantly searchable by others.
Risk level → High
Canva
Free / Pro – “Canva Shield” lets you disable model training on your designs.
Enterprise – Disabled by default; indemnity coverage.
Opt-out toggle? Yes → Privacy preferences ▸ Allow Canva to use my content
What this means – Good for marketing drafts; lock it down for unreleased product mock-ups.
Risk level → Manageable
Otter.ai
Free / Pro – Transcripts may be reviewed to improve accuracy; no user toggle.
Business – Contractual opt-out available.
Opt-out toggle? Only through an enterprise agreement.
What this means – Recording client meetings? Use Business or risk your call feeding Otter’s models.
Risk level → High
Zoom AI Companion
All tiers (when enabled) – Meeting audio is not kept for model training.
Opt-out toggle? Feature can be disabled account-wide.
What this means – Summaries are helpful; transcripts are deleted unless hosts save them.
Risk level → Low
DeepL
DeepL Write / Free API – Text may be retained and used for quality improvement.
DeepL Pro – Zero-retention; no training on your text.
Opt-out toggle? Upgrade to Pro.
What this means – Use Pro for legal contracts and anything confidential.
Risk level → High
Miro
Free – As of 3 Feb 2025, Miro collects AI-interaction data from free users unless you disable it.
Starter / Business / Enterprise – Admin can turn AI off organisation-wide.
Opt-out toggle? Yes → Organisation ▸ Feature activation ▸ Miro AI
What this means – Brainstorm safely on paid plans; free boards feed Miro’s AI quality training.
Risk level → High
HubSpot AI
All tiers – Prompts routed through OpenAI but not used to train OpenAI models; logs retained by HubSpot for improvement.
Opt-out toggle? Admins can disable generative AI at Settings ▸ Account management ▸ AI
What this means – Good for marketing drafts; still avoid health or payment data (HIPAA/PCI).
Risk level → Manageable