The Dark Side of Chatbots: Who’s Really Listening to Your Conversations?

Monday April 14, 2025

Chatbots like ChatGPT, Gemini, Microsoft Copilot and the recently released DeepSeek have transformed the way we interact with technology. They can help with almost any task – from drafting emails and generating content to creating your weekly grocery list while sticking to your budget.

But as these AI-powered tools become more embedded in our daily lives, it’s getting harder to ignore the growing concerns around data privacy and security. What exactly happens to the information you share with these bots, and what risks might you be exposing yourself to?

These bots are always on, always listening, and always collecting data about YOU. Some are more discreet than others, but make no mistake – they’re all doing it.

So the real question is: How much of your data are they collecting, and where does it end up?

When you interact with AI chatbots, the information you provide doesn’t just disappear. Here’s how these tools typically handle your data:

Data Collection: Chatbots process the text you input to generate responses. This could include personal details, sensitive information or confidential business content.

Data Storage: Depending on the platform, your interactions may be stored temporarily – or for much longer. For example:

  • ChatGPT (OpenAI): Collects your prompts, device and location data, and usage activity. This may be shared with “vendors and service providers” to improve their services.
  • Microsoft Copilot: Collects similar data to ChatGPT, but also gathers your browsing history and app interactions. This information may be used to personalise ads or train AI models.
  • Google Gemini: Stores conversations to “provide, improve, and develop Google products and machine learning.” Human reviewers might view your chats to enhance the platform. Data can be retained for up to three years, even if you delete your history. While Google claims it won’t use this data for targeted advertising, privacy policies can change.
  • DeepSeek: Arguably the most invasive, DeepSeek collects your prompts, chat history, location data, device information – and even your typing patterns. This information is used to train models, improve UX, and target ads. All data is stored on servers located in the People’s Republic of China.

Data Usage: Collected data is used to improve chatbot responses and train future AI models. However, this raises questions about informed consent and the potential for misuse.

The Risks You Should Be Aware Of

Using AI chatbots comes with some serious risks:

  • Privacy Concerns: Information shared with a chatbot may be visible to developers or third parties. For example, Microsoft Copilot has faced criticism over potentially exposing sensitive data through overly permissive access rights. (Concentric)
  • Security Vulnerabilities: Chatbots can be targeted by cybercriminals. Microsoft Copilot, for instance, has been shown to be vulnerable to exploits like spear-phishing and data theft. (Wired)
  • Compliance Issues: If a chatbot processes or stores data in ways that don’t align with Australian privacy laws – such as the Privacy Act 1988 and the Australian Privacy Principles (APPs) – it could expose individuals and organisations to significant legal and reputational risks. For businesses handling sensitive or regulated data, especially in sectors like finance, healthcare, or government, non-compliance can lead to investigations by the Office of the Australian Information Commissioner (OAIC). In fact, some Australian organisations have restricted or banned the use of ChatGPT and similar tools due to concerns around uncontrolled data storage, offshore processing, and a lack of visibility into how information is used.

How to Protect Yourself

Here’s how to stay safe while using AI chatbots:

  • Avoid Sharing Sensitive Info: Don’t submit confidential or personally identifiable information unless you know how the platform handles it.
  • Check Privacy Settings: Review each platform’s privacy policy. Some (like ChatGPT) allow you to limit data sharing or retention.
  • Use Privacy Tools: Platforms such as Microsoft Purview provide governance controls to help businesses manage and protect data shared with AI.
  • Stay Updated: Privacy policies and practices can change – keep an eye on updates from the platforms you use.
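One practical way to act on the first tip is to scrub obvious personal details from a prompt before it ever reaches a chatbot. The sketch below is a minimal, illustrative example only – the patterns and the `redact` helper are our own assumptions, and real PII detection needs far more than a few regexes (names, addresses, account numbers and so on):

```python
import re

# Illustrative PII patterns only -- not exhaustive.
# Order matters: card numbers must be checked before the looser
# phone pattern, which would otherwise match them first.
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email me at jane@example.com or call +61 412 345 678."))
# → Email me at [EMAIL] or call [PHONE].
```

A filter like this won't make a chatbot safe on its own, but it's a cheap first line of defence for anything you paste into a third-party tool.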

The Bottom Line

AI chatbots can save time and boost productivity – but they also come with privacy and security trade-offs. Understanding how your data is collected and used is key to staying safe.

Want to ensure your business stays secure in an evolving digital landscape?
Start with a FREE Network Assessment to identify vulnerabilities and protect your data from cyber threats.

👉 Click here to book your FREE Network Assessment now!


Craig Boyle

MSP Blueshift supports a range of businesses that depend on their technology to deliver goods and services to their clients. From architects to retail chains, we’re passionate about streamlining IT systems and processes to move business forward.
