Should you trust ChatGPT with your data?

In the era of operational efficiency, ChatGPT is fast becoming a staple in the tech startup's tech stack. But are you aware of the data risks that come with using ChatGPT? And do you know what OpenAI is doing with your data?

In this article, we discuss:

✅ How OpenAI uses your data
✅ ChatGPT's key data breaches and incidents so far
✅ The top 3 risks you need to understand before using ChatGPT
✅ 3 actionable tips on how to use ChatGPT safely and securely

What can we learn about ChatGPT from OpenAI's Privacy Policy?


What data breaches has ChatGPT been involved with to date? 

Despite only being publicly available since late 2022, ChatGPT has already had its share of data incidents. Below are three that OpenAI has recently had to navigate.

An open-source library breach

Because ChatGPT stores every conversation a user has with it, a bug in the Redis open-source library that OpenAI uses allowed some users to see fragments of other users' chat histories. That exposed what certain users were asking the tool, which in some cases included deeply personal matters such as counselling and mental health support.

A ban in Italy

In early 2023, the Italian data protection authority temporarily banned ChatGPT; imagine the ICO doing the same in the UK. The regulator was concerned, with good reason, about how the tool collects and uses the personal data of users, and about the absence of age limits or verification to keep children off the service. OpenAI made changes in response, and the Italian ban has since been lifted.

A leak of sensitive IP

When Samsung Electronics allowed ChatGPT to be used at its offices, employees pasted confidential information, including semiconductor facility measurements and defect and yield data, into the tool. Because OpenAI may retain prompts and use them as training data, that sensitive IP effectively left Samsung's control, creating the risk that details of what Samsung was doing could surface to outsiders, including competitors looking to replicate its work or pivot around it.


What are the top 3 risks relating to employees using ChatGPT?

With several kinds of incident already on record, you may be left wondering: what exactly are the top three risks for your company if you let your employees use ChatGPT? We've broken them down into three categories below.

Privacy and security

One of the biggest risks associated with ChatGPT, illustrated by the breaches above, is privacy and security. ChatGPT generates responses from data it was trained on, and it isn't always clear where that data came from. If your employees aren't careful and enter personal or sensitive information about themselves or your company, there's a good chance that information will be retained by OpenAI and may be used to train future models, leaving you with no control over your private information.


Inaccurate information

A second concern is that ChatGPT's results aren't always 100% true. The tool is skilled at stringing together pieces of information from the internet so that they sound correct. The problem is that while the sentences are fluent and logically constructed, the facts behind them are not always right. Worse still, ChatGPT may give you a confidently worded answer built on made-up data or information. If your employees take everything they receive from ChatGPT as 100% true, there's a good chance your company ends up working with incorrect data and information that doesn't actually exist.


Bias

Humans have biases, even when we try not to; it's a fact of life. ChatGPT was built by humans, who curated and chose the data the model was trained on. Those choices carry human biases with them, which means bias is baked into the information your employees see. The examples, data, and information they receive are shaped by what the model's builders and training data include, and what they leave out.

What are some actionable tips to use ChatGPT safely and securely?

Just because something carries risk doesn't mean you should stop using it; it means you should take extra precautions and put additional safety measures in place. For your company, you'll want to do more than write a policy for your employees. As much as we'd like to believe otherwise, many employees don't read policies because they are too long and dense. So in addition to a written policy on ChatGPT usage, hold workshops or training sessions where your employees can hear real examples and learn how best to use the tool. Beyond that, follow the tips below.

Never put sensitive personal information about yourself or your company into ChatGPT.

Train your employees, and yourself, to use pseudonyms or code words instead of naming the specific person or company you are asking about. In a prompt, write "Employee A" or "Company A" and "Employee B" or "Company B". That way, anything ChatGPT retains cannot be linked back to your company. Along the same lines, be careful with the personal information you enter or ask the tool about: you wouldn't want a personal query you typed into the bot to be exposed publicly. Only put in personal information you are comfortable sharing with the world.
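As a rough sketch of how this could be automated (the names, mapping, and function below are illustrative assumptions, not any real product's feature), a simple pre-processing step can swap known sensitive names for code words before a prompt ever leaves your machine:

```python
import re

def pseudonymise(prompt: str, mapping: dict[str, str]) -> str:
    """Replace each sensitive name in `mapping` with its code word
    before the prompt is sent to an AI tool."""
    for real_name, code_word in mapping.items():
        # Whole-word, case-insensitive match so partial names aren't mangled
        pattern = re.compile(rf"\b{re.escape(real_name)}\b", re.IGNORECASE)
        prompt = pattern.sub(code_word, prompt)
    return prompt

# Hypothetical names, chosen purely for illustration
mapping = {"Jane Smith": "Employee A", "Acme Ltd": "Company A"}
safe_prompt = pseudonymise(
    "Draft a warning letter from Acme Ltd to Jane Smith.", mapping
)
print(safe_prompt)  # Draft a warning letter from Company A to Employee A.
```

A shared mapping like this also makes it easy to substitute the real names back in once the AI's draft comes back.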

Never believe anything ChatGPT tells you is 100% true.

There are things your employees should and shouldn't use ChatGPT for. They can use it to rewrite the tone of an email, shorten a memo, or get ideas on how to structure a speech for your next all-staff meeting. They shouldn't use ChatGPT for fact-dependent work, such as researching statistics, finding court cases similar to the one you are handling, or anything else where truthful, verifiable information matters.

Consider your existing approaches to combating bias.

Since ChatGPT can return biased data and information, think about what you already do to combat bias and discrimination, and factor that into your AI usage. One practical step is to minimise the data you include in a request: give ChatGPT only the bare minimum it needs. And just as your employees shouldn't treat everything ChatGPT says as true, they also shouldn't use AI to decide whom to hire from a pool of candidates. If they ask it pointed questions, such as which marketing agency your company should partner with, the answers they get are built on the tool's biases.
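One way to enforce the "bare minimum" rule is an allowlist: strip every field that isn't strictly needed for the task before a prompt is built, so irrelevant or sensitive details never reach the model. The field names below are assumptions for illustration only:

```python
# Only fields a marketing brief actually needs; everything else is dropped.
ALLOWED_FIELDS = {"task", "key_requirements", "tone"}

def minimal_prompt_context(record: dict) -> dict:
    """Keep only allowlisted fields from a record before prompting an AI tool."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

request = {
    "task": "draft a partnership brief",
    "key_requirements": "B2B SaaS experience",
    "tone": "formal",
    "company_name": "Acme Ltd",   # sensitive: dropped
    "contact_age": 42,            # irrelevant and bias-prone: dropped
}
print(minimal_prompt_context(request))
```

An allowlist is deliberately stricter than a blocklist: anything you forget to classify is excluded by default rather than leaked by default.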

Bottom line: Should you trust ChatGPT with your data? 

At the end of the day, ChatGPT may need a better privacy policy, may have suffered some data breaches, and may carry real risks, but there are still ways your company can use it effectively. Every company's purpose is different, so think about what your business can use ChatGPT for, then implement controls and policies around that in your workplace.

Telling your employees they cannot use ChatGPT, or banning it outright in your workplace, is not realistic. Generative AI is evolving fast and is the direction the world is headed. Rather than take an extreme stance against it, it's better to find ways to work with and around it to mitigate the risks outlined above.

When you set up controls to ensure no employee enters personal or sensitive company information, to make sure employees know ChatGPT's output can be fabricated, and to counter the bias built into the system, your company can end up using ChatGPT effectively and, most importantly, safely.