Is ChatGPT Safe? Experts Warn Against Sharing Sensitive Data

Since its launch on November 30, 2022, ChatGPT has attracted widespread attention for its apparent intelligence. And no wonder: as an automated dialogue system, ChatGPT can provide information and answer a wide range of questions in chat with human-like response quality.

With its ability to produce high-quality text, ChatGPT can bring significant positive changes to its users' work.

However, for all the advantages that make ChatGPT useful to so many people, it can also pose a threat to brands and technology companies.

Companies That Ban the Use of ChatGPT

Most recently, Amazon joined companies such as Walmart Global Tech, Microsoft, JPMorgan, and Verizon, which had previously banned or restricted their workers from using ChatGPT or entering confidential information into it.

Amazon warned all of its staff not to share any code with ChatGPT, including submitting it to the chatbot for code completion.

The warning came from an Amazon lawyer on the company’s internal Slack channel, in response to employee questions about whether there was official guidance on using ChatGPT on work devices.

In the briefing, the lawyer asked staff to follow the company’s existing confidentiality policies and not to provide any confidential Amazon information to ChatGPT, after the company found some of the chatbot’s responses resembled internal Amazon data.

The attorney also emphasized that anything employees enter into the AI could be used as training data for future versions of ChatGPT, and the company does not want the chatbot’s output to contain or resemble confidential company information.

Possible Risks of Sharing Important Data With ChatGPT

Amazon’s concern is certainly not without basis. Although ChatGPT claims that it does not store the information users enter in conversations, it does “learn” from each conversation. This means there is no guarantee of confidentiality for information users type into conversations with ChatGPT over the internet.

Transparency around the data shared with ChatGPT has also been questioned by a computational linguistics lecturer at the University of Washington, who said that OpenAI, the company behind ChatGPT, is far from transparent about how it uses that data. If user input is used as training data, it is possible that someone could later extract a big company’s secrets simply by asking ChatGPT.

The UK’s National Cyber Security Centre (NCSC) has pointed to a further risk of sharing data with ChatGPT: even if entering information into a query does not feed potentially private data into the large language model (LLM) itself, the query is still visible to the organization providing the LLM, which in ChatGPT’s case is OpenAI.

The NCSC added that queries stored online could also be hacked, leaked, or even made publicly accessible.
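Given that queries are visible to the provider and may be retained, one common precaution is to screen text for obvious secrets before it is ever sent to a chatbot. The Python sketch below is a minimal illustration of that idea; the patterns and the `redact` helper are hypothetical examples invented here, not part of any tool or policy mentioned in this article, and a production filter would rely on proper data-loss-prevention tooling.

```python
import re

# Illustrative patterns for a few common secret formats. These are
# hypothetical examples; a real data-loss-prevention filter would need
# a much broader, actively maintained ruleset.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name.upper()}]", text)
    return text

# Example: screen a prompt before pasting it into a chatbot.
prompt = "Review this config: owner=dev@example.com key=AKIAABCDEFGHIJKLMNOP"
print(redact(prompt))
# -> Review this config: owner=[REDACTED EMAIL] key=[REDACTED AWS_ACCESS_KEY]
```

Pattern matching like this catches only the most obvious secrets; it is no substitute for the kinds of confidentiality policies that Amazon and the NCSC describe.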

Data Leaks That Have Occurred

Concerns about the confidentiality and privacy of the user data ChatGPT collects materialized on March 20, 2023, when some users found conversations in their history that were not their own.

On March 22, OpenAI CEO Sam Altman confirmed a bug in ChatGPT that had allowed some users to see the titles of other users’ conversations. Altman said the company felt “horrible” about it, but that the significant error had since been fixed.

Use of ChatGPT in the Workplace

According to research by Fishbowl, a career-oriented social networking app, at least 68% of workers who use ChatGPT do so without their employer’s knowledge.

Meanwhile, research by Cyberhaven Labs covering 1.6 million workers at companies across industries that use Cyberhaven products found that 8.2% of workers had used ChatGPT at work at least once.

Earlier Cyberhaven research from February found that 11% of what employees paste into ChatGPT is sensitive data. During the week of February 26 to March 4, 2023, Cyberhaven recorded, per 100,000 employees, 199 incidents of confidential internal data being uploaded to ChatGPT, 173 incidents involving client data, and 159 involving source code.

It can be concluded that although some companies have tried to block access to ChatGPT, usage continues to climb, and workers keep finding ways to reach the AI chatbot, for example by using a proxy to get around network-based security tools, despite the risks of sharing company data described above.

The privacy risks inherent in using ChatGPT should remind users to be careful about what information they share with the AI chatbot. And keep in mind that, despite its potential benefits, OpenAI is a private, for-profit enterprise whose commercial interests and imperatives do not necessarily align with the needs of wider society.
