Is ChatGPT a Safe Cyber Space for Businesses?

ChatGPT has captured people’s attention and made them curious about how the chatbot could change how they search for information and do other online tasks. However, as they start using the internet in new ways, are they overlooking possible safety risks?

ChatGPT Confidently Makes Incorrect Statements

One of the most-talked-about dangers of ChatGPT is its “hallucination” problem. That’s the term artificial intelligence researchers use to describe what happens when the chatbot gives an answer not directly based on the data that trained it. 

Much of what the chatbot says may seem right — especially to people without firsthand knowledge of the topic. The problem is that a hallucinated answer typically has wholly or partially wrong content. That’s a big deal if you use ChatGPT for any business-related task. 

In real-life interactions, some users have had to challenge the chatbot on incorrect information before it admitted its mistakes. Sometimes, the chatbot cites nonexistent studies or other sources, too.

That’s why it’s best to think carefully about the pros and cons before using ChatGPT for advice about any topic for which an error would have serious consequences. Examples include money, health, and legal matters. You may get an answer in a matter of seconds, but if your company does not have team members to verify the content’s validity, trusting it could be risky. 

ChatGPT Poses Cybersecurity Risks

Cybersecurity experts have also warned that online criminals have already started harnessing ChatGPT for their benefit. For example, Meta said that since March 2023, the company's team had found approximately 10 malware families using ChatGPT-themed lures to trick victims.

That's not surprising, especially since cybercriminals focus on themes or topics most likely to catch the public interest on a massive scale. When the COVID-19 pandemic caused a worldwide public health threat, many types of malware came disguised as information about vaccines or new safety measures.

A May 2023 study from cybersecurity firm Check Point Research documented a similar malware trend. Hackers set up websites that appear to be associated with ChatGPT but direct people to download malware. The researchers found that one in 25 new ChatGPT-related domains registered in 2023 hosted malicious content.

Something many people probably don’t realize — and that hackers are capitalizing on — is that you don’t have to download anything to start working with ChatGPT. It’s a browser-based tool. You can also get plug-ins that let you use ChatGPT to browse the internet. 

Some ChatGPT users may try to access it through Google searches. However, that's a common way for people to end up on misleading sites that look incredibly realistic.

ChatGPT Threatens the Privacy of Confidential Information

Even with its numerous shortcomings, ChatGPT can save people significant time when used strategically. Consider the administrative tasks associated with a K-12 school. Many such organizations automate workflows related to sending reminders, filing forms, and more. ChatGPT can assist with tasks such as composing welcome letters for new students or drafting notices to send to parents before a school trip.

However, no matter how people use the tool at work, they must not enter confidential information or details they wouldn't want shared with the world. In March 2023, the company behind ChatGPT fixed a bug that temporarily allowed some users to see titles from other active users' chat histories. Additionally, representatives said the same bug revealed the payment details of 1.2% of ChatGPT Plus users who used the tool during a nine-hour span.

The company’s tech team temporarily took ChatGPT offline to address these matters. However, people soon heard about the issue, and some were worried. 

Samsung executives recently banned employees from using the tool after they heard about an employee uploading confidential code to the chatbot. Leaders’ primary concern was that data fed to chatbots is stored on external servers, where it’s difficult to find and delete. Additionally, as the March bug showed, data leaks can happen. 

People Must Use ChatGPT Cautiously

It’s easy to find plenty of examples of amazing things ChatGPT can do. However, these examples show the tool is not always safe or reliable. If you’re thinking about using the chatbot for business reasons, take the time to establish guidelines that will keep your company safe without overly restricting how people can engage with it while at work. 

For example, you might decide that no one can include client names, trade secrets, or specific project details in any ChatGPT prompts. Another best practice is to provide an official website where people can use ChatGPT and discourage them from doing Google searches that could lead to fake sites. 
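A policy like this can even be partially enforced in software. The sketch below is a minimal, hypothetical example of a pre-submission check that flags prompts containing terms a company has banned from ChatGPT use; the banned-term list and function name are illustrative assumptions, not part of any real ChatGPT tooling.

```python
# Hypothetical sketch of a company-side prompt screen. The terms below
# stand in for client names, project code names, and other details a
# policy might forbid in ChatGPT prompts.
BANNED_TERMS = ["acme corp", "project falcon", "ssn"]

def find_policy_violations(prompt: str, banned_terms=BANNED_TERMS) -> list:
    """Return the banned terms that appear in the prompt (case-insensitive)."""
    lowered = prompt.lower()
    return [term for term in banned_terms if term in lowered]

# Example usage: warn before a prompt leaves the company network.
violations = find_policy_violations("Summarize the Project Falcon roadmap")
if violations:
    print("Blocked: prompt contains banned terms:", violations)
```

A simple keyword check like this won't catch every leak, of course, but it makes the policy concrete and gives employees immediate feedback rather than relying on memory alone.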

Another practical tip is to tell all employees that the rules surrounding ChatGPT and similar chatbot tools could change anytime. As people use them in new ways, additional risks could emerge. 

Finally, consider having companywide training sessions if you plan to use ChatGPT extensively at your workplace. The topics could cover productivity-related ideas, such as how to format questions to get the most useful responses. They should also thoroughly address cybersecurity threats and what people can do to keep themselves and the business safe.

April Miller

April Miller is a staff writer at ReHack Magazine who specializes in AI and machine learning while writing on topics across the technology sphere. You can find her work on ReHack.com and by following ReHack's Twitter page.