ChatGPT users should be wary that their personal data may have been leaked online, following the dump of more than 100,000 ChatGPT account credentials on the dark web. As reported by The Hacker News and according to Singapore-based cybersecurity company Group-IB, the compromised credentials date from June 2022 through May 2023, suggesting the harvesting is still ongoing. The U.S., France, Morocco, Indonesia, Pakistan, and Brazil appear to account for the largest shares of the stolen credentials.
"The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023," a Group-IB specialist said. "The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year."
In this case, 26,802 available logs means the dark web marketplaces have already absorbed those user credentials; they've found their (likely) malicious buyers.
"Logs containing compromised information harvested by info stealers are actively traded on dark web marketplaces," Group-IB said. "Additional information about logs available on such markets includes the lists of domains found in the log as well as the information about the IP address of the compromised host."
The majority of the dumped credentials were found within logs tied to several information stealer malware families. The Raccoon info stealer, a particularly popular strain within that category, was used to compromise exactly 78,348 accounts. (Exact numbers become easy to pin down once you know what to look for in each malware type.)
Raccoon seems to be the AAA-equivalent of the info stealer world, and a showcase of how the dark web operates as a parallel economy to ours. Users can purchase access to Raccoon on a subscription model; no coding or particular technical skill is required. That ease of deployment is part of the reason cybercrime-related offenses keep climbing. Raccoon, like its competitors, comes bundled with extra capabilities: these subscription-based info stealers don't just steal credentials, they also let malicious users automate follow-up attacks.
Other pieces of malware were used to steal user credentials too, of course; there's a whole field of black-hat tooling out there. But their numbers are far less impressive. A distant second to Raccoon was Vidar, used to access 12,984 accounts, while third place went to the 6,773 credentials captured through the RedLine malware.
That these credentials offer access to ChatGPT accounts should give pause to anyone using the service. Remember that it's not just access to your personal information. Since ChatGPT retains chat history by default, malicious users also get access to those conversations. And that's where the real value is: in the business planning, app development, malware development (yes, that too), and writing happening within those chats. Both personal and professional content can be found within a ChatGPT account, from company trade secrets that shouldn't be there in the first place to personal diaries. There are apparently even classified documents.
"Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."
It's quite the informational heist. So remember: all passwords matter, but perhaps the security of your ChatGPT account (both at home and at work) matters more than most. Be mindful of the plugins you install in ChatGPT, use strong, unique passwords, activate two-factor authentication (2FA), and follow the cybersecurity best practices that'll lower the odds of being successfully targeted.
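On the password front, one cheap sanity check is to see whether a password you reuse already circulates in known breach corpora. Below is a minimal Python sketch using the Have I Been Pwned "Pwned Passwords" range API, which relies on k-anonymity: only the first five characters of the password's SHA-1 hash ever leave your machine. To be clear, this checks passwords against public breach data in general; it says nothing about whether your ChatGPT account appears in this particular Group-IB dataset, and the snippet is an illustrative sketch rather than anything drawn from the report.

```python
import hashlib
import urllib.request


def breach_count(password: str) -> int:
    """Return how many times `password` appears in the Have I Been Pwned
    corpus, using the k-anonymity range API (only the first five characters
    of the SHA-1 hash are sent over the network)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    request = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-exposure-check"},  # polite client identifier
    )
    with urllib.request.urlopen(request) as response:
        body = response.read().decode("utf-8")
    # The response is one "<hash suffix>:<count>" pair per line.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = breach_count("hunter2")  # a famously bad example password
    print(f"Seen in {hits} known breaches" if hits else "Not found in known breaches")
```

None of this replaces the basics above (unique passwords, 2FA), but it's a quick way to catch a password that is already in attackers' hands before it unlocks your chat history.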