There's no doubt about it: ChatGPT, the AI chatbot from OpenAI, is extremely popular and has seemingly single-handedly thrust chatbots and AI language models into the mainstream.
But with this popularity come some side effects. For one: ChatGPT accounts are now a prime target for hackers.
In a recently released report, researchers at the cybersecurity firm Group-IB say they found more than 101,000 compromised ChatGPT login credentials for sale on dark web marketplaces over the past year.
ChatGPT crossed 100 million users in February, just months after it first launched to the public. As the AI chatbot's popularity has grown, so has the number of stolen login credentials for ChatGPT accounts. Group-IB says it found more than 26,800 ChatGPT credentials last month alone, a peak since it began tracking the data.
Group-IB researchers say the majority of these stolen ChatGPT credentials were harvested by the popular Raccoon malware. Raccoon works much like other basic info-stealing malware: it lifts data from a target's computer after the user downloads the software, which is often disguised as an app or file the user actually wants. Raccoon is also easy to use and is sold as a dependable, maintained subscription service, which makes it a popular choice among hackers.
There are a number of security concerns unique to having a ChatGPT account compromised. For one, OpenAI released a feature a few months ago that saves a user's chat history. Many companies, like Google, warn their employees not to input sensitive information into ChatGPT because that data could be used to train the AI language models; the fact that they need to issue such warnings means it does happen. A hacker with access to a user's ChatGPT history can see all of the sensitive information that has previously been entered into the chatbot.
"Many enterprises are integrating ChatGPT into their operational flow," said Group-IB's Head of Threat Intelligence Dmitry Shestakov in a statement. "Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."
In addition, if a user reuses their password across multiple platforms, a hacker with access to their ChatGPT account could soon access their other accounts as well. And if the target pays for ChatGPT's premium plan, ChatGPT Plus, they may unwittingly be footing the bill for others to use the paid service.
ChatGPT users should watch for unauthorized access to their accounts and make sure they don't reuse their account password on other platforms.