Krishnan says the creator of the system published a video appearing to show the chatbot operating and generating a scammy email. They were also trying to sell access to the system for $200 per month, or a yearly cost of $1,700. Krishnan says that in conversations with the developer behind FraudGPT, they claimed to have a few hundred subscribers and pushed for a sale, while the WormGPT creator appeared to have received payments into a cryptocurrency wallet address they shared. “All these projects are in their infancy,” Krishnan says. He adds, “we haven’t got much feedback” into whether people are purchasing or using the systems.
While those touting the chatbots claim they exist, it is hard to verify the makeup and legitimacy of the systems. Cybercriminal scammers are known to scam other scammers, with previous research showing that they frequently try to rip each other off, don’t provide what they claim they are selling, and offer bad customer service. Sergey Shykevich, a threat intelligence group manager at security firm Check Point, says there are some hints that people are using WormGPT. “It seems there is a real tool,” Shykevich says. The seller behind the tool is “relatively reliable” and has a history on cybercrime forums, he says.
There are more than 100 responses to one post about WormGPT, Shykevich says, although some of these say the seller isn’t very responsive to their inquiries and others “weren’t very excited” about the system. Shykevich is less convinced about FraudGPT’s authenticity—the seller has also claimed to have systems called DarkBard and DarkBert. Shykevich says some of the posts from the seller were removed from the forums. Either way, the Check Point researcher says there’s no sign that any of the systems are more capable than ChatGPT, Bard, or other commercial LLMs.
Kelley says he believes claims about the malicious LLMs created so far are “slightly overexaggerated.” But he adds, “this is not necessarily different from what legitimate businesses do in the real world.”
Despite questions about the systems, it isn’t a surprise that cybercriminals want to get in on the LLM boom. The FBI has warned that cybercriminals are looking at using generative AI in their work, and European law enforcement agency Europol has issued a similar warning. The law enforcement agencies say LLMs could help cybercriminals carry out fraud, impersonation, and other social engineering attacks faster than before, and could also improve their written English.
Whenever any new product, service, or event gains public attention—from the Barbie movie to the Covid-19 pandemic—scammers rush to include it in their hacking arsenal. So far, scammers have tricked people into downloading password-stealing malware through fake ads for ChatGPT, Bard, Midjourney, and other generative AI systems on Facebook.
Researchers at security firm Sophos have spotted the operators of pig butchering and romance scams accidentally including generated text in their messages—“As a language model of ‘me’ I don’t have feelings or emotions like humans do,” one message said. And hackers have also been stealing tokens to provide them with access to OpenAI’s API and access to the chatbot at scale.
In his WormGPT report, Kelley notes that cybercriminals are often sharing jailbreaks that allow people to bypass the safety restrictions put in place by the makers of popular LLMs. But even unconstrained versions of these models may, thankfully, not be that useful for cybercriminals in their current form.
Shykevich, the Check Point researcher, says that even when he has seen cybercriminals try to use public models, they haven’t been effective. They can “create ransomware strains, info stealers, but no better than even an average developer,” he says. However, those on the cybercrime forums are still talking about making their own clones, Shykevich says, and they’re only going to get better at using the systems. So be careful what you click.
Update: 4:15 pm ET, August 7, 2023: A previous version of this article misspelled Daniel Kelley’s surname. We regret the error.