A properly configured botnet powered by ChatGPT would be difficult to detect. It would be better equipped to fool users and more effective at gaming the algorithms that prioritize content on social networks.
“They deceive both the platform and the users,” Menczer says of the ChatGPT-powered botnet. When a social media algorithm sees that a post is getting a lot of engagement, even if that engagement comes from other bot accounts, it shows the post to more people. “That’s exactly why these bots behave the way they do,” Menczer says. And governments that want to run disinformation campaigns have most likely already developed or deployed such tools, he adds.
Researchers have long feared that the technology behind ChatGPT poses a misinformation risk, and OpenAI even delayed the release of a predecessor to the system because of these fears. So far, however, there are few documented examples of large language models being abused at scale. Still, some political campaigns are already using AI, and prominent politicians have shared fake videos designed to discredit their opponents.
William Wang, a professor at the University of California, Santa Barbara, finds it exciting to be able to study how criminals actually use ChatGPT. “Their findings are very interesting,” he says of the Fox8 study.
Wang believes that many spam webpages are now generated automatically, making it increasingly difficult for humans to spot this material. And as AI keeps improving, detection will only get harder. “The situation is very serious,” he warns.
In May of this year, Wang’s lab developed a technique for automatically distinguishing ChatGPT-generated text from genuine human writing. He notes, however, that it is expensive to deploy because it relies on OpenAI’s application programming interface (API), and that the underlying AI is constantly being updated and improved. “It’s kind of a cat-and-mouse problem,” Wang says.
X could serve as a testing ground for such tools. Malicious bots appear to have become far more prevalent since Elon Musk took over Twitter, Menczer says, despite the tech mogul’s promise to eradicate them. And the problem has become harder for researchers to study since the price of API access rose sharply.
Someone at X apparently took down the Fox8 botnet after Menczer and Yang published their article in July. Menczer’s group used to alert Twitter to its findings on the platform, but with X it no longer does so. “They don’t respond,” Menczer notes. “They have no staff.”