A ChatGPT-style tool designed to assist cybercriminals will let hackers develop sophisticated attacks on a significantly larger scale, researchers have warned.
The creators of WormGPT have branded it as an equivalent to the popular AI chatbot developed by OpenAI to produce human-like answers to questions. Unlike ChatGPT, however, it lacks the built-in safeguards designed to stop people from misusing the technology.
The chatbot was discovered by cybersecurity company SlashNext and reformed hacker Daniel Kelley, who found adverts for the tool on cybercrime forums.
While AI is driving significant advances in healthcare and science, the ability of large AI models to process massive amounts of data very quickly also allows hackers to develop ever more sophisticated attacks.