While such activity so far does not appear to be the norm across the ransomware ecosystem, the findings represent a stark warning.
"There are definitely some groups that are using AI to aid with the development of ransomware and malware modules, but as far as Recorded Future can tell, most aren't," says Allan Liska, an analyst for the security firm Recorded Future who specializes in ransomware. "Where we do see more AI being used widely is in initial access."
Separately, researchers at the cybersecurity company ESET this week claimed to have discovered the "first known AI-powered ransomware," dubbed PromptLock. The researchers say the malware, which largely runs locally on a machine and uses an open source AI model from OpenAI, can "generate malicious Lua scripts on the fly" and uses these to inspect files the hackers may be targeting, steal data, and deploy encryption. ESET believes the code is a proof-of-concept that has seemingly not been deployed against victims, but the researchers emphasize that it illustrates how cybercriminals are starting to use LLMs as part of their toolsets.
"Deploying AI-assisted ransomware presents certain challenges, primarily due to the large size of AI models and their high computational requirements. However, it's possible that cybercriminals will find ways to bypass these limitations," ESET malware researchers Anton Cherepanov and Peter Strycek, who discovered the new ransomware, wrote in an email to WIRED. "As for development, it is almost certain that threat actors are actively exploring this area, and we are likely to see more attempts to create increasingly sophisticated threats."
Although PromptLock hasn't been used in the real world, Anthropic's findings further underscore the speed with which cybercriminals are moving to build LLMs into their operations and infrastructure. The AI company also spotted another cybercriminal group, which it tracks as GTG-2002, using Claude Code to automatically find targets to attack, gain access to victim networks, develop malware, and then exfiltrate data, analyze what had been stolen, and draft a ransom note.
In the last month, this attack impacted "at least" 17 organizations in government, health care, emergency services, and religious institutions, Anthropic says, without naming any of the organizations impacted. "The operation demonstrates a concerning evolution in AI-assisted cybercrime," Anthropic's researchers wrote in their report, "where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually."
