It has been reported that ESET has identified what its researchers are describing as the ‘first known AI-powered ransomware,’ which uses one of OpenAI’s newly released small open-weight models to write Lua scripts on a victim’s system for malicious purposes (see story, here). Here, Nathan Webb, principal consultant at Acumen Cyber, responds to the news.
“This is possibly the first instance of an AI-powered piece of ransomware observed in the wild. PromptLock, as researchers have called it, uses one of OpenAI’s new open-weight versions of ChatGPT.
Rather than shipping with a fixed payload, the malware uses the model to write Lua scripts on the fly: scripts that gather information about the local system, view files, exfiltrate data, and ultimately encrypt the system. The researchers note, however, that file destruction has yet to be implemented, which suggests to them that this is a proof of concept rather than a fully fledged piece of malware.
The use of Lua here suggests that the attackers are trying to make the ransomware platform-agnostic, so that they can target a wider range of systems and environments, especially those not traditionally targeted due to their low market share, such as Apple devices and consumer Linux machines.
For users to start defending themselves, restrictions and detections need to be implemented around operating-system tools, and especially around script interpreters for languages like Python and Lua. A model running locally can respond dynamically on a system-by-system basis, and could prove an extremely dangerous and effective form of ransomware, since it could write scripts to hide its activity and adjust its behaviour on the fly.
EDR vendors could respond by updating their own AI-based detection, using machine learning to observe and deobfuscate scripts, determine their function dynamically, and help filter legitimate scripts from malicious ones.
While some AI models may currently be too demanding to run efficiently on end-user hardware, techniques such as quantisation and LoRA fine-tuning significantly reduce the computing power required, and further iterations on this approach should be expected.”
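The interpreter restrictions Webb describes can be sketched in a few lines. The sketch below is purely illustrative — the interpreter names and the parent-process allowlist are assumptions for the example, not drawn from any real EDR product or policy — but it shows the shape of the control: alert on (or block) launches of script interpreters whose parent process is not explicitly trusted.

```python
# Illustrative sketch of an interpreter-launch restriction.
# INTERPRETERS and ALLOWED_PARENTS are hypothetical example values,
# not a recommended production configuration.

INTERPRETERS = {"lua", "lua5.4", "python", "python3", "powershell", "wscript"}
ALLOWED_PARENTS = {"make", "ci-runner"}  # hypothetical trusted launchers


def flag_interpreter_launch(process_name: str, parent_name: str) -> bool:
    """Return True if this process launch should be alerted on or blocked."""
    name = process_name.lower()
    if name.endswith(".exe"):  # normalise Windows-style executable names
        name = name[:-4]
    # Flag any script interpreter not started by an allowlisted parent.
    return name in INTERPRETERS and parent_name.lower() not in ALLOWED_PARENTS


# Example: a Lua interpreter spawned by an office application is suspicious,
# while python3 started by a known build tool is not.
print(flag_interpreter_launch("lua5.4", "winword"))     # suspicious
print(flag_interpreter_launch("python3", "ci-runner"))  # allowed
```

Real endpoint controls (AppLocker, WDAC, or EDR custom rules) express the same idea declaratively rather than in code, but the logic — tie interpreter execution to a known-good launch context — is the same.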