Hackers are moving beyond using AI for efficiency and are now deploying malware that can rewrite itself mid-attack, according to new threat research from Google.
In a fresh update on adversarial AI activity, Google Threat Intelligence Group (GTIG) says hackers are breaking new ground by embedding large language models (LLMs) inside malware to alter its behavior and evade defenses in real time.
“Adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations,” the report states. “This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution.”
The threat unit says it has identified malware families using LLMs to generate malicious functions on demand, rather than shipping hard-coded payloads — a design that makes detection significantly harder.
“For the first time, GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, that use Large Language Models (LLMs) during execution. These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand.”
Google highlights a wider pattern of adversaries experimenting with LLMs to rewrite their code on the fly, craft phishing lures, and bypass model safeguards by posing as security researchers or students working on “capture-the-flag” exercises.
GTIG points to live hostile activity, including Russian state actor APT28 using malware that queries open-source LLMs to generate commands during intrusions, marking what it calls the first observed operational case of this technique.
“APT28’s use of PROMPTSTEAL constitutes our first observation of malware querying an LLM deployed in live operations. PROMPTSTEAL novelly uses LLMs to generate commands for the malware to execute rather than hard-coding the commands directly in the malware itself. It masquerades as an ‘image generation’ program that guides the user through a series of prompts to generate images while querying the Hugging Face API to generate commands for execution in the background.”
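The report's description maps to a simple runtime pattern: the binary carries only a prompt and an API call, and the actual commands are fetched from a hosted model when the program runs. Below is a minimal Python sketch of that pattern, not PROMPTSTEAL's actual code. The Hugging Face Inference API endpoint format is real, but the model name, prompt, and token are illustrative assumptions, and the sketch deliberately prints the generated text rather than executing it.

```python
import requests

# Illustrative sketch of the runtime pattern GTIG describes, NOT
# PROMPTSTEAL's actual code. The model name, prompt, and token
# below are assumptions chosen for demonstration.
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
HEADERS = {"Authorization": "Bearer hf_XXXX"}  # placeholder token

def fetch_generated_commands(task: str) -> str:
    """Ask a hosted LLM to produce commands for a task at runtime.

    Real malware would feed the response to a shell; this sketch only
    returns it, because the point is the design: the malicious commands
    never exist inside the binary, only the prompt does.
    """
    payload = {"inputs": f"Output only Windows cmd commands to {task}."}
    resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
    resp.raise_for_status()
    # The Inference API returns a list of generation objects for text models.
    return resp.json()[0]["generated_text"]

if __name__ == "__main__":
    # Printed, never executed, in this sketch.
    print(fetch_generated_commands("list files in the user's Documents folder"))
```

For defenders, the takeaway is that static signatures have little to match: the payload is synthesized on each run, and on the wire the activity looks like ordinary HTTPS traffic to a popular AI inference service.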
Google says it has disabled assets linked to the malware and is applying the intelligence to harden models, block similar behavior, and share defensive guidance across the industry.
The report warns that while many of the implementations remain early-stage, the shift toward adaptive malware is underway, and attackers are expected to continue integrating AI into intrusion activity.
“We are only now starting to see this type of activity, but expect it to increase in the future.”

