A recent report from Google's Threat Intelligence Group (GTIG) indicates a growing trend of state-sponsored cyber actors leveraging artificial intelligence in their attack methodologies. Groups linked to governments in Iran, North Korea, China, and Russia are now reportedly integrating advanced AI models, such as Google's Gemini, to expedite and enhance their cyber campaigns. This includes the creation of sophisticated phishing schemes and the development of new malware strains.
The quarterly AI Threat Tracker report, released by GTIG, outlines how government-backed attackers have incorporated AI across the entire attack lifecycle: initial intelligence gathering, crafting social engineering lures, and developing malicious software. The activity described in the report was observed during the final quarter of 2025.
AI Enhances Reconnaissance and Social Engineering
GTIG researchers noted in their report that for nation-state threat actors, large language models have become indispensable tools for conducting technical research, identifying targets, and rapidly generating highly convincing phishing lures.
Several examples illustrate this evolving threat. The Iranian threat actor APT42 reportedly utilized Gemini to enhance its reconnaissance and targeted social engineering efforts. This group focused on generating email addresses that appeared legitimate for target organizations and subsequently investigated plausible pretexts for engaging with these targets. APT42 developed elaborate personas and scenarios to maximize target engagement, employing natural, native phrases and translating across languages to bypass common phishing indicators like poor grammar or awkward syntax.
Similarly, the North Korean government-backed entity UNC2970, known for targeting the defense sector and impersonating corporate recruiters, leveraged Gemini to profile high-value individuals. Its reconnaissance involved seeking details about major cybersecurity and defense firms, mapping specific technical job roles, and compiling salary data. GTIG observed that such activity blurs the line between legitimate professional research and malicious reconnaissance, as the actor accumulates the necessary details to construct highly credible phishing personas.
Emerging AI-Powered Malware
Beyond simply assisting with attack planning, AI is increasingly found within the malware itself. GTIG identified malware samples, internally dubbed HONESTCUE, which utilize Gemini's API to offload functionality generation. This malicious software is designed to evade traditional network-based detection and static analysis through a multi-layered obfuscation strategy. HONESTCUE acts as a downloader and launcher framework, sending prompts to Gemini's API and receiving C# source code in return. Its fileless secondary stage then compiles and executes payloads directly in memory, leaving no persistent artifacts on disk.
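Because this technique hinges on the malware calling out to a public LLM API, one defensive angle is simply watching for unexpected hosts resolving those API domains. The sketch below is a minimal illustration of that idea, not a method from the GTIG report; the log format, the approved-host list, and the domain set are assumptions (generativelanguage.googleapis.com is the public Gemini API endpoint).

```python
# Illustrative defensive sketch: flag hosts that resolve LLM API domains
# but are not on an approved list. Log format and lists are hypothetical.

LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # public Gemini API endpoint
    "api.openai.com",
}

APPROVED_HOSTS = {"dev-workstation-01"}  # hosts expected to use LLM APIs

def flag_llm_lookups(dns_log_lines):
    """Return (host, domain) pairs for unapproved hosts querying LLM APIs.

    Each log line is assumed to look like 'host,queried_domain'.
    """
    flagged = []
    for line in dns_log_lines:
        host, _, domain = line.strip().partition(",")
        if domain in LLM_API_DOMAINS and host not in APPROVED_HOSTS:
            flagged.append((host, domain))
    return flagged

sample_log = [
    "dev-workstation-01,generativelanguage.googleapis.com",  # approved
    "finance-pc-07,generativelanguage.googleapis.com",       # suspicious
    "finance-pc-07,example.com",
]
print(flag_llm_lookups(sample_log))
```

In practice this signal is noisy on its own, since legitimate tools increasingly call the same endpoints, but it can help scope which endpoints to examine for fileless activity.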
In a separate finding, GTIG uncovered COINBAIT, a phishing kit whose development was likely accelerated by AI code generation tools. This kit, which mimics a major cryptocurrency exchange to harvest credentials, was constructed using the AI-powered platform Lovable AI.
Abuse of Public AI Platforms
A novel social engineering technique, dubbed 'ClickFix', first appeared in December 2025. Google observed threat actors exploiting the public sharing functionalities of generative AI services like Gemini, ChatGPT, Copilot, DeepSeek, and Grok to host deceptive content. These campaigns distributed ATOMIC malware targeting macOS systems. Attackers manipulated AI models to generate realistic instructions for common computer tasks, then embedded malicious command-line scripts within these 'solutions.' By creating shareable links to these AI chat transcripts, threat actors leveraged trusted domains for their initial attack stage.
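Since the lure in these campaigns is a shared-conversation link on a trusted AI domain, defenders can scan inbound mail or proxy logs for such URLs. The sketch below shows the idea; the URL patterns are assumptions for illustration and would need to be verified against each service's actual share-link format.

```python
import re

# Hypothetical patterns for shared-conversation links on public AI
# services; exact URL formats are assumptions for illustration only.
SHARE_PATTERNS = [
    re.compile(r"https?://gemini\.google\.com/share/\w+"),
    re.compile(r"https?://chatgpt\.com/share/[\w-]+"),
    re.compile(r"https?://copilot\.microsoft\.com/shares/[\w-]+"),
]

def find_shared_chat_links(text):
    """Return all shared AI chat transcript URLs found in the text."""
    hits = []
    for pattern in SHARE_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

email_body = (
    "To fix the error, follow the steps here: "
    "https://gemini.google.com/share/abc123 and restart."
)
print(find_shared_chat_links(email_body))
```

Flagged links could then be quarantined or rendered non-clickable, since a shared AI transcript is an unusual thing to receive in unsolicited mail.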
Model Extraction Attempts and Underground Markets
Operational misuse of AI is not the only concern. Google DeepMind and GTIG also detected a rise in model extraction attempts, sometimes called 'distillation attacks,' aimed at stealing intellectual property from AI models. One campaign targeting Gemini's reasoning capabilities involved over 100,000 prompts, designed to compel the model to output its reasoning processes. The wide array of questions suggested an attempt to replicate Gemini's logical abilities in various tasks and non-English languages. While direct attacks on cutting-edge AI models by advanced persistent threat (APT) actors were not observed, GTIG did identify and disrupt frequent model extraction attempts from private sector entities and researchers globally, seeking to clone proprietary logic.
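One telltale of the extraction attempts described above is sheer volume: tens of thousands of prompts from a single source, far beyond normal use. The sketch below is a crude illustration of that signal (not GTIG's detection method); the threshold and log format are assumptions.

```python
from collections import Counter

# Illustrative sketch: flag accounts whose query volume is far above the
# norm, a crude proxy for the bulk prompting seen in model extraction
# ("distillation") attempts. Threshold is an arbitrary assumption.

def flag_high_volume_accounts(query_log, threshold=1000):
    """query_log: iterable of account IDs, one entry per query made."""
    counts = Counter(query_log)
    return sorted(acct for acct, n in counts.items() if n >= threshold)

log = ["acct-a"] * 5 + ["acct-b"] * 1500
print(flag_high_volume_accounts(log))  # → ['acct-b']
```

Real detection would also weigh prompt diversity and content (the report notes the 100,000-prompt campaign spanned many tasks and languages), but volume alone is often the first anomaly to surface.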
Observations from English and Russian-language underground forums by GTIG indicate a steady demand for AI-enabled tools and services. However, nation-state hackers and other cybercriminals often struggle to develop their own custom AI models, instead relying on commercial products accessed via stolen credentials. One toolkit, 'Xanthorox,' was advertised as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed Xanthorox was not a bespoke model but instead powered by several commercial AI products, including Gemini, accessed through illicitly obtained API keys.
Google's Proactive Defense Measures
In response to these findings, Google has implemented various countermeasures against identified threat actors. This includes disabling accounts and assets associated with malicious activities. The company has also integrated this intelligence to enhance both its classifiers and models, allowing them to refuse assistance with similar attacks in the future. The report states a commitment to developing AI responsibly, which involves taking proactive steps to disrupt malicious activities by discontinuing projects and accounts linked to bad actors, while continuously refining models to reduce their susceptibility to misuse.
Implications for Cybersecurity
GTIG emphasized that despite these advancements, no APT or information operations actors have achieved truly breakthrough capabilities that fundamentally alter the overall threat landscape. These findings, however, underscore the increasingly critical role of AI in cybersecurity, as both defenders and attackers race to harness the technology's capabilities. For enterprise security teams, particularly those in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain highly active, the report serves as a crucial reminder to strengthen defenses against AI-augmented social engineering and reconnaissance operations.
This article is a rewritten summary based on publicly available reporting. For the original story, visit the source.
Source: AI News