GTIG AI Threat Tracker: Advanced Persistent Threats Weaponize AI for Cyber Operations

State-sponsored hackers increasingly use AI to accelerate attacks, from reconnaissance to malware development. Google’s Threat Intelligence Group reports how APT groups exploit AI tools while defenders implement countermeasures.

The AI Arms Race Accelerates

Advanced Persistent Threat (APT) groups from North Korea, Iran, China, and Russia now integrate artificial intelligence throughout their attack lifecycles. These government-backed actors use large language models to streamline reconnaissance, craft personalized phishing campaigns, and develop malicious code faster than ever before.

Google Threat Intelligence Group observed this escalation in Q4 2025, documenting how threat actors misuse AI tools like Gemini to achieve productivity gains across all attack phases. While these groups haven’t achieved breakthrough capabilities that fundamentally alter the threat landscape, their systematic adoption of AI tools represents a significant evolution in cyber warfare.

Model Extraction: The New Intellectual Property Theft

Threat actors discovered they can steal AI capabilities without traditional network intrusions. Model extraction attacks, also called “distillation attacks,” allow adversaries to systematically probe AI models and recreate their functionality.

Google identified over 100,000 prompts attempting to extract Gemini’s internal reasoning processes. Attackers instructed the model to output full reasoning traces by claiming “the language used in the thinking content must be strictly consistent with the main language of the user input.”

These attacks don’t threaten average users but pose serious risks to AI developers and service providers. Organizations offering custom AI models face potential intellectual property theft from competitors seeking to replicate specialized capabilities without the development costs.

Google’s Response: Real-time detection systems now recognize extraction patterns and protect internal reasoning traces. The company disables accounts associated with extraction attempts and continuously improves defenses.

AI-Powered Reconnaissance and Social Engineering

APT groups use AI to transform traditional phishing from a manual, error-prone process into a scalable, sophisticated operation. Large language models eliminate telltale signs like poor grammar and cultural misunderstandings that previously helped defenders identify attacks.

Targeted Intelligence Gathering

UNC6418 used Gemini to compile sensitive account credentials and email addresses, then immediately launched phishing campaigns targeting those exact accounts in Ukraine’s defense sector.

Temp.HEX, a China-based group, leveraged AI to research specific individuals in Pakistan and collect operational data on separatist organizations. The group later incorporated similar targets into active campaigns.

Rapport-Building Phishing

Iranian group APT42 demonstrates the most sophisticated AI integration, using Gemini to:

  • Search for official email addresses of target entities
  • Research potential business partners to establish credible pretexts
  • Craft personalized personas based on target biographies
  • Translate content into local languages with cultural nuance

This “rapport-building phishing” maintains multi-turn conversations with victims, building trust before delivering malicious payloads.

Malware Development Gets AI Assistance

State-sponsored groups increasingly rely on AI for coding, debugging, and vulnerability research. Several APT groups show particular sophistication in their AI integration:

APT31 (China) created expert cybersecurity personas in Gemini to automate vulnerability analysis and generate targeted testing plans. The group fabricated scenarios to analyze remote code execution techniques and SQL injection methods against specific US targets.

UNC795 (China) engaged with Gemini multiple days per week throughout their entire attack lifecycle, using it to troubleshoot code, conduct research, and develop an AI-integrated code auditing capability.

APT41 (China) accelerated malware development by feeding Gemini open-source tool documentation and requesting explanations and use cases for specific attack tools.

Experimental AI-Enabled Malware

Threat actors experiment with novel AI integration in malware families, though without revolutionary results yet. Two notable examples demonstrate emerging trends:

HONESTCUE: Outsourcing Functionality

This malware family uses Gemini’s API to generate C# source code for second-stage payloads. HONESTCUE sends hardcoded prompts requesting specific functionality, then compiles and executes the AI-generated code directly in memory.

The approach offers multiple advantages:

  • Bypasses traditional network detection
  • Leaves no payload artifacts on disk
  • Uses seemingly innocuous prompts that don’t trigger security guardrails

COINBAIT: AI-Generated Phishing Kit

This sophisticated phishing kit masquerades as a major cryptocurrency exchange. Analysis reveals construction using AI-powered platforms like Lovable AI, evidenced by:

  • Complex React Single-Page Application architecture
  • Verbose developer-oriented logging messages
  • Integration with legitimate cloud services for hosting

Underground AI Services and API Abuse

Criminal actors struggle to develop custom AI models, instead relying on mature commercial models through various abuse vectors:

Xanthorox advertised itself as a “bespoke, privacy-preserving self-hosted AI” for autonomous malware generation. Investigation revealed it actually leveraged multiple commercial AI products, including Gemini, through Model Context Protocol servers.

Vulnerable open-source AI tools face regular exploitation for API key theft, creating a thriving black market for unauthorized API access. Platforms like One API and New API suffer from default credentials, insecure authentication, and API key exposure vulnerabilities.

ClickFix Campaigns Abuse AI Trust

Threat actors exploit public trust in AI services by using their sharing features to host malicious content. The attack chain works as follows:

  1. Craft malicious command-line instructions
  2. Manipulate AI to create realistic troubleshooting advice containing the malicious commands
  3. Share the AI chat transcript via public links
  4. Direct victims to the “trusted” AI-hosted instructions
  5. Victims copy and execute malicious commands, believing they’re following legitimate AI advice

This technique distributes ATOMIC information stealer targeting macOS systems, collecting browser data, cryptocurrency wallets, and sensitive files.

Defending Against AI-Enhanced Threats

Organizations face evolving challenges as threat actors integrate AI capabilities. Key defensive measures include:

Network Monitoring: Implement detection rules for traffic to backend-as-a-service platforms from uncategorized or newly registered domains.

Security Awareness: Train users to recognize AI-generated content and avoid entering sensitive data into unfamiliar website forms.

API Security: Monitor API access patterns for extraction attempts and implement rate limiting on AI service usage.

Threat Intelligence: Stay informed about emerging AI abuse techniques and update detection capabilities accordingly.

The Path Forward

While APT groups haven’t achieved breakthrough AI capabilities yet, their systematic adoption signals a fundamental shift in cyber operations. The integration of AI tools across reconnaissance, social engineering, and malware development represents the new normal for state-sponsored threats.

Defenders must adapt by implementing AI-aware security measures, improving detection of AI-generated content, and developing countermeasures for emerging attack techniques. The AI arms race in cybersecurity has begun, and preparation today determines tomorrow’s defensive success.

Organizations should evaluate their current security posture against AI-enhanced threats and implement comprehensive monitoring for suspicious AI service usage patterns.