
A recent cyberattack attributed to North Korea shows a disturbing new trend: the use of generative AI for espionage. On July 17, 2025, the Genians Security Center (GSC) uncovered a spear-phishing campaign in which attackers used AI-generated deepfake images of South Korean military ID cards to make their lures look authentic and ultimately gain unauthorized access to targets' systems. The hackers, aligned with North Korean interests, used ChatGPT to create realistic ID mockups, which they delivered via phishing emails disguised as official government requests.
These emails carried compressed archives containing malicious Windows shortcut (LNK) files, which ran obfuscated PowerShell scripts and batch files to download additional payloads. The attackers disguised the malware as Hancom Office updates and used AutoIt scripts to maintain persistence, evade antivirus detection, and connect to command-and-control (C2) servers hosted in South Korea. Techniques such as string slicing, encoded environment variables, and multi-stage payload delivery highlight the growing sophistication of North Korean cyber tradecraft.
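To make those obfuscation tricks concrete, here is a minimal Python sketch of the kind of heuristic a defender might run over command lines extracted from suspicious LNK files. The regex patterns are illustrative assumptions based on the behaviors described above (Base64-encoded PowerShell, environment-variable substring slicing, heavy string concatenation, hidden windows), not actual indicators published by GSC.

```python
import re

# Heuristic patterns loosely modeled on the obfuscation techniques described
# above. These are illustrative assumptions, not real campaign IoCs.
SUSPICIOUS_PATTERNS = {
    # PowerShell launched with a Base64-encoded command
    "encoded_command": re.compile(r"-[Ee]nc(odedCommand)?\b"),
    # cmd.exe-style substring expansion, e.g. %var:~3,5%, used to slice
    # a real command out of an innocuous-looking environment variable
    "env_var_slicing": re.compile(r"%\w+:~\d+(,\d+)?%"),
    # Heavy string concatenation ('po'+'wer'+...) typical of sliced commands
    "string_concat": re.compile(r"(['\"]\w{1,4}['\"]\s*\+\s*){3,}"),
    # Hidden-window PowerShell, common in LNK-delivered loaders
    "hidden_window": re.compile(r"-[Ww]indow[Ss]tyle\s+[Hh]idden"),
}

def scan_command_line(cmdline: str) -> list[str]:
    """Return the names of all heuristics that fire on a command line."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(cmdline)]

if __name__ == "__main__":
    # A benign stand-in for the kind of command line a malicious LNK might carry.
    sample = (
        'cmd /c set x=JunkpowershellJunk&& call %x:~4,10% '
        '-WindowStyle Hidden -enc SQBFAFgA...'
    )
    hits = scan_command_line(sample)
    print(f"heuristics fired: {hits}" if hits else "no heuristics fired")
```

No single pattern is conclusive on its own; in practice a scanner like this would score combinations of hits, since legitimate admin scripts occasionally use one of these constructs in isolation.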
The incident ties into earlier campaigns involving fake security alerts and credential-theft emails, all showing signs of shared infrastructure and toolkits. Analysts emphasize the need for Endpoint Detection and Response (EDR) solutions that can detect obfuscated scripts and correlate malicious behavior across time-delayed, multi-stage script execution chains.
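The chain-correlation idea can be sketched in a few lines. The toy example below, assuming a hypothetical simplified telemetry schema (pid, parent pid, image name, timestamp), walks parent links to flag an explorer.exe → cmd.exe → powershell.exe ancestry even when the stages are separated by deliberate delays. Real EDR products use far richer signals; this only illustrates the principle.

```python
from dataclasses import dataclass

@dataclass
class ProcEvent:
    pid: int
    ppid: int
    image: str   # process image name, e.g. "powershell.exe"
    ts: float    # seconds since boot (hypothetical telemetry field)

# A suspicious ancestry: a shortcut opened from Explorer shells out to
# cmd.exe, which launches PowerShell minutes later.
CHAIN = ["explorer.exe", "cmd.exe", "powershell.exe"]
MAX_GAP = 600.0  # tolerate up to 10 minutes per hop, to catch sleep-based delays

def find_chains(events: list[ProcEvent]) -> list[list[ProcEvent]]:
    """Walk parent links and report any ancestry matching CHAIN within MAX_GAP per hop."""
    by_pid = {e.pid: e for e in events}
    hits = []
    for e in events:
        if e.image.lower() != CHAIN[-1]:
            continue
        path, cur = [e], e
        for want in reversed(CHAIN[:-1]):
            parent = by_pid.get(cur.ppid)
            if not parent or parent.image.lower() != want or cur.ts - parent.ts > MAX_GAP:
                break
            path.append(parent)
            cur = parent
        else:
            hits.append(list(reversed(path)))  # full chain matched
    return hits

if __name__ == "__main__":
    demo = [
        ProcEvent(100, 1, "explorer.exe", 0.0),
        ProcEvent(200, 100, "cmd.exe", 5.0),
        ProcEvent(300, 200, "powershell.exe", 305.0),  # launched after a delay
    ]
    for chain in find_chains(demo):
        print(" -> ".join(p.image for p in chain))
```

The point of tolerating a large gap per hop is exactly the behavior analysts call out: loaders that sleep or stage payloads over time to slip past detections that only look at immediate parent-child spawns.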
This case underscores how generative AI, while powerful, is increasingly being misused in state-sponsored cyber operations. Organizations in defense, research, and government sectors should be especially vigilant as deepfake-enabled phishing and social engineering tactics continue to evolve.