“The real risk with AI is not that it will get too smart, but that it will follow the wrong instructions too well.” – Stuart Russell, AI researcher and author of Human Compatible
AI weaponised for deception
A suspected North Korean hacking group has reportedly exploited ChatGPT to create a deepfake of a South Korean military ID document. According to research by cybersecurity firm Genians, the fake ID was used to make a phishing attempt appear more credible. The attack demonstrates how generative AI tools can be manipulated for cyber deception.
The Kimsuky connection
The hackers, identified as the Kimsuky group, are believed to be linked to North Korea’s intelligence apparatus. This state-sponsored cyber espionage unit has previously targeted South Korean government agencies and researchers. The US Department of Homeland Security had earlier described Kimsuky as a globally active intelligence-gathering operation under Pyongyang’s control.
Exploiting AI safeguards
Although ChatGPT normally refuses to generate fake identification documents, researchers found that the attackers bypassed these safeguards through prompt manipulation. The incident highlights how AI safety guardrails can be circumvented by skilled threat actors and turned to malicious purposes.
Broader pattern of AI misuse
Experts note that this is not an isolated incident. North Korean agents have also been caught using AI systems like Anthropic’s Claude to fabricate résumés, complete coding tests, and secure remote jobs in global tech firms. These activities are part of a broader strategy to fund the regime and evade international sanctions.
Cybersecurity wake-up call
The growing misuse of AI in state-led espionage underscores the urgent need for global cooperation on AI governance. Attackers are learning to combine social engineering and generative AI in unprecedented ways, leaving traditional cybersecurity defences increasingly inadequate.