Mandiant has noted AI is leading to a “more pliable reality” that can exploit “the general public’s inability to differentiate between what is authentic and what is counterfeit”
The company reports that adoption of AI for social engineering has been limited so far, but that is bound to change, including use by hackers for reconnaissance and target selection. The useful Mandiant blog post can be found here.
Intelligence agencies sponsored by the state could use machine learning and data science tools to analyze huge volumes of stolen and open-source data, thus enhancing data processing and analysis. This would allow espionage actors to use the collected data more efficiently and swiftly. They can also identify patterns to refine their tradecraft techniques for identifying foreign individuals for intelligence recruitment in traditional espionage operations and designing effective social engineering campaigns.
At Black Hat USA 2016, an AI tool named the Social Network Automated Phishing with Reconnaissance system (SNAP_R) was introduced by researchers, suggesting high-value targets and analyzing users’ past Twitter activity to create personalized phishing material based on a user’s old tweets.
It has been shown by Mandiant how threat actors can leverage neural networks to generate counterfeit content for information operations.
Signs of North Korean cyber espionage actor APT43’s interest in Large Language Models (LLMs) were identified by Mandiant, with evidence suggesting the group has accessed widely available LLM tools. The group could potentially utilize LLMs to facilitate their operations, though the specific purpose remains uncertain.
In 2023, security researchers at Black Hat USA emphasized how prompt injection attacks integrated into LLMs could potentially support various stages of the attack lifecycle.
Lure Material
Threat actors can also use LLMs to produce more persuasive material tailored for a target audience, regardless of the threat actor’s ability to understand the target’s language. LLMs can aid malicious operators in creating text output that mimics natural human speech patterns, leading to more effective material for phishing campaigns and successful initial compromises.
Open source reports suggest that threat actors could use generative AI to create effective bait material to increase the likelihood of successful compromises, and that they may be using LLMs to enhance the complexity of the language used for their preexisting operations.
Mandiant has observed evidence of financially motivated actors using manipulated video and voice content in business email compromise (BEC) scams, North Korean cyber espionage actors using manipulated images to defeat know your customer (KYC) requirements, and voice changing technology used in social engineering targeting Israeli soldiers.
Media reports reveal that financially-motivated actors have developed a generative AI tool, WormGPT, which allows malicious operators to create more persuasive BEC messages. Moreover, this tool can aid threats in writing customizable malware code.
In March 2023, numerous media outlets reported how a Canadian couple was scammed out of $21,000 when someone using an AI-generated voice impersonated their son’s attorney.
Additionally, Mandiant have seen financially motivated actors advertising AI capabilities, including deepfake technology services, in underground forums to potentially increase the effectiveness of cybercriminal operations, such as social engineering, fraud, and extortion. These malicious operations may seem more personal in nature through the use of deepfake capabilities.