
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyberattacks

February 14, 2024 | Newsroom | Artificial Intelligence / Cyberattack

Nation-state actors from China, North Korea, Iran, and Russia are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyberattack operations.

The findings come from a report jointly published by Microsoft and OpenAI, which said the two companies disrupted the malicious cyber activities of five state-affiliated actors by terminating their accounts and assets.

Language support is an inherent feature of LLMs, making them appealing to threat actors who continuously focus on social engineering and other techniques that rely on false, deceptive communications tailored to their targets' jobs, professional networks, and interpersonal relationships, Microsoft said in a report shared with The Hacker News.

While no significant or novel attacks employing LLMs have been detected to date, adversarial exploration of AI technologies has spanned various phases of the attack chain, including reconnaissance, coding assistance, and malware development.

"These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks," the AI firm said.


For instance, the Russian nation-state group Forest Blizzard (also known as APT28) is said to have used OpenAI's offerings to conduct open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Some of the other notable hacking groups and their activities are listed below:

    North Korean threat actor Emerald Sleet (also known as Kimsuky) has used LLMs to identify experts, think tanks, and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns, among other things.
    Iranian threat actor Crimson Sandstorm (also known as Imperial Kitten) has used LLMs to generate phishing emails and code snippets for web and app development, and to research common malware evasion techniques.
    Chinese threat actor Charcoal Typhoon (also known as Aquatic Panda) has used LLMs to research various companies and vulnerabilities, identify post-compromise behavior strategies, write scripts, and produce content that could be used in phishing campaigns.
    Chinese threat actor Salmon Typhoon (also known as Maverick Panda) has used LLMs to translate technical documents, gather intelligence on regional threat actors and various intelligence agencies, fix coding errors, and research covert strategies to evade detection.

Microsoft said it is also formulating a set of principles, guardrails, and safety mechanisms around its models to mitigate the risks posed by the malicious use of AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates.

These principles include identification and action against malicious threat actors' use of its AI platforms, notification to other AI service providers, collaboration with other stakeholders, and transparency, Redmond said.
