Illustration of a brain in white with bold black outlines, set against a black and white background of scattered binary digits.

Ransomware gangs are already using AI, according to the NCSC.


The UK's National Cyber Security Centre (NCSC) has issued a warning that malicious attackers are already utilizing artificial intelligence, and that the volume and impact of threats, including ransomware, will rise over the next two years.

The NCSC, a division of GCHQ, the UK's intelligence, security, and cyber agency, assesses that AI has made it possible for relatively inexperienced hackers to "carry out more effective access and information gathering operations… by lowering the barrier of entry to novice cybercriminals, hackers-for-hire and hacktivists."

Cybercriminals have been running scams and cyberattacks for decades, but they frequently struggle to trick their victims because of the poor grammar and obvious spelling errors in their emails and texts, especially when the attackers are not native speakers of the language used to target victims.

It's interesting to note that other security researchers have questioned how valuable current artificial intelligence technology really is to cybercriminals crafting attacks. According to a study published in December 2023, phishing emails were equally effective whether they were written by AI chatbots or by humans.

However, it is evident that using publicly accessible AI tools to create convincing text, images, audio, and even deepfake video that can be used to deceive targets has become practically child’s play.

Additionally, the technology can be used by malicious hackers to identify high-value data for examination and exfiltration, increasing the impact of security breaches, according to the NCSC report, "The Near-Term Impact of AI on the Cyber Threat."

Incredibly, the NCSC warns that by 2025, generative AI and large language models (LLMs) will make it challenging for everyone, regardless of their level of cyber security understanding, to determine whether an email or password reset request is genuine, or to spot attempts at phishing, spoofing, or social engineering.

That is genuinely terrifying.

2025 is less than a year away, in case you hadn't noticed.

Fortunately, there are some positive developments regarding artificial intelligence.

AI can also be used to strengthen an organization's security resilience by improving the detection of threats such as malicious emails and phishing campaigns, making them easier to combat.

AI can be used for both good and bad, as with many technological advancements.


Editor's Note: The views expressed in this article by a guest author are solely those of the contributor and do not necessarily reflect those of Tripwire.
