Ransomware attacks have profoundly reshaped the landscape of cybercrime over the past decade, eclipsing even traditional bank robberies in scale and impact. Over time, ransomware has evolved from simple technical breaches into a multifaceted business enterprise encompassing evasion tactics, business analytics, segregation of duties and money laundering. Today, data encryption is giving way to data exfiltration and blackmail. Threat actors are becoming far more adept at analyzing stolen data and leveraging it for extortion, pressuring businesses to compromise their ethics and pay the ransom.
Artificial intelligence (AI) is not new in the realm of ransomware. We’ve witnessed relatively sophisticated efforts at pseudo-intelligent decision-making in ransomware attacks dating back to the early 2010s. During this period, threat actors began automating decisions about which data would be most sensitive. On personal computers, for instance, they would prioritize financial documents and photo albums, on the assumption that these are the data people are most reluctant to lose. Similarly, in corporate environments, they would focus on encrypting the particular folders and files holding the most vital business data.
A leap in AI technology
The past year witnessed a leap in AI technology, making various AI engines accessible to consumers. While these engines possess a core set of ethics, there’s potential for compromise or the creation of customized AI tailored for cybercrime. The implications of this advancement in AI extend to the ransomware landscape.
Though AI’s role in ransomware remains limited in 2023, it’s vital to envision its potential application. While not all of the following initiatives are currently active, the emergence of AI-driven ransomware looms as a more formidable threat in the near future.
Let’s start with victim targeting. Today, many ransomware gangs are no longer content with simply filling our mailboxes with phishing emails; they are meticulously studying potential targets, from company executives and IT departments to other key employees. The focus is not solely on the ransom itself but also on gauging a company’s inclination to actually pay when confronted with such dire circumstances. Consequently, the corresponding phishing campaigns go well beyond ordinary phishing or spear phishing. Targets are primed for attack through an elaborate web of communications, including phone calls and even video calls with prospective victims. AI is of immense value in such attacks, as it can be entrusted with identifying desirable targets and orchestrating sophisticated strategies whose communication components foster additional trust and facilitate access to the networks or systems where valuable data are stored.
Deploying a malicious payload onto a targeted computer is a complex task. The payload is no longer a static executable that can be easily detected by its signature. AI could generate a customized payload for each victim, progressively advancing within compromised systems with patience and precision. The key to successful malware lies in emulating normal, expected behavior so as not to trigger defensive measures, or even the suspicions of vigilant users. We are already seeing genuinely authentic-looking software in various distributions that ostensibly offers specific functionality while harboring ulterior motives: it earns users’ trust before eventually acting with malicious intent. In this context, AI is entirely capable of streamlining the process, crafting software with dormant malicious capabilities primed for activation at a later point, possibly during the next update.
AI: an accelerant of ransomware attacks
In the late 2010s, ransomware attacks often extended over several months, from initial infection to encryption. Numerous compromised devices, if not promptly remediated, would remain compromised for extended periods. In recent years, however, we’ve witnessed a rapid escalation in attack velocity, compressing timelines from months to days and then from days to hours. Artificial intelligence could accelerate this entire process significantly: even when it encounters existing defenses, it can speed up both the attack itself and the decisions about how to circumvent new obstacles, extending to disabling backups and incapacitating defensive mechanisms. Should all elements align swiftly and effectively, a victim’s network could be left defenseless within mere minutes.
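This compression of attack timelines is, from the defender’s perspective, itself a detection signal: an encryption sweep modifies files far faster than normal user activity. As a minimal illustration of rate-based detection (the class name, threshold and window are hypothetical choices for this sketch, not a reference to any specific product), a defender can flag bursts of file-write events inside a sliding time window:

```python
from collections import deque

class WriteVelocityMonitor:
    """Flag bursts of file-write events inside a sliding time window.

    Rapid mass modification of files is characteristic of an
    encryption sweep; normal user activity rarely sustains it.
    """

    def __init__(self, threshold=100, window_seconds=10.0):
        self.threshold = threshold    # writes that trigger an alert
        self.window = window_seconds  # sliding window length
        self.events = deque()         # timestamps of recent writes

    def record(self, timestamp):
        """Register one write event; return True once the burst
        threshold is reached within the window."""
        self.events.append(timestamp)
        cutoff = timestamp - self.window
        while self.events and self.events[0] < cutoff:
            self.events.popleft()     # expire events outside the window
        return len(self.events) >= self.threshold

# Five writes within ten seconds trips a threshold of five...
monitor = WriteVelocityMonitor(threshold=5, window_seconds=10.0)
alerts = [monitor.record(t) for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
print(alerts[-1])            # True
# ...but a later, isolated write does not.
print(monitor.record(20.0))  # False
```

In practice the event stream would come from a file-system audit source (e.g., inotify on Linux or ETW on Windows), and the threshold would need tuning against the environment’s normal baseline.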
Presently, data encryption and exfiltration processes are slow and prone to detection. Threat actors are therefore exploring AI’s potential to strategically target critical data. During a ransomware attack, millions of files may be affected, but not all of them contain sensitive data, so prioritizing which data to target is a highly desired capability: attackers seek to obtain critical information before turning to the rest. Post-exfiltration, indexed evaluation of the data remains essential. Some data may appear intriguing yet lack value, while other data could carry regulatory risks or trade secrets. Directed by attackers, AI can be harnessed to unearth these hidden gems. Even if data are password-protected, the passwords might be concealed within other exfiltrated files. Today, numerous threat actors are embracing AI to discern what they’ve stolen and determine the value of the acquired data.
AI as a ransomware negotiator
Even after the crime has been committed, there might be a need for help from artificial intelligence. There are already instances where, during ransomware negotiations, threat actors sought assistance from ChatGPT. Presently, the focus is on phrasing specific matters correctly, but the ultimate aim is to negotiate without emotions. An AI-driven chatbot could potentially serve as a future ransomware negotiator, employing a formulaic approach to preset demands, initiate timers and respond based on the victim’s actions.
As is evident, nearly every facet of a ransomware attack can be delegated to AI, including operational security elements such as concealing tracks that might lead back to the actual threat actors.
The question arises: how should we counter these attacks? It’s not solely a matter of superior AI and whether it’s held by threat actors or defenders. Rather, dealing with this AI-enhanced threat landscape demands a systemic approach to defense, one that anticipates the next moves and devises strategies to fend them off.
Integrating AI into our defenses can greatly improve our visibility, and AI-driven attacks tend to exhibit systemic, predictable patterns. Such attacks can be ensnared with honeypots, tools for measuring attack velocity and various other defensive methods. The good news is that as ransomware attacks continue evolving, our defenses can effectively match their advancements, provided they are applied meticulously.
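One concrete form of such a trap is a set of canary files: decoys planted among real data that no legitimate process should ever touch, so any modification or disappearance is a strong tamper signal. Below is a minimal sketch of the idea (the decoy file names and contents are invented for illustration):

```python
import hashlib
import os
import tempfile

def plant_canaries(directory, count=3):
    """Plant decoy files and record a SHA-256 baseline for each."""
    baseline = {}
    for i in range(count):
        # Name chosen to look attractive to an automated encryptor.
        path = os.path.join(directory, f"budget_backup_{i}.xlsx")
        with open(path, "wb") as f:
            f.write(b"decoy canary file - no legitimate process touches this\n")
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def tripped_canaries(baseline):
    """Return canaries that changed or vanished, a strong signal that
    mass encryption or deletion may be underway."""
    tripped = []
    for path, digest in baseline.items():
        try:
            with open(path, "rb") as f:
                current = hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:
            tripped.append(path)  # deleted canary counts as tripped
            continue
        if current != digest:
            tripped.append(path)  # rewritten (e.g., encrypted) canary
    return tripped

# Demonstration in a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    baseline = plant_canaries(d)
    print(len(tripped_canaries(baseline)))         # 0: nothing touched yet
    victim = sorted(baseline)[0]
    with open(victim, "wb") as f:
        f.write(b"XXXX")                           # simulate encryption
    print(tripped_canaries(baseline) == [victim])  # True
```

A real deployment would check the canaries on a schedule (or via file-system notifications) and wire a tripped canary into an immediate isolation response, since by the time it fires, encryption is likely already in progress.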
Editor’s note: For more ransomware resources from ISACA, view the Blueprint for Ransomware Defense white paper and Ransomware Attack Survival Guide webinar.