Cybercriminals are weaponizing Artificial Intelligence (AI) to launch more sophisticated, scalable and precisely targeted cyberattacks. AI has empowered attackers to create malware that mutates to evade detection, craft highly convincing phishing lures, and automate advanced attacks.
Deep Instinct’s fourth edition report states that 75% of security professionals witnessed an increase in cyberattacks this year, and 85% attributed the rise to attackers using generative AI. Traditional defense controls like rule-based intrusion detection and prevention systems, signature-based antivirus software and firewalls have proved ineffective in preventing evolving AI-driven cyberattacks. There is great demand for more adaptive and advanced tools and strategies to protect against a fast-transforming threat landscape and to defend against these automated, dynamic exploits.
AI-Powered Cyber Attacks
AI has enabled cybercriminals to launch automated cyberattacks with unprecedented accuracy and speed, and at a scale that would be difficult for human hackers to achieve alone. Malicious users are taking advantage of AI technology in several ways. Below are some of the cyber exploits where attackers incorporate generative AI:
- Social engineering: Attackers use psychological manipulation to trick users into revealing their credentials, credit card details and personal information, through attacks such as phishing, baiting, vishing, pretexting, and personal and corporate email compromise. Hackers use generative AI to make phishing emails and fake websites more personalized, compelling and sophisticated, often nearly indistinguishable from the legitimate websites they imitate. This makes malicious emails difficult for users to detect, so victims are convinced to enter their personal details. Hackers can also use AI to increase the speed, scale and intensity of these exploits by automating the generation of fake emails and content.
- Malware: Previously, malware behavior and properties were studied and signatures were developed from them. Antivirus software and intrusion detection and prevention systems use these signatures to detect malware, viruses, trojans and other malicious software. Today, hackers are using generative AI to develop malicious software that is dynamic and evolves rapidly, so traditional security tools are unable to detect the transforming code.
- Deepfakes: Attackers use AI to create deceptive and misleading campaigns by easily manipulating audio and visual content. Using intercepted phone calls and photos and videos published on social media, they can impersonate any person and create content designed to mislead or manipulate public opinion. AI makes this fake content realistic and convincing, so it appears legitimate. Combined with social engineering, extortion and other schemes, this attack can be disastrous.
- Brute force: AI has advanced the brute force tools and techniques used by cybercriminals, helping attackers improve the password-cracking algorithms they use and making these exploits faster and more accurate.
- Automated attacks: Malicious users have started using AI-powered bots to automate the discovery of weaknesses in websites, systems and networks, and then to automate the exploitation of the identified vulnerabilities. This has greatly helped hackers scale their attacks and cause more damage.
- Cyber espionage: Generative AI can be used to automate the extraction and analysis of data from compromised networks, making it much easier for cybercriminals to steal sensitive and confidential data.
- Ransomware attacks: Hackers can use AI to automate the process of identifying vulnerabilities in the target organization’s network, then automate the exploitation and the encryption of the company’s files and folders. The hackers then demand a ransom payment in exchange for the decryption key needed to recover the data. AI has made this whole process much simpler and less time-consuming for attackers.
- IoT attacks: Cybercriminals have begun to use AI to evade intrusion detection algorithms and attack IoT networks. Today AI is used to perform input attacks, algorithm/data poisoning, fake data injection, and automated discovery of vulnerabilities in networks using techniques like fuzzing and symbolic execution.
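To make the automated vulnerability discovery described above concrete, the sketch below is a minimal mutation fuzzer. The target `parse_record` is a hypothetical toy parser invented for illustration, and real (increasingly AI-assisted) fuzzers are far more sophisticated, but the core loop is the same: mutate an input, run the target, record crashes.

```python
import random

def parse_record(data: bytes) -> str:
    """Toy parser standing in for a target under test (hypothetical format:
    a 1-byte length prefix followed by a UTF-8 payload)."""
    length = data[0]
    payload = data[1:1 + length]
    return payload.decode("utf-8")  # raises on malformed input

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one to three random bytes in the seed input."""
    out = bytearray(seed)
    for _ in range(rng.randint(1, 3)):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(seed: bytes, iterations: int = 1000) -> list:
    """Return the mutated inputs that made the parser raise an exception."""
    rng = random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            parse_record(candidate)
        except (IndexError, UnicodeDecodeError):
            crashes.append(candidate)
    return crashes

crashes = fuzz(bytes([5]) + b"hello")
print(f"{len(crashes)} crashing inputs found")
```

Even this naive random mutation finds malformed inputs the parser cannot handle; attackers automate exactly this kind of search, at scale, against real targets.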
Overall, generative AI has enabled cybercriminals to create more sophisticated and automated exploits that are far more scalable and less time-consuming. Organizations are struggling to keep pace in detecting and preventing these advanced exploits.
The Limitations of Traditional Security Measures
Traditional security measures and tools like intrusion detection and prevention systems, SIEMs, firewalls and antivirus software have proven ineffective in keeping pace with the fast-evolving threat landscape and preventing AI-powered cyberattacks. Below are some of their limitations (Figure 1):
Figure 1: Limitations of Traditional Security Measures
- Signature-based detection: Most security monitoring tools rely on databases of rules and signatures created by analyzing past attack patterns and behaviors. These tools fail to detect transforming, fast-evolving AI-powered exploits, and they do not provide the real-time awareness of security incidents that is critical for analyzing and mitigating threats.
- Delay in signature updates: Once new threats are discovered, there is always a delay in analyzing the exploits and updating security tools with new signatures. Until the signatures and patches are applied, systems remain vulnerable to the latest exploits.
- Zero-day exploits: Traditional security tools are not capable of detecting vulnerabilities that have never been encountered before. Since they rely entirely on signature databases, they fail to detect new and dynamic AI-powered exploits.
- Manual monitoring and testing: Traditional security tools rely on trained analysts to manually review and test security alerts. These manual assessments can be very time-consuming, as the analyst has to process large volumes of log and event data, causing significant delays in detection and incident response.
- Error-prone methods: As the majority of security assessments are performed manually, there is greater scope for human error, misinterpretation of alerts and data, or missed subtle signs of exploitation, leading to false negatives and false positives.
- Not scalable: Modern hosting environments are highly dynamic, rapidly provisioning and de-provisioning resources on demand. The threat landscape in such environments is continuously changing, and traditional security tools are overwhelmed and struggle to keep up with this transformation and complexity.
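The signature-based limitation is easy to demonstrate. The sketch below (hypothetical payloads and a toy hash database, not a real antivirus engine) shows why an exact-match signature misses even a trivially mutated sample:

```python
import hashlib

# A tiny "signature database": SHA-256 hashes of known-malicious samples.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_scan(sample: bytes) -> bool:
    """Return True if the sample's hash matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

original = b"malicious payload v1"
mutated = b"malicious payload v2"  # one byte changed, behavior unchanged

print(signature_scan(original))  # True  — exact match is caught
print(signature_scan(mutated))   # False — trivial mutation evades the scan
```

A single-byte change produces an entirely different hash, which is exactly why AI-generated polymorphic malware, mutating on every copy, slips past signature databases until a new signature is written.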
How can AI be used to protect against AI-powered cyberattacks?
Ironically, generative AI itself can be used to protect against AI-powered cyberthreats. The cybersecurity industry has started to rely on AI-powered security tools in conjunction with traditional security measures across identity and access management, intrusion detection, risk assessment, fraud detection, data loss prevention, incident response and other core security domains. Recent research valued the global market for AI-powered cybersecurity tools and products at US$15 billion in 2021 and projects it to surge to roughly $135 billion by 2030. There are several advantages to using AI-powered security tools to combat today’s advanced cyber threats (Figure 2). Some of them are:
Figure 2: AI-powered cybersecurity
- Baseline establishment: Security tools that use AI and machine learning algorithms do not rely on traditional rule- and signature-based detection. Instead, they capture all events and analyze vast datasets to establish a baseline of normal behavior. By analyzing historical and live interaction data, these tools can map the resources in use, exposed services, asset inventory, network traffic trends, and normal user activities and behaviors, making the threat landscape and its associated vulnerabilities easier to identify and manage.
- Anomaly detection: AI-powered tools are designed to detect deviations from the established baseline of normal behavior and patterns. This includes unusual login activity, access requests from a new geographic location or IP address, new user access, changed permissions on files and other resources, extraction or deletion of large volumes of files, and traffic spiking far above the normal rate.
- Attack prevention abilities: Once AI-powered tools identify security threats or unusual behaviors, they are capable of taking predefined proactive actions to stop the attack. These may include logging off the user, locking accounts, declining transactions, blocking traffic, isolating affected resources, and sending alerts and notifications so administrators can take appropriate action.
- Real-time monitoring: In this era of AI, real-time monitoring is very important. Several AI-powered tools are designed to continuously monitor production systems in runtime. This helps in immediately responding to security incidents as they arise and potentially reducing the damage.
- Predictive analysis: AI security tools are capable of analyzing historical data and current trends/behaviors, and forecasting potential security threats and attacks. Thus, they can proactively take measures to prevent those exploits.
- Detection of zero-day exploits and unseen threats: With traditional security tools, an exploit is analyzed only after an attack has occurred, and preventive signatures and patches are then generated and distributed; until the signatures are released, systems remain unprotected against new zero-day exploits. AI-powered tools, by contrast, do not rely on signatures: they build baselines of normal behavior and take appropriate action when any deviation is detected, so they can detect and protect against new, unseen zero-day exploits.
- Reduced false positives: Traditional tools generate a huge number of false positive alerts and analysts may miss a few important notifications while processing the huge amount of datasets. AI security tools tend to produce fewer false positives as they adapt to the evolving threat landscape and transforming threats.
- Automation: A significant advantage of AI-powered security tools is their support for automation. It is possible to automate security assessments, pen tests, security reviews, and patch management without manual intervention, reducing both response time and the risk of human error.
- Scalability: The hosting environments are dynamic and AI security tools are designed to adapt to the fast-evolving environments, threat landscapes, network traffic patterns and dynamic resource allocations. They can scale seamlessly to provide continuous protection.
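As a rough illustration of the baseline-and-anomaly-detection ideas above, the sketch below builds a statistical baseline from hypothetical failed-login counts and flags any observation more than three standard deviations above the mean. The data, threshold and response action are all illustrative assumptions; production tools use far richer behavioral models.

```python
import statistics

# Hypothetical historical data: failed-login counts per hour for one user.
baseline = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2, 4, 3]

mean = statistics.mean(baseline)    # the "normal behavior" baseline
stdev = statistics.stdev(baseline)

def assess(observed: int, threshold: float = 3.0) -> str:
    """Flag observations more than `threshold` standard deviations above baseline."""
    z = (observed - mean) / stdev
    if z > threshold:
        return "anomaly: lock account and alert administrators"
    return "normal"

print(assess(3))   # within the baseline range
print(assess(40))  # far outside the baseline, triggers the automated response
```

The same pattern, learn what normal looks like and react to deviations, is what lets AI-driven tools catch attacks for which no signature yet exists.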
Generative AI-powered tools can improve themselves through machine learning by analyzing previous security incidents and training themselves to identify suspicious behaviors, predict threats and take preventive measures to stop cyberattacks. This also helps close the cybersecurity skills gap, with an estimated 3.5 million security jobs unfilled. Using AI has freed security analysts from mundane initial event monitoring and analysis, allowing them to apply their skills to more advanced, strategic decision-making tasks. By combining traditional and AI security tools, organizations are seeing greater productivity and effectiveness and a reduction in security threats.
What can organizations do to tackle AI-powered cyberattacks?
Organizations need to stay informed about the latest research and developments in AI-powered security attacks and ways to prevent and remediate the exploits. Perform regular security audits to detect vulnerabilities and ensure your infrastructure is compliant and secure. Proactively take measures to prevent these advanced exploits. Invest in generative AI-powered security tools to take advantage of the benefits they offer in combating fast-evolving cyber threats. Provide adequate training to your teams to create awareness of AI security risks and of how to use these tools securely.
About the author: Prathibha Muraleedhara is a Security Architecture Manager for a leading product manufacturing company. She holds a master’s degree in Information System Security and 10+ years of professional experience in Security Architecture, Cloud Security, and Penetration Testing. She is a committee member of the Women in Security-Information Systems Security Association specialty group and a Cyber Wyoming member of the Board of Directors. She is a passionate researcher, author, and enjoys educating people on security exploits and remediation.
Contact details: prathibha.muraleedhara@gmail.com, LinkedIn: http://www.linkedin.com/in/prathibha-muraleedhara-8a3976105/