CrowdStrike’s unprecedented blunder back in July exposed vulnerabilities in the aviation sector, while also highlighting the critical importance of cybersecurity and how deeply our society relies on the seamless functioning of increasingly complex technologies. As Artificial Intelligence (AI) continues to integrate into our digital world, it risks obscuring these foundational systems further, raising concerns about our ability to prevent similar incidents in the future. This global IT outage, which affected airports, banks, health services and other businesses, appears to stem at least partly from a routine software update; could AI have helped prevent it?
As the aviation industry becomes increasingly digital, cybersecurity becomes ever more critical. The threat landscape includes a range of risks, from ransomware attacks targeting airport operations to sophisticated cyber espionage aimed at compromising flight control systems. Advanced cybersecurity measures are being implemented to counter these threats, such as real-time network monitoring, encryption of communication channels, and rigorous access controls. Indeed, a recent Forbes article highlights how airlines have bolstered their defences with AI-driven threat detection systems and multi-layered security protocols to prevent breaches.
Egis shared some key insights at a recent seminar on “The Opportunity of AI and Cybersecurity”:
- AI-driven phishing attacks have become 300% more sophisticated and successful.
- AI-enabled solutions have reduced the time to identify and contain data breaches by 27%.
- Cybercriminals are now launching attacks with dwell times measured in hours rather than days.
- Global cybercrime costs are expected to hit a staggering $13.82 trillion by 2028; the need to leverage AI for defence has never been more urgent.
Cybercriminals are increasingly using AI to amplify their attacks and evade detection, making it clear that traditional, human-led responses are no longer enough. This growing threat is driving businesses to step up their defences against sophisticated, AI-powered attacks, and it underscores both the challenges and the opportunities at the intersection of AI and cybersecurity.
In 2023, the global landscape was characterised by geopolitical tension, armed conflict, and mixed feelings about future technologies. Amidst this complexity, the cybersecurity economy grew exponentially, outpacing both the global economy and the wider tech sector. Hackers are leveraging AI to enhance their attack capabilities, exploit vulnerabilities, and adapt in real time. Traditional cybersecurity methods are proving insufficient against AI-driven attacks because of how quickly those attacks adapt and evolve. Effective defence now requires predictive, rapid, and accurate security algorithms capable of countering AI-based threats.
The need for defensive AI
“Defensive AI” extends beyond just protecting against cyberattacks. By using self-learning algorithms to monitor infrastructure and detect anomalies, AI can enhance operational resilience by identifying technical issues, such as misconfigurations or software errors, before they escalate. This adaptability makes it a powerful tool not only in cybersecurity but also in preventing IT disruptions.
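To make the idea of self-learning monitoring concrete, here is a minimal Python sketch of an anomaly detector that learns a baseline from recent metric samples (for example, CPU load or error rates) and flags large deviations. The window size and z-score threshold are arbitrary illustrative choices, not any vendor’s implementation:

```python
from collections import deque
import statistics

class RollingAnomalyDetector:
    """Illustrative self-learning monitor: learns a baseline from
    recent samples and flags values that deviate sharply from it.
    Window size and threshold are arbitrary, for illustration only."""

    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a new sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        self.samples.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for v in [50, 52, 49, 51, 50, 48, 52, 51, 49, 50, 51]:
    detector.observe(v)          # builds the baseline
print(detector.observe(500))     # a sudden spike is flagged: True
```

In practice, such a detector would feed an alerting pipeline rather than print to the console; the point is that the baseline is learned from the system’s own behaviour instead of being hand-coded.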
A survey by MIT Technology Review Insights, in association with AI cybersecurity company Darktrace, revealed that 60% of respondents believe human-driven responses to cyberattacks fail to keep up with automated attacks. Consequently, organisations are turning to sophisticated technologies to meet these challenges, with 96% of respondents already implementing AI defences against AI-powered attacks. Offensive AI cyberattacks are particularly daunting because of their speed and intelligence. Deepfakes, fabricated images or videos of scenes or people that never existed, are one example of a weaponised AI tool. Nevertheless, conventional email phishing attacks (74%) and ransomware (73%) remain the greatest concerns for executives.
As AI continues to reshape risk and vulnerability, it's becoming essential to embrace continuous monitoring and flexible security measures. Security frameworks need to evolve to tackle the unique risks posed by AI and focus on proactive strategies.
AI for IT Operations (AIOps) and its potential
While AI plays a critical role in cybersecurity, its benefits extend to IT operations management, where it can help prevent disruptions caused by human error, misconfigurations, or technical glitches. The growing complexity of IT systems requires tools that can quickly identify and respond to both security threats and technical failures, making AI a vital asset across the entire digital landscape.
AIOps is revolutionising IT management by integrating big data and machine learning to automate critical processes like event correlation, anomaly detection, and causality determination. This approach bridges gaps between complex IT environments and siloed teams, enhancing application performance and meeting user expectations.
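Event correlation, one of the AIOps capabilities mentioned above, can be illustrated with a toy sketch: alerts that fire close together in time are merged into a single candidate incident, so operators see one story instead of a flood of symptoms. Production AIOps platforms add topology awareness and machine learning; the `Alert` fields and the 120-second window below are purely assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    timestamp: float   # seconds since some epoch
    service: str
    message: str

def correlate(alerts, window: float = 120.0):
    """Toy event-correlation pass: sort alerts by time and merge
    those within `window` seconds into one candidate incident."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        if incidents and alert.timestamp - incidents[-1][-1].timestamp <= window:
            incidents[-1].append(alert)   # part of the same cascade
        else:
            incidents.append([alert])     # start a new incident
    return incidents

alerts = [
    Alert(0, "dns", "lookup latency high"),
    Alert(30, "api", "5xx rate rising"),
    Alert(45, "db", "connection pool exhausted"),
    Alert(600, "backup", "nightly job finished late"),
]
print(len(correlate(alerts)))  # 2: one cascading incident plus one outlier
```

The design choice here, grouping by temporal proximity, is the simplest possible correlation rule; real systems also weigh service dependencies so that unrelated but simultaneous alerts are not lumped together.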
Given its capabilities, could this approach have prevented the CrowdStrike incident? Perhaps. While the outage was caused by a faulty content update to CrowdStrike’s Falcon security software, not a cyberattack, AI could have played a role in mitigating its effects. For instance, AI-powered systems with automated configuration management, predictive analytics, and real-time monitoring could have identified anomalies and alerted engineers to potential risks before the outage escalated. By automating the response, or even deploying failover systems, AI could have reduced the severity and duration of the outage. The integration of AIOps into IT security frameworks could therefore be a critical step towards preventing, or at least minimising the damage of, similar incidents in the future.
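One hypothetical safeguard along these lines is a staged (canary) rollout gate: an update goes to a small batch of machines first, their health is checked automatically, and the fleet-wide rollout halts if the batch degrades. The sketch below is invented for illustration, with an `apply_update` stub standing in for real deployment and telemetry, and does not describe CrowdStrike’s actual pipeline:

```python
def apply_update(host: str) -> bool:
    """Stand-in for deploying an update and probing host health.
    Here we simulate a faulty update that takes every host down;
    a real check would query monitoring telemetry after deployment."""
    return False  # False = host unhealthy after the update

def staged_rollout(hosts, batch_size=5, max_failure_rate=0.2):
    """Sketch of an automated canary gate: update a small batch,
    check health, and halt fleet-wide deployment if it degrades."""
    updated = 0
    for i in range(0, len(hosts), batch_size):
        batch = hosts[i:i + batch_size]
        updated += len(batch)
        failures = sum(not apply_update(h) for h in batch)
        if failures / len(batch) > max_failure_rate:
            return ("halted", updated)   # damage limited to one batch
    return ("completed", updated)

status, touched = staged_rollout([f"host-{n}" for n in range(1000)])
print(status, touched)  # halted 5: only the canary batch was affected
```

With a gate like this, a defective update would disable a handful of canary hosts rather than an entire fleet, which is precisely the blast-radius reduction the paragraph above describes.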
It is important to distinguish AI’s role in cybersecurity from its broader application in IT operations management. While defensive AI focuses on identifying and mitigating cyber threats, AI can also assist in detecting misconfigurations and automating responses to technical errors, precisely the kind of failure the CrowdStrike incident exposed.
The impact on labour productivity is significant. AI and robotics adoption has already increased productivity by 11.4% (US Census Bureau, 2019). Generative AI is poised to further boost productivity by automating tasks that once required manual effort, shifting the demand towards skilled workers. It can also improve operational efficiency and reduce workloads by facilitating user-led training and real-time root-cause analysis. Automation could handle up to 70% of repetitive customer care and network operations tasks, enhancing productivity and decreasing manual workloads.
Could it also prevent human error? As automation reduces the reliance on manual processes, it may significantly lower the risk of errors caused by human loss of concentration or fatigue.
The rise of AI demands a shift in workforce skills towards problem-solving and analytical abilities. Organisations will therefore need to adapt to new technological requirements and promote continuous learning and flexibility.
Interplay of AI and data in cybersecurity
In cybersecurity, the effectiveness of AI depends on two crucial factors: the trustworthiness of AI models and the quality of their training data. For AI systems to reliably identify and mitigate threats, they must be rigorously validated to ensure transparency and dependability. Equally vital is the use of high-quality, unbiased data for training, which ensures that AI systems can accurately detect and respond to potential threats.
Regulation of emerging technology and supply chain
Regulators often approach technology with caution, striving to strike a balance between fostering innovation and ensuring public safety, privacy, and market fairness. Their key challenges include keeping up with the rapid pace of technological advancements, managing the cross-border implications of tech regulations, and addressing ethical concerns tied to emerging technologies like AI and blockchain. To navigate these challenges, there is a growing need for more agile and anticipatory regulatory frameworks that can quickly adapt to technological changes and effectively incorporate stakeholder feedback.
Supply chain security also plays a significant role in this landscape, as recognised by the NIS2 regulation that we wrote about earlier this year. The complexity of technology and dependence on third-party vendors can introduce risks: the CrowdStrike outage spread so widely precisely because its software runs as a trusted third-party component on Microsoft Windows systems around the world. Ensuring the security of both data and AI models throughout the supply chain is essential to protect against vulnerabilities and threats. Supply chains, particularly in cybersecurity, are complex and perhaps too expansive to address fully here. That complexity, however, opens the door to future explorations of the trade-offs and nuanced decisions required in regulating emerging technologies and securing supply chains.
Closing thoughts
AI is revolutionising cyber defence by detecting unusual patterns and deviations from established operational behaviour, helping to identify and mitigate potential threats. By continuously monitoring and alerting on changes, AI enhances malware detection, enables self-configuring networks with autonomous responses to vulnerabilities, and improves overall cyber situational awareness.
AI holds significant promise in cybersecurity and IT management, but over-reliance on automation can lead to risks, as highlighted by the CrowdStrike incident. In complex environments, human oversight is essential to prevent errors in automated systems from cascading. Automation should augment, not replace, human involvement, especially in tasks like software updates, where manual checks can catch potential issues. A balanced approach, integrating AI’s efficiency with human review, is key to minimising risks and ensuring reliable system performance.
As AI continues to drive innovation, robust cybersecurity measures are more crucial than ever. With attackers leveraging AI to exploit new technologies, it is imperative to stay ahead of these evolving threats. Effective cybersecurity not only protects sensitive data and intellectual property but also sustains trust in technological advancements. While the future of AI in cybersecurity offers exciting possibilities for enhancing aviation security, it also highlights the need for constant vigilance, adaptability, and collaboration.
In our previous blog on the future of aviation cybersecurity, we emphasised that it's never too late to evaluate an organisation’s current cybersecurity status and identify opportunities for improvement. This assessment can be conducted internally or with the assistance of trusted independent advisors. By embracing emerging trends, investing in cutting-edge technologies, and fortifying both human and technological resources, we can navigate the evolving landscape and build a more secure and resilient future.