AI and cybersecurity: friends or foes?


Artificial intelligence (AI) has become ubiquitous like no other technology before it. Controlling everything from our traffic lights to our food production, it's now embedded in the fabric of our daily lives and working its way into every industry through myriad applications. This brings with it innovation, but also risk.

AI is also profoundly affecting cybersecurity, for better and for worse. The security landscape is changing: reactive security alone is no longer viable, and businesses need to be more aggressive in adopting proactive security measures. When security can't keep up with the pace of innovation, delivering bottom-line results becomes harder.

To counter the threat of AI, cybersecurity experts like NetSPI believe that organizations need proactive security at the core of their cybersecurity setup. Generative AI programs such as ChatGPT have become powerful tools in the arsenal of hackers; the technology is used to automate attacks, create convincing phishing messages, develop more evasive malware, and even crack passwords.

NetSPI is ready to help organizations forge a successful and secure path forward. The company has deep roots in penetration testing and is uniquely positioned to help security teams adopt a proactive approach to security with more scale, speed, and clarity than ever before.

AI-powered cybercrime is on the rise, yet only 15% of UK businesses have a formal cybersecurity incident management plan, NetSPI's EMEA regional director Nick Walker writes in Teiss. There has never been a better time to get proactive about security.

The company recently introduced the NetSPI Platform, a proactive security solution used to discover, prioritize, and remediate the security vulnerabilities that matter most. One of the most fascinating areas of AI that NetSPI examines, however, is the study of attacks on machine learning algorithms.

“We shifted away from ‘offensive’ security to ‘proactive’ security to better align with our customers who face insurmountable pressure in the never-ending battle to secure systems,” NetSPI CEO and president Aaron Shilts wrote in a blog post in May.

“The solutions we provide are intended to support defensive teams, not to discourage. We are an ally to defensive teams, not an enemy.”

Shilts added that, for NetSPI, proactive security means three things: 

  1. Accurate and thorough discovery of known and unknown assets in the IT estate.
  2. Prioritization of the vulnerabilities to fix first based on a thorough understanding of the environment and risks that truly impact the business.
  3. Remediation advice that can be expedited by building integrations with customer systems, giving you guidance on what to fix, how to fix it faster, and how to ensure the effectiveness of the fix.

Adversarial machine learning (AML)

Adversarial machine learning (AML) isn’t a type of AI, but rather the study of techniques that can be used to exploit vulnerabilities in machine learning models. Typically, these attacks insert deceptive inputs that cause the model to malfunction and expose data, or simply disrupt the main function of the machine learning algorithm altogether.

ML models are self-training algorithms: code that performs programmed actions by processing large data sets, classifying data points into categories, and determining actions based on what the model can infer. AML, in contrast, is the method of disrupting this workflow by introducing an input that deceives the model into an error. Adversarial examples are often generated by making small, imperceptible modifications to the original data, yet they can significantly change the model’s output.
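To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way of crafting such adversarial examples. The tiny classifier and inputs below are placeholders for illustration, not part of NetSPI's tooling:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method.

    A step of size epsilon is taken in the direction that most increases
    the model's loss: tiny per pixel, but aimed where it hurts the most.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Nudge each input value by +/- epsilon, following the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in range

# Illustrative usage with a stand-in classifier and random data:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)          # placeholder "clean" input
label = torch.tensor([3])                 # placeholder true class
adversarial = fgsm_perturb(model, image, label)
print((adversarial - image).abs().max())  # perturbation stays <= epsilon
```

The per-pixel change is bounded by epsilon, typically too small for a human to notice, yet it is chosen in precisely the direction that most degrades the model's prediction.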

“As machine learning becomes increasingly pervasive, it is crucial to address the need for robust and secure models throughout their development, training, and implementation processes,” NetSPI states in its eBook, The CISO’s guide to securing AI/ML models.

Right now, sophisticated uses of AI are thought to be restricted to threat actors with greater resources and expertise, according to the UK’s National Cyber Security Centre (NCSC). However, it stresses that AI will eventually lower the barrier for novice cyber criminals and hacktivists seeking to undertake data-gathering operations.

Meanwhile, AML research aims to bridge the gap between theory and practical application in the real world. It considers adversarial attacks during the development and implementation phases, so that practitioners can design robust and secure machine learning systems that withstand potential adversarial manipulation.

Real-world deployments involve integrating defense mechanisms, risk assessment, and continuous monitoring to ensure the efficiency and robustness of machine learning models in practical environments.

This presents a trade-off, however, between making a model robust enough to repel adversarial attacks and achieving high levels of performance: adversarial training techniques can degrade a model’s ability to perform its intended function as they’re applied. Striking the right balance between security and performance is therefore a crucial consideration from initial development onwards. Fine-tuning, careful evaluation, and iteration are essential to ensure that models achieve the optimal combination of security and functionality in their intended application domain.
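As a rough illustration of that balance, adversarial training mixes attack-crafted inputs into the normal training loop. This is a minimal sketch assuming PyTorch and the hypothetical fgsm_perturb helper from the earlier example; adv_weight is an invented knob standing in for the security/performance dial:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y,
                              epsilon=0.03, adv_weight=0.5):
    """One training step that mixes clean and adversarial loss.

    Raising adv_weight hardens the model against perturbations but tends
    to cost accuracy on clean data, which is the trade-off in practice.
    Reuses the fgsm_perturb helper from the earlier sketch.
    """
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)  # craft attacks on the fly
    optimizer.zero_grad()                       # drop gradients left over from crafting
    clean_loss = F.cross_entropy(model(x), y)
    adv_loss = F.cross_entropy(model(x_adv), y)
    loss = (1 - adv_weight) * clean_loss + adv_weight * adv_loss
    loss.backward()
    optimizer.step()
    return clean_loss.item(), adv_loss.item()
```

Tuning adv_weight, epsilon, and the evaluation criteria is exactly the kind of fine-tuning and iteration the trade-off demands.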

What can security professionals do?

AI-powered tactics like these are making cyber attacks extremely difficult to spot, and cyber teams now face an expanded attack surface across their organization, leaving assets exposed and vulnerable.

There is also a danger for models with transparent architectures, where data can be revealed through sophisticated attacks. The real question is whether the architecture details are publicly available or proprietary.

“Fully transparent models might be more susceptible to white-box adversarial attacks where the attacker has full knowledge of the model,” NetSPI’s eBook notes. “On the other hand, keeping it a secret could lead to security through obscurity, which might not be a sustainable defense.”
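The distinction is easy to see in code. The FGSM sketch above is a white-box attack: it reads the model's gradients directly. A black-box attacker can only query the model and must estimate gradients from its outputs, for example with finite differences. The sketch below is illustrative only; query_model is a stand-in for any opaque prediction API:

```python
import torch
import torch.nn.functional as F

def estimate_gradient_black_box(query_model, x, label,
                                delta=1e-3, n_probes=50):
    """Estimate the loss gradient using only query access to a model.

    query_model is treated as an opaque function returning logits; no
    weights, gradients, or architecture details are used. The estimate
    is noisy and costs two queries per probe, which is why white-box
    access makes attacks so much cheaper for the adversary.
    """
    def loss_at(point):
        with torch.no_grad():
            return F.cross_entropy(query_model(point), label).item()

    grad = torch.zeros_like(x)
    for _ in range(n_probes):
        direction = torch.randn_like(x)  # random probe direction
        # Symmetric finite difference of the loss along the probe.
        slope = (loss_at(x + delta * direction)
                 - loss_at(x - delta * direction)) / (2 * delta)
        grad += slope * direction
    return grad / n_probes
```

Hiding the model raises the attacker's cost from one gradient computation to hundreds of queries, but as the eBook cautions, that obscurity alone is not a sustainable defense.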

Regardless, the impact of a cyber attack today is greater than ever before, and the only way to stand up to the challenge is through collaboration across the industry, innovation, and the delivery of improved technology to address these profound risks.

For a comprehensive guide, NetSPI’s eBook delves deep into adversarial machine learning and the intricate art of penetration testing models. It aims to empower security leaders and the broader industry with a shared understanding of the current state of AI, the challenges to its progress, and how companies like NetSPI are working to overcome these obstacles.

As you read, the company invites you to reflect on how far your team has come in using AI in 2024, and which future-state applications will bring the greatest value to your business.

One of the best takeaways from the eBook is the need to work alongside subject matter experts who are familiar with the nuances and can examine models through the lens of an adversary. NetSPI is here to help guide you every step of the way.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.