Making effective use of AI in cybersecurity demands a careful approach.

Brian Buntz

September 10, 2020

5 Min Read
Image: Getty Images

Key takeaways from this article include the following:

  • Data integrity and subject matter expertise are vital for using AI in cybersecurity.  

  • For many cybersecurity teams, funding is a hurdle for deploying emerging technologies.    

  • Understanding the role of AI in cybersecurity requires continual reassessment, given the quickly shifting landscape.  

Cybersecurity is frequently compared to an endless game of cat and mouse, pitting human aggressors against human guardians in the cyber realm. In the future, that game will change as artificial intelligence begins to supplant manual offense and defense. 

A similar concept has gained ground in traditional warfare. “Speed is very important in warfare, and artificial intelligence can move us through the decision-making loop — the decision-making cycle — far more quickly,” said John R. Allen, president of the Brookings Institution and a retired general, in a recent webinar.  

[For all our IoT World coverage, read our IoT World 2020 conference guide.]

In terms of cybersecurity, organizations that rely heavily on manual defenses are likely to be overwhelmed — not just by attackers equipped with artificial intelligence (AI), but by the difficulty of defending an exploding number of endpoints — including Internet of Things (IoT) devices. 

“By 2025, IoT devices and processes will generate 465 exabytes worth of data per year globally, which is equivalent to 200 million DVDs per day,” said Oluyemi James Odeyinka, technical cloud architect and cybersecurity leader at Walgreens, in a session at IoT World in August.
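As a rough back-of-the-envelope check of that figure (assuming a 4.7 GB single-layer DVD, a capacity the session did not specify), the conversion lands in the same ballpark:

```python
# Back-of-the-envelope check of the quoted IoT data-volume figure.
# Assumption (not from the session): a single-layer DVD holds 4.7 GB.
EXABYTE = 10**18   # bytes
GIGABYTE = 10**9   # bytes

annual_iot_data = 465 * EXABYTE          # 465 EB per year, as quoted
daily_iot_data = annual_iot_data / 365   # spread evenly across the year

dvds_per_day = daily_iot_data / (4.7 * GIGABYTE)
print(f"{dvds_per_day / 1e6:.0f} million DVDs per day")
# Prints roughly 271 million DVDs per day, the same order of magnitude
# as the ~200 million cited in the session.
```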

Organizational Maturity Is a Necessity 

But using AI in cybersecurity requires maturity, said David Coher, a former head of reliability and cybersecurity for a major electric utility. “First, it is important to ensure you have a well-defined security program, including thinking through the basics as well as the more advanced threats,” he explained. “It is important to have that base before beginning consideration of AI applications.” 

Next, when evaluating AI-powered cybersecurity offerings, Coher recommended developing a concrete understanding of operational risk and needs. “It is important to ask, ‘Do we really have a use for this [new technology]?’ Or, perhaps, ‘What does this [new technology] do better than what we already have?’ By answering these questions, you can both build the case for the expense and improve the design of your security program,” he said. 

Jason Haward-Grau, a leader in KPMG’s cybersecurity practice, shared similar advice concerning AI in cybersecurity. “Organizations often get hung up on asking, ‘What’s the optimal amount of technology that I should be using in my environment?’” Instead of seeking to identify promising artificial intelligence technology and then finding applications for it, organizations should begin by asking how they can augment their cybersecurity team’s workflows. “What’s crucially important is to start by understanding the functional capabilities you need to put in place,” Haward-Grau said. Investments in artificial intelligence for cybersecurity should be rooted in deep consideration of both near- and long-term cybersecurity needs. “You can invest a huge amount of money [in AI], and end up with very little to show for it if you’re not careful.”

Data Integrity and Subject Matter Expertise Are Vital 

A related prerequisite for artificial intelligence programs is focusing on data integrity, said Zulfikar Ramzan, chief technology officer at RSA. “Too often, people get caught up in algorithms,” Ramzan said. “For every hour spent on data optimization, they might spend 50 hours on algorithm optimization. It should be just the opposite.” 

One factor needed to ensure data quality is subject matter expertise. Domain experts can provide invaluable feedback when deploying, for instance, machine learning for cybersecurity. “Typically, you do something with your data so that an AI algorithm can interpret it,” Ramzan said. That step might involve deleting incomplete or inaccurate data or adding contextual information to an additional data set. “When you do this step, it should be geared toward allowing the algorithm to look at the data elements that matter the most and coming up with some determination about the problem you are trying to solve,” Ramzan said. Many organizations cut corners at this step. “They might not do any cleaning. They might take it as it is and give it to somebody with a Ph.D. in machine learning who doesn’t know the industry at all, and then you just get these bad results,” Ramzan said.    
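To make that concrete, here is a minimal, hypothetical sketch of the kind of preparation Ramzan describes, written in Python with pandas. The column names, values and thresholds are illustrative assumptions, not RSA’s method:

```python
import pandas as pd

# Hypothetical raw authentication log; column names are illustrative only.
events = pd.DataFrame({
    "user":            ["alice", "bob", None, "carol"],
    "src_ip":          ["10.0.0.5", "10.0.0.9", "10.0.0.7", None],
    "login_hour":      [9, 3, 14, 22],
    "failed_attempts": [0, 12, 1, 4],
})

# Step 1: delete incomplete or inaccurate records rather than letting
# the algorithm guess around missing fields.
clean = events.dropna(subset=["user", "src_ip"])

# Step 2: add contextual information a domain expert knows matters,
# e.g., whether the login happened outside business hours.
clean = clean.assign(off_hours=~clean["login_hour"].between(8, 18))

# Step 3: keep only the data elements that matter most for the question
# being asked (here, flagging risky logins), then hand off to the model.
features = clean[["failed_attempts", "off_hours"]]
print(features)
```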

Doing More with Less

Many cybersecurity executives have been forced to put new investments on hold in 2020 in the aftermath of the COVID-19 crisis, according to McKinsey. And crisis responses are likely to remain top budget priorities for the remainder of the year. 

While interest in security automation remains robust during the COVID-19 pandemic, making substantial technological investments can be a tough sell for many organizations. “When companies have reduced top-line revenues, security budgets often get cut,” said Andrew Howard, CEO of Kudelski Security. “We have clients who had their security budgets cut by 75%.” 

Given that challenging financial reality, some organizations are looking to expand security automation while finding ways to save as they do so, Howard said. “Some companies are taking advantage of the machine learning platforms that exist in [public cloud platforms] to try and do smarter data analytics on their security data,” he added. 
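Howard did not name a particular service, but the general pattern he describes can be sketched with an open-source library such as scikit-learn. The telemetry, features and contamination rate below are illustrative assumptions rather than any vendor’s workflow:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical security telemetry: one row per host per hour, with
# outbound bytes and count of distinct destination ports (synthetic data).
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[50_000, 12], scale=[10_000, 3], size=(500, 2))
outlier = np.array([[900_000, 400]])      # an obvious exfiltration-like record
telemetry = np.vstack([normal_traffic, outlier])

# Unsupervised anomaly detection: flag the small fraction of records
# that look least like the rest of the traffic.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(telemetry)     # -1 = anomaly, 1 = normal
print("Flagged rows:", np.where(labels == -1)[0])
```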

Be Open to New Possibilities 

One of the most difficult aspects of making sense of the AI landscape is how quickly it moves. “The challenge is that technology isn’t just changing,” Allen said in the webinar. “The rate of change of technology is so fast, it’s almost mind-boggling.” 

The quickly moving landscape requires being open to innovation, Coher said. “A key part of a robust security program is to regularly re-evaluate options, seeking to increase efficiencies and reduce costs. These re-evaluations are required to ensure an up-to-date security program.”

About the Author

Brian Buntz

Brian is a veteran journalist with more than ten years’ experience covering an array of technologies including the Internet of Things, 3-D printing, and cybersecurity. Before coming to Penton and later Informa, he served as the editor-in-chief of UBM’s Qmed where he overhauled the brand’s news coverage and helped to grow the site’s traffic volume dramatically. He had previously held managing editor roles on the company’s medical device technology publications including European Medical Device Technology (EMDT) and Medical Device & Diagnostics Industry (MD+DI), and had served as editor-in-chief of Medical Product Manufacturing News (MPMN).

At UBM, Brian also worked closely with the company’s events group on speaker selection and direction and played an important role in cementing famed futurist Ray Kurzweil as a keynote speaker at the 2016 Medical Design & Manufacturing West event in Anaheim. An article of his was also prominently featured on kurzweilai.net, a website dedicated to Kurzweil’s ideas.

Multilingual, Brian has an M.A. degree in German from the University of Oklahoma.

