Artificial intelligence could transform the field of cybersecurity and help defend some IoT deployments, but for now the technology remains overhyped.

Brian Buntz

December 20, 2018


Artificial intelligence is overhyped for cybersecurity, according to Rene Kolga, senior director of product and marketing at Nyotron. “Of course, I agree that machine learning algorithms and AI represent a transformational trend overall, no matter in which industry,” he said. But the current upswelling of attention the topic has received, within cybersecurity and seemingly everywhere else, could plunge the field into another AI winter, in which funding, and thus progress, cools to a near standstill, Kolga reckons. “We have had multiple AI winters in the past 50 to 60 years,” he explained. “We are potentially heading to this point again because the field is so prone to overpromise.”

By late 2018, it seems nearly every company either says it is doing AI or is involved in an AI-based project, yet only 4 percent of CIOs internationally say they have AI projects in production, according to Gartner’s Hype Cycle for Artificial Intelligence from July 2018. That same month, The Guardian published an exposé on the explosion of what it termed “pseudo AI,” in which companies quietly rely on a mix of algorithms buttressed by human labor in the background.


So whether one is evaluating the field of AI at large, or evaluating an AI-based strategy to improve the security of an IoT application, it is important “to understand what’s real and what’s not,” said Kolga, who also shared the following questions to help companies sift through the hype.

1. Are You Sure You Have Access to Good Data?

Some companies are so enamored with the promise of AI-based cybersecurity and the power of the latest algorithms that they rush to deploy the technology without ensuring they have the data needed for the program to succeed in the long run.

A related problem is that a company’s leaders may think they have access to good data when they have been unknowingly breached. A company might use user and entity behavior analytics (UEBA) products, for instance, to establish the baseline behavior of its network, devices and users. After that initial baselining period, it can theoretically detect anomalies. “What’s dangerous about this is that if the malware or a malicious insider is already inside your environment, now the algorithm will baseline that as the norm,” Kolga said. “If you do that, will you really be able to detect an infection?”
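
To make the risk concrete, here is a deliberately naive sketch of the kind of baselining Kolga describes. The metric (megabytes uploaded per hour), the numbers and the three-sigma rule are illustrative assumptions, not any vendor’s actual method:

```python
# A toy UEBA-style baseline: learn "normal" from a window of observations,
# then flag values that fall far outside it. All figures are made up.
import statistics

def build_baseline(samples):
    """Learn the mean and spread of, say, megabytes uploaded per hour."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, sigmas=3.0):
    """Flag values far outside the learned baseline."""
    return abs(value - mean) > sigmas * stdev

# Clean baseline window: roughly 50 MB uploaded per hour.
clean_window = [48, 52, 50, 47, 53, 49, 51, 50]
# Baseline captured while an intruder is already exfiltrating ~500 MB per hour.
breached_window = clean_window + [480, 510, 495, 505]

for label, window in (("clean baseline", clean_window),
                      ("breached baseline", breached_window)):
    mean, stdev = build_baseline(window)
    print(label, "-> is 500 MB/hour anomalous?", is_anomalous(500, mean, stdev))
# clean baseline -> True; breached baseline -> False
```

Because the intruder’s traffic was present during the baselining period, the second run scores ongoing exfiltration as normal, which is exactly the trap Kolga warns about.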

It is entirely possible that an organization’s cyber leaders might think their environment is safe only to discover later that it was not. The Ponemon Institute’s 2017 Cost of a Data Breach Study found that it took organizations an average of 206 days to detect a breach. And a 2017 Inc. article reported that 60 percent of small businesses in the United States are hacked each year.

2. Are You Working on Developing an AI-based Crystal Ball?

The important thing to remember about subjects such as big data and analytics is that codifying past behavior is a much more reliable strategy than using the technology to predict the future. The plan to use big data, machine learning, artificial intelligence and the like to “enumerate badness,” as Marcus Ranum has put it, is problematic in that there are vastly more types of “bad,” in the form of malware and attacks, than there are of “good.” So if you feed a machine learning algorithm a massive trove of data related to known attacks and malware, it will likely be able to detect subtle variations of known malware from the past. But it will have less of a shot at detecting an entirely new form of malware or a new attack methodology. “Sometimes companies take the position that AI is this really magical tool that can detect everything,” Kolga said. “But then if you think about how it works, it’s trained on the known, old malware samples.”
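
As a rough illustration of the “enumerate badness” limitation, consider a toy detector that only measures how similar a sample is to previously seen malware. The feature vectors and threshold below are invented for the example:

```python
# A toy detector that only knows feature vectors of past malware.
# Features might be things like (is packed?, calls suspicious API?, entropy).
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

KNOWN_MALWARE = [
    (1.0, 1.0, 7.8),   # a known trojan
    (1.0, 0.0, 7.5),   # a known ransomware family
]

def looks_malicious(sample, threshold=0.5):
    """Flag anything that resembles previously seen malware."""
    return any(distance(sample, known) < threshold for known in KNOWN_MALWARE)

variant_of_old_trojan = (1.0, 1.0, 7.7)   # a small tweak of a known sample
novel_attack = (0.0, 0.0, 4.2)            # nothing like the training data

print(looks_malicious(variant_of_old_trojan))  # True: close to the known trojan
print(looks_malicious(novel_attack))           # False: never "seen" anything like it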

3. Can You Explain Your AI’s Conclusions?

More than a decade ago, the British sketch comedy show “Little Britain” helped popularize the catchphrase “The computer says ‘no,’” a snide customer-service retort that often ran counter to logic. In one sketch, for instance, a receptionist tells a 5-year-old girl that she is signed up to receive a double hip replacement rather than have her tonsils removed, as initially requested. When the girl’s mother points out the mistake, the receptionist replies: “The computer says ‘no.’”

While AI algorithms are unlikely to reach such firm conclusions, they might report something like: “The current network behavior has a 79 percent chance of being bad.” “It can’t tell you why it thinks so,” Kolga said. “You as a security IT person or researcher need to figure it out.”

An emerging branch of AI research is dedicated to making algorithmic conclusions easier to follow logically. “But currently, it’s a complete black box,” Kolga said.
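
A minimal sketch shows the kind of output Kolga describes. The weights and inputs are made up so the score lands near the 79 percent figure above; nothing in the output explains which behavior drove it:

```python
# "The computer says 79 percent": a score with no accompanying explanation.
import math

WEIGHTS = [0.37, -1.92, 2.61, 0.08]   # opaque learned values; no human-readable reason
BIAS = -0.4

def risk_score(features):
    """Return a probability-like score for the observed behavior."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

observed_network_behavior = [0.2, 0.1, 0.7, 0.2]   # measured features, meaning unclear
print(f"{risk_score(observed_network_behavior):.0%} chance of being bad")
# The analyst still has to work out which behavior actually drove the score.
```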

4. Is Your Interest in AI Pulling Your Focus Away from Domain Knowledge?

The current wave of hype swirling around machine learning and AI can make it seem as though algorithms with built-in domain expertise are widely available. “If domain knowledge is not accounted for, things like IP addresses and port numbers will just look like another integer [to an AI-based system],” Kolga said.
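
A short, hypothetical sketch of the point: without domain-aware feature engineering, an IP address and a port number reach the algorithm as bare integers, while a few lines of security knowledge turn them into meaningful signals. The specific features chosen here are assumptions for illustration:

```python
# Why domain knowledge matters: raw integers versus security-aware features.
import ipaddress

def naive_features(ip, port):
    # To a generic algorithm these are just two numbers on arbitrary scales.
    return [int(ipaddress.ip_address(ip)), port]

def domain_aware_features(ip, port):
    addr = ipaddress.ip_address(ip)
    return {
        "is_private_address": addr.is_private,      # internal vs. external host
        "is_wellknown_port": port < 1024,           # e.g. 22/53/443 vs. ephemeral
        "is_common_tunnel_port": port in (53, 443), # ports often abused for tunneling
    }

print(naive_features("10.0.0.5", 53))        # [167772165, 53] -- just integers
print(domain_aware_features("10.0.0.5", 53))
```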

5. Are You Following Research on Malicious Machine Learning?

If you have attended Black Hat or DEF CON recently, you may have seen a presentation on how bad guys could use machine learning for nefarious purposes: poisoning machine learning training data, using machine learning to identify targets and evade detection, and so forth. At Black Hat 2017, for instance, a researcher discussed strategies for using machine learning to evade machine learning-based malware detection. Sessions at this year’s Black Hat included “AI & ML in Cyber Security – Why Algorithms are Dangerous” and “DeepLocker: Concealing Targeted Attacks with AI Locksmithing.” “Bad guys have access to all of the same tools and technologies that the good guys have,” Kolga said. “So there’s no reason why the bad guys wouldn’t be using the same capabilities and algorithms for their bad activities.”
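
To illustrate the training-data-poisoning idea in the simplest terms, here is a toy nearest-centroid classifier with invented numbers; it is not taken from any of the talks mentioned above:

```python
# Toy data poisoning: an attacker submits mislabeled copies of their own tool
# to the "benign" training feed, dragging the learned boundary toward it.
def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(sample, benign_center, malicious_center):
    dist = lambda c: sum((x - y) ** 2 for x, y in zip(sample, c))
    return "malicious" if dist(malicious_center) < dist(benign_center) else "benign"

benign = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]
malicious = [[0.9, 0.8], [0.85, 0.9]]
attacker_tool = [0.6, 0.55]

clean_verdict = classify(attacker_tool, centroid(benign), centroid(malicious))

# Poisoning: three copies of the attacker's tool slipped in, labeled benign.
poisoned_benign = benign + [attacker_tool] * 3
poisoned_verdict = classify(attacker_tool, centroid(poisoned_benign), centroid(malicious))

print(clean_verdict, "->", poisoned_verdict)   # malicious -> benign
```

With enough mislabeled samples in the training feed, the model learns to call the attacker’s tool normal.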

About the Author(s)

Brian Buntz

Brian is a veteran journalist with more than ten years’ experience covering an array of technologies including the Internet of Things, 3-D printing, and cybersecurity. Before coming to Penton and later Informa, he served as the editor-in-chief of UBM’s Qmed, where he overhauled the brand’s news coverage and helped to grow the site’s traffic volume dramatically. He had previously held managing editor roles on the company’s medical device technology publications, including European Medical Device Technology (EMDT) and Medical Device & Diagnostic Industry (MD+DI), and had served as editor-in-chief of Medical Product Manufacturing News (MPMN).

At UBM, Brian also worked closely with the company’s events group on speaker selection and direction and played an important role in cementing famed futurist Ray Kurzweil as a keynote speaker at the 2016 Medical Design & Manufacturing West event in Anaheim. An article of his was also prominently featured on kurzweilai.net, a website dedicated to Kurzweil’s ideas.

Multilingual, Brian has an M.A. degree in German from the University of Oklahoma.
