When Will the Cybersecurity AI Hype Bubble Burst?
“Difference between machine learning and AI: If it is written in Python, it’s probably machine learning. If it is written in PowerPoint, it’s probably AI.” —Mat Velloso, technical advisor to the CEO at Microsoft
The theme of AI in cybersecurity seemed omnipresent at this year’s RSA security conference, but it remains difficult to make sense of the subject. At the core of the confusion is semantics: definitions of what, exactly, cybersecurity vendors mean by “artificial intelligence” vary widely. Some vendors position AI generically, to include machine learning, deep learning and, occasionally, more primitive techniques. A number of companies pass off rules engines or even if-then statements as examples of artificial intelligence, while others position AI in a more rarefied light, as artificial general intelligence, a technology Gartner suggests is unlikely to emerge in the next decade.
But while AI and security automation offer considerable promise to help organizations reduce risk in the current threat landscape, shaped in part by burgeoning IoT deployments, adoption of AI in cybersecurity remains at an early stage. It is, however, difficult to ascertain whether the AI hype is continuing to build or beginning to deflate.
While interest in another technology, blockchain for cybersecurity, is apparently on the decline, judging by the relative paucity of vendors advertising blockchain-based cyber products at RSA, blockchain has nevertheless benefited from considerable investment, including from many heavyweight tech firms intent on demonstrating its potential.
The way that many vendors are positioning AI calls to mind how some firms recently marketed blockchain for cybersecurity. “People are advocating technologies that are very buzzwordy and poorly implemented and not appropriate for the space,” said Adam Mashinchi, vice president of product management at Scythe. The problem is compounded by the fact that management teams often struggle to understand cybersecurity in the first place. A central hurdle for many is coming to grips with the potential threat adversaries pose to enterprise and industrial firms’ reputation and sensitive information. The second hurdle, for those who do understand the risk, is knowing which technologies can help them effectively reduce their threat level. For, say, a CEO, CFO or another executive with purchasing authority for cybersecurity technology who lacks a solid grounding in the field, attempting to make sense of various vendor offerings can be something like looking for meaning at an amusement park. “It’s like some sort of absurd Disneyland,” Mashinchi said, where the slogan is: “‘Welcome to the world of tomorrow — built entirely out of buzzword bingo.’”
While technologies such as machine learning, deep learning and artificial intelligence offer considerable promise, coming up with a concrete understanding of how various vendor offerings compare and can address specific security vulnerabilities is an undoubtedly time-consuming undertaking.
On top of that, the marketing pitch that machine learning techniques can help organizations determine what network or endpoint behavior is normal flies in the face of the conclusion that cybersecurity compromises are inevitable. “We as an industry have said: ‘It’s not if but when’ and we’ve gone from barring the doors and windows to hanging bells on strings all throughout your house because you know they’re going to break in and you want to be able to find them as soon as possible to figure out what they’re doing,” Mashinchi said. That doesn’t necessarily mean that, say, a nation-state actor has compromised the network, but the organization might be deploying a suite of IoT devices with an outdated Linux distribution, or it might have users running vulnerable software. Whatever the case may be, if an organization’s cyber hygiene is less than stellar, it would be better served by addressing those problems than by training machine learning algorithms on a potentially unsecured network state.
While it is likely that adversaries will make greater use of technologies like machine learning and AI over time, most adversaries simply don’t need to rely on such heavy-duty tools for the time being, said Exabeam chief data scientist Derek Lin. There continues to be an array of easier openings: default credentials, unpatched networking devices and gullible employees ready to click on phishing links or fall prey to social engineering attacks. “You don’t see malware authors using machine learning because they don’t have to,” Mashinchi said.
AI may be one of the hottest buzzwords around — both within and outside the cybersecurity industry — but the technology will never be a panacea. “And when we talk about using advanced intelligence against basic vulnerabilities, it is like having a hammer and everything looking like a nail. We don’t need to get to that level yet until we fix the basic vulnerabilities on these platforms,” said Dean Weber, chief technology officer at Mocana.
“Practically speaking, the things [adversaries] are looking for are simple,” Mashinchi said. “The things they are trying to do haven’t really changed. And so why do we need the new fancy next-gen AI when we know what the adversarial behavior is? Can’t we just shore up for that with our monitoring first?”