IoT Cybersecurity in the Future Could Pit AI against AI

Are hacking and IT security becoming more data-driven than human-driven?

Brian Buntz

July 3, 2017

Image: AI chess match (Thinkstock)

It’s no wonder that artificial intelligence is hot in cybersecurity. As the number of IoT devices is projected to reach into the tens of billions in coming years, enterprise companies will be compelled to embrace AI, machine learning and automation tools to help secure and manage their networks. Doing it the old-fashioned way will simply not be feasible.

As a result, the field of cybersecurity is beginning to look like an endless game of chess that pits human hackers against AI-enhanced security professionals. “Already, it is possible to automate cybersecurity responses with machine learning and AI, which demonstrates the edge of what’s possible,” says T.J. Laher, senior solutions marketing manager at Cloudera. A recent headline from CNBC declares that Cisco’s intent-based networking could “help stop cyberattacks.” Hackers, however, change tactics frequently, and cybersecurity is not a finite problem that can be solved, as a recent Harvard Business Review article notes. There is also the risk that attackers could leverage the technology themselves, or corrupt machine-learning cybersecurity algorithms into making mistakes. “[Cybersecurity] will always be a game of cat and mouse,” says Bob Noel, director of strategic partnerships and marketing at Plixer. “Anybody who says there will ever be an endgame is misinformed.”

In any event, machine learning has already changed the rules of the IoT cybersecurity game, says Thomas Dinsmore, director of product marketing for data science at Cloudera, making it look more like a chess match that pits machine against machine. Enterprise companies could use the technology to stay one step ahead of attacks while automating defenses. From a hacker’s perspective, self-learning attacks could identify both vulnerable targets and network misconfigurations. Threat actors, whether they are disgruntled employees or external agents, could tap the power of AI to help calculate and execute attacks designed to do maximum damage, even potentially predicting vulnerabilities likely to be present in future software releases.

Attaining full visibility into networks is key to stopping hackers, or machines, in their tracks, says Ofer Amitai, CEO and co-founder of Portnox. “If we’re able to understand which devices aren’t patched or can’t be patched, and therefore are more prone to cyber threats, there are measures that can be taken to control the damage here,” Amitai says. “For instance, segmenting vulnerable or potentially vulnerable IoT or other devices into a separate network, or limiting their Internet access, thereby creating hurdles for hackers and machines attempting to attack the network.”
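
Containment of this kind can be scripted. Below is a minimal Python sketch of how unpatched or unpatchable devices might be sorted into a restricted segment; the device inventory format and the apply_vlan helper are hypothetical placeholders for illustration, not Portnox’s (or any other vendor’s) actual API.

```python
# Illustrative sketch: assign unpatched or unpatchable devices to a quarantine VLAN.
# The inventory format and apply_vlan() helper are hypothetical placeholders,
# not a real NAC or switch-controller API.

QUARANTINE_VLAN = 99   # restricted segment with no direct Internet access
PRODUCTION_VLAN = 10

inventory = [
    {"mac": "00:11:22:33:44:55", "type": "ip_camera", "patchable": False, "patched": False},
    {"mac": "66:77:88:99:aa:bb", "type": "laptop",    "patchable": True,  "patched": True},
]

def target_vlan(device: dict) -> int:
    """Vulnerable or unpatchable devices get segmented away from the main network."""
    if not device["patched"] or not device["patchable"]:
        return QUARANTINE_VLAN
    return PRODUCTION_VLAN

def apply_vlan(mac: str, vlan: int) -> None:
    # Placeholder: a real deployment would call the switch or NAC controller here.
    print(f"Assigning {mac} to VLAN {vlan}")

for device in inventory:
    apply_vlan(device["mac"], target_vlan(device))
```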

Hackers’ Machine Learning Data Volume Problem

It could be challenging in the near term, however, for hackers to gain access to the large amounts of data needed for effective machine learning. “Typical security software is trained based on normal network behavior,” says Manjeet Rege, PhD, an associate professor in the graduate programs in software at the University of St. Thomas in St. Paul, MN. Any network behavior that machine learning identifies as abnormal is a potential red flag. It would be harder, however, for hackers to use the same technology for nefarious purposes. “For hackers to train a machine-learning model, they would need access to the usage logs of the network, which is mostly never available to them,” Rege says. “On the other hand, if hackers did manage to hack into the system manually, they might have access to the usage logs, but they wouldn’t care for them anymore since they are already in the network.”
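
The defender-side approach Rege describes, training on normal behavior and flagging deviations, can be illustrated with a short sketch. The example below uses scikit-learn’s IsolationForest on synthetic flow features (bytes per minute, distinct destination ports, connections per minute); the feature set and numbers are illustrative assumptions, not a production detector.

```python
# Minimal sketch: train on "normal" network behavior, flag deviations as potential red flags.
# Feature values are synthetic; a real deployment would derive them from flow/usage logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Each row: [bytes_sent_per_min, distinct_dest_ports, connections_per_min]
normal_traffic = rng.normal(loc=[5_000, 3, 20], scale=[500, 1, 5], size=(1_000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)  # trained only on typical behavior

new_observations = np.array([
    [5_200, 3, 22],      # looks like ordinary traffic
    [90_000, 45, 400],   # sudden fan-out and data volume: worth investigating
])

# predict() returns 1 for inliers and -1 for anomalies
for row, label in zip(new_observations, model.predict(new_observations)):
    status = "anomalous" if label == -1 else "normal"
    print(f"{row} -> {status}")
```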

For the time being, human stupidity and simple attacks may still be the greatest cyber threats. Most hackers likely will prefer to attack easy targets such as IoT devices using default passwords. “Take a look at terrorism’s past playbook,” says Tamara McCleary, CEO of Thulium.co. “As governments across the world have bolstered their security protocols insulating against major bombings and the hijacking of airplanes, terrorists have resorted to using vehicles, hammers, knives and other more simplistic tools of terror,” she says. Most cybercriminals could deploy the same basic strategy, following the path of least resistance when selecting targets. “All cyberattackers have ever required to launch successful attacks is vulnerability exposed by human error,” McCleary notes. 

“The low-hanging fruit of cybersecurity constantly evolves as hackers increase their attack frequency against whatever target appears to be most vulnerable,” Laher says. “It might be Microsoft machines today, IoT devices tomorrow, AI chatbots in the future. The increasing scope of targets creates a greater need for machine learning because essentially everything needs to be monitored across the enterprise.”

Simple Defense Strategy: Better-Than-Average Security

Already, there are so many targets for hackers to choose from that enterprise companies can reduce their risk considerably by simply having better-than-average security. “There’s an American expression that illustrates this: You don’t have to outrun the bear; you just have to be faster than the next guy,” says Marc Blackmer, product marketing manager, industry solutions, at Cisco.

Better-than-average IoT cybersecurity means having the ability to identify and control devices that are already vulnerable or those that are potentially vulnerable, says Amitai of Portnox. “Machine learning can help here by understanding the behavior of devices, including IoT devices, on the network and identifying ‘soft spots’ on the network that are just waiting to be breached.” Amitai says that IP phones are a great example of a standard technology that usually isn’t monitored for threats. “A lot of people forget that these phones, like smartphones, are connected to the Internet and have access to all network ports, and if they are not monitored or controlled, can act as a gateway for human (or machine) attacks,” Amitai explains. “Using machine learning insights from networks is probably the best bet for beating hackers at their own ‘automated’ game.”
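
A simplified sketch of that kind of per-device baselining follows, assuming a hypothetical table of ports each device type is expected to use; the baselines and events are made up for illustration.

```python
# Illustrative sketch: compare observed connections against a learned per-device baseline.
# Baselines and events are made up; in practice they would come from monitored network flows.

# Ports each device type is normally expected to use (e.g., SIP/RTP signaling for IP phones)
baseline_ports = {
    "ip_phone": {5060, 5061, 10000},
    "printer": {9100, 631},
}

observed_events = [
    {"device": "phone-3f", "type": "ip_phone", "dest_port": 5060},
    {"device": "phone-3f", "type": "ip_phone", "dest_port": 445},  # SMB from a phone is a soft spot
]

def is_soft_spot(event: dict) -> bool:
    expected = baseline_ports.get(event["type"], set())
    return event["dest_port"] not in expected

for event in observed_events:
    if is_soft_spot(event):
        print(f"ALERT: {event['device']} contacted unexpected port {event['dest_port']}")
```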

On the other hand, it is possible that powerful hacking groups, such as those supported by nation states, could be looking at AI to fuel crippling attacks against targets of their choosing. As Elon Musk and others have warned, there is also the risk that humans could lose control of AI-enhanced cyberweapons or that powerful autonomous weapons could emerge that select targets without human intervention. “But for the day-to-day stuff, I am [more worried] about people not following basic [security] steps,” Blackmer says.

In a national security context, AI-enhanced security systems could be used to guarantee uptime of IT systems. “Think about nuclear command-and-control or air defense: if your system goes down for an hour, you may be dead,” says Kenneth Geers, senior research scientist at Comodo and a senior fellow and a NATO Cyber Centre ambassador. “Tanks, aircraft, and ships are nothing more than rolling, flying, and floating boxes of information technology. You cannot attack or defend your assets without properly functioning IT. Human investigations take months and years, but the success of cyber defense may be in milliseconds.”

Machine Learning as Arrow in Enterprise Quiver

Machine learning holds immense potential for controlling the spread of cyberattacks within organizations, Amitai explains. “Together with visibility into endpoint activity on the network, there is the option to automate cyber control mechanisms, such as cutting off vulnerable ports if there’s suspicious activity going on, or instantaneously installing necessary software patches that are missing,” he adds. “Therefore, while machine learning is great for understanding network behavior patterns, it’s even more valuable when it comes to controlling network activity to ensure network security policies are upheld.”
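
The automated control mechanisms Amitai mentions might be wired up roughly as follows. This sketch turns an alert into a containment action; block_port and the alert fields are hypothetical stand-ins for illustration, not an actual NAC or firewall API.

```python
# Illustrative sketch: turn an anomaly verdict into an automated containment action.
# block_port() is a stand-in for a real NAC/firewall integration, not an actual vendor API.

def block_port(device_ip: str, port: int) -> None:
    # In production this might push a firewall rule or ACL change via the vendor's API;
    # here it only logs the intended action.
    print(f"Would push rule: deny tcp from any to {device_ip} port {port}")

def handle_alert(alert: dict) -> None:
    """Apply policy automatically instead of waiting for a human analyst."""
    if alert["severity"] >= 0.8 and alert.get("port"):
        block_port(alert["device_ip"], alert["port"])
    elif alert.get("missing_patch"):
        print(f"Would schedule patch {alert['missing_patch']} on {alert['device_ip']}")

handle_alert({"device_ip": "10.0.5.12", "port": 23, "severity": 0.9})
handle_alert({"device_ip": "10.0.5.30", "severity": 0.6, "missing_patch": "firmware-2.1.4"})
```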

At present, security operations are beginning to make greater use of machine-learning-based software to identify vulnerabilities, says Shaun Cooley, CTO of IoT and Industries at Cisco. His employer, for instance, uses machine learning in its internal vulnerability assessment and product testing, including static source code analysis and binary analysis of software. “Those are all commercial tools, so we have to assume that the guys on the other side of the fence are using those tools as well,” Cooley says. “If we fail to run those commercial tools against our products, the bad guys will be the first to run the commercial tools against our products,” he adds. “Again, it is a human error thing.”

Peter Tran, general manager and senior director at RSA Security, warns that even small inaccuracies in artificial intelligence algorithms can create significant problems. “AI and machine learning are only as good as the environment that they have to learn from. If the compass is off even one degree, AI’s ability to learn and defend networks gets off course exponentially. You end up with the equivalent to a grid blackout condition,” Tran says. “It’s like having your GPS navigate for you with the wrong map. This is the new reality for cyber counterintelligence.”

Machine learning and artificial intelligence will undoubtedly be integral to the future IoT cybersecurity landscape. But the biggest threat might not be hackers armed to the hilt with malicious AI. “Again, it’s the human element that is critical here—whether it is humans being lax when training machine learning cybersecurity algorithms, losing control over powerful cyberweapons, humans corrupting machine learning through malicious inputs undetectable by human eyes, or humans believing that a given technology will make them safe,” McCleary says. “It won’t be long before machine learning is a requirement for the network of the future—but so will be understanding the psychological underpinnings of the cybersecurity chess game. In years to come, enterprise companies might want to plan on hiring AI safety experts to make sure their superintelligent AI security systems don’t wind up becoming something like Skynet in the Terminator film franchise.”

About the Author

Brian Buntz

Brian is a veteran journalist with more than ten years’ experience covering an array of technologies including the Internet of Things, 3-D printing, and cybersecurity. Before coming to Penton and later Informa, he served as the editor-in-chief of UBM’s Qmed where he overhauled the brand’s news coverage and helped to grow the site’s traffic volume dramatically. He had previously held managing editor roles on the company’s medical device technology publications including European Medical Device Technology (EMDT) and Medical Device & Diagnostics Industry (MD+DI), and had served as editor-in-chief of Medical Product Manufacturing News (MPMN).

At UBM, Brian also worked closely with the company’s events group on speaker selection and direction and played an important role in cementing famed futurist Ray Kurzweil as a keynote speaker at the 2016 Medical Design & Manufacturing West event in Anaheim. An article of his was also featured prominently on kurzweilai.net, a website dedicated to Kurzweil’s ideas.

Multilingual, Brian has an M.A. degree in German from the University of Oklahoma.
