Security providers in IT and OT have implemented AI, ML and other advanced technologies to make systems smarter than malicious attackers.

Rich Castagna

February 10, 2021


Securing vast and growing IoT environments may not seem humanly possible, and when the network hosts tens or hundreds of thousands of devices, it may well not be. To solve this problem, vendors of security products have turned to a decidedly nonhuman alternative: artificial intelligence.

“Cyberanalysts are finding it increasingly difficult to effectively monitor current levels of data volume, velocity and variety across firewalls,” Capgemini noted in its survey research report, “Reinventing Cybersecurity With Artificial Intelligence.” The report also noted that traditional methods may no longer be effective: “Signature-based cybersecurity solutions are unlikely to deliver the requisite performance to detect new attack vectors.”

In addition to conventional security software’s limitations in IoT environments, Capgemini’s report revealed a weakness in the human element of cybersecurity. Fifty-six percent of the 850 IT and OT executives who participated in the survey said that their cybersecurity analysts were “overwhelmed.”

Technical professionals seem convinced that AI-enhanced security for their IoT environments is a requirement. In the Capgemini survey, 69% of respondents said, “We will not be able to respond to cyberattacks without AI.”

AI is generally considered an umbrella term for various practices, methodologies and disciplines that include machine learning (ML), deep learning, neural networks and other related technologies.

Why IoT Cybersecurity Is Challenging—and How AI Can Help

In addition to size and scope, IoT implementations involve myriad types of devices connecting to their networks. While conventional security apps could focus on Windows PCs or iOS devices or other widely deployed systems, IoT security must grapple with scores of different devices, some old and some new, but each with its own operating system and particular vulnerabilities.

The high degree of device heterogeneity makes IoT networks prime targets for the bad guys as they troll for weak links they can exploit. “From an attacker’s perspective, it’s much easier to penetrate IoT devices than it is a PC nowadays,” said Derek Manky, chief of security insights and global threat alliances at security vendor Fortinet’s FortiGuard Labs.

The first step in managing an IoT installation and providing a solid foundation for deploying network security is identifying every device connected to the IoT network. In larger IoT environments, this could entail tallying thousands to hundreds of thousands of sensors and other devices. It could be a monumental undertaking, but AI can make this task much easier than other discovery methods and provide more detailed information about the nature of connected devices. And although that might sound more like asset management than cybersecurity, it’s the key step in rolling out effective network security.

“The nature of IoT is just so many more manufacturers, so many more platforms, different code bases running on it, different traffic and behaviors and, by the nature of IoT, different locales,” Manky said. “So having visibility at any given time into what’s on the network is really important—and it’s not as easy as it sounds.”

In some industrial verticals, IoT installations are vast, covering wide geographic areas and encompassing thousands of devices. AI is indispensable in those environments to establish secure operations. “The first thing about security is to know which assets you have, especially as IoT deployments go to scale,” noted Johan Vermij, senior research analyst for IoT at 451 Research. “In energy utilities or oil and gas rigs, the number of IoT devices is huge and there’s a lot of legacy, so AI is used to detect new devices and then start profiling them.”
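
To make that profiling step concrete, here is a minimal sketch of how newly discovered devices might be grouped by their observed traffic behavior. It is illustrative only: the feature set, the sample values and the use of scikit-learn’s KMeans are assumptions made for the example, not a description of any vendor’s product.

```python
# Hypothetical sketch: grouping discovered devices by traffic behavior.
# Feature names and values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [avg packets/min, avg bytes/packet, distinct destination ports, active hours/day]
device_features = np.array([
    [12, 200, 2, 24],   # sensor-like: chatty, small packets, always on
    [10, 210, 2, 24],
    [11, 190, 3, 24],
    [300, 900, 40, 9],  # workstation-like: bursty, many ports, business hours
    [280, 950, 35, 8],
])

scaled = StandardScaler().fit_transform(device_features)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)

# Devices that sit far from every cluster center don't resemble any known
# profile and may warrant closer inspection.
distances = np.min(model.transform(scaled), axis=1)
for i, dist in enumerate(distances):
    print(f"device {i}: cluster {model.labels_[i]}, distance {dist:.2f}")
```

In practice the profiling would draw on far richer telemetry (protocols, firmware identifiers, communication peers), but the underlying idea is the same: cluster what is seen, then pay attention to anything that fits no known profile.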

AI vs. AI—Keeping a Step Ahead of the Bad Guys in IoT Cybersecurity

Given the ubiquity, size and growth of IoT deployments, vulnerability problems aren’t likely to go away anytime soon. “The top five of our top 10 attacks are IoT-related attacks,” noted Manky. “It’s not a complete surprise, it’s just the reality that we have to deal with and it’s going to continue to be intrinsically much harder.”

Targeting IoT is only half the story, as bad actors are starting to employ AI techniques of their own to thwart network defenses. For example, malicious code could use AI to learn how the network’s security processes work and use that knowledge to figure out how to breach them.

Manky noted that malicious attackers have begun to use “offensive automation,” which includes the “weaponization of ML and AI,” to expand their sinister toolkits and enable new and more sophisticated attacks. “We are starting to see that there are indications that they’re also applying this technology to do the hacking as well to actually find those security flaws,” Manky said.

Opinions vary about the degree to which network attackers use AI technology, but better safe than sorry seems to be a prudent approach to IoT security. “Even without significant evidence of AI-enabled cyberattacks currently being deployed,” wrote Jessica Newman, research fellow and program lead at the UC Berkeley AI Security Initiative, in an email, “AI-enabled security is an attractive solution for IoT environments given the sheer number of connected devices and new vulnerabilities.”

Training AI for IoT Security

AI-enabled security systems need to be trained before their AI methodologies can be applied effectively. Training allows an AI system to determine what’s connected to the network, what the connected devices do and what normal operations look like. With that information, the security app can establish a baseline of allowable activities and thus detect anything that occurs outside the normal patterns.

Training typically involves letting the security software observe how data flows through the network, as well as feeding supplementary data through the app. Generally, the more data that can be fed into the security system, the higher the level of confidence it will attain, as it can learn more and then react more intelligently.

Training methods vary among security system vendors, but there are two basic methodologies, according to Fortinet’s Manky. Both involve exposing the AI system to data as described above, but one involves the intervention and decision-making of human experts, who provide the AI components with more information from which to infer and make decisions. That approach is called supervised learning, or training. The alternative method proceeds without human supervision and is better suited for systems that collect and display data on dashboards rather than taking specific actions.
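
As a rough illustration of the unsupervised flavor, the sketch below fits an anomaly detector to a window of “normal” traffic and then scores new observations against that baseline. The features, the numbers and the choice of scikit-learn’s IsolationForest are assumptions made for the example; a supervised variant would instead train a classifier on traffic that human analysts had already labeled benign or malicious.

```python
# Illustrative sketch of the unsupervised approach: learn a baseline of
# normal traffic, then score new observations against it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline period: observed [bytes sent/min, connections/min] per device
normal_traffic = rng.normal(loc=[500, 3], scale=[50, 1], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# New observations: one looks like the baseline, one is a sudden data burst
new_obs = np.array([[510, 3], [9000, 45]])
print(detector.predict(new_obs))  # 1 = consistent with baseline, -1 = anomalous
```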

Vendors will often enhance AI training by providing databases of device types and behaviors that are relevant to particular environments. “The focus of the industry is to provide as much as possible in terms of context,” said 451’s Vermij. “The vendors are in fact accelerating that training by starting out with some knowledge of the environment.”

Faster Than a Speeding AI

Speed is the hallmark of AI-enabled security, allowing the system to perform analyses on operations and data transmissions at a rate much faster than conventional security applications can muster.

That speed cuts the time between the discovery of evidence that might indicate foul play and the determination and resolution of the specifics of the event. This means the cause of an incident can be pinpointed in a matter of seconds, rather than having experts pore over logs and other evidence in a much more laborious, time-consuming process.

AI-based discovery can identify aberrations in the operation or data passing of networked devices, as well as abnormal behaviors and activities of otherwise legitimate users. It does this by detecting patterns of activity that don’t fit what is recognized as normal, benign behavior. Those actions might include logging onto the system at an unexpected time or perhaps issuing unexpected and suspicious queries.
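
The unexpected-login-time example can be reduced to a toy calculation: learn what hours a user normally logs in, then flag logins that deviate sharply from that history. The login history and z-score threshold below are invented for the sketch and are far simpler than what a production system would use.

```python
# Toy illustration of the unexpected-login-time example.
from statistics import mean, stdev

login_hours = [8, 9, 8, 10, 9, 8, 9, 10, 9, 8]  # user's past login times (24h clock)

def is_unusual(hour: int, history: list[int], z_threshold: float = 3.0) -> bool:
    # Flag a login whose hour is more than z_threshold standard deviations
    # from the user's historical mean login hour.
    mu, sigma = mean(history), stdev(history)
    return abs(hour - mu) / sigma > z_threshold

print(is_unusual(9, login_hours))   # False: typical morning login
print(is_unusual(3, login_hours))   # True: a 3 a.m. login deviates sharply
```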

AI-enabled security may be indispensable for IoT installations that routinely connect to and exchange data with external networks, as is often the case in the energy and manufacturing sectors. The AI system can keep an eye on the connections and data flow just as it would for any device that is directly connected to the IoT ecosystem to ensure that nothing malicious is allowed to enter. “If you can manipulate that outside data and you send false data,” noted Vermij, “you can mess up a system.”

AI in IoT Cybersecurity Today

We’re really in the early days of AI-enhanced IoT security. Traditional security vendors are not only implementing AI methods in their products, but in many cases they’re also adjusting from an IT orientation to a more operational technology (OT) perspective.

“They were very limited in their capabilities as they did not understand the OT specific security threats,” said Vermij, “but many of them are doing a great job in looking at the verticals and seeing what’s going on and what are the types of threats there.”

Still, Vermij recommends setting up test environments that mirror the installed IoT deployment to run the AI security app through its paces before releasing it into production.

“You probably need some test setup,” said Vermij. “If you are test driving these systems and you have to just launch a WannaCry or a Bad Rabbit or whatever, just launch it at the system and see what it does in a controlled environment.” Vermij concedes that such an undertaking is likely only feasible for large companies with extensive testing resources, but some degree of testing before going into production is still recommended.

Berkeley’s Newman suggests a more measured approach: “AI products are being marketed as a silver bullet for IoT security, but buyers should know that the use of AI also introduces new vulnerabilities, and these are not due to programming mistakes, but to inherent limitations of current AI algorithms (for example susceptibility to data poisoning).” Data poisoning refers to the injection of bad data by an attacker during the training period.
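
For a sense of why poisoning matters, the toy sketch below flips a fraction of training labels, one simple poisoning tactic, and compares the resulting model against one trained on clean data. The dataset is synthetic and the 30% flip rate is arbitrary; real attacks are typically far more targeted and subtle than random flipping.

```python
# Toy demonstration of training-time poisoning via label flipping.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips 30% of the training labels before the model is (re)trained
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# Comparing the two test accuracies shows how corrupted training data
# degrades what the model learns, without any bug in the code itself.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```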

While caution is good advice for a project as daunting as securing a sprawling IoT network, the outlook for AI-enabled security is very promising, although it’s a learning process for both vendors and users.

“It doesn’t have to sound scary or be complex,” Manky said. “The fundamentals are very important, so getting visibility on what’s on your network at any given time and especially the vulnerabilities and patching.”

About the Author

Rich Castagna

Rich Castagna is a freelance writer and editor. He has been a tech journalist for 30 years, covering topics ranging from desktop apps to small business computing to enterprise IT. Rich was vice president of editorial at TechTarget, overseeing an editorial staff of 110 writers and editors; before that he headed up TechTarget’s storage coverage both online and in print. Rich also covered tech for CNet, UBM/CMP and Ziff-Davis publications. He’s also a produced playwright, with two New York Off-Broadway productions of his work.
