Too many organizations assume they’ve got a handle on IoT security, instead of asking: “What if an attacker targets my system?”

Brian Buntz

November 8, 2019


How would you feel about the prospect of purchasing a car that hadn’t been crash tested? Or about a vehicle that scored poorly in a crash test?

In the case of the latter, sales would likely crater. That happened in 1998, when the Rover 100, a supermini descended from the Austin Metro, fared poorly in a Euro NCAP crash test. Video footage of the test showed a spectacular explosion of glass and severe crumpling of the front half of the vehicle in a 30-mph frontal collision; the rear end lifted on impact, pulling the entire car into the air. A simulated side collision was almost as brutal. Not long after, the manufacturer pulled the car from the market. 

Crash testing is an excellent metaphor for cybersecurity, said Ted Harrington, executive partner at Independent Security Evaluators, speaking at the IoT Security Summit. The idea is to find problems in a controlled environment, before the product is out in the real world. 


Harrington preached a similar strategy for organizations developing or implementing IoT technology, which he termed the “break-to-build method.”  

Just as carmakers design cars upfront with the hope of surviving collisions, organizations seeking to tamp down cybersecurity vulnerabilities need to anticipate attackers’ intentions. “You would be surprised how many organizations completely skip this step, and how many organizations do not even know what a threat model is, let alone, having one implemented,” Harrington said. “That’s the foundational document that helps you understand where to allocate your resources.” 
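Since Harrington calls the threat model the foundational document for deciding where to allocate resources, a minimal sketch can make the idea concrete. The field names and example entries below are assumptions for illustration, not Harrington's template or any standard format:

```python
# Minimal, illustrative threat-model sketch. Field names and example
# entries are assumptions for illustration, not a standard format.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str           # what an attacker wants
    attacker: str        # who would plausibly target it
    attack_surface: str  # how they would reach it
    impact: str          # "high" / "medium" / "low"
    mitigation: str      # where to spend defensive effort

threats = [
    Threat("sensor telemetry", "network eavesdropper",
           "unencrypted MQTT traffic", "medium", "TLS to the broker"),
    Threat("device firmware", "botnet operator",
           "over-the-air update channel", "high", "signed, verified updates"),
]

# The point of the exercise: rank threats so resources go to the worst risks first.
severity = {"high": 0, "medium": 1, "low": 2}
threats.sort(key=lambda t: severity[t.impact])

for t in threats:
    print(f"[{t.impact}] {t.asset}: mitigate via {t.mitigation}")
```

Even a table this small forces the questions Harrington says organizations skip: what are the assets, who would attack them, and which defenses deserve effort first.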

The next phase is what Harrington terms “the break stage.” In essence, it involves looking for mistakes in the technology (that is, security vulnerabilities) and addressing them. “Again, you would be surprised how many organizations skip this part of the method,” Harrington said. “There are all kinds of reasons for that,” he added. Popular excuses Harrington cited include: “‘Our developers are already too busy,’ ‘The road map already is all accounted for,’ ‘Our customers aren’t asking for it,’ or, ‘Frankly, I just don’t want to deal with it.’” Harrington said he hears such excuses every day. “But you can’t get better if you don’t fix the problems, and then finally, you need to do it again and again and again and again,” he added. “Security is not a linear process. Security is a never-ending loop. And for many individuals, that’s frustrating, because there’s no clearly defined finish line.”

And because there is no such thing as perfect security, and because attackers are always evolving, it is difficult to decide how secure an IoT product or project should be.  

It is also difficult for a given organization to develop an objective view of its own cybersecurity posture. Harrington thus recommends working with third-party security experts. “You have to be working with somebody independent of the political winds in your organization, independent of whatever biases there might be,” he said. Your organization needs “someone who spends every waking moment of their life thinking about the attacker. You don’t do that if you spend every moment thinking about how to build your company.”

When hiring a company to perform a penetration test, there are two fundamental options: black-box and white-box testing. In the former, an attacker-for-hire has virtually no knowledge of the target. “This is essentially like flying with a blindfold on,” Harrington said. In white-box testing, by contrast, the penetration tester is granted access to information about the system and typically works closely with the company’s security experts and engineers. 

The white-box approach is substantially more effective than the alternative, Harrington said. It lets an organization surface more problems and, crucially, the solutions to them. “Let me explain this in a story. We’re in the privileged and grateful position to serve one of the largest chipset manufacturers and their security mission,” he explained. The company in question makes chips used in many industrial IoT devices, smart city applications and so forth. 

“And they recently approached us and asked us to do a black box test. I actually thought [the executive at the chip company was] joking at first because they only ever wanted white-box tests,” Harrington recalled. When he asked for clarification, Harrington learned that an executive at the company requested a black-box test. 

“Here’s what we decided to do. We decided to do both [a white-box and black-box test],” Harrington said. 

For the black-box portion, Independent Security Evaluators allocated 200 person-hours. At the end of that time, the firm had found four security vulnerabilities. “Out of these four, two of them, the customer already knew about,” Harrington said. “The third issue was a misunderstanding by us of how the system works.” Because Independent Security Evaluators wasn’t familiar with the system it was testing, it made an incorrect assumption about why a given input produced a given output. “The fourth issue was, in fact, a previously unknown critical security vulnerability, and so that was super valuable to this customer.” But because of the black-box nature of the test, the firm couldn’t advise how to fix the problem. 

In the white-box testing experiment, the firm assigned separate staff the same 200 person-hours for the assessment. “We found 21 previously unknown critical security vulnerabilities in the same system,” Harrington said. “And for every single one of those issues, we were able to articulate at least one solution for how to fix it.”

Returning to the crash-testing metaphor explained at the outset of this article, a variety of potentially destructive approaches are needed to get a clear sense of risk. “What’s a lie in this context that we hear commonly? The idea: ‘Oh, no one would think to do that,’” Harrington said, referring to potential ways to hack a system. If a white-hat security expert can think of a potential vulnerability, so can an attacker. 

“The point is, there are misconceptions that often hold us back that we don’t even realize we have,” Harrington said. Many of them rest on false assumptions of security rather than on asking, “what if?” “What if your system is expecting a positive integer and we put in a negative integer? Or this field is expecting a password of 20 characters, and we put in 2000 characters? What if this field is expecting text, and we put in a command?” Harrington asked. “This is how the most catastrophic security vulnerabilities are found. And yet, most security testing doesn’t go to this level.”
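Harrington's three “what if?” probes can be sketched as tests against a hypothetical input validator. The `validate_field()` helper and its rules below are assumptions for illustration, not code from the talk; the point is simply that each probe feeds the field an input it does not expect and checks that the input is rejected:

```python
# Hypothetical validator, written for illustration only; the rules below
# are assumptions, not from the article.
def validate_field(value: str, *, numeric: bool = False, max_len: int = 20) -> bool:
    """Reject oversized input, non-positive numerics, and command-like text."""
    if len(value) > max_len:
        return False
    if numeric:
        try:
            return int(value) >= 0  # the field expects a positive integer
        except ValueError:
            return False
    # Crude command-injection screen for a plain-text field.
    return not any(ch in value for ch in ";|&`$")

# The three "what if?" probes from the talk, each expected to be rejected:
assert not validate_field("-1", numeric=True)     # negative where positive expected
assert not validate_field("x" * 2000)             # 2000 chars where 20 expected
assert not validate_field("text; rm -rf /tmp/x")  # command where text expected

# Well-formed input still passes.
assert validate_field("hello")
```

Testing at this level, feeding the system inputs its designers assumed no one would send, is exactly what Harrington says most security testing skips.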

Harrington pointed to an example involving Ethereum, the cryptocurrency platform, to illustrate how seemingly strong security protections can be defeated. His firm set about trying to predict the private keys that keep Ethereum wallets secure. “Now, these keys are supposed to be unpredictable — statistically impossible that you could guess it,” he said. “To put it in context, this would be like: I go to the beach. I pick up a grain of sand. I throw the grain of sand back on the beach. The next day, you come to that same beach you pick up a single grain of sand, and it’s the same one — except multiply that by every beach in the world and multiply that by 1,000 planets,” Harrington said. “That’s the likelihood of being able to guess an unpredictable Ethereum wallet key — statistically impossible.” 
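The sand-grain analogy boils down to the size of the key space: an Ethereum private key is, roughly speaking, a 256-bit number. A quick back-of-envelope check (the grain count is a commonly cited rough estimate, not a figure from the article) shows that 2^256 dwarfs even the analogy's scaled-up sand count:

```python
# Back-of-envelope check of the key-space claim. The sand figure is a
# commonly cited rough estimate (~7.5e18 grains on Earth's beaches),
# an assumption for illustration, not a number from the article.
KEYSPACE = 2 ** 256                  # possible 256-bit private keys (approx.)

GRAINS_ON_EARTH = 7.5 * 10 ** 18     # rough estimate of beach-sand grains
PLANETS = 1_000                      # the talk's "1,000 planets" multiplier

# Odds of picking the same grain twice across 1,000 planets' worth of beaches:
analogy_odds = GRAINS_ON_EARTH * PLANETS  # roughly 7.5e21 to one

print(f"key space: {KEYSPACE:.2e}")       # on the order of 1e77
print(f"analogy:   {analogy_odds:.2e}")   # on the order of 1e21
```

If anything, the analogy undersells it: 2^256 is around 10^77, vastly larger than the sand-grain odds, which is why the predictable keys ISE found had to come from flawed key generation rather than lucky guessing.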

Independent Security Evaluators proceeded to ask a series of what-if questions and consequently discovered 752 predictable Ethereum keys. “The asset value in those wallets was $54 million, just sitting there for the taking,” Harrington said. “We actually caught someone in the act of stealing all of that money with this vulnerability.” 

About the Author(s)

Brian Buntz

Brian is a veteran journalist with more than ten years’ experience covering an array of technologies including the Internet of Things, 3-D printing, and cybersecurity. Before coming to Penton and later Informa, he served as the editor-in-chief of UBM’s Qmed where he overhauled the brand’s news coverage and helped to grow the site’s traffic volume dramatically. He had previously held managing editor roles on the company’s medical device technology publications including European Medical Device Technology (EMDT) and Medical Device & Diagnostics Industry (MD+DI), and had served as editor-in-chief of Medical Product Manufacturing News (MPMN).

At UBM, Brian also worked closely with the company’s events group on speaker selection and direction and played an important role in cementing famed futurist Ray Kurzweil as a keynote speaker at the 2016 Medical Design & Manufacturing West event in Anaheim. An article of his was also prominently featured on kurzweilai.net, a website dedicated to Kurzweil’s ideas.

Multilingual, Brian has an M.A. degree in German from the University of Oklahoma.
