Building a Software-Defined Self-Optimizing Internet of Things

Early IoT descriptions tended to gloss over the logistical hurdles of managing potentially tens of billions of IoT devices. Software-defined IoT hardware could help address this challenge.

Brian Buntz

October 5, 2018


The first descriptions of the Internet of Things tended to focus on its physical dimension — the number of IoT connections, the extension of internet functionality into the physical world. But nowadays, the IoT pendulum is swinging toward a “hardware-enabled, software-defined” approach.

This approach tends to streamline IoT device management, whether the devices in question are in an agricultural field, a warehouse or a building. But the strategy matters most when the IoT endpoints are remote; in that case, an organization would traditionally send workers out in a truck to make adjustments to individual IoT devices. “If you’ve got 30,000 things deployed across the country, and you need to go out and put on new antennas, or change out radios, or do any of the things that require hardware changes, you’re pretty easily looking at millions of dollars of expense,” said Scott Nelson, Ph.D., chief product officer and vice president of product at Digi International. The concepts of software-defined networking and network function virtualization have gained popularity in recent years for this broader reason: they enable telecommunications companies to manage their networks more efficiently and to be more agile generally. “You can just roll out a change over the air, as opposed to having to travel to the base of every tower and make hardware changes or make back-office changes.”
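
A minimal sketch of the device side of such an over-the-air change, using only Python’s standard library. The endpoint URL, payload format and version check here are hypothetical illustrations of the general pattern, not Digi’s actual mechanism:

```python
import json
import urllib.request

# Hypothetical device-management endpoint; a real deployment would add
# authentication and signed payloads.
CONFIG_URL = "https://example.com/devices/thing-001/config"

def fetch_remote_config(url: str) -> dict:
    """Download the latest configuration pushed from the back office."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def check_for_update(current_version: int) -> int:
    """Apply a newer config if one exists -- no truck roll required."""
    config = fetch_remote_config(CONFIG_URL)
    if config.get("version", 0) > current_version:
        # e.g., switch reporting intervals, endpoints or radio settings here
        print("Applying config version", config["version"])
        return config["version"]
    return current_version
```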

In the following interview, Nelson shares his perspective on software-defined IoT projects, touching on machine learning and artificial intelligence, and discusses self-optimizing systems that can adapt their own life cycles.

What do you think of the analogy that IoT will evolve similarly to how the internet has? 

I generally think it is an apt analogy. The difference for IoT is that, with the internet, content is something the internet funnels to users. That is, content is something you distribute or broadcast; it goes out toward the edge from the cloud. The cloud, of course, has helped massively in the distribution of content. Look at Amazon Prime Video as an example and think about how different [online content streaming] is from when I grew up, when you had to be within antenna range of a [television] tower to watch anything. But with IoT, the edge is more about the starting point, the initiation of data, which then typically travels back toward the cloud. The data in the cloud is then converted to information, which can drive actions. The similarity is that once you get to the cloud, I can create content and start to control how I distribute that informational or actionable content from the cloud to operations folks.

How does the rise of edge computing affect IoT, and how do you see virtualization affecting that?

The virtualization at the edge is more about managing the product life cycle of the edge purely in software. How do I decide what the edge is going to do? How do I decide to whom it’s going to talk? Which radios is it going to use? And what logic is it going to implement?
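
In practice, those four questions can be answered by a configuration document that the device receives and interprets at runtime. Here is a hypothetical sketch in Python; the field names and values are illustrative, not an actual Digi schema:

```python
from dataclasses import dataclass, field

@dataclass
class EdgeConfig:
    """What the edge does, defined in software rather than hardware."""
    role: str                     # what the edge is going to do
    upstream: str                 # to whom it is going to talk
    radio: str                    # which radio it is going to use
    rules: dict = field(default_factory=dict)  # what logic it implements

# The same physical hardware, repurposed purely by pushing new values.
config = EdgeConfig(
    role="tank-level-monitor",
    upstream="mqtts://broker.example.com:8883",
    radio="lte-cat-m1",           # could later become "nb-iot" over the air
    rules={"report_interval_s": 900, "low_level_pct": 20},
)
print(config)
```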

I’m looking forward to a future where we design edge products that become the next generation of product on their own, because they’re watching what has to be done to the data, they’re learning from the instructions and requests they receive, and they adapt their logic or optimize their own performance to extend the value proposition.

And this helps cut down on truck rolls, right?

An edge product that runs out of value, or can’t keep up with the value request, means a truck roll. And truck rolls are a big part of IoT deployments. They’re one of the things that slow IoT deployers down, because deployers tend to forget about truck rolls when they conceive their business case.

Another trend involving IoT is the rise of its use for business logic and to support machine learning and artificial intelligence. Could you share your take on this subject?

Business logic is a phrase you have to unwrap a little bit. Usually, business logic implies that I have changed from pure data to something upon which a business can act. It implements a business condition in some simple algorithm that says: “Okay, it is time to schedule a visit there” or “Okay, time to fill something up.” It refers to whatever the appropriate action is at that edge point.
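
As a toy illustration of such a business condition, consider a hypothetical tank-level reading being turned into an action; the threshold and function names below are invented for the example:

```python
def business_logic(level_pct: float, low_threshold: float = 20.0) -> str:
    """Turn a raw sensor reading into something a business can act on."""
    if level_pct <= low_threshold:
        return "time to fill something up"   # the actionable condition
    return "no action needed"                # data stays data

# A 15% tank level crosses the threshold and becomes an instruction.
print(business_logic(15.0))  # -> "time to fill something up"
```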

I worked in what would be called “machine learning” in the 1990s; we often referred to it as “neural nets” then. And I’ve done more recent projects with companies that are doing what I would call “true AI.” For me, AI obviously is a very important and hot area of technology. At the edge, I would argue, what gets called AI is more often machine learning, which is still very valuable. But I draw a line in the sand: a true AI system is non-deterministic. You don’t know where the product is likely to go. It may not have just a simple optimization function where you can predict that it will statistically optimize to a certain level of performance. A true AI system, in my mind, is allowed to learn on its own. Perhaps it’s allowed to experiment, or it’s allowed to go to a state that wasn’t anticipated by its programmer or designer.


Here is an example: Normally, installing HVAC systems involves computational fluid dynamics modeling. That work can quickly become obsolete when, say, somebody moves the fryer or some other heat source, or changes how people get into the building. But the AI system I worked on ran experiments on weekends and after hours to learn how to optimize performance, as measured by temperature uniformity. Basically, it reverse-engineered the environment in which it was operating and adapted accordingly.
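
A heavily simplified sketch of that idea: perturb a setpoint during off-hours, score temperature uniformity as the spread across zone sensors, and keep only the changes that improve it. The sensor model below is a simulated stand-in, not the system Nelson describes:

```python
import random
import statistics

def read_zone_temps(setpoint: float) -> list[float]:
    """Simulated zone sensors: uniformity degrades as the setpoint
    drifts away from a hidden comfortable equilibrium (21 C here)."""
    spread = abs(setpoint - 21.0) + 0.5
    return [setpoint + random.gauss(0, spread) for _ in range(8)]

def uniformity_cost(temps: list[float]) -> float:
    """Lower spread across zones means more uniform temperature."""
    return statistics.pstdev(temps)

def after_hours_experiment(setpoint: float, trials: int = 20) -> float:
    """Hill-climb the setpoint while the building is empty."""
    best_cost = uniformity_cost(read_zone_temps(setpoint))
    for _ in range(trials):
        candidate = setpoint + random.uniform(-0.5, 0.5)  # a small experiment
        cost = uniformity_cost(read_zone_temps(candidate))
        if cost < best_cost:  # keep the change only if it helped
            setpoint, best_cost = candidate, cost
    return setpoint

print(f"learned setpoint: {after_hours_experiment(23.0):.2f}")
```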

I think we can anticipate products that transform themselves into next-generation products. They can adapt to the way users or other devices interact with them. There is a structure there for creating smart products that learn. But to really go to the next level of learning products, we are going to need edge compute capabilities.

Why are many discussions around AI either overly optimistic or overly pessimistic? On the latter point, people have been worried about the unintended consequences of AI for a long time.

I gave a talk in Silicon Valley, where I put up a picture of HAL [from the 1968 film “2001: A Space Odyssey”] and I think about 80 percent of the people in the audience didn’t know what it was.

The concepts of the singularity and artificial intelligence probably came about almost simultaneously. If you think about machines living their own existence, there are apocalyptic versions of it and there are very positive versions of it. But I think the practical versions are, without a doubt, focused on the optimization of systems. As we deploy more and more edge devices and connect them together, there is more and more opportunity for resource management and the optimization of performance. Like I said earlier, one of the really exciting areas is when products change their own life cycle. Systems [can live longer in the field] because they can adapt to changing needs. As a manufacturer, I can sell that product longer.

How does AI relate to IoT security?

Security is perhaps the most everyday manifestation of [this potential for adaptable and learning systems]. If a device gets hacked or attacked in a way that you can’t anticipate or remedy with a software update, you’ve got a dead device. You have to go out and replace it with something with more up-to-date security. Security is one of those things that has to be managed in a configurable way, constantly and repeatedly.

About the Author

Brian Buntz

Brian is a veteran journalist with more than ten years’ experience covering an array of technologies including the Internet of Things, 3-D printing, and cybersecurity. Before coming to Penton and later Informa, he served as the editor-in-chief of UBM’s Qmed where he overhauled the brand’s news coverage and helped to grow the site’s traffic volume dramatically. He had previously held managing editor roles on the company’s medical device technology publications including European Medical Device Technology (EMDT) and Medical Device & Diagnostics Industry (MD+DI), and had served as editor-in-chief of Medical Product Manufacturing News (MPMN).

At UBM, Brian also worked closely with the company’s events group on speaker selection and direction and played an important role in securing famed futurist Ray Kurzweil as a keynote speaker at the 2016 Medical Design & Manufacturing West event in Anaheim. An article of his was also prominently featured on kurzweilai.net, a website dedicated to Kurzweil’s ideas.

Multilingual, Brian has an M.A. degree in German from the University of Oklahoma.

