Edge computing is a jigsaw puzzle, presenting multiple choices for IT architects and embedded developers. Assembled well, the pieces add up to edge AI, enabling faster, richer decision-making.

Jack Vaughan

February 23, 2021


AI-based machine learning techniques are going beyond the cloud-based data center, as processing of vital IoT sensor data moves much closer to where the data first resides.

The move will be enabled by new artificial intelligence (AI)-equipped chips. These include embedded microcontrollers with much tighter memory and power budgets than the GPUs (graphics processing units), FPGAs (field-programmable gate arrays) and other specialized IC types first used to answer data scientists’ questions in the cloud data centers of Amazon Web Services, Microsoft and Google.

It was in these clouds that machine learning and related neural network use exploded. But the rise of IoT created a data onslaught that required edge-based machine learning as well.

Now, cloud providers, Internet of Things (IoT) platform makers, and others see benefit in processing data at the edge before turning it over to the cloud for analytics.

Making AI decisions at the edge reduces latency and makes real-time response to sensor data more practical. Still, what people call “edge AI” takes many forms, and powering it with next-generation IoT presents challenges in delivering good-quality, actionable data.

Edge Computing Workloads Grow

Edge-based machine learning could drive significant growth of AI in the IoT market, which Mordor Intelligence estimates will grow at a 27.3% CAGR through to 2026.

That is buttressed by 2020 research from the Eclipse Foundation IoT Group, in which artificial intelligence, cited by 30% of IoT developers, was the most common edge computing workload.

For many applications, replicating the endless racks of servers that enabled parallel machine learning in the cloud is not an option. IoT applications that benefit from local processing are many, and operations monitoring figures prominently among them. Edge processors, for example, watch for events triggered by pressure-gauge changes on an oil rig, anomalies detected on a distant power line, or problems captured by video surveillance at a factory.

The last case is among the most widely pursued: AI that parses image data at the edge has proved a fertile area. But event processing on data gathered by IoT devices brings many other complex processing needs as well.

The Value of Edge Compute

Still, cloud-based IoT analytics will endure, said Steve Conway, senior adviser, Hyperion Research. But the distance data must travel brings processing latency. Moving data to and from a cloud naturally creates lag; the round trip takes time.

“There is something called the speed of light,” Conway quipped. “And you cannot exceed it.” As a result, a hierarchy of processing is developing on the edge.
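For a rough sense of scale (an illustrative back-of-the-envelope figure, not one from Conway): light in optical fiber propagates at roughly 200,000 kilometers per second, so a round trip to a cloud region 1,000 kilometers away consumes about 10 milliseconds before any network queuing or server processing is even counted, while inference run on a device sitting next to the sensor can respond in a fraction of that time.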

Beyond devices and board-level implementations, this hierarchy includes IoT gateways and on-site data centers in manufacturing plants, which expand the architectural options available for next-generation IoT system development.

In the long view, edge AI architecture is yet another generational shift in data processing focus – but a key one, according to Saurabh Mishra, senior manager for product marketing at SAS’s IoT and Edge division.

“There is a progression here,” he said. “At one time, the idea was centralizing your data. You can do that for certain industries and certain use cases – ones where data was already created in a context, such as in a data center.”

But data created at the edge is a different matter. “It’s not really possible to efficiently – and economically – move that to the cloud for analysis,” said Mishra, who noted that SAS has created validated edge IoT reference architectures on top of which customers can build AI and analytical applications. Striking a balance between cloud and edge AI will be a fundamental requirement, he said.

Finding balance begins with consideration of the amount of data needed to run machine learning models, according to Frédéric Desbiens, program manager, IoT and Edge Computing at the Eclipse Foundation. That is where the new intelligent processors come in.

“AI accelerators at the edge can do local processing before sending the data somewhere else. But, this requires you to think about the functional requirements, including the software stack and storage needed,” Desbiens said.

AI Edge Chip Abundance

The rise of cloud-based machine learning was propelled by the high-memory-bandwidth GPU, often in the form of an NVIDIA semiconductor. That success drew the attention of other chip makers.

In-house AI-specific processors followed from hyperscale cloud players Google, AWS and Microsoft.

That AI chip battle has been joined by leading lights such as AMD, Intel, Qualcomm and Arm (which, for its part, NVIDIA last year agreed to acquire, pending regulatory approval).

In turn, embedded microprocessor and system-on-a-chip mainstays like Maxim Integrated, NXP Semiconductors, Silicon Labs, STMicroelectronics and others began to focus on adding AI abilities to the edge.

Today, IoT and edge processing needs have attracted AI chip startups including EdgeQ, Graphcore, Hailo, Mythic and others. Processing on the edge is constrained: barriers include available memory, energy consumption and cost, emphasized Hyperion’s Steve Conway.

“The embedded processors are very important, as energy use is very important,” Conway said. “The GPUs and CPUs are not tiny dies, and GPUs, particularly, use a ton of energy,” he said, referring to the relatively large silicon form factors GPUs and CPUs can take on.

Making Neural Nets Fit the Part

Data movement is a factor in energy consumption on the edge, said Kris Ardis, executive director of Maxim Integrated’s microcontroller and software algorithm businesses. The company recently released the MAX78000, which pairs a low-power controller with a neural net processor that can run on battery-powered IoT devices.

“If you can do a computation at the very edge, you save bandwidth and communications power. The challenge is taking the neural net and making it fit in the part,” Ardis said.

Individual IoT devices based on the chip can feed IoT gateways, which also have a useful part to play: they combine rolled-up data from devices and further filter what may go to the cloud for analysis of overall operations, he indicated.
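As a purely illustrative sketch of that device-to-gateway-to-cloud pattern (the threshold, readings and forwarding stub below are hypothetical, not any vendor’s product code), a gateway might summarize batches of sensor readings locally and forward only the rollup plus any anomalies:

```python
from statistics import mean

ANOMALY_THRESHOLD = 80.0  # hypothetical pressure limit, in PSI


def rollup(readings):
    """Summarize a batch of sensor readings collected at the gateway."""
    return {"count": len(readings), "mean": mean(readings), "max": max(readings)}


def forward_to_cloud(payload):
    """Placeholder for an uplink call (e.g., an MQTT publish or HTTPS POST)."""
    print("sending to cloud:", payload)


def process_batch(readings):
    summary = rollup(readings)
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    # Only the compact summary -- plus any anomalous samples -- leaves the site,
    # rather than every raw reading.
    forward_to_cloud({"summary": summary, "anomalies": anomalies})


process_batch([72.1, 73.4, 71.9, 85.2, 72.6])
```

The point of the sketch is the filtering step: raw readings stay local, and only condensed, decision-relevant data crosses the network.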

Other semiconductor device makers also are adjusting to a trend that sees compute moving nearer to where data is. They are part of the effort to broaden the capabilities of developers, even as their hardware choices grow.

Bill Pearson, vice president of Intel’s IoT group, admitted there was a time when “the CPU was the answer to all problems.” Trends like edge AI belie that now.

He uses the term “XPU” to represent a variety of chip types that support different uses. But, he adds, the variety should be supported by a single software application programming interface (API).

To aid software developers, Intel recently released version 2021.2 of its OpenVINO toolkit for inference on edge systems. It provides a common environment for development across Intel components including CPUs, GPUs and Movidius vision processing units. Intel also offers its DevCloud for the Edge software to forecast the performance of neural network inference on different Intel hardware, according to Pearson.
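To give a flavor of what that common environment looks like, here is a minimal sketch using the 2021-era OpenVINO Inference Engine Python API; the model file names and the zero-filled input are placeholders for a real optimized model and camera frame:

```python
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO 2021.x Python API

# Hypothetical IR model files produced by OpenVINO's Model Optimizer.
MODEL_XML = "model.xml"
MODEL_BIN = "model.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)

# The same network can be loaded onto a CPU, GPU or Movidius VPU ("MYRIAD")
# by changing only the device name.
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Placeholder input shaped to match the model's expected tensor.
dummy_frame = np.zeros(net.input_info[input_blob].input_data.shape, dtype=np.float32)

result = exec_net.infer(inputs={input_blob: dummy_frame})
print(result[output_blob].shape)
```

The single-API idea Pearson describes shows up in the device_name argument: the same application code targets different silicon by swapping that string.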

The drive to simplify is marked at GPU powerhouse NVIDIA too.

“The industry has to make it easier for people that aren’t AI specialists,” said Justin Boitano, vice president and general manager for Enterprise and Edge Computing, NVIDIA.

That may take the form of NVIDIA Jetson, which includes a low-power ARM processor. Named with a nod to the ‘60s science-fiction cartoon series, Jetson is intended to provide GPU-accelerated parallel processing in mobile embedded systems.

Recently, to ease vision system development, NVIDIA rolled out Jetson JetPack 4.5, which includes the first production version of its Vision Programming Interface (VPI).

With time, edge AI development chores will be handled more by IT shops, and less by AI researchers with deep knowledge of machine learning, Boitano said.

The Tiny ML That Roared

The skills needed to migrate machine learning methods from the vast cloud to the constrained edge device are not easily gained. But new software techniques are being applied to enable compact edge AI, while easing the task of the developer.

In fact, the industry has seen the rise of “Tiny ML” approaches, which make do with less power and limited memory while still achieving capable inference-operations-per-second ratings.

Various machine learning tools that reduce edge processing requirements have emerged, including Apache MXNet, Edge Impulse’s EON, Facebook’s Glow, FogHorn’s Lightning Edge ML, Google’s TensorFlow Lite, Microsoft’s ELL, OctoML’s Octomizer and others.

Downsizing neural net processing is a main target here, and several techniques apply. Among these are quantization, binarization and pruning, according to Sastry Malladi, CTO of FogHorn, maker of a software platform that supports a variety of edge and on-premises implementations.

Quantization replaces high-precision floating-point arithmetic in neural net processing with low-bit-width math. Binarization takes that further, constraining values to reduce the complexity of computations. And pruning reduces the number of neural network nodes that must be processed.
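As one concrete illustration of the quantization step, TensorFlow Lite (one of the tools listed above) supports post-training quantization when converting a trained model; the sketch below assumes a hypothetical SavedModel directory and is not FogHorn’s implementation:

```python
import tensorflow as tf

# Hypothetical directory holding a trained TensorFlow SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Post-training quantization: weights are stored and computed at reduced
# precision (such as int8), shrinking the model and its memory footprint
# so it fits edge targets more easily.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```

Pruning and binarization follow the same spirit: trade a small amount of accuracy for a model that fits the memory, energy and cost envelope of the edge device.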

Malladi admitted that is a daunting gamut for most developers to traverse – especially across a range of hardware. The efforts behind FogHorn’s Lightning platform, he said, are intended to abstract away the complexity of machine learning on the edge.

The goal is to allow line operators and reliability engineers, for example, to work with drag-and-drop interfaces, rather than application programming interfaces and software development kits, which are less intuitive and require more coding knowledge.

Software that simplifies development and runs across multiple types of edge AI hardware is also a focus for Edge Impulse, maker of a development platform for embedded machine learning.

Ultimately, machine learning maturation means some model miniaturization, according to Zach Shelby, CEO, Edge Impulse.

“Once, the direction of the research was toward bigger and bigger models of more and more complexity,” Shelby said. “But, as machine learning hit prime time, people started to care about efficiency again.” That led to Tiny ML.

Software must work on existing IoT infrastructure while supporting a path to new varieties of hardware, he said. Edge Impulse tools allow cloud-based modeling of algorithms and events on available hardware, Shelby continued, so that users can try different options before making selections.

Keep Your Eyes on Vision

On the edge, computer vision has become a prominent use case for AI, especially in the form of deep learning, which employs neural networks with multiple layers, along with unsupervised techniques, to achieve results in image pattern recognition.
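As a small, generic sketch of how such an on-camera or on-gateway vision model might be invoked (using the TensorFlow Lite interpreter mentioned earlier; the model file and zero-filled frame are placeholders, not any vendor’s code):

```python
import numpy as np
import tensorflow as tf

# Hypothetical quantized image-classification model deployed on an edge device.
interpreter = tf.lite.Interpreter(model_path="model_quantized.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder frame shaped and typed to match the model's expected input.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("top class:", int(np.argmax(scores)))
```

In a deployed system the zero-filled frame would be replaced by a captured camera image, and only the resulting classification, not the raw video, would need to leave the device.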

Vision system architecture is undergoing shifts today, as cameras at the very edge add processing capabilities via embedded deep learning hardware, according to Kjell Carlsson, principal analyst at Forrester Research. But finding the best application targets can be a challenge.

“The issue with AI on the edge is that you more frequently end up looking at use cases that are ‘net new,’” he said.

Developing these greenfield solutions has inherent risk, Carlsson said, so a helpful tactic is to focus on use cases that offer a high benefit to cost ratio, even if the pattern recognition accuracy might trail that of full-fledged existing systems.

Overall, Carlsson said edge AI could help fulfill IoT’s original promise, which has lagged at times as implementers sorted through myriad potential use cases.

“IoT on its own had some limitations. Now, with AI, machine learning and deep learning that makes IoT more applicable – as well as valuable,” he said.
