October 18, 2021
Enterprises are eager to use artificial intelligence (AI) chips to bring data analytics workloads down to the edge of IoT networks.
In four years, the size of the market for edge AI chips is set to surpass the equivalent for cloud architectures, reaching $12.2 billion against $11.1 billion for the cloud, according to ABI Research.
During the COVID-19 pandemic, enterprise IoT deployment has accelerated as a means of overcoming the specific operational challenges organizations have encountered throughout the crisis.
This increased demand for enterprise IoT comes as the capabilities of connected things at the edge have expanded. With intelligent technologies that derive insights with fewer round trips to the cloud, the number of potential applications increases. In addition, the cost of bandwidth expended on cloud connectivity is reduced.
This matters to businesses because there’s a limit to what existing sensor networks can achieve, even in sectors where IoT adoption has been strong, such as in the industrial sector or the supply chain.
According to a study from the University of Brescia, the typical industrial IoT device, using lightweight protocols, sends messages to the cloud with a round-trip latency in the region of 300 milliseconds when using “free-access” cloud servers. To unlock applications that will truly redefine Industry 4.0 (computer vision, robotics or predictive maintenance in real time), there’s a need for AI-enriched chips in IoT endpoints that can process data as it arrives.
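The latency arithmetic behind that claim can be made concrete. A minimal sketch, where the 300 ms figure comes from the study cited above and the on-device inference time is an assumed placeholder for a small quantized model:

```python
# Illustrative latency budget: cloud round trip vs. on-device inference.
# CLOUD_ROUNDTRIP_MS reflects the figure cited above; EDGE_INFERENCE_MS
# is an assumption for a small on-device model, not a measurement.

CLOUD_ROUNDTRIP_MS = 300.0
EDGE_INFERENCE_MS = 5.0


def max_control_rate_hz(latency_ms: float) -> float:
    """Highest closed-loop decision rate a given latency allows."""
    return 1000.0 / latency_ms


cloud_rate = max_control_rate_hz(CLOUD_ROUNDTRIP_MS)  # roughly 3.3 decisions/sec
edge_rate = max_control_rate_hz(EDGE_INFERENCE_MS)    # 200 decisions/sec
```

At roughly three decisions per second, a cloud round trip rules out real-time robotics or vision loops, which is the gap on-device inference is meant to close.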
As the size of data inputs from the edge increases, so does the strain on network infrastructure, which makes existing data highways to the cloud more fragile.
Machine learning at the edge will help assuage these pressures, but it must also cater for highly distributed infrastructures, containing hundreds or potentially thousands of sensors. Crucially, in many cases, AI in IoT will also be expected to disperse computing resources, switching between the connected edge and public or private cloud servers as needed.
Intelligence for All and Sundry
Intelligent edge chips now cover all ends of the spectrum. Capabilities vary from generalized and entry-level machine learning cores from market leader Arm, to digital signal processing units built for audio and video intelligence, or dedicated neural networks that pair to external microcontrollers.
As such, vendors are pushing architecture designs to their limits to find designs that not only provide immediate machine learning inference with limited energy consumption but also sufficient customization options for each enterprise’s specific requirements.
A gamut of integrated AI options exist to complement IoT microprocessors, targeting “smart tiny devices” for audio, voice and health monitoring, along with industrial machine vision, autonomous drones or intelligent surveillance cameras.
Qualcomm and Intelligent Automation in Outer Space
Further inspiration for IoT’s intelligent revolution might be found beyond Earth’s frontiers, according to Cem Dilmegani, a lead consultant at AI consultancy AIMultiple. Dilmegani said Qualcomm Technologies was working alongside NASA’s Jet Propulsion Laboratory to deliver autonomous functionality in the Mars helicopter Ingenuity, the first powered flight on another planet.
Because it can take up to 22 minutes for the helicopter to receive wireless signals, it’s impossible to control the vehicle via remote control in real time. Autonomous circuits must instead make decisions internally based on the delayed messages. It’s mission-critical AI at its most finely poised, where a mistaken autonomous judgment could wreck everything.
Although NASA’s budget can far surpass that of a typical enterprise, there are still some parallels. For example, AI chips used in NASA’s aircraft need hefty processing resources within a compact energy profile. In the helicopter’s case, most of the energy available for AI comes from a single component’s heater.
“The Ingenuity drone’s flight on Mars was an interesting development,” Dilmegani said. “If it works in space it should do fine on Earth.”
System on a Chip and TinyML
Qualcomm is also involved in system on a chip (SoC) development. Its range includes an SoC that targets connected healthcare, logistics, management and warehousing.
When applied to the Internet of Things, these SoCs serve as beefed-up successors to the microcontroller unit for applying intelligence from edge devices. SoCs integrate multiple peripherals within a single semiconductor architecture. However, the SoC is crafted for complex applications and the demands they place on internalized peripherals.
Engineers may look to implement approaches like tiny machine learning (TinyML), the development ecosystem that reduces artificial intelligence functions to megabyte-sized software in embedded architectures.
“TinyML and edge AI chips are completely complementary,” said Alexander Harrowell, senior analyst, advanced computing and AI, Omdia. “TinyML is more of a community than a specific piece of software.
“You might even say it’s a vibe, but the whole idea is to bring high-level frameworks such as TensorFlow, Caffe or PyTorch, that a great majority of AI developers use, into the context of much smaller devices.
“TensorFlow itself is designed to be adaptable to different hardware architectures, through what it calls delegates, like a pluggable device driver.
“So applications developers can use the same interfaces and abstractions once someone’s done the heavy lifting of integration.”
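TensorFlow’s delegate mechanism is specific to that framework, but the underlying pattern Harrowell describes, a stable inference interface with pluggable hardware backends, can be sketched in a few lines. Every name here is hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical sketch of the "delegate" pattern: the application codes
# against one stable interface, while a backend integrated once (by a
# hardware vendor, say) does the actual compute.

from typing import Callable, Dict, List

Backend = Callable[[List[float]], List[float]]

# Registry of pluggable backends, keyed by hypothetical names.
_DELEGATES: Dict[str, Backend] = {}


def register_delegate(name: str, fn: Backend) -> None:
    """One-time integration step per hardware target."""
    _DELEGATES[name] = fn


def run_inference(inputs: List[float], delegate: str = "cpu") -> List[float]:
    """Application-facing API; unchanged when the hardware changes."""
    return _DELEGATES[delegate](inputs)


# A reference CPU backend and a stand-in accelerator backend that honor
# the same contract (here, both just apply a ReLU to the inputs).
register_delegate("cpu", lambda xs: [max(0.0, x) for x in xs])
register_delegate("npu", lambda xs: [max(0.0, x) for x in xs])
```

Application code calls `run_inference` the same way whether the work lands on a CPU or an accelerator, which is the “heavy lifting done once” point in the quote above.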
AI Accelerator Strategy
Hardware revenue for the total addressable market of edge AI chips is projected to reach $54 billion by 2026, up from $30 billion at present, according to Harrowell.
“This is split roughly equally between PC/tablet, mobile and a wide variety of embedded and other devices in the vertical industries,” he said. “Of those, the biggest single fraction is expected to be edge servers and appliances, the kind of devices most people think of as edge computing, at about 15% of the total. The rest is made up of cameras, automotive, smart speakers, robots and drones.”
Omdia has seen a substantial fall in price points of devices that contain accelerated machine learning chips, particularly in consumer devices.
Smartphones costing as little as $130 now sport AI accelerators that handle deep learning on the device itself, Omdia has found, enabling everything from photo touch-ups to battery saving at high processing speeds. Examples can certainly be found in China, such as the UMIDIGI A11 smartphone, which contains an AI camera and a digital infrared thermometer.
“This suggests they will be turning up in very cheap devices indeed, when you think that they are a subset of the system-on-a-chip cost, and the SoC itself is a subset of the phone bill of materials (BoM),” Harrowell said.
For smartphones, the BoM refers to the few hundred key components that together represent the overall component cost of designing and building the product.
A consideration in IoT can be where exactly to install edge AI resources. Running AI from edge devices means handling noisy data efficiently. The software must effectively be able to disregard meaningless information, but exchange key insights to other parts of the network.
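That filtering job, discarding meaningless readings while forwarding key insights, can be sketched with a simple moving-average deviation test. The window size and threshold are assumed tuning parameters, not values from the article:

```python
# Sketch of edge-side filtering: suppress routine sensor noise and
# forward only readings that deviate significantly from the recent
# moving average. Window and threshold are illustrative assumptions.

from collections import deque


def significant_readings(stream, window=5, threshold=2.0):
    """Yield (index, value) only for readings that deviate from the
    moving average of the previous `window` readings by more than
    `threshold`; everything else stays on the device."""
    recent = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(recent) == recent.maxlen:
            avg = sum(recent) / len(recent)
            if abs(value - avg) > threshold:
                yield i, value
        recent.append(value)
```

A steady stream of near-identical readings produces no network traffic at all; only an outlier (a temperature spike, a vibration anomaly) is handed off to the rest of the network.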
As AI is often described as “ubiquitous autonomy,” it’s sometimes tempting to gloss over the separate compute phases that it consists of.
In practical terms, the implementation of autonomy at the edge must handle data in multiple phases, including pre-processing, description and classification.
Some use cases might demand that a deluge of information is pre-processed in the cloud, while others will require data to stay local to the end device for latency and privacy reasons. The cloud may always be available to sustain edge AI, but the frequency at which it is accessed could change.
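The phase split described above can be sketched as a routing decision: pre-process and classify locally, and escalate to the cloud only when latency and privacy constraints allow. The thresholds, labels and policy flags here are illustrative assumptions:

```python
# Sketch of the multi-phase edge pipeline: pre-processing, description/
# classification, then a placement decision. All thresholds and policy
# flags are illustrative assumptions, not values from the article.

def preprocess(raw):
    """Normalize readings to [0, 1] (a stand-in pre-processing step)."""
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in raw]


def classify_local(features):
    """Cheap on-device classifier: flag high-average windows."""
    return "alert" if sum(features) / len(features) > 0.5 else "normal"


def route(raw, privacy_sensitive=False, latency_budget_ms=50):
    """Run the local phases, then decide whether deeper processing
    may go to the cloud or must stay at the edge."""
    features = preprocess(raw)          # always local
    label = classify_local(features)    # description + classification
    use_cloud = (not privacy_sensitive) and latency_budget_ms >= 300
    return label, ("cloud" if use_cloud else "edge")
```

A privacy-sensitive reading, or one with a tight latency budget, never leaves the device; a relaxed workload can still lean on the cloud, which matches the point that cloud access changes in frequency rather than disappearing.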