Upgrading microcontrollers with small, essentially self-contained neural networks enables organizations to deploy efficient AI capabilities for IoT without waiting for specialized AI chips.

Pete Bartolik

November 3, 2020


Much of the development in machine learning (ML) implementations follows a “bigger is better” path: more data, more storage, more compute power, more bandwidth all equal better results – or so the mantra goes.

But a community-based effort is moving in the opposite direction, building “TinyML” implementations for use on low-powered devices with scarce compute and memory resources — like the millions of sensors deployed in Internet of Things (IoT) implementations.

TinyML downsizes the technology to deploy neural networks on low-cost microcontrollers, where they can operate completely or semi-independently on long-lasting, low-power batteries. While semiconductor companies have created chipsets to exploit this technology, proponents say they can deploy TinyML software to existing microcontrollers in the field.
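How does a neural network shrink enough to fit a microcontroller? One common TinyML technique — not attributed to any specific vendor in this article — is post-training 8-bit quantization, which stores each floating-point weight as a single signed byte. The sketch below is a minimal illustration of the idea, with made-up weight values:

```python
# Illustrative sketch of post-training 8-bit quantization, one common
# way TinyML toolchains shrink a network so its weights fit in
# microcontroller flash. The weight values here are made up.

def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with one scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each weight now occupies 1 byte instead of 4 — a 4x size reduction —
# at the cost of a small rounding error per weight.
```

Real frameworks quantize activations and pick scales per layer or per channel, but the storage arithmetic is the same: a model's flash footprint drops roughly fourfold when 32-bit floats become 8-bit integers.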

The TinyML effort coalesced in early 2019 at a meeting to formalize the fast-growing movement; the gathering drew nearly 200 attendees and led to the formation of the tinyML Foundation.

“We are really exploring machine learning for low-power inexpensive applications, rather than big machine learning algorithms running in a data center,” said Zach Shelby, then an executive with the Arm Ltd. semiconductor and software design company, who left to co-found Edge Impulse, which provides TinyML developer resources.

Making Sense of Noisy Sensor Data With TinyML for IoT

TinyML started as a hashtag from Pete Warden of Google, one of the proponents of the movement. In a book he co-authored for TinyML developers, Warden wrote, “It became clear to me that there was a whole new class of products emerging, with the key characteristics that they used ML to make sense of noisy sensor data, could run using a battery or energy harvesting for years, and cost only a dollar or two.”

TinyML encompasses efforts that have accelerated over the past few years. Alexander Wong and Mohammad Javad Shafiee, systems design engineering professors at the University of Waterloo, announced in 2017 they had achieved a 200-fold reduction in the size of deep-learning AI software used for a particular object recognition task. 

“Tiny ML is essentially about the premise of building very small and efficient machine learning algorithms,” Wong said in a recent interview. “So, what we’ve done is we’ve created an AI that builds these tiny ML models automatically.”

One example is TinyML speech code, as small as 16 kilobits, said Wong. “It could recognize a number of different commands like, ‘yes,’ ‘no,’ ‘left,’ ‘right,’ ‘stop,’ and this really caters to advancing on-device, voice assistance that’s untethered to the cloud,” he explained.  

‘Simple Software Fix’

The ability to upgrade already deployed microcontrollers with these small, essentially self-contained neural networks provides opportunities for organizations to deploy AI capabilities across IoT implementations without waiting for specialized AI chip products that could take years to develop.

“If you have millions of 8-bit or 16-bit, or 32-bit microcontrollers deployed into various systems or applications, you now have the ability to some degree to offer to your customer base AI as a fairly simple software fix,” explained Richard Wawrzyniak, senior market analyst, ASIC & SoC, with Semico Research Corp.

Advancements in embedded AI are driven not only by innovation and invention but also by necessity, given the downsides of data transmission to the cloud. “People finally figured out that sending every bit of data to the cloud is a nonstarter,” said Wawrzyniak. “There isn’t enough bandwidth to support that.”

The vast number of IoT sensors, and the flood of data they generate, represent too great a volume to cost-effectively transmit to the cloud or data center for extraction, analysis, and refinement of AI training models.

Proponents of TinyML make the case that more traditional ML implementations discard as much as 90% of the data collected by devices before sending it to a cloud service or data center. In contrast, TinyML can sift through all that data.

“Sensors produce an amazing amount of information, of very high-quality data,” said Shelby. “Today, we can do some pattern matching, but it is difficult, and we throw away most of the data. When we do send data to the cloud, it is too much to deal with.” 

With TinyML algorithms, devices can do complex pattern matching locally, such as detecting an elderly person falling, or recognizing endangered species, or potentially spotting COVID-19 hot spots based on the type and frequency of coughing. According to Shelby, one electric utility is deploying sensors with TinyML and 10-year life batteries on ceramic capacitors that can monitor real-time disruptions to power lines.
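The data-reduction principle behind these deployments can be sketched simply. The example below is not the utility's actual system — it stands in for a trained model with a made-up threshold — but it shows why on-device screening means only candidate events, not the raw stream, ever leave the sensor:

```python
# Illustrative sketch (not any deployed system from the article):
# on-device screening of accelerometer samples, so only candidate
# events are reported upstream instead of the raw data stream.

FALL_THRESHOLD_G = 2.5  # assumed impact threshold, in g (hypothetical)

def detect_events(samples_g):
    """Return indices of samples whose magnitude suggests an impact.
    In a real TinyML deployment a trained model replaces this simple
    threshold, but the data-reduction effect is the same."""
    return [i for i, g in enumerate(samples_g) if g > FALL_THRESHOLD_G]

readings = [1.0, 1.1, 0.9, 3.2, 1.0, 0.95]  # made-up magnitudes, in g
events = detect_events(readings)
# Only the flagged sample would be transmitted, rather than streaming
# every reading to the cloud.
```

Here one sample out of six crosses the threshold, so the device transmits one event instead of six readings — the same economics, at scale, that make local inference attractive for battery-powered sensors.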

“TinyML has the potential to be massive in implication,” Nikolas Kairinos, founder of international think tank Fountech.ai, wrote in an email exchange. “It is going to force us to develop more elegant algorithms and move beyond deep learning — reducing the brute-force approach to developing AI. We are also going to be able to deploy complex models on devices in KB of RAM, overcoming restrictions like bandwidth and power constraints.”

Democratizing AI for IoT

One of the most disruptive aspects of TinyML may be the potential to demystify ML and expand the numbers of developers working with AI. As Warden and co-author Daniel Situnayake wrote, “In reality, machine learning can be simple to understand and is accessible to anyone with a text editor. After you learn a few key ideas, you can easily use it in your own projects. Beneath all the mystique is a handy set of tools for solving various types of problems. It might sometimes feel like magic, but it’s all just code, and you don’t need a Ph.D. to work with it.”

There are only so many data scientists around to create algorithms. So cultivating TinyML developers, aided by life-cycle tools such as compilers and modelers, will be vital to exploiting the explosive growth of microcontrollers and other AI-embedded chips. In addition to the potentially billions of existing microcontrollers that could be upgraded with TinyML, tech market advisory firm ABI Research recently predicted that TinyML-specific chipsets will ship in 2.5 billion devices by 2030.


