Reality AI’s Quest to Bring Defense-Grade ML to Industry
The CEO of the award-winning machine-learning company Reality AI says the company is seeing growing traction for its technology in the industrial, automotive and consumer sectors.
May 2, 2018
From digital photography to GPS to the internet, a spectrum of utilitarian technologies has military roots. Such is the case for the AI tools developed at Reality AI, the winner of the Project Kairos “Innovator of Things” Award at IoT World 2017. “We are bringing to bear technology that was originally built for military and intelligence applications like surveillance and target acquisition and making it available for industrial, automotive and consumer product applications,” said Stuart Feffer, chief executive officer and company co-founder. “In 2015, we transitioned to be commercially focused and sunsetted all of our classified work.”
Today, Reality AI’s focus is to empower industrial, automotive and consumer-focused companies to deploy machine learning and artificial intelligence. “I would say that we are pretty unique. We offer a signal-processing–based approach to machine learning that generates code that can run on inexpensive processors — microcontrollers,” Feffer said.
Ahead of the IoT World conference, where Reality AI will be exhibiting, we caught up with Feffer to learn about what has happened with the firm in the past year and what it has planned for the rest of 2018.
What has happened with Reality AI since your firm won the Project Kairos “Innovator of Things” Award at IoT World 2017?
Feffer: That award brought us a lot of visibility, and that was certainly very helpful.
Over the last year, our focus has primarily been on getting new customers and moving forward with implementations.
At CES in January, we had an automotive customer of ours, Koito, known in the U.S. as North American Lighting, introduce a new product that features our technology: an adaptive beam headlight product. The high beams are on all night when you are moving at speed, but the headlights use an array of LEDs and can selectively turn on and off parts of the beams so they don’t blind an oncoming driver. Our technology detects when there is an oncoming vehicle.
And we just had a demonstration at the Siemens pavilion at Hannover Messe last week with the industrial valve manufacturer Ham-Let. Ham-Let makes valves primarily for the oil and gas process industries. The company has come up with a smart valve loaded with sensors that enable its customers to monitor the valve for safety and predictive-maintenance applications.
What is Reality AI’s contribution to the Ham-Let smart valve project?
Feffer: Our role in that technology is to help them use sound to understand what happens inside the valve. One of the sensors on the valve is a microphone. They are using our software to listen to the sound the valve makes as it opens and closes and, for the moment, to estimate how long it will be before the valve fails. As wear increases, the time the valve takes to open and close changes.
Siemens provides the data pathway and the application infrastructure, and we provide the AI that performs the detection in the audio stream. We are running on a gateway provided by Siemens that reports the findings of the sound analysis up to the cloud application, which runs on Siemens MindSphere. The valves’ owner or operator can then see what is happening across their operations.
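The pattern Feffer describes, analyzing the audio at the edge and pushing only the result up to the cloud application, can be sketched roughly as follows. This is a minimal Python illustration and not Reality AI’s or Siemens’ actual code; the band-energy features, the wear classes and the publish_finding stub are all assumptions made for the example.

```python
import json
import numpy as np

SAMPLE_RATE = 16_000   # Hz, assumed microphone sample rate
FRAME_SECONDS = 1.0    # analyze roughly one open/close event per frame

def band_energies(frame: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Coarse band energies of the frame's spectrum, standing in for whatever
    features a real signal-processing toolchain would generate."""
    spectrum = np.abs(np.fft.rfft(frame))
    return np.array([np.sum(b ** 2) for b in np.array_split(spectrum, n_bands)])

# Hypothetical per-class feature centroids, learned offline from labeled recordings.
WEAR_CENTROIDS = {
    "healthy":      np.array([9.0, 7.0, 4.0, 2.0, 1.0, 0.5, 0.3, 0.2]),
    "worn":         np.array([6.0, 6.5, 5.5, 4.0, 3.0, 2.0, 1.0, 0.5]),
    "near_failure": np.array([3.0, 4.0, 5.0, 5.5, 5.0, 4.5, 3.5, 2.5]),
}

def classify(frame: np.ndarray) -> str:
    """Nearest-centroid classification of one audio frame."""
    feats = band_energies(frame)
    feats /= np.linalg.norm(feats) + 1e-9
    return min(
        WEAR_CENTROIDS,
        key=lambda k: np.linalg.norm(feats - WEAR_CENTROIDS[k] / np.linalg.norm(WEAR_CENTROIDS[k])),
    )

def publish_finding(valve_id: str, label: str) -> None:
    """Stand-in for the gateway-to-cloud call; only the finding leaves the edge."""
    print(json.dumps({"valve": valve_id, "state": label}))

if __name__ == "__main__":
    # Simulated one-second microphone frame standing in for a real capture.
    frame = np.random.randn(int(SAMPLE_RATE * FRAME_SECONDS))
    publish_finding("valve-001", classify(frame))
```

The point of the sketch is only that the payload leaving the gateway is a few bytes of JSON rather than the raw audio stream.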
How representative is this valve monitoring use case of the work you do in the industrial sector?
Feffer: We are experts in the detection, classification and prediction of input signals. I think this [valve-monitoring project] is emblematic of the trends we see in industrial cases. Makers of equipment of all sorts used in an industrial process — valves, pumps, robotic assembly — are working to make those devices report on their status. The makers of such systems want to provide their customers with IoT-based insight into what is happening in the equipment and the process it serves. This trend is happening across types of equipment and industries.
Can you share some more examples of partnerships Reality AI has developed?
Feffer: We are working with Bosch and Cisco, and we are trying to develop partnerships with a few other large providers, as well. That is our pathway to the future: to work with these large companies that have platforms and hardware pieces that our software can run on.
How does Reality AI compete in terms of price?
Feffer: Most of the time, the solutions we are developing run in firmware locally on a device at the edge or in a gateway. Some approaches, like deep learning, require a fairly powerful processor, and sometimes even a GPU, to do what they do.
Our approach will often run on a microcontroller: an Arm Cortex-M3 or Cortex-M4, or one of the low-end Cortex-A-series parts. These are chips that cost single- to double-digit dollars per unit at scale, and some of them are even cheaper.
In industrial settings, one of the most significant challenges is getting the data off the machine. Wiring is expensive. Wireless can be prohibitive in many environments, especially if you are dealing with a high-bandwidth signal like vibration or sound. We provide a way to run the analysis on a cheap microcontroller that may already be on that equipment, so you don’t have to move all of that high-bandwidth signal off the machine to analyze it. You analyze it right there locally and send only the findings.
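To put rough numbers on that bandwidth argument, here is a back-of-the-envelope comparison; the sampling rate, sample width and reporting interval are illustrative assumptions rather than figures from Reality AI.

```python
# Illustrative comparison: streaming a raw vibration signal versus sending findings.
SAMPLE_RATE_HZ = 20_000    # assumed vibration sampling rate
BYTES_PER_SAMPLE = 2       # assumed 16-bit samples
FINDING_BYTES = 64         # a small JSON status message
FINDINGS_PER_MINUTE = 1    # assumed reporting interval of once per minute

raw_bytes_per_minute = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * 60
finding_bytes_per_minute = FINDING_BYTES * FINDINGS_PER_MINUTE

print(f"raw stream:    {raw_bytes_per_minute / 1e6:.1f} MB/min")
print(f"findings only: {finding_bytes_per_minute} bytes/min")
print(f"reduction:     ~{raw_bytes_per_minute / finding_bytes_per_minute:,.0f}x")
```

Under those assumptions, analyzing locally cuts the data leaving the machine from a couple of megabytes per minute to a few dozen bytes.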
For the headlight product with Koito, we are not running a whole autonomous driving stack. I don’t get to tote a server around in the trunk to do the processing for this headlight.
We have to do the processing in a control unit that fits the price point of a smart headlight.
That is the trick for that product. We are doing it in an environment that is constrained in terms of power, weight and price. That is what our technology is good at.
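As a rough illustration of the adaptive-beam behavior Feffer describes, the control unit mainly has to map a detected oncoming-vehicle bearing to the LED segments it should dim. The sketch below is hypothetical; the segment count, field of view and margin are assumptions, not details of Koito’s product.

```python
# Hypothetical adaptive high-beam masking: dim only the LED segments whose
# coverage overlaps the bearing of a detected oncoming vehicle.
NUM_SEGMENTS = 16        # assumed LED segments across the beam
BEAM_FOV_DEG = 30.0      # assumed horizontal field covered by the high beam
DIM_MARGIN_DEG = 2.0     # extra margin dimmed on either side of the vehicle

def segments_to_dim(vehicle_bearing_deg: float) -> list[int]:
    """Indices of LED segments to switch off for a vehicle at the given bearing
    (0 = straight ahead, negative = left, positive = right)."""
    seg_width = BEAM_FOV_DEG / NUM_SEGMENTS
    dim = []
    for i in range(NUM_SEGMENTS):
        seg_center = -BEAM_FOV_DEG / 2 + (i + 0.5) * seg_width
        if abs(seg_center - vehicle_bearing_deg) <= seg_width / 2 + DIM_MARGIN_DEG:
            dim.append(i)
    return dim

if __name__ == "__main__":
    # The detection step (the part Reality AI supplies) reports a vehicle 5 degrees to the left.
    print(segments_to_dim(-5.0))   # e.g. [3, 4, 5]
```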
What job titles do you tend to target with your technology?
Feffer: Our toolset is aimed at a working engineer. We expect our typical user to be someone trained, say, as a mechanical engineer, who understands the equipment they are working on.
They understand their instrumentation and data and the problems that need to be solved.
They are not, however, an expert in signal processing or machine learning. We allow that type of engineering user — somebody typically from an R&D group for a company making an automotive, industrial or consumer product — to put powerful algorithms to work without really being an expert in these areas.
We operate at the intersection of machine learning and signal processing. That is pretty rarefied. If you need somebody who is also an expert in the intersection of AI, signal processing and valves, that is a high ask.
How mature is your customer base in terms of framing IoT use cases for your software?
Feffer: We do see a fair number of customers with a mandate to use AI in a project, and they ask us what we can use AI for. We do engage with those customers, of course. But the ones that are far more successful have a clear idea of something to improve or a problem to solve.
What are your plans for Reality AI for the rest of the year?
Feffer: For the rest of 2018, we are focused on our customers and on finding more of them. We have some new additions to our technology that we are getting ready to roll out, and we hope to raise another funding round before year-end.