Edge Computing: Enabling Real-Time Insights and Actions
Simone, who described his definition of edge computing as more restrictive than most, said the term should apply only to processing happening “on the device that generated the data, or that needs that data.” Presumably, data processing that takes place at a nearby gateway or edge device would be classified as near-edge, according to Simone.
“My definition of edge computing is if you have a solution that runs on the cloud or elsewhere, and then you can take that solution and sync it completely with a device in the car, on a train, in a factory, and run it there,” he added. “That means standardization of the way you transmit not just data, but the entire system.”
That description edges, pun intended, toward the concept of firmware, the software that enables basic hardware operations on a wide variety of devices. Edge computing, he said, is different, having more to do with processing data to run application-level solutions, though evolving device firmware, computing power and memory all complement the practice. Ultimately, he acknowledged that concepts such as edge cloud, “near edge” computing and firmware all deliver their own value; they just aren’t edge computing.
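To make that portability criterion concrete, consider a minimal sketch, with hypothetical names and a made-up RUNTIME_TARGET deployment flag, of application logic written once and run unchanged either in the cloud or on the device itself:

```python
import os

def detect_overheat(temp_c: float, limit_c: float = 90.0) -> bool:
    """Application-level logic, written once and deployed anywhere."""
    return temp_c > limit_c

def run(readings):
    """The same loop, whether it executes in a cloud service or on the device."""
    return [r for r in readings if detect_overheat(r)]

if __name__ == "__main__":
    # RUNTIME_TARGET is a hypothetical deployment flag; the application
    # logic above does not change between "cloud" and "device".
    target = os.environ.get("RUNTIME_TARGET", "device")
    print(f"[{target}] overheat readings: {run([72.5, 91.3, 88.0, 95.1])}")
```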
Why Edge Computing Is Hot
While the definition of edge computing may be fluid, most agree on the benefits it provides within enterprise IoT deployments and on why it makes sense for those IoT architectures to move in this direction. Gartner anticipates that by 2025, 75 percent of data processing will move to the edge, up from 10 percent in 2018.
It remains to be seen whether edge computing adoption will grow that quickly, but the technology’s most important benefit is the ability to make decisions in real time. “You don’t have any latency factors other than those on the device itself,” said Sixgill’s Ressler, who added that mission-critical and industrial IoT applications increasingly require operational decisions to be made on the sensors themselves “in a sub-second or millisecond time frame.”
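What that looks like in practice can be sketched simply. The sensor readings and shutdown threshold below are hypothetical, but because the decision loop never leaves the device, its latency is bounded by the device itself rather than by any network:

```python
import random
import time

def read_vibration_mm_s() -> float:
    """Stand-in for an on-device sensor read (values are hypothetical)."""
    return random.uniform(0.0, 12.0)

def decide(reading: float, limit_mm_s: float = 8.0) -> str:
    """The operational decision, made right where the data originates."""
    return "SHUTDOWN" if reading > limit_mm_s else "OK"

# No network round trip: decision latency is bounded by the device itself.
for _ in range(5):
    start = time.perf_counter()
    action = decide(read_vibration_mm_s())
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"decision={action} latency={elapsed_ms:.3f} ms")
```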
While 5G mobile broadband is increasingly being positioned to help IoT devices achieve latency of no more than 1–2 milliseconds between devices and clouds, it still carries the potential for a connectivity failure at a critical moment.
Similar thinking applies to the benefits of edge computing in an IoT environment where the devices are remotely located or distributed over broad geographies. This describes everything from construction sites and mines to autonomous vehicles, railroads and other transportation and logistics fleets.
“There’s a benefit if the device is remote and you’re able to deploy the data application functionality on the same device that is collecting the data,” Ressler said. While connectivity is very important in these use cases, he said, edge computing on the device itself means that if that connectivity is intermittent, the IoT-connected device “can operate on its last known information state” to enable key decisions to be made.
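A minimal sketch of what operating on the last known information state might look like, with a hypothetical endpoint and state payload: the device tries to refresh from the cloud and, if the link is down, keeps deciding on whatever it last received:

```python
import json
import urllib.request

# State the device was deployed with (contents are hypothetical).
LAST_KNOWN_STATE = {"speed_limit_kph": 40}

def fetch_remote_state(url: str) -> dict:
    """Try to refresh operating state from the cloud over a flaky link."""
    with urllib.request.urlopen(url, timeout=0.5) as resp:
        return json.load(resp)

def current_state(url: str = "https://example.invalid/fleet/state") -> dict:
    """Return fresh state when connected, the last known state when not."""
    global LAST_KNOWN_STATE
    try:
        LAST_KNOWN_STATE = fetch_remote_state(url)
    except OSError:
        pass  # connectivity dropped: keep operating on the last known state
    return LAST_KNOWN_STATE

print(current_state())  # while offline, falls back to {'speed_limit_kph': 40}
```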
Many factors have converged to make this possible. Ten years ago, sensors were neither small enough nor powerful enough to do this kind of work. Because a sensor today can crunch the numbers on the data it collects and pass actionable data or situational awareness up to the edge and on to the cloud, data can be analyzed at multiple levels simultaneously. Manufacturers can use the data collected at the cloud and edge levels to identify updates that make sensor algorithms more efficient, and many of those updates could even be applied automatically by AI modules built into cloud and edge processes.
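That closed loop, with sensors summarizing locally while the cloud tunes their algorithms, can be sketched as follows; the threshold and retuning rule are hypothetical stand-ins:

```python
import statistics

class Sensor:
    """Crunches its own numbers locally and reports only a summary upward."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def summarize(self, samples: list[float]) -> dict:
        mean = statistics.fmean(samples)
        return {"mean": mean, "alert": mean > self.threshold}

    def apply_update(self, new_threshold: float) -> None:
        """An algorithm update pushed back down from the edge or cloud."""
        self.threshold = new_threshold

def cloud_retune(summaries: list[dict]) -> float:
    """Hypothetical cloud-side tuning: relax the threshold when readings
    that trip alerts turn out to be normal across the whole fleet."""
    fleet_mean = statistics.fmean(s["mean"] for s in summaries)
    return fleet_mean * 1.2

sensor = Sensor(threshold=50.0)
summaries = [sensor.summarize([48.0, 52.0, 51.0]) for _ in range(3)]
sensor.apply_update(cloud_retune(summaries))  # the loop closes automatically
print(f"retuned threshold: {sensor.threshold:.1f}")
```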