Edge Computing: Enabling Real-Time Insights and Actions
In enterprise IoT architectures, cloud computing can play a significant role in organizing, storing and analyzing the variety of data collected from IoT sensors. However, as companies start to use IoT for a broader array of purposes, including management of mission-critical or industrial applications, there will be a growing need for real-time analytics and faster decision-making to occur directly on the IoT device, or somewhere closer to it.
For example, an autonomous car or a critical piece of equipment in a partially automated factory may require an instant, near-zero-latency response to detected conditions, as human lives or a business's livelihood may hang in the balance. To enable necessary action to be taken more quickly, perhaps autonomously, companies deploying IoT for such purposes may increasingly turn to the concept of edge computing.
Generally, edge computing is the concept of processing data on the connected device, or as close to that device as possible, but the term has no universally agreed-upon definition.
Phil Ressler, CEO of IoT sensor platform company Sixgill, said, “Edge computing is the notion of processing data and conducting analytics on the endpoint device itself, or somewhere else at the edge of the enterprise network, as opposed to sending the data out to a data center.”
David Linthicum, chief cloud strategist at Cisco Systems, shares that view. “By eliminating the distance and time it takes to send data to centralized sources, we can improve the speed and performance of data transport, as well as devices and applications on the edge,” he wrote in a blog post.
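As a concrete, if simplified, illustration of that idea, the sketch below shows an edge node that evaluates every sensor reading locally and sends only significant events upstream, rather than streaming raw data to a data center. The names, threshold and uplink stub are hypothetical, not any vendor's implementation.

```python
import random
import time

# Hypothetical vibration threshold for a factory machine; a real
# deployment would tune this value to the equipment being monitored.
VIBRATION_LIMIT = 7.0

def read_sensor() -> float:
    """Stand-in for a real sensor driver; returns a vibration reading."""
    return random.uniform(0.0, 10.0)

def send_to_cloud(event: dict) -> None:
    """Stand-in for an uplink (MQTT, HTTPS, etc.); here it just prints."""
    print(f"uplink -> {event}")

def edge_loop(samples: int = 50) -> None:
    """Evaluate each reading on the device; only exceptions leave it."""
    for _ in range(samples):
        reading = read_sensor()
        if reading > VIBRATION_LIMIT:
            # React immediately at the edge; no round trip to a data center.
            send_to_cloud({"alert": "vibration_high", "value": round(reading, 2)})
        time.sleep(0.01)

if __name__ == "__main__":
    edge_loop()
```

The design choice is the point Linthicum makes: the latency-sensitive decision (is vibration too high?) never waits on the network, while the cloud still receives the events worth keeping.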
Telecom service provider AT&T put a slightly different spin on it in a 2017 white paper, describing edge computing as “placement of processing and storage capabilities near the perimeter (i.e., “edge”) of a provider’s network.” That definition suggests processing occurring not on the enterprise premises, but at a location in carrier’s network, such as a 5G cell tower close to an enterprise customer, a take that befits AT&T’s mission to serve enterprises as a provider of managed services. (AT&T, according to a Light Reading report, is planning to test enterprise use cases for edge computing later this year. The same outlet reports Verizon recently tested edge computing on its 5G network in Houston and determined that edge helped them cut latency in half.)
Sixgill’s Ressler acknowledged, “There’s not a binary definition of where the edge begins and where the edge ends.”
There are other terms floating around in the edge computing stew, such as edge cloud, which is similar to AT&T’s definition; fog computing, which was spun by Cisco as a way of describing a standardized model for edge computing; and mist computing, the new-age-sounding notion that puts processing in between the cloud and edge — within a sensor, for instance.
If all of this sounds more like the decades-old concept of distributed computing, with Ethernet and ARPANET being prominent examples, there’s a reason for that. Edge computing is one of the latest ways of applying distributed computing philosophy and practice, according to Eric Simone, CEO of ClearBlade, a company that has been talking up the benefits of edge computing for several years.
Many factors have converged to make this possible. Ten years ago, sensors were neither as small nor as computationally powerful as they are today. Because a sensor can now crunch the numbers on the data it collects and deliver actionable data or situational awareness to the edge and on to the cloud, data can be analyzed at multiple levels simultaneously. Manufacturers will need to use the data collected at the cloud and edge levels to identify updates that make sensor algorithms more efficient, and many of those updates could be made automatically by AI modules built into cloud and edge processes.
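One way to picture that feedback loop is a cloud process that recomputes an anomaly threshold from data pooled across many edge nodes and pushes the updated parameter back down to each device. The sketch below is hypothetical; the class, function names and the statistical rule are illustrative assumptions, not drawn from any product.

```python
import statistics

def retrain_threshold(history: list[float], sigmas: float = 2.0) -> float:
    """Cloud side: derive a new anomaly threshold from pooled edge data."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return mean + sigmas * stdev

class EdgeNode:
    """Edge side: flags anomalies with whatever threshold the cloud last pushed."""

    def __init__(self, threshold: float):
        self.threshold = threshold

    def update_algorithm(self, new_threshold: float) -> None:
        # Over-the-air parameter update pushed down from the cloud.
        self.threshold = new_threshold

    def is_anomaly(self, reading: float) -> bool:
        return reading > self.threshold

# The cloud aggregates readings reported by many edge nodes ...
pooled = [4.1, 3.9, 4.3, 4.0, 4.2, 9.8, 4.1]
node = EdgeNode(threshold=7.0)
# ... and automatically tightens or loosens each node's algorithm.
node.update_algorithm(retrain_threshold(pooled))
print(node.is_anomaly(9.8))  # True with the recomputed threshold (~8.9)
```

In this loop, the edge keeps making split-second decisions on its own, while the cloud's wider view of the fleet quietly improves how those decisions get made, the multi-level analysis described above.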