Should We Use Software Containers or Serverless for IoT?
- Containers can help enterprises avoid the cost overruns that occur in serverless hosting models when more IoT events reach public cloud services than anticipated.
- That makes them a viable alternative to serverless hosting structures in many cases, although management must maintain expertise in both disciplines.
- Containers may fall down as a strategy in cases where event traffic is processed at the local edge rather than by on-premises edge servers.
There’s a stark difference between IoT events processing and traditional transaction processing. This is reflected in the tools and cloud services that IoT uses.
One of the most complex dilemmas to resolve is choosing between serverless structures – or functional computing – and containers.
Either choice can be implemented as pure cloud, on-premises or hybrid services. But making the right selection here will be vital to securing the best foundations for your IoT software application, in terms of both cost and performance.
To assess serverless versus the container option, architects must consider the following:
- The volume and distribution of generated IoT events – this will establish the most economical option for processing.
- The length of the control loop, which refers to the maximum anticipated delay between generating a real-time IoT event and receiving the response.
- The degree to which the event processing applications can be dynamically distributed between premises edge, cloud, and data center.
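The control-loop criterion above is simple to quantify: time the gap between generating an event and receiving its response. A minimal sketch, in which `threshold_alarm` is a made-up stand-in for whatever edge or cloud tier actually processes the event:

```python
import time

def control_loop_length(handle_event, event):
    """Return the handler's response and the measured control-loop delay."""
    start = time.monotonic()
    response = handle_event(event)          # dispatch to the processing tier
    return response, time.monotonic() - start

# Hypothetical handler standing in for an edge/cloud processing tier.
def threshold_alarm(event):
    return {"alarm": event["temperature"] > 80.0}

response, latency = control_loop_length(threshold_alarm, {"temperature": 92.5})
# `latency` must stay below the application's maximum anticipated delay.
```

In a real deployment the measurement would span the network hop to the hosting point, which is exactly where serverless cold starts and container placement make the difference.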
Let’s explore these points in more detail, specifically in the context of the serverless/container choice.
We often think of transient microservices – often dubbed “functions” or “lambdas” – as serverless, but this description can be misleading. Where transient microservices excel is in removing the need to keep a copy of event-processing software running whenever new events are expected.
Instead, transient microservices enable the software to be loaded on demand, when a suitable event needs it. In the cloud, this abolishes the need for explicit server, container or VM management, although there’s still a server underneath. On-premises functional computing, by contrast, relies on specific server hosting. You can also host functional components in the cloud without “serverless” features, on VMs or containers, with the right tools.
Serverless hosting – which abstracts the runtime environment into high-level APIs to save on resources – is best with low event volumes. This means that events of a given type, and requiring a given application to process them, don’t occur very often. As event volumes rise, the time between consecutive events falls, and at some point it becomes more efficient to simply keep the software loaded, perhaps in a container.
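That break-even point can be estimated roughly. The sketch below compares per-invocation serverless charges against a flat monthly container rate; every price here is a made-up placeholder, not any provider's actual tariff.

```python
# Illustrative pricing only: real per-invocation and per-GB-second rates vary
# by provider, region and memory allocation.

def monthly_cost_serverless(events_per_month,
                            price_per_invocation=0.0000002,
                            price_per_gb_s=0.0000166,
                            gb_s_per_event=0.25):
    per_event = price_per_invocation + price_per_gb_s * gb_s_per_event
    return events_per_month * per_event

def monthly_cost_container(flat_rate=30.0):
    return flat_rate                  # persistent host: fixed, predictable

def cheaper_option(events_per_month):
    serverless = monthly_cost_serverless(events_per_month)
    return "serverless" if serverless < monthly_cost_container() else "container"

print(cheaper_option(100_000))      # low event volume
print(cheaper_option(50_000_000))   # high event volume
```

The crossover moves with the real rates, but the shape of the decision is the same: serverless cost scales linearly with event volume, while a container's cost is flat until its capacity is exceeded.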
There are specific issues to consider depending on whether the enterprise is using cloud provider serverless, on-premises serverless tools or container/VM hosting of serverless tools:
- If cloud provider lambda/functional serverless computing is to be considered, the cold-start time to load and run a function can be a significant contributor to the length of the control loop. That means this approach may not be suited to applications that require low latency.
- Cloud provider serverless hosting may be ideal if events are widely distributed and infrequent – typically too much so for on-premises event processing to be practical – so long as control-loop latency can be controlled.
- Consider using a serverless software tool (such as Kubeless or Apache OpenWhisk) for on-premises hosting if there are a large number of events but also many different event types, each requiring its own special handling. While it might be that no single event occurs often (which is why serverless is useful), there will likely be a continuous stream of events of some kind, so hosting functions in your own container/VM/server is justified.
- Consider using the same serverless software tools, with the same pattern of events as in the previous point, but run within cloud containers or IaaS, perhaps because on-premises hosting isn’t practical or because the sources are mobile or too distributed. This way, it’s possible to control cold-start latency more efficiently, which could enable serverless processing for more time-critical IoT applications.
No matter how “serverless” processes are run, there will be a delay in loading and running them. If serverless services are run from the public cloud, bear in mind there will be a significant cost overrun if events traffic is higher than expected. Consider container use rather than serverless if this is likely to be a major issue.
Software Containers as a Framework for IoT
Containers are in many ways a halfway point between function/serverless computing and monolithic applications, and that makes them the best overall model for IoT application hosting. While it’s possible to run almost anything in a container, best container practices call for at least a form of stateless behavior, allowing IoT components to be scaled and replaced in case of a failure.
Container services in the cloud offer a fixed and predictable cost for IoT applications, providing that event loads are constrained to the capacity of the hosts, including scaling. This is because containers are typically persistent and, with proper application design, will allow for dynamic scaling and resilience, so long as container deployment and orchestration options are optimized. There’s no cold-start delay with a containerized IoT application and, because users control scaling and resilience through orchestration, there are no cost surprises.
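The persistent, stateless design that makes containers predictable can be sketched as a resident worker process. In this illustrative model (all names hypothetical), the process stays loaded – so there is no cold start – and keeps no state between events, so an orchestrator can scale it out or replace a failed instance freely.

```python
import queue

def process(event):
    # Pure function of the event: any state an event needs should live in an
    # external store, keeping the container itself stateless and replaceable.
    return {"device": event["device"], "ok": event["reading"] < 100}

def run_worker(events, results, max_events):
    # A real containerized worker would loop forever on its event source;
    # we bound the loop here so the sketch terminates.
    for _ in range(max_events):
        results.append(process(events.get()))

q = queue.Queue()
for reading in (42, 187):
    q.put({"device": "sensor-1", "reading": reading})

out = []
run_worker(q, out, 2)
print(out)
```

Because the process is already resident, per-event latency is just the handler's execution time, and scaling is a deployment decision made in orchestration rather than a per-invocation charge.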
Software containers also support agility. Whether run on-premises (including the premises edge), in the cloud on container and managed container services, on VMs in the cloud or in the data center, containers facilitate IoT component deployment across the full range of hosting options. That creates a unified operations environment that reduces both operational expenditure and the risk of errors.
The big question for IoT architects of containerized applications is the extent to which this dynamism in deployment is valuable to their goals.
Containers introduce some overhead for system and event processing in local edge applications. As such, it may be wise to deploy local-edge IoT components on bare metal to conserve resources and control latency, particularly given that most IoT applications have fairly static edge deployments with few backup resources held in these locations.
In such situations, latency-critical IoT control loops won’t benefit much from containerization. The use of software containers in deeper parts of the IoT application will instead depend on traditional variables – like component scalability, redeployment and the distribution between cloud and data center.
If events are processed at the local edge and containers are desirable for reasons of uniform application packaging, a simple Docker container system will suffice if the set of events is homogeneous and can be handled by a single system. If multiple systems are available, perhaps with each having a primary set of event sources to support, then there’s a benefit to setting up all local edge resources as a resource pool, in a Kubernetes cluster, and using Kubernetes to orchestrate deployment, scaling and redeployment.
However, only enterprises with Kubernetes experience in-house should consider it for IoT because of its growing complexity. Orchestration tools like Docker Swarm are more appropriate where limited or no in-house Kubernetes expertise exists.
A good rule of thumb is: if you don’t use Kubernetes in the cloud or data center, avoid it in IoT wherever possible.
Perhaps the most important point in the serverless/container debate for IoT hosting is that expertise in both areas is critical to making the right choice. Companies that have never developed serverless applications will need to hire or train qualified people, and companies without container experience should do the same. Whichever hosting model is deployed, IoT applications will almost certainly require experience in event processing. Enterprises must seriously consider whether staff need to be augmented, or whether existing personnel can be trained to be proficient in both areas.