Featured guest post by Carl Moberg, CTO and co-founder of Avassa
The public cloud model succeeded in large part because it made software development and application operations dramatically easier than before. It brought with it a myriad of tools and practices designed to improve the quality of life of application teams, allowing them to avoid the time-consuming and mostly manual steps that used to be associated with deploying new applications.
Concepts like continuous integration and continuous deployment (CI/CD) and DevOps practices shifted the focus away from infrastructure toward an application-first mindset. A key technology underpinning this shift was the containerized application: a universal packaging format that allows developers to run the exact same software setup during development, through test, and into production. Containers provide not only a standard packaging format, but also standardized means to version, distribute, start, stop, upgrade, and collect logs from applications.
We now see a rapid uptake of containerized applications at the edge, and for good reason; below we dive deeper into three of those reasons.
Three reasons why containers make sense at the edge
Containers have emerged as a groundbreaking technology that enables software developers to package an application and its dependencies into a single, self-contained unit. By encapsulating the application code, libraries, and runtime environment into a single image, containers offer an isolated and consistent execution environment.
Meanwhile, the edge computing paradigm brings computation and data storage closer to the devices and sensors, reducing latency and enabling real-time processing. However, managing distributed applications at the edge poses unique challenges compared with centralized cloud environments.
A universal and self-contained packaging format
The container format allows developers to reliably package all required libraries for a specific application into one portable container image. Since applications bring their own libraries, the administrative task of managing libraries at the operating-system level is now gone. What used to be a manual, complex, and time-consuming task of tracking which libraries an application required and making sure they were installed on the OS before the application itself is now solved at an architectural level.
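As a minimal sketch of what this looks like in practice, the snippet below uses the Docker SDK for Python (the docker package) to start a self-contained image. The image reference is a hypothetical placeholder; the point is that nothing application-specific has to be installed on the host first.

```python
import docker

# Talks to the local container runtime (e.g. the Docker daemon).
client = docker.from_env()

# The image carries its own libraries and runtime environment; no
# application-specific packages are installed on the host OS first.
container = client.containers.run(
    "registry.example.com/edge/analytics:1.4.2",  # hypothetical image
    detach=True,
)
print(container.status)
```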
A second benefit of the container format is that it is easy to create multi-architecture packages, where a single build artifact supports multiple CPU architectures (such as x86 and ARMv8). Edge operators can therefore mix and match architectures according to their needs and budgets with no impact on the deployment process.
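For illustration, here is a hedged sketch of pulling a specific architecture variant of a multi-architecture image with the same Python SDK. The image reference is again hypothetical; omitting the platform argument lets the runtime resolve the variant matching the local CPU.

```python
import docker

client = docker.from_env()

# One image reference covers all architectures via its manifest list;
# "platform" pins a specific variant, e.g. for testing on an x86 laptop.
image = client.images.pull(
    "registry.example.com/edge/analytics",  # hypothetical image
    tag="1.4.2",
    platform="linux/arm64",  # or "linux/amd64" on x86 sites
)
print(image.attrs["Architecture"])
```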
Containers are easy and efficient to deploy to the edge

The container ecosystem provides a broad set of supporting technologies above and beyond the packaging format itself. These include a standardized way to publish container applications to repositories (including APIs) as well as to fetch container images by name and version. Container images can therefore be stored in public repositories (e.g. open source applications) as well as in private ones (e.g. internally developed applications). Either way, fetching applications into edge locations uses the same mechanisms regardless of location, which significantly reduces complexity.
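A rough sketch of that uniformity, again with the Python SDK: the registry host, repository names, and credentials below are assumptions, but the pull mechanism is identical for private and public sources.

```python
import os
import docker

client = docker.from_env()

# Private registry (hypothetical host and credentials): log in, then pull.
client.login(
    username="edge-deployer",
    password=os.environ["REGISTRY_PASSWORD"],
    registry="registry.example.com",
)
client.images.pull("registry.example.com/internal/app", tag="2.0.1")

# Public registry: the exact same pull mechanism, no login needed.
client.images.pull("nginx", tag="1.25")
```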
Many types of edge locations have limited or costly connectivity, so transporting large files into or out of edge sites may be slow or expensive. Edge environments therefore benefit greatly from efficient file transfer mechanisms that move only the bits that are actually required. The container image format is based on the concept of layers, where a layer represents the result of a specific step in building the container application. When pulling a new version of a container into an edge site, the transfer mechanism compares the layers of the current version with those of the next version and fetches only the layers that have changed. Used right, this simple feature reduces the transfer to the bare minimum of what actually changed between the two versions and completely avoids re-downloading applications or libraries that have not changed.
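One rough way to observe this layer-by-layer behavior is the low-level pull API in the Python SDK, which streams one status event per layer; layers already present locally are skipped rather than downloaded again. The image reference is hypothetical.

```python
import docker

client = docker.from_env()

# The low-level API streams per-layer progress events; unchanged layers
# that already exist locally are not transferred again.
for event in client.api.pull(
    "registry.example.com/edge/analytics",  # hypothetical image
    tag="1.4.3",
    stream=True,
    decode=True,
):
    if "id" in event and "status" in event:
        print(event["id"], event["status"])
```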
Standardized monitoring and observability

We have covered how the container ecosystem provides standardized and efficient packaging and distribution of container images, but there is more. To determine the health of an application, there are readiness and liveness probes. Readiness probes check whether an application is ready to start serving traffic. Liveness probes, on the other hand, check whether an application is in a healthy state and can perform its designated tasks. This matters because an application may be alive but, due to unforeseen circumstances (memory leaks, deadlocks), unable to function correctly and in need of administrative action (e.g. a restart). Both probe types can easily be integrated into a more comprehensive monitoring system, so that when either of them fails for a particular application in a particular location, an alarm can be raised.
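As a minimal sketch, here is how an application might expose separate liveness and readiness endpoints using only the Python standard library. The /livez and /readyz paths are common conventions rather than a fixed standard, and the READY flag stands in for real startup checks.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

READY = False  # flipped to True once startup work (caches, connections) is done

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/livez":
            # Liveness: the process is up and able to respond at all.
            self.send_response(200)
        elif self.path == "/readyz":
            # Readiness: only report healthy once we can actually serve traffic.
            self.send_response(200 if READY else 503)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    READY = True  # in a real application this follows successful initialization
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```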
Observability is the ability to dive deeper into the behavior of a specific application or component when something is suspected of not functioning correctly. Two common ways of doing this are turning on detailed logging and providing terminal access to the application's execution environment for live inspection. The common container runtimes on the market (Docker, Podman) attach to the log output of the applications running in containers and collect it for streaming or later analysis. This gives an external observer, such as a logging subsystem, a well-known way to remotely attach to a container runtime and tap into the logs of all applications running on it. For edge environments, this means centralized monitoring tools can scale with the number of sites by fetching logs and telemetry only on demand.
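To make the on-demand part concrete, here is a hedged sketch of attaching to a container's log stream with the Python SDK; the container name is an assumption.

```python
import docker

client = docker.from_env()

# Attach to a running container's logs on demand; nothing leaves the site
# until an operator or a central tool actually asks for it.
container = client.containers.get("edge-analytics")  # hypothetical name
for line in container.logs(stream=True, follow=True, tail=100):
    print(line.decode(errors="replace").rstrip())
```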
In conclusion, containers have revolutionized software deployment and are now making their mark at the edge. By providing a lightweight and portable execution environment, containers streamline development, enhance security and isolation, and enable efficient orchestration of distributed applications alongside hyper-converged solutions. As organizations embrace the power of containers, they can unlock new opportunities, drive innovation, and deliver exceptional experiences to their customers. The edge is becoming lovable for development and application operations teams, thanks to the transformative potential of containers.
Find us at NRF 2024: Retail’s Big Show
Specifically for the retail industry, containerization emerges as a game-changer, offering a transformative solution that enhances scalability and streamlines operations. The benefits of containerization in retail are many: it provides a consistent environment for applications to run seamlessly across various platforms, and it facilitates rapid deployment, scalability, and optimized resource utilization, ultimately leading to cost savings and increased productivity. Scale Computing will have demos available in their booth at this year's show.
Scale Computing: Booth #2316
Avassa: Booth #954