This past May, Scale Computing announced the release of Scale Computing Fleet Manager, designed to help customers securely monitor and manage their entire fleet of clusters running Scale Computing HyperCore. One of the many aspects that make SC//HyperCore so unique and powerful is its ability to seamlessly and programmatically deploy both virtual machines and containers at scale across widely distributed environments, increasingly out at the network edge, where more data and applications now reside.
In this post, we sit down with Dave Demlow, Scale Computing’s VP of Product Strategy, to dig into how the new capabilities of SC//Fleet Manager and SC//HyperCore enable customers to fully leverage containers in their edge environments, helping them improve their operational efficiency and assert greater control over their growing container ecosystems.
What exactly is a container and how does this help IT administrators in their daily operations?
A container is a standard unit of software that packages application code together with all of its dependencies, so the application can run independently and be deployed consistently and easily across a variety of computing environments. Containers continue to see widespread adoption as enterprises deploy container-based applications virtually everywhere: from the data center to the cloud and now out to the edge.
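To make that portability concrete, here is a minimal sketch using the Docker SDK for Python (`pip install docker`); the `hello-world` image is just a public example, and because a container image bundles everything the application needs, the same call behaves identically on any host with a container runtime:

```python
# Minimal sketch using the Docker SDK for Python.
# "hello-world" is a public example image; since the image packages the
# application with all of its dependencies, this same call runs identically
# on a laptop, a data-center server, or an edge node.
import docker

client = docker.from_env()  # connect to the local Docker daemon
output = client.containers.run("hello-world", remove=True)
print(output.decode())      # the container's stdout
```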
What is cloud-init and how does SC//HyperCore leverage it for deploying containers?
Cloud-init is a powerful open-source technology that allows customers to take common VM templates and provide them with their unique configuration information at first boot via scripting, allowing for easy mass provisioning of customized VMs. The latest version of SC//HyperCore now includes REST APIs that enhance the speed and ease with which users can deploy VMs and containers at scale using cloud-init.
Leveraging cloud-init lets us pass all of the necessary configuration into a virtual machine that would otherwise have to be applied manually. Whether that’s configuring a hostname, creating users and groups, setting up SSH keys and IP information, or running scripts that provision applications, all of these time-consuming and error-prone tasks can be defined in configuration files and applied with a simple script.
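As a rough sketch of what such a configuration might look like, here is a cloud-init user-data document covering those tasks, embedded in a Python string so it can later be submitted programmatically; the hostname, user name, key, and script URL are all illustrative placeholders:

```python
# A minimal cloud-init user-data document covering the tasks described
# above: a hostname, a user with group membership and SSH access, and a
# first-boot provisioning script. All values are illustrative placeholders.
USER_DATA = """#cloud-config
hostname: edge-site-042
users:
  - name: opsadmin
    groups: [sudo]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example-public-key
runcmd:
  - [sh, -c, "curl -fsSL https://provision.example.internal/setup.sh | sh"]
"""
```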
This means that IT administrators no longer need to manually create and customize each individual VM; instead, they can programmatically provision hundreds or even thousands of virtual machines, each with its own settings, via a simple script. Not only does this save an enormous amount of time, but it also helps improve the overall security and performance of your infrastructure by vastly reducing the opportunity for manual errors.
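To give a feel for what that looks like in practice, here is a hedged sketch of a provisioning loop that injects per-site cloud-init data through a REST API. The endpoint path, payload fields, template name, and credentials are illustrative assumptions rather than the documented SC//HyperCore API, so consult the actual REST API reference for the real calls:

```python
# Hypothetical bulk-provisioning loop: create one VM per site from a base
# template, injecting per-site cloud-init user-data. The endpoint, payload
# fields, template name, and credentials are assumptions, NOT the
# documented SC//HyperCore API.
import requests

CLUSTER = "https://hypercore.example.internal"
AUTH = ("admin", "password")  # placeholder credentials

USER_DATA_TEMPLATE = """#cloud-config
hostname: {hostname}
"""

sites = [f"edge-site-{n:03d}" for n in range(1, 301)]  # 300 example sites

for site in sites:
    payload = {
        "template": "ubuntu-22.04-base",  # assumed base VM template
        "vmName": site,
        "cloudInitData": {                # assumed field names
            "userData": USER_DATA_TEMPLATE.format(hostname=site),
        },
    }
    resp = requests.post(f"{CLUSTER}/rest/v1/VirDomain",  # assumed endpoint
                         json=payload, auth=AUTH)
    resp.raise_for_status()
    print(f"provisioned {site}")
```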
This also applies to running containers. A container workload still requires an operating system to be installed and configured with an appropriate container runtime, such as Docker. It then needs runtime information about which containers to run, or the installation of agents, connectors, or orchestration tools such as Kubernetes, in order to connect into a broader container management system that can deploy and update containers over time.
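A hedged sketch of that bootstrap in cloud-init terms might look like the following, installing Docker and launching a single container at first boot; the package name and image are placeholders for whatever your distribution and application actually use:

```python
# Illustrative cloud-init user-data that turns a fresh VM into a container
# host at first boot: install Docker, enable the service, and start one
# container. The package name and image are placeholders.
CONTAINER_HOST_USER_DATA = """#cloud-config
package_update: true
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
  - docker run -d --restart unless-stopped --name edge-app example/edge-app:1.0
"""
```

The same document could instead install the agent or connector for a container management system, so the new VM registers itself with the broader control plane on first boot.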
How are containers being utilized at the edge?
The edge is a complex ecosystem of devices and applications that either run on local infrastructure or have components that are run or managed in the cloud. Many of these edge applications continuously collect data from sensors, cameras, or other connected IoT devices over networks, using a variety of different protocols, and increasingly they are deployed as containers. So the ability to monitor and manage all of these containers in a systematic, cloud-like manner will become essential, especially for industries like manufacturing and retail that are investing heavily in applications like inferencing, computer vision, and AI/ML.
What other container-based edge use cases does this solution address?
Another advantage of using containers across edge environments is the ability to use policy-driven updates to roll out new applications and new versions. For example, some container management systems are geared toward deploying machine learning workloads, such as computer vision applications. So in addition to deploying the containers themselves, a machine learning or computer vision application needs the ability to frequently and seamlessly update the model it is running, ideally without redeploying the entire container. Scale has worked to integrate a variety of container management and orchestration platforms to address a wide range of edge use cases.
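As a generic illustration of that pattern (not any vendor-specific mechanism), the application inside the container can poll a model registry and swap the model file in place, so new model versions ship without redeploying the container; the registry URL, version file, and paths here are assumptions:

```python
# Generic model hot-reload pattern: poll a registry for the current model
# version and swap the model file in place, so model updates do not require
# redeploying the container. The URL, version file, and paths are assumptions.
import time
import urllib.request

REGISTRY = "https://models.example.internal/edge-vision"  # assumed registry
MODEL_PATH = "/models/model.onnx"                         # assumed local path
current_version = None

def fetch(path: str) -> bytes:
    with urllib.request.urlopen(f"{REGISTRY}/{path}") as response:
        return response.read()

while True:
    latest = fetch("LATEST").decode().strip()  # e.g. "v1.4.2"
    if latest != current_version:
        with open(MODEL_PATH, "wb") as f:
            f.write(fetch(f"{latest}/model.onnx"))  # inference code reloads this
        current_version = latest
        print(f"loaded model {latest}")
    time.sleep(60)  # poll once a minute
```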
What types of container technologies does SC//Fleet Manager work with?
The SC//Fleet Manager and SC//HyperCore solutions are both container and “control plane” agnostic, and as such we support solutions from all of the major public cloud providers, including AWS, Microsoft Azure, Google, and IBM, as well as the leading container technologies such as Docker and Kubernetes. We also support independent cloud management consoles and container management systems like Avassa, IBM Edge Application Manager (IEAM), and Portainer for managing Docker deployments across fleets of SC//Platform-based systems.
How does SC//Fleet Manager deploy multiple containers across an entire fleet?
This is the real power of SC//Fleet Manager and why we believe it can effectively future-proof your edge infrastructure. Let’s say you want to deploy or update many different containers, or you want the ability to rapidly deploy new applications, and you need to do that across an entire fleet; that’s where you’ll likely want to integrate some type of edge container management system. There are various systems that can help you deploy, manage, and update containerized applications, whether they’re running Docker or Kubernetes, across an entire fleet of SC//HyperCore clusters.
For example, using Google Anthos, Google’s managed Kubernetes distribution, on SC//HyperCore-based fleets, you can deploy applications and change configurations from a centralized, cloud-based interface. From a developer standpoint, you get a standard Kubernetes API interface and can direct commands and API calls at Google’s control plane across multiple clusters, regardless of location. There’s a similar cloud control-plane capability for Azure called Azure Arc: you can run on-premises Kubernetes clusters on SC//HyperCore, connect them into the Azure control plane, and centrally manage and deploy applications across your entire edge fleet.
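To make the “one API, many clusters” idea concrete, here is a hedged sketch using the official Kubernetes Python client (`pip install kubernetes`) to apply the same Deployment across several kubeconfig contexts; the context names and image are placeholders, and with Anthos or Azure Arc the cloud control plane brokers this kind of rollout for you:

```python
# Hedged sketch: push the same Deployment to several clusters through the
# standard Kubernetes API using the official Python client. Context names
# and the image are placeholders; a control plane like Anthos or Azure Arc
# performs this kind of fan-out centrally instead.
from kubernetes import client, config

DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "edge-app"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "edge-app"}},
        "template": {
            "metadata": {"labels": {"app": "edge-app"}},
            "spec": {"containers": [
                {"name": "edge-app", "image": "example/edge-app:1.0"},
            ]},
        },
    },
}

# One kubeconfig context per edge cluster; names are illustrative.
for context in ["edge-store-001", "edge-store-002", "edge-store-003"]:
    config.load_kube_config(context=context)
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=DEPLOYMENT)
    print(f"deployed edge-app to {context}")
```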
If you’re a Scale Computing customer or partner and have other questions about how to deploy, monitor and manage containers in your edge environment, check out the Scale Computing User Community. You can also find more information about managing and deploying containers at our Scale Computing Containers page and schedule a demo here.