Hyperconverged infrastructure, or HCI, has become increasingly prevalent in recent years. Yet even among those with HCI solutions deployed, myths and misconceptions remain. Here, we debunk the five most common points of confusion and shed light on the often misunderstood world of hyperconverged infrastructure.
Myth: Building a DIY virtualization infrastructure is cheaper than purchasing an HCI solution
While the individual parts needed to create a virtualization infrastructure can be less expensive than acquiring certain HCI solutions, the initial purchase price of the physical components is only one small part of the story. It is a bit like building your own car from scratch versus buying one from a manufacturer: building from parts may be cheaper, but it does not come with the engineering, testing, and build quality that a manufactured car does (not to mention the warranty). The truth is, good HCI solutions make infrastructure easier to deploy, manage, and scale, resulting in a lower total cost of ownership.
By using automation and machine intelligence to handle many of the daily tasks associated with managing and maintaining virtualized resources, HCI can free up budget and staff for other tasks and projects. Depending on the hypervisor deployed or supported by the HCI vendor, HCI solutions can even eliminate hypervisor software licensing costs. From deploying in under an hour rather than days to scaling out seamlessly without downtime, the cost savings over DIY virtualization are dramatic, even in best-case DIY scenarios. With an HCI vendor solution, you get a thoroughly engineered product backed by enterprise support. With a DIY solution, the only guarantee is that if it breaks, you can keep the pieces.
Myth: HCI is more resource-intensive

Like everything in this industry, the answer is “it depends.” Some HCI offerings do leave fewer resources available right out of the box, but this is certainly not true of all HCI solutions. How many resources remain available depends on several factors, the most significant of which is whether a VSA-based storage architecture is used. VSAs, or virtual SAN appliances, emulate SAN storage for the third-party hypervisors designed to consume it, and one is required on each node of an HCI cluster. They can be resource-intensive, often requiring multiple CPU cores and anywhere from 24GB to 150GB or more of RAM per node, on top of the RAM consumed by the hypervisor. VSA-based HCI solutions may also require SSD or NVMe storage as a cache to compensate for their inefficient data paths, further contributing to RAM consumption. Three-way replication of data blocks is also often needed with VSA-based approaches to support large clusters and protect against data loss from multiple drive failures, and that replication consumes large amounts of raw disk space.
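To put that overhead in perspective, here is a minimal back-of-the-envelope sketch in Python. The specific figures (a 256GB node, a 32GB VSA, an 8GB hypervisor footprint, 20TB of raw disk) are illustrative assumptions drawn from the ranges cited above, not measurements of any particular vendor's product:

```python
# Back-of-the-envelope view of VSA-based HCI overhead on one node.
# All figures are illustrative assumptions drawn from the ranges cited
# in this article, not measurements of any specific vendor's product.

NODE_RAM_GB = 256          # total RAM installed in the node (assumed)
VSA_RAM_GB = 32            # RAM reserved by the VSA (24-150+ GB cited above)
HYPERVISOR_RAM_GB = 8      # RAM consumed by the hypervisor itself (assumed)

NODE_RAW_DISK_TB = 20      # raw disk capacity in the node (assumed)
REPLICATION_FACTOR = 3     # three-way replication of data blocks

workload_ram_gb = NODE_RAM_GB - VSA_RAM_GB - HYPERVISOR_RAM_GB
usable_disk_tb = NODE_RAW_DISK_TB / REPLICATION_FACTOR

print(f"RAM left for workloads: {workload_ram_gb}GB of {NODE_RAM_GB}GB")
print(f"Usable capacity after 3-way replication: "
      f"{usable_disk_tb:.1f}TB of {NODE_RAW_DISK_TB}TB raw")
```

At the high end of the cited range, where a VSA reserves 150GB of RAM, more than half the memory of a 256GB node can disappear into overhead before a single workload runs.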
However, not all HCI solutions are this resource-intensive. Some avoid the VSA-based storage approach entirely by implementing native block-level control of the physical storage devices in each node, which allows for more direct storage paths and eliminates the resource consumption inherent in the VSA-based approach. This native, “in-kernel” approach returns the CPU and RAM otherwise consumed by VSAs and controller VMs (CVMs) to the end user as resources available for running workloads rather than overhead.
Myth: HCI cannot cover the spectrum of enterprise to edge computing

Early on, many HCI vendors focused on the enterprise market, but the rise of edge computing has highlighted HCI as a vehicle for edge infrastructure as well. As addressed above, VSA-based HCI appliances can be resource-intensive just running the storage layer and hypervisor, which makes them nearly impossible to use on the small form factor appliances edge computing requires. It simply is not feasible to install a VSA-based HCI stack on an edge appliance with a resource footprint as small as 8GB of RAM; most VSA-based solutions cannot fit even their storage appliance in that footprint, much less actual edge workloads.
Unlike VSA-based appliances, HCI solutions with in-kernel storage use dramatically fewer resources (roughly 4GB versus the 32GB or more commonly consumed by VSAs) and can install and run on these edge appliances efficiently and effectively, leaving the bulk of CPU and RAM available for workloads. These vendors have the right architecture to answer the call of edge infrastructure: distributed, low-power edge devices that maintain fault tolerance and high availability, along with the ease of use and administration HCI is known for, making edge computing a cost-effective reality.
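A companion sketch makes the fit question concrete. The 8GB appliance and the 4GB and 32GB stack footprints are the illustrative figures cited above, not vendor specifications:

```python
# Will an HCI stack fit on a small edge appliance? Illustrative figures
# only: the 8GB appliance and the per-stack footprints come from the
# examples cited in this article, not from any vendor's datasheet.

EDGE_APPLIANCE_RAM_GB = 8

stack_footprints_gb = {
    "in-kernel HCI stack": 4,   # approximate in-kernel footprint cited above
    "VSA-based HCI stack": 32,  # commonly seen VSA footprint cited above
}

for stack, footprint in stack_footprints_gb.items():
    headroom = EDGE_APPLIANCE_RAM_GB - footprint
    if headroom > 0:
        print(f"{stack}: fits, with {headroom}GB left for edge workloads")
    else:
        print(f"{stack}: does not fit ({-headroom}GB short "
              f"before any workloads run)")
```

Under these assumptions, the in-kernel stack leaves half of the appliance's memory for workloads, while the VSA-based stack exceeds the appliance's total memory before any workloads are even considered.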
Myth: HCI is just software-defined storage with a hypervisor

While many software-defined storage (SDS) vendors have begun offering HCI solutions that fit this very description, HCI is far more than SDS: it is a means of simplifying infrastructure at every level through a cohesive, efficient architecture. SDS solutions may aim to simplify storage, but their vendors often rely on VSAs to emulate SAN for the hypervisors they support, resulting in a solution that is more complex than streamlined. True HCI solutions, by contrast, integrate the hypervisor directly rather than relying on third parties, allowing for more efficient data paths and resource utilization. These purpose-built HCI appliances go far beyond SDS to automate configuration and management tasks, including automatic storage pooling, rolling updates, and self-healing, providing a truly simplified virtualization solution.
Myth: A single-vendor HCI solution is a risky business move

Some IT professionals dislike the idea of having their entire infrastructure stack come from a single vendor, believing diversification is the safer move should one vendor fail to live up to its promises. While “all your eggs in one basket” is a fair concern, many businesses have discovered that diversifying their vendor portfolio brings inherent cost and complexity in both architecture and support: the exact opposite of the efficiency they need to compete in today's world.
One of the reasons HCI emerged was a need to overcome the problems caused by combining multiple vendors' solutions into a single stack. An often-cited challenge among IT professionals is the finger-pointing between vendors that occurs when a customer calls for support. Vendors often waste days or longer debating ownership of the problem, leaving the customer without resolution.
When a single vendor creates the entire stack from a clean sheet of paper, as some HCI vendors have, it can reach levels of integration and automation that are simply not feasible in legacy mixed architectures, alleviating many of the challenges that come from having too many cooks in the kitchen. Consider, by contrast, HCI solutions that use third-party hypervisors and therefore require separate system updates for the hypervisor, the storage, and the hardware: any one vendor's update can cause issues with the other vendors' components, and multi-vendor system updates have historically been lengthy and arduous. With a single-vendor HCI solution and the automation it brings, IT administrators can focus on workloads rather than being bogged down managing infrastructure.
While hyperconverged infrastructure does indeed simplify IT operations, it is often shrouded in myths of complexity among the uninitiated.
HCI has proven its value in simplifying the implementation and management of both data center and edge computing. It is not just a fad; it is a critical technology for organizations looking to deploy, manage, and scale virtualization infrastructure effectively and efficiently. With the continued growth of the HCI market from the edge to the data center and beyond, it is clear that hyperconverged infrastructure is here to stay.
If you are interested in learning more about Scale Computing’s approach to hyperconverged infrastructure, please schedule a demo.