One of the best parts of my job is the constant conversations with customers about architecting great infrastructure solutions. I have always had a passion for talking with customers to assemble the “jigsaw puzzle pieces” into something unique for each of them. One common theme in recent conversations is: what’s next? If we aren’t careful, today’s buzzword is tomorrow’s Trough of Disillusionment. With our industry evolving at a velocity that seems to increase every year, it is getting harder and harder to keep up. Because of this trend, it is common to lean on trusted advisors in our industry for guidance. With that in mind, what are the top concerns customers have raised recently when evaluating hyperconverged solutions?


Both converged and hyperconverged systems provide the biggest bang for the operational buck, and there has always been a sweet spot for these systems in the Enterprise. Customers have fallen in love with the shortened time to deploy and provision, the operational simplicity, and the consolidated support. In addition, customers have told us of their need to move away from a “buy up front and grow into it” world to a “buy in small increments as you grow” model. That shift will be the key to growth in the converged markets going forward; the days of the 3-7 year buy cycle for infrastructure are dwindling. There will always be a place in the market for build-your-own, best-of-breed infrastructure, but in my experience the valid use cases are becoming fewer and fewer. As this market has matured, customers have asked for a more integrated experience, and that demand is only growing along with the hyperconverged market. The question to ask yourself is what criteria you are looking for in your next evaluation. As HCI adoption in the Enterprise continues, and as we move into the second generation of hyperconverged infrastructure, I predict customers will expect more from their infrastructure and the lines between converged infrastructure and hyperconverged infrastructure will continue to blur.


You’re probably saying, “Great, so what? How do I know where to place value and trust in my next purchase?” A shift in infrastructure can be daunting, so Gartner developed five key determinants for a hyperconverged integrated system decision: simplicity, flexibility, economics, prescriptiveness, and selectivity. Here are my thoughts on evaluation criteria:


The foundational layer of any infrastructure is trust. Without a prescriptive environment that offers predictable, guaranteed performance, we have nothing; we have built our house on a foundation of sand. This applies to all components in the stack. A prescriptive stack allows us to maximize system resources while creating virtualized, dynamic pools that can be quickly allocated and deallocated. Another factor here is the maturity of the products in the solution. The hyperconverged space currently has 30+ companies, most of them startups. How many will make it long term? Over time this space will naturally consolidate down to a few mature players the Enterprise will trust. This has happened over and over again in our industry (Cloud Management Platforms, Software Defined Networking, and All Flash Arrays, to name a few) and is the natural evolution from startup to widespread Enterprise adoption. Statistically, very few startups make it to an IPO or an acquisition exit, and even that is no guarantee of long-term success. You need to have confidence that your vendor of choice will be there for you.


Another aspect of trust in the stack is the confidence to consolidate workloads, including Enterprise Tier One workloads. Customers are asking us for the ability to manage hundreds of applications on thousands of volumes while guaranteeing performance to critical applications, all on a stack that provides simplified operations. The days of islands of infrastructure, silos, and tiers are coming to an end.
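To make the performance-guarantee point concrete, here is a minimal sketch of what per-volume quality of service looks like when it is driven through an API rather than tuned by hand. It assumes a SolidFire-style Element JSON-RPC endpoint; the cluster address, credentials, and volume ID are placeholders, and the exact method names and parameters will vary by platform and API version.

```python
import requests

# Illustrative only: set a guaranteed floor, sustained ceiling, and burst
# allowance (in IOPS) on a single volume using a SolidFire-style Element
# JSON-RPC call. The endpoint, credentials, and volume ID are placeholders.
MVIP = "https://storage-cluster.example.com/json-rpc/9.0"  # hypothetical address

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,            # hypothetical volume backing a critical app
        "qos": {
            "minIOPS": 1000,       # guaranteed floor, even under contention
            "maxIOPS": 5000,       # sustained ceiling
            "burstIOPS": 8000,     # short-term burst allowance
        },
    },
    "id": 1,
}

resp = requests.post(MVIP, json=payload, auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json())
```

Because every volume carries its own floor and ceiling, a noisy neighbor cannot starve a Tier One application, which is what makes consolidating hundreds of workloads onto one stack a reasonable bet.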


The next item to consider is simplicity. It seems so obvious, but simplicity in execution is actually very difficult. This reminds me of the heyday of on-prem Infrastructure-as-a-Service (IaaS). The goal was self-service simplicity, presented to the user and operator by abstracting away the underlying resource layers. By adding orchestration, automation, and scheduling on top of virtualization, the end result was a lot of moving parts and overhead. Simplicity in the user experience was traded for additional operational complexity in tying together all the virtual and physical layers. The virtualization administrator wants something that is both easier to stand up (the Day 0 experience) and easier to operate over time (Day 1 and beyond). The less overhead in the abstraction layers, the better.


A critical attribute of converged systems is flexibility. A prescriptive, simple environment appeals to the virtualization admin (more sleep at night, more time to tackle projects the business needs), but flexibility is the key to a great user experience. We’ve all heard the horror stories of legacy virtualization environments where it took the IT department days to weeks to fulfill a request for a new virtual machine or application. I still remember a customer years ago that required a paper form to be filled out and signed by the procurement, network, storage, and server departments before a virtual machine could be cloned from a template. Mind you, the task itself took the virtualization admin a couple of clicks and a few minutes; the paperwork and approvals took exponentially longer than the actual deployment. The reason was that the underlying layers were static and fixed: the costs to grow were set on an annual budget cycle and needed to be managed closely. There was no option to scale as the company grew, and it was very difficult to grow (or shrink) the pools of resources to match customer demands. Customers now demand a more flexible stack that lets them match the velocity of their business needs and stay ahead of their competitors.


No infrastructure is complete without a plan for data protection and portability you can trust. At NetApp, that vision is the Data Fabric. Customers want to know their data is protected at all times without having to accept vendor lock-in. This also includes the ability to move your data from on-prem to the public cloud as needed, and back again. I recently celebrated my one-year anniversary with NetApp, but I worked with NetApp systems for years before that as an SE for a NetApp partner. An early personal example of this vision came a number of years ago (probably 5+), setting up a complex VMware infrastructure for a customer. The challenge was creating Snapshot copies of VMFS datastores and then shipping the data to another NetApp system across the country, so the customer could recover in a second data center in the event of a primary site failure. NetApp’s combination of SnapMirror and SnapCenter (SnapManager at the time, actually) was critical to that success. This is where companies with a core data protection portfolio, such as NetApp, provide many advantages.
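For readers who have not built that kind of replication, the sketch below shows the general shape of creating and initializing a volume-level SnapMirror relationship through a REST call. It assumes a recent ONTAP release with the REST API enabled (the workflow I described above predates it); the cluster address, SVM names, volume paths, and credentials are placeholders.

```python
import requests

# Illustrative only: create a SnapMirror relationship between a primary
# datastore volume and its DR copy, then start the baseline transfer.
# All names, paths, and credentials are placeholders.
DR_CLUSTER = "https://dr-cluster.example.com"   # destination cluster (hypothetical)
AUTH = ("admin", "password")

# 1. Define the relationship: production VMFS datastore volume -> DR volume.
create = requests.post(
    f"{DR_CLUSTER}/api/snapmirror/relationships",
    json={
        "source": {"path": "svm_primary:vol_vmware"},
        "destination": {"path": "svm_dr:vol_vmware_dst"},
    },
    auth=AUTH,
    verify=False,
)
create.raise_for_status()

# 2. Look up the new relationship and request a mirrored state, which
#    triggers the initial baseline transfer to the remote data center.
rel = requests.get(
    f"{DR_CLUSTER}/api/snapmirror/relationships",
    params={"destination.path": "svm_dr:vol_vmware_dst"},
    auth=AUTH,
    verify=False,
).json()["records"][0]

init = requests.patch(
    f"{DR_CLUSTER}/api/snapmirror/relationships/{rel['uuid']}",
    json={"state": "snapmirrored"},
    auth=AUTH,
    verify=False,
)
init.raise_for_status()
```

Once the baseline exists, incremental updates ship only changed blocks on a schedule, which is what made cross-country DR practical for that customer without a second full copy moving over the WAN every night.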


Lastly, I would be remiss if I didn’t include selectivity as my final point. To expand on the vendor lock-in point from above: what if you want integrations you create yourself against an open API, or the ability to automate and provision your infrastructure to your exact standards with industry tooling (VMware, Chef, Puppet, Ansible)? The concept of flexibility extends beyond infrastructure pools into open systems management. I call this moving the control plane. In a SolidFire context, we have been moving the control plane for years. In the early days of SolidFire, we were successful with Service Provider customers because we fit into their Cloud Management Platforms (CloudStack and OpenStack) or their custom tools (API integration). As we moved into the Enterprise, we added VMware vCenter integration with our plugin. The ability to meet customers where they manage systems today, without requiring them to log into the storage system’s UI, is very important. During your evaluation of hyperconverged infrastructure systems, look for the ability to seamlessly adopt a new platform into your existing tools and workflows; it reduces both up-front configuration and long-term integration and operations effort.
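To make “moving the control plane” concrete, here is a minimal sketch of provisioning storage entirely through an open API instead of a vendor UI, in the spirit of the SolidFire Element JSON-RPC interface. The endpoint, tenant account ID, credentials, and sizes are placeholders, and the same call could just as easily be wrapped in an Ansible task, a Chef recipe, or a vCenter workflow.

```python
import requests

# Illustrative only: provision a new volume, with its performance policy
# attached, in a single API call. This is the kind of call a cloud management
# platform, playbook, or custom portal makes instead of a human clicking
# through a storage UI. Endpoint, account ID, and credentials are placeholders.
MVIP = "https://storage-cluster.example.com/json-rpc/9.0"  # hypothetical address

payload = {
    "method": "CreateVolume",
    "params": {
        "name": "app-db-01",
        "accountID": 7,                     # hypothetical tenant account
        "totalSize": 500 * 1024**3,         # 500 GiB, specified in bytes
        "enable512e": True,                 # 512-byte sector emulation for VMFS
        "qos": {"minIOPS": 2000, "maxIOPS": 10000, "burstIOPS": 15000},
    },
    "id": 1,
}

resp = requests.post(MVIP, json=payload, auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json()["result"]["volumeID"])
```

The design point is that the storage system is just another endpoint in your existing toolchain: if the platform you are evaluating cannot be driven end to end this way, you will be re-creating manual workflows around it for years.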


If you are looking for further insight, check out Gartner’s report, “Five Keys to Creating an Effective Hyperconvergence Strategy.” It covers what I discussed here today, as well as some other aspects for you to consider. If you are evaluating a change in your infrastructure in the near future, I would love to hear what you think. What is important to you?


Gartner, “Five Keys to Creating an Effective Hyperconvergence Strategy,” George Weiss, published 29 October 2015, refreshed 6 February 2017.


Aaron Delp

Aaron is the Director of Technology Solutions for SolidFire, specializing in cloud-based solutions, reference architectures, and customer enablement. Across his experience in high tech, Aaron has been a leader in the Citrix Cloud Platforms Group; led Cloud Field Enablement for VCE; worked with the management, orchestration, and automation products from VMware, Cisco, EMC, and CA Technologies; and led the design and publication of the configuration of Cisco Unified Communications (UC) on the VCE Vblock Platform. Other past responsibilities include enabling a top 50 technology value-added reseller (VAR), serving as its Data Center Practice Lead, and over 10 years at IBM.