The Stop that, start this series expands on storage concepts that let you focus on moving forward in storage technology and away from the pains of storage past. In Part 3, Gabe continues the conversation about storage provisioning, allocation, and performance issues of old that are … not so old, as it turns out. Start from the top of the series with a fun jaunt through the history of the storage industry.

I owe you good readers a follow-up to my last blog post, so let’s call this Part B to that. The TL;DR (Too Long; Didn’t Read) version looks something like this:

We still struggle to provision and allocate storage.

We still employ tricks to address performance.

We have yet to tame storage growth.

With those items identified, I’d like to take a look at how we at SolidFire have looked to mitigate, remove, and reduce the pain associated with each.

Storage provisioning: The struggle is real

Storage provisioning has long been the bane of many data center teams, primarily because of the complexities of ever-changing workload requirements, but also because so much of provisioning was a manual process that demanded forethought, significant design time, and planning around the original array configuration. Furthermore, relying on RAID as the primary data protection scheme injected additional complexity and forced administrators to play a shell game with performance and capacity.

At SolidFire, one of the base decisions made early in the system design was to take a storage virtualization approach and abstract away much of the complexity, so that the end user did not have to make so many decisions at allocation time. The goal was to ask just two questions:

  1. How much storage do you need?

  2. What kind of performance does it require?

Knowing those two pieces of information, the storage administrator is free to allocate the required amount of storage and performance at the same time. Instead of deciding whether a workload needs RAID 1 or RAID 10 for specific performance characteristics, or RAID 5 or RAID 50 for capacity, the administrator simply selects the LUN’s size and lets SolidFire Quality of Service (QoS) handle the rest. That eliminates complex design calculations, drive management and placement concerns, and elaborate classifications based on the size of specific disk tiers.

What once could prove to be an exercise in calculus now becomes a simple 1+1 calculation.
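For the curious, here is a rough sketch of what answering those two questions can look like in a single call against a SolidFire cluster’s JSON-RPC endpoint. The method and parameter names approximate the Element API’s CreateVolume call, and the management address and credentials are placeholders, so treat it as an illustration rather than a drop-in script.

```python
import requests

# Rough sketch: answer the two provisioning questions in one call against a
# SolidFire cluster's JSON-RPC endpoint. Method and parameter names approximate
# the Element API's CreateVolume; the address and credentials are placeholders.

MVIP = "https://192.0.2.10/json-rpc/8.0"   # hypothetical management virtual IP
AUTH = ("admin", "password")               # hypothetical cluster admin credentials


def create_volume(name, account_id, size_gb, min_iops, max_iops, burst_iops):
    """Question 1: how much storage? Question 2: how fast does it need to be?"""
    payload = {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 1024 ** 3,   # capacity, in bytes
            "enable512e": True,
            "qos": {
                "minIOPS": min_iops,       # guaranteed performance floor
                "maxIOPS": max_iops,       # sustained ceiling
                "burstIOPS": burst_iops,   # short-term burst allowance
            },
        },
        "id": 1,
    }
    resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["result"]


# Example: a 500 GB volume guaranteed 1,000 IOPS, capped at 5,000, bursting to 8,000.
# create_volume("sql-data-01", account_id=1, size_gb=500,
#               min_iops=1000, max_iops=5000, burst_iops=8000)
```

Two inputs, one call; the QoS triplet takes the place of the old RAID-level decision tree.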

The “It’s tricky” understatement

Managing storage performance and storage capacity in tandem has always been a difficult, if not impossible, task. To say “it’s tricky” may be a slight understatement, especially on legacy storage platforms, and that’s why gains in performance have been slow and incremental.

Traditional means of gaining performance have required unnatural acts: expensive additions of cache and other workarounds to accelerate slow-moving disk. Finding the right mix of capacity and performance, and then being able to tune each independently and incrementally, has been elusive for storage administrators. SolidFire has taken a unique approach to this “tricky” situation by decoupling performance and capacity from each other.

Unlike common solutions today, where the two are paired (i.e., you can’t expand performance without also expanding capacity), the SolidFire user actually knows the answers to the two questions posed when designing for storage needs: How much storage do I need, and how fast do I need it to be?

Knowing at the onset of a deployment exactly how much performance and capacity your storage solution provides helps alleviate the pain points associated with storage design and allocation for your workloads.

[Image: volume-create.jpg]

Many of you may be saying to yourselves, “Don’t most all-flash storage solutions have ample performance?” My response would be: most certainly, yes, all-flash technology affords a significant performance increase over its spinning-disk counterparts. But contention and resource allocation still come into play, just as they do in the spinning-disk world.

The I/O blender effect introduced by hypervisor-based virtualization can defeat the raw speed at which flash storage excels. That is why fine-grained QoS at the LUN level is so important. With flash technologies being more expensive than spinning disk, the mantra we at SolidFire have designed to is “work smarter, not harder.” QoS lets us do that by setting a minimum, maximum, and burst IOPS value for each volume on a SolidFire array, which guarantees that no workload is starved of resources or forced to compete with another for them (the noisy neighbor problem).
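To make that decoupling concrete, here is a small, hypothetical sketch of retuning a volume’s QoS without touching its size. As before, ModifyVolume and its parameters approximate the Element API, and the endpoint and credentials are placeholders.

```python
import requests

# Hypothetical sketch: change a volume's performance envelope without changing
# its capacity. ModifyVolume and its parameters approximate the Element API;
# the endpoint and credentials are placeholders.

MVIP = "https://192.0.2.10/json-rpc/8.0"
AUTH = ("admin", "password")


def retune_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Raise or lower a volume's QoS floor, ceiling, and burst; size is untouched."""
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops, "burstIOPS": burst_iops},
        },
        "id": 1,
    }
    resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["result"]


# Example: cap a noisy neighbor and guarantee its victim a floor, capacity unchanged.
# retune_volume_qos(volume_id=42, min_iops=500,  max_iops=2000, burst_iops=3000)
# retune_volume_qos(volume_id=43, min_iops=2000, max_iops=8000, burst_iops=10000)
```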

Miss Cleo can’t predict your performance needs

[Image: miss-cleo-storage-performance.jpg]

What we can predict in the storage world is that your storage will grow, and that’s about it. We can’t accurately predict the rate of that growth; it may be 5%, 10%, or 100%. Things would be so much easier if we simply had a storage crystal ball. Since we don’t, we need to look for solutions flexible enough to grow in increments that are neither too small nor too large. The ability to fine-tune and scale our storage to address growth as well as performance requirements should be a key decision-making factor when considering the purchase of storage technologies.

And while I’ve talked about the unique ability afforded to SolidFire customers by separating those two metrics in the design process, when it comes to growth, the ability to address both simultaneously is of great value. Most solutions on the market handle growth at the capacity end (e.g., a new shelf of disks is added to a pair of controllers), and this works up to a point: the point at which the controllers themselves are overrun by the amount of capacity they have to manage. That forces a decision about what to do next: forklift-upgrade the controllers you have today, or buy another array (and create an additional point of management).

Take a look at the four SolidFire nodes a customer can purchase, each with different capacity and performance characteristics. While that choice alone is helpful, the added benefit is the ability to run them as a mixed cluster and scale them together as a single point of management and a single pool of storage resources. You grow in incremental units, and only according to your need.

[Image: choose-your-nodes.jpg]
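As a back-of-the-napkin illustration of that incremental growth, the sketch below uses made-up node figures (not actual SolidFire specifications) to show how a mixed cluster’s capacity and performance both rise by one node-sized step at a time within a single pool.

```python
# Toy illustration of mixed-cluster growth (figures are made up, not SolidFire
# specifications): every node added contributes both capacity and performance,
# so the single pool grows in small, independent increments.

NODE_TYPES = {
    "small":  {"tb": 10, "iops": 50_000},
    "medium": {"tb": 20, "iops": 50_000},
    "large":  {"tb": 40, "iops": 100_000},
}


def pool_totals(cluster):
    """Sum effective capacity (TB) and rated IOPS across a mixed list of nodes."""
    tb = sum(NODE_TYPES[n]["tb"] for n in cluster)
    iops = sum(NODE_TYPES[n]["iops"] for n in cluster)
    return tb, iops


cluster = ["small"] * 4            # start with a four-node cluster
print(pool_totals(cluster))        # -> (40, 200000)

cluster.append("large")            # grow by exactly one node when the need arises
print(pool_totals(cluster))        # -> (80, 300000)
```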

Closing this post up with a few final thoughts:

  • Storage is the foundation of the data center. It is the bricks upon which all else is built. Up until just recently those bricks always had to be of a certain type. That’s not the case today.

  • Storage utility is just as important as storage capacity or performance. Getting the most out of what you have while at the same time being flexible enough to adjust to ever-changing needs should be the new normal.

Choosing a storage platform that can meet the needs of a next generation data center and the workloads that it will host is still a challenging task, but we are making it less so. Compare today’s storage platform capabilities using your complimentary copy of Gartner’s second annual Magic Quadrant and Critical Capabilities Study for Solid-State Arrays. For the second consecutive year, Gartner gave SolidFire the highest score (3.60) for overall use case in the 2015 Critical Capabilities (CC) for Solid-State Arrays.

Time to put this one to bed, folks. Don’t worry, I’m sure to have some more bedtime reading for you shortly.

Gabriel Chapman