Requirement #2 for guaranteed Quality of Service (QoS): a true scale-out architecture
Welcome to the third blog in the SolidFire Benchmark QoS series, where we’ve been explaining how guaranteeing Quality of Service (QoS) isn’t a feature that can be bolted on to a storage system. It requires an architecture built for it from the ground up, starting with an all-SSD platform. Now let’s discuss a second requirement: a true scale-out architecture.
Traditional storage architectures follow a scale-up model, where a controller (or pair of controllers) is attached to a set of disk shelves. More capacity can be added by simply adding shelves, but controller resources can only be upgraded by moving to the next "larger" controller (often with a data migration). Once you've maxed out the biggest controller, the only option is to deploy more storage systems, increasing the management burden and operational costs.
Tipping the scales not in your favor
This scale-up model poses significant challenges to guaranteeing consistent performance to individual applications. As more disk shelves and applications are added to the system, contention for controller resources increases, so per-application performance actually degrades as the system scales. While adding disk spindles is typically seen as increasing system performance, many storage architectures only place new volumes on the added disks, or require manual migration. Mixing disks with varying capacities and performance characteristics (such as SATA and SSD) makes it even more difficult to predict how much performance will be gained, particularly when the controller itself can quickly become the bottleneck.
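To make the bottleneck concrete, here is a minimal sketch of the scale-up math, using entirely hypothetical numbers (the IOPS figures and function names are illustrative, not from any vendor spec): aggregate performance is capped by the controller, and whatever remains is split across every volume sharing it.

```python
# Hypothetical figures for illustration only.
CONTROLLER_IOPS = 200_000   # fixed ceiling of the scale-up controller (assumed)
SHELF_IOPS = 40_000         # IOPS each added disk shelf could supply (assumed)

def per_volume_iops(shelves: int, volumes: int) -> float:
    """Aggregate IOPS is capped by the controller, then split across volumes."""
    aggregate = min(shelves * SHELF_IOPS, CONTROLLER_IOPS)
    return aggregate / volumes

# Adding shelves stops helping once the controller is the bottleneck:
print(per_volume_iops(shelves=2, volumes=10))   # 8000.0 per volume
print(per_volume_iops(shelves=10, volumes=50))  # 4000.0 — more hardware, less per volume
```

The second call is the scale-up trap in miniature: five times the shelves and five times the volumes, yet each volume gets half the performance, because the controller ceiling never moved.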
Scaling out is the only way to go
By comparison, a true scale-out architecture such as SolidFire's adds controller resources and storage capacity together. Each time capacity is increased and more applications are added, a consistent amount of performance is added as well. The SolidFire architecture ensures that the added performance is available to any volume in the system, not just new data. This predictability is critical both for the administrator's planning and for the storage system itself. If the storage system can't predict how much performance it has now or will have in the future, it can't possibly offer any kind of guaranteed Quality of Service.
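The scale-out arithmetic can be sketched just as simply. Again, the numbers and names below are hypothetical placeholders, not actual node specifications; the point is only that performance and capacity grow together, linearly with node count, so the total is predictable at any cluster size.

```python
# Hypothetical per-node figures for illustration only.
NODE_IOPS = 50_000   # performance contributed by each node (assumed)
NODE_TB = 10         # capacity contributed by each node (assumed)

def cluster_resources(nodes: int) -> tuple[int, int]:
    """In a scale-out cluster, performance and capacity scale together."""
    return nodes * NODE_IOPS, nodes * NODE_TB

for n in (4, 8, 16):
    iops, tb = cluster_resources(n)
    print(f"{n} nodes: {iops:,} IOPS, {tb} TB")
```

Because total performance is a known function of node count, the system can reason about how much it can promise each volume today and after the next expansion, which is exactly the foundation a QoS guarantee requires.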