In Part 1 of our series on the challenges service providers face delivering block storage as a service, we discussed the challenge of performance. Today we’re going to talk about a second challenge: storage efficiency. The ratio between how much storage capacity a service provider buys and how much of it they are able to sell is a critical driver of bottom-line profitability, yet achieving high utilization rates is a constant struggle.
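To put rough numbers on that, here’s a quick back-of-the-envelope sketch showing how utilization drives the effective cost of every gigabyte a provider can actually sell. The system cost and capacity figures are purely hypothetical:

```python
# Back-of-the-envelope: how utilization drives the cost of every GB a
# provider can actually sell. All figures here are hypothetical.

SYSTEM_COST_USD = 500_000   # assumed purchase price of the storage system
RAW_CAPACITY_GB = 400_000   # assumed raw capacity (400 TB)

for utilization in (0.30, 0.50, 0.80):
    sellable_gb = RAW_CAPACITY_GB * utilization
    cost_per_gb = SYSTEM_COST_USD / sellable_gb
    print(f"{utilization:.0%} utilization -> ${cost_per_gb:.2f} per sellable GB")
```

With these assumed numbers, moving from 30% to 80% utilization cuts the cost of a sellable gigabyte from about $4.17 to about $1.56, which is why utilization shows up directly on the bottom line.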


Part of the reason for this inefficiency goes back to our discussion of the imbalance between storage capacity and performance. To deliver performance that is as consistent as possible, service providers are commonly forced to deploy far more capacity (spindles) than they can sell, simply to provide the required number of IOPS. All of that stranded capacity still consumes space, power, and cooling, and drags down the profitability of the capacity that is sold.
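To see how lopsided this can get, consider a rough sketch of the spindle math. The IOPS target, per-drive IOPS, and capacity figures below are illustrative assumptions, not measurements:

```python
import math

# Illustrative spindle math: how many drives does an IOPS target require,
# and how much capacity does that strand? All numbers are assumptions.

IOPS_TARGET = 50_000       # aggregate IOPS the provider must deliver
IOPS_PER_DRIVE = 75        # rough figure for a 7,200 RPM SATA drive
DRIVE_CAPACITY_TB = 2.0    # raw capacity per drive
CAPACITY_SOLD_TB = 200.0   # capacity customers actually pay for

drives_needed = math.ceil(IOPS_TARGET / IOPS_PER_DRIVE)
raw_capacity_tb = drives_needed * DRIVE_CAPACITY_TB
utilization = CAPACITY_SOLD_TB / raw_capacity_tb

print(f"Drives needed to hit the IOPS target: {drives_needed}")
print(f"Raw capacity those drives carry:      {raw_capacity_tb:.0f} TB")
print(f"Capacity utilization:                 {utilization:.0%}")
```

Under these assumptions, hitting 50,000 IOPS takes 667 drives carrying about 1,334 TB of raw capacity, while only 200 TB of it is sold: roughly 15% utilization, with the rest consuming space, power, and cooling for free.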


Another obstacle to high utilization is how service providers plan for growth and deploy new capacity. While most storage systems are designed to allow capacity expansion through additional disk shelves, in practice many service providers deploy storage “fully configured” from day one. The reasons include better pricing from the vendor, avoiding the risk and complexity of adding capacity to a system in production, and the fact that much of a storage system’s cost is in the controller and software, which must be purchased up front. Whatever the reason, the result is the same: low utilization rates during early deployment, which drags down overall utilization and reduces profitability.
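A toy model makes the cost of the “fully configured from day one” approach concrete. The system size, shelf size, and growth curve below are all invented for illustration:

```python
import math

# Hypothetical growth ramp: average utilization for a system deployed
# fully configured on day one vs. the same capacity added in shelf-sized
# increments just ahead of demand. All numbers are invented.

TOTAL_TB = 1_000.0   # fully configured system size
SHELF_TB = 250.0     # capacity added per expansion shelf
MONTHS = 24
demand = [TOTAL_TB * (m + 1) / MONTHS for m in range(MONTHS)]  # linear growth

# Fully configured up front: all capacity powered and depreciating from day one.
full_util = sum(d / TOTAL_TB for d in demand) / MONTHS

# Incremental: deploy only as many shelves as current demand requires.
inc_util = sum(d / (math.ceil(d / SHELF_TB) * SHELF_TB) for d in demand) / MONTHS

print(f"Average utilization, fully configured: {full_util:.0%}")
print(f"Average utilization, incremental:      {inc_util:.0%}")
```

In this made-up ramp, the fully configured system averages about 52% utilization over two years, while adding shelves as demand grows averages closer to 78%, the same demand served with far less idle capacity.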
Over the past few years, efficiency technologies that store more data in less space, such as compression and deduplication, have started to appear in primary storage systems. On the surface these features should be a huge boon to service providers looking to increase efficiency, but in reality they are seldom used. Again, it comes down to the balance between performance and capacity: these features often incur a significant performance penalty while freeing space that can’t actually be used. In fact, many service providers don’t even use thin provisioning, a storage feature that has been standard for years. Why? Because it makes capacity planning more difficult, and they get better performance by fat-provisioning volumes up front.
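For readers unfamiliar with that trade-off, here is a minimal sketch of thin versus fat provisioning. The volume sizes and fill levels are hypothetical:

```python
# Minimal sketch of the thin-provisioning trade-off described above.
# Volume sizes and fill levels are hypothetical.

volumes = [
    # (provisioned_gb, actually_written_gb)
    (500, 120),
    (1_000, 300),
    (2_000, 250),
    (500, 480),
]

provisioned = sum(p for p, _ in volumes)
written = sum(w for _, w in volumes)
PHYSICAL_GB = 2_000   # assumed physical capacity behind these volumes

print(f"Fat provisioning would reserve: {provisioned} GB up front")
print(f"Thin provisioning consumes:     {written} GB today")
print(f"Oversubscription:               {provisioned / PHYSICAL_GB:.1f}x physical")
# The planning headache: if tenants fill their volumes faster than new
# capacity can be installed, writes can outrun the 2,000 GB actually there.
```

Thin provisioning lets the provider sell 4,000 GB against 2,000 GB of physical capacity, but only as long as tenants never collectively fill more than what is physically there, which is exactly the capacity-planning risk that pushes many providers back to fat provisioning.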


What service providers really want is storage that is designed and balanced to run at consistently high utilization rates and can be grown incrementally over time, so that it is profitable from day one.


Dave Wright

Dave Wright, SolidFire CEO and founder, left Stanford in 1998 to help start GameSpy Industries, a leader in online videogame media, technology, and software. GameSpy merged with IGN Entertainment in 2004 and Dave served as Chief Architect for IGN and led technology integration with FIM / MySpace after IGN was acquired by NewsCorp in 2005. In 2007 Dave founded Jungle Disk, a pioneer and early leader in cloud-based storage and backup solutions for consumers and businesses. Jungle Disk was acquired by leading cloud provider Rackspace in 2008 and Dave worked closely with the Rackspace Cloud division to build a cloud platform supporting tens of thousands of customers. In December 2009 Dave left Rackspace to start SolidFire.