The demise of traditional spinning storage media in the enterprise has been predicted for over a decade, and for good reason. Solid state drives (SSDs) are dramatically faster, quieter, lighter, and more energy efficient than traditional hard disk drives (HDDs). Simply put, SSDs beat HDDs in almost every category. Why, then, has it taken so long for SSDs to overtake HDDs in enterprise IT?

Two reasons: relatively low capacity and high cost. Although many enterprise IT organizations have been actively deploying all-flash storage arrays (built on SSDs) for a few years now, HDDs remain widely used because they offer more capacity at a relatively low $/GB cost. However, the capacity handicap of SSDs has been shrinking over the past year. SSD capacity has advanced to the point where even the largest HDDs are only about half as dense as the largest SSDs available today. In addition, compression and deduplication technologies significantly increase the amount of data that can be stored on the same physical device, effectively lowering the cost per GB stored.
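
To make that last point concrete, here is a minimal back-of-the-envelope sketch of how data reduction changes effective $/GB. The prices and the 4:1 reduction ratio are hypothetical assumptions for illustration, not measured or published figures.

```python
# Hypothetical illustration of effective $/GB after data reduction.
# All prices and the reduction ratio below are assumptions, not real figures.

def effective_cost_per_gb(raw_cost_per_gb: float, data_reduction_ratio: float) -> float:
    """Effective cost per usable GB once compression/deduplication is applied."""
    return raw_cost_per_gb / data_reduction_ratio

hdd_raw = 0.05   # assumed raw $/GB for an enterprise HDD
ssd_raw = 0.25   # assumed raw $/GB for an enterprise SSD
reduction = 4.0  # assumed 4:1 combined compression + deduplication on flash

print(f"HDD, no reduction:  ${effective_cost_per_gb(hdd_raw, 1.0):.3f} per GB stored")
print(f"SSD, 4:1 reduction: ${effective_cost_per_gb(ssd_raw, reduction):.3f} per GB stored")
```

With these assumed numbers, the raw 5x price gap shrinks to roughly 25% per GB actually stored, which is the effect the data-reduction argument relies on.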

At the same time, the cost of SSDs continues to decrease dramatically due to wider market adoption, lower manufacturing costs, and better technology. We may finally be seeing the end of HDDs for primary data storage in the data center.

Balancing Capacity and Performance

From a technical standpoint, providing enterprise storage for Tier-0/Tier-1 applications has always been a game of balancing capacity and performance. For traditional HDD-based arrays (as well as hybrid arrays) to meet application performance requirements, we must constantly battle the physics of disk rotational speeds and estimates of how much data can be cached. HDDs are ultimately the performance bottleneck of any traditional storage array. The result is that most arrays end up with large quantities of HDDs just to meet performance demands. The side effects are excess capacity (which can't be used if performance levels are to be maintained) and operational inefficiency.

All-flash arrays built on SSDs dispel the notion that you need a large number of drives to meet a performance goal. Historically, a handful of SSDs (versus hundreds of HDDs) could meet performance requirements, but the cost of the solution would balloon once it also had to meet capacity needs. The availability of today's large-capacity enterprise SSDs (e.g., 3.8TB and 15.3TB) solves both technical challenges: performance and capacity. IT no longer has to compromise one for the other.
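
A rough drive-count comparison illustrates the point. The per-drive IOPS figures below are generic assumptions for illustration only; real numbers depend on workload, block size, and array design.

```python
# Back-of-the-envelope drive counts needed to hit a performance target.
# Per-drive IOPS figures are rough, assumed values for illustration only.
import math

target_iops = 200_000   # assumed application performance requirement
hdd_iops = 200          # assumed IOPS from one 15K RPM enterprise HDD
ssd_iops = 50_000       # assumed IOPS from one enterprise SSD

hdds_needed = math.ceil(target_iops / hdd_iops)  # -> 1000 drives
ssds_needed = math.ceil(target_iops / ssd_iops)  # -> 4 drives

print(f"HDDs needed for {target_iops:,} IOPS: {hdds_needed:,}")
print(f"SSDs needed for {target_iops:,} IOPS: {ssds_needed:,}")
```

Under these assumptions, the HDD configuration carries far more raw capacity than the workload needs, which is exactly the excess-capacity side effect described above.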

Finally, SSDs have positive implications for operational costs. All-flash arrays occupy significantly less data center space, consume less power, require less cooling, are more durable (no moving parts), and weigh less than their HDD-based counterparts. These are real costs that should be accounted for when calculating $/GB.

Within NetApp IT, all-flash arrays are the obvious choice when it comes to serving our applications’ primary data needs going forward. That is not to say that we will be immediately throwing away any of our existing hybrid or HDD-based arrays. They are still doing the job they were built to do. However, as they age, their replacements will certainly be all-flash arrays.

Flash and Secondary Storage

The idea that all-flash arrays are the new normal for our applications' primary data needs raises an even more interesting question. Will SSDs also replace the secondary storage systems, built on 7.2K RPM drives, that we use for data backup, replication, and protection? I think that eventually they will.

I believe the increasing density and steadily dropping cost of SSD technology will allow SSDs to overtake mechanical drives on a $/GB basis within the next couple of years. When that happens, there will be no incentive to buy HDDs anymore; SSDs will become the default option for the capacity storage tier. At the same time, emerging storage technologies (e.g., FRAM, MRAM, holographic memory) will offer better performance than today's SSDs and become the new Tier-0 storage technologies.

Strategy Is Even More Important

As history has taught us, technology advancements in one hardware component tend to create bottlenecks elsewhere.

To address this conundrum, NetApp IT provisions its storage using a top-down approach that builds storage services based on application needs. We use Quality of Service (QoS) features to take advantage of powerful, dense new flash arrays and deliver on storage service level agreements (SLAs). These SLAs are tailored to application needs, not to the speeds and feeds of the storage array. (Read my blog on this subject here.)
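
The sketch below shows the general idea of mapping application-facing service levels to QoS ceilings. The tier names, limits, and function are hypothetical illustrations, not NetApp product defaults and not an ONTAP or OnCommand Insight API.

```python
# Hypothetical mapping of application-facing service levels to QoS ceilings.
# Tier names and limits are invented for illustration; this is not an ONTAP
# or OnCommand Insight API, and the numbers are not NetApp defaults.

SERVICE_LEVELS = {
    "extreme":     {"peak_iops_per_tb": 12_000},
    "performance": {"peak_iops_per_tb": 4_000},
    "value":       {"peak_iops_per_tb": 500},
}

def qos_ceiling_iops(service_level: str, allocated_tb: float) -> int:
    """IOPS ceiling for a workload, derived from its SLA tier and size."""
    return int(SERVICE_LEVELS[service_level]["peak_iops_per_tb"] * allocated_tb)

# A 5 TB "performance" workload gets a 20,000 IOPS ceiling, no matter which
# array or media type actually serves it.
print(qos_ceiling_iops("performance", 5))
```

The value of an abstraction like this is that the hardware underneath can change without the application-facing contract changing.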

QoS is now even more important because it normalizes the storage landscape. It doesn't matter where the hardware bottleneck occurs. If we use application needs to dictate our services, the technology beneath them can fluctuate and change without affecting service delivery. We can celebrate the incredible density that a single SSD shelf can deliver and build ROI models based on data center power and cooling savings. As storage grows in capacity but shrinks in rack space, we can improve storage efficiency. By removing the hardware headaches that storage admins encounter every day, we open the door to exploring new capabilities and innovating in areas such as performance. And that's a cause worthy of celebration.

To read more about NetApp IT & QoS, visit these resources:

The NetApp-on-NetApp blog series features advice from subject matter experts from NetApp IT who share their real-world experiences using NetApp’s industry-leading storage solutions to support business goals. Want to learn more about the program? Visit www.NetAppIT.com.

Eduardo Rivera

As a senior storage architect, Eduardo drives the strategy for NetApp IT’s storage infrastructure across its global enterprise. He oversees the adoption of NetApp products into IT as part of the Customer-1 program and is an expert in designing storage service levels using OnCommand Insight. He has 15 years of IT experience.