This is the first blog in a series on NetApp IT’s adoption of storage service levels. To read part 2 of the series, visit The Role of QoS in Delivering Storage as a Service (Part 2).

Can NetApp IT deliver storage as a service?

NetApp IT posed this question to itself more than a year ago. Our goal was to offer our business customers a way to consume storage that met not only their capacity requirements, but also their performance requirements. At the same time, we wanted this storage consumption model to be presented as a predictable and easily consumable service. After consulting with enterprise architects for NetApp’s cloud provider services, we developed a storage service catalog leveraging two main elements: IO Density and NetApp clustered Data ONTAP®’s QoS (quality of service).

In part one of this two-part blog, we will discuss how NetApp OnCommand Insight’s IO Density metric played a key role in the design of our storage service catalog.

The Role of IO Density

IO Density is a simple, yet powerful idea. The concept itself is not new, but it is essential to building a sound storage consumption model. By definition, IO Density is the measurement of IO generated over a given amount of stored capacity, expressed in IOPS/TB. In other words, IO Density measures how much performance can be delivered by a given amount of storage capacity. Here’s an example of how IO Density works.

Suppose we have a single 7.2K RPM drive. By rule of thumb, a single drive of this type can deliver around 50 IOPS at a 20ms response time. Consider, however, that 7.2K RPM drives today can range anywhere from 1TB to 8TB in size. The ability of the drive to deliver 50 IOPS does not change with its size. Therefore, as the size of the drive increases, the IOPS/TB ratio worsens (i.e., you get 50 IOPS/TB with a 1TB drive but only 6.25 IOPS/TB with an 8TB drive).
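
To make the arithmetic concrete, here is a minimal sketch (in Python, using the rule-of-thumb numbers above) of how IO Density collapses as drive size grows while the drive’s IOPS ceiling stays fixed:

```python
# IO Density = IOPS / capacity, expressed in IOPS/TB.
# A 7.2K RPM drive delivers roughly 50 IOPS regardless of its capacity.
DRIVE_IOPS = 50

for size_tb in (1, 2, 4, 8):
    io_density = DRIVE_IOPS / size_tb
    print(f"{size_tb}TB drive: {io_density:.2f} IOPS/TB")

# Output:
# 1TB drive: 50.00 IOPS/TB
# 2TB drive: 25.00 IOPS/TB
# 4TB drive: 12.50 IOPS/TB
# 8TB drive: 6.25 IOPS/TB
```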

Applying the same logic at the application level, we can divide the amount of IO that an application demands from its storage by the amount of capacity we provision to it. The difference is that at the array level, there are many other technologies and variables at play that determine the IO throughput for a given storage volume. Elements like disk type, controller type, amount of cache, etc., affect how many IOPS a storage array can deliver. Nonetheless, the general capabilities of a known storage array configuration can be estimated with a good degree of accuracy given a set of reasonable assumptions.

Using OnCommand Insight, we were able to gather, analyze, and visualize the IO Density of all the applications running on our storage infrastructure. What we found initially was surprising. Some applications that were anecdotally labeled as high performance demonstrated very low IO Density rates, and thus were essentially wasting high-performance storage capacity. We also saw the reverse, where applications were pounding the heck out of lower-performance arrays because their actual IO requirements were underestimated at the time of deployment. Therefore, we started to use NetApp OnCommand Insight’s aggregated IO Density report to profile application performance across the entire infrastructure and establish a fact-based architecture.
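
As an illustration of the calculation behind that report, here is a minimal sketch that derives per-volume IO Density from exported performance data. The CSV layout and column names are our own assumptions for illustration, not OnCommand Insight’s actual export schema:

```python
import csv

def io_density_report(path):
    """Return {volume: IOPS/TB}, i.e. observed IO per provisioned TB.

    Assumes a CSV export with hypothetical columns:
    volume, avg_iops, capacity_tb
    """
    report = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            capacity = float(row["capacity_tb"])
            if capacity > 0:
                report[row["volume"]] = float(row["avg_iops"]) / capacity
    return report

# Example: a volume averaging 1,200 IOPS on 10TB of provisioned capacity
# has an IO Density of 120 IOPS/TB -- far below what a high-performance
# tier is built to deliver, so it is a candidate for a lower tier.
```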

Ultimately, OnCommand Insight’s IO Density report helped us to identify the range of service levels (defined as IOPS/TB) that the apps actually needed. With this information, we created a storage catalog based on three standard service levels:

  1. Value: Serves workloads requiring between 0 and 512 IOPS/TB.
  2. Performance: Serves workloads requiring between 512 and 2048 IOPS/TB.
  3. Extreme: Serves workloads requiring between 2048 and 8192 IOPS/TB.
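
To show how a measured IO Density maps onto this catalog, here is a small sketch using the tier boundaries listed above (the function name and exception label are ours):

```python
# Service-level ceilings in IOPS/TB, as defined in the catalog above.
TIERS = [
    ("Value", 512),
    ("Performance", 2048),
    ("Extreme", 8192),
]

def service_level(iops_per_tb):
    """Map a workload's measured IO Density to a catalog tier."""
    for name, ceiling in TIERS:
        if iops_per_tb <= ceiling:
            return name
    return "Exception (handled case by case)"

print(service_level(120))    # Value
print(service_level(1500))   # Performance
print(service_level(9000))   # Exception (handled case by case)
```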

Based on our own understanding of our application requirements (as depicted by our IO Density reports), these three tiers would address 99 percent of our installed base. The few workloads requiring something other than these pre-defined service levels are easily dealt with on a case-by-case basis.

A New Perspective on Application Performance

IO Density gave us a new perspective on how to profile and deploy our applications across our storage infrastructure. By recognizing that performance and storage capacity go hand in hand, we were able to create a storage catalog with tiers that reflected the actual requirements of our installed base.

Our next step was placing IO limits on volumes to prevent applications from stepping on the performance resources of other applications within the same storage array. Stay tuned for part two of this blog where I will discuss how we used clustered Data ONTAP’s adaptive QoS feature to address this issue.
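
As a preview of the arithmetic behind those limits (a sketch only, not ONTAP syntax): because each service level is expressed in IOPS/TB, a volume’s IOPS ceiling simply scales with its provisioned capacity.

```python
# IOPS ceiling for a volume = tier rate (IOPS/TB) x provisioned capacity (TB).
# A sketch of the arithmetic only; the actual limits are applied through
# clustered Data ONTAP QoS policies, which part two covers in detail.
def qos_ceiling(tier_iops_per_tb, capacity_tb):
    return tier_iops_per_tb * capacity_tb

print(qos_ceiling(512, 4))   # 2048 IOPS for a 4TB Value volume
print(qos_ceiling(2048, 4))  # 8192 IOPS for a 4TB Performance volume
```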

For more information on this topic, check out the latest edition of the Tech ONTAP Podcast.

The NetApp-on-NetApp blog series features advice from subject matter experts from NetApp IT who share their real-world experiences using NetApp’s industry-leading storage solutions to support business goals. Want to learn more about the program? Visit www.NetAppIT.com.

Eduardo Rivera

As a senior storage architect, Eduardo drives the strategy for NetApp IT’s storage infrastructure across its global enterprise. He oversees the adoption of NetApp products into IT as part of the Customer-1 program and is an expert in designing storage service levels using OnCommand Insight. He has 15 years of IT experience.