Storage is a major capital expenditure for many IT organizations. While virtualization has helped control server and network costs, data growth continues unabated, and with it come rising storage costs.


Like most companies, NetApp IT faced an interesting challenge: In the face of exploding data usage, how do we maximize storage performance while gaining better control of our storage costs? Underlying this is the goal of always providing superior service delivery to our business customers.


Looking Back

The NetApp® IT storage environment has been evolving at a rapid clip over the past decade. Before virtualization technology was a mature and viable option, NetApp IT purchased individual servers and storage for each large business project. This allowed us to create a customized architecture to meet the requirements of every application. However, this custom approach was slow to provision and cumbersome to maintain. In addition, the dedicated legacy environments could not share resources, so we were unable to reuse available capacity across applications.


Once we deployed virtualization, we were able to consolidate many of our applications and begin sharing server and storage resources. However, since we did not always know an application's performance requirements beforehand, we placed all our applications in the same high-performance storage pool regardless of the business requirements. This maximized application performance and uptime but was very expensive. Then, to control costs, we moved all our applications into a lower-cost, lower-performance storage pool. This reduced costs but didn't fully satisfy our internal business customers with high-performance requirements.


We had learned our lesson. One size does not fit all, at least when it came to balancing performance against costs.


Clustered Data ONTAP Saves the Day

The introduction of NetApp® clustered Data ONTAP® had a significant impact on our storage environment because it gave us the opportunity to match storage performance to application requirements. Within each cluster we could use the appropriate underlying technology to address the shared requirements of a pool of applications. Plus, clustered Data ONTAP could scale up to 24 nodes, which meant we could operate one large virtual pool and move our applications between storage pools without disruption.


We now provide a service catalog with three service-level options: value, performance, and extreme. Each service level has a defined set of specifications based on a trade-off between cost and performance, and each operates at a different price point with its own service guarantee. An integrated backup and archive function is provided for all IT services.
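
To make the trade-off concrete, here is a minimal sketch of how such a three-tier catalog could be represented in code. The tier names come from our catalog, but the IOPS/TB ceilings, relative costs, and field names are illustrative assumptions, not NetApp IT's actual specifications.

# Hypothetical representation of a three-tier storage service catalog.
# Tier names match the catalog described above; the IOPS/TB ceilings and
# relative cost multipliers are illustrative assumptions only.
SERVICE_CATALOG = {
    "value":       {"max_iops_per_tb": 512,  "relative_cost": 1.0},
    "performance": {"max_iops_per_tb": 2048, "relative_cost": 2.5},
    "extreme":     {"max_iops_per_tb": 8192, "relative_cost": 5.0},
}

# Backup and archive are included with every service level, so they are
# modeled as a catalog-wide attribute rather than a per-tier option.
BACKUP_AND_ARCHIVE_INCLUDED = True

Keeping the catalog to a handful of well-defined tiers is what makes both the pricing and the performance guarantees predictable.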


How do we determine which application fits into which service level? We use NetApp® OnCommand® Insight to run an I/O density report, which helps us measure and report on the delivery of services using the input/output operations per second per terabyte (IOPS/TB) metric. This is a common performance measurement for benchmarking computer storage devices, and it gives us the information we need to place each application in the service level, and the storage cluster, that matches its performance requirements.
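
As a rough illustration of that placement logic, the sketch below computes I/O density from a workload's measured IOPS and provisioned capacity and maps it to a tier. The ceilings mirror the hypothetical catalog above, and in practice the measured values would come from an OnCommand Insight I/O density report; none of the numbers are NetApp IT's real thresholds.

# Illustrative IOPS/TB ceilings per service level, ordered from lowest to
# highest cost. These are assumptions, not actual NetApp IT specifications.
TIER_CEILINGS = [("value", 512), ("performance", 2048), ("extreme", 8192)]

def io_density(measured_iops: float, provisioned_tb: float) -> float:
    """I/O density: input/output operations per second per provisioned terabyte."""
    return measured_iops / provisioned_tb

def place_workload(measured_iops: float, provisioned_tb: float) -> str:
    """Return the lowest-cost tier whose IOPS/TB ceiling covers the workload."""
    density = io_density(measured_iops, provisioned_tb)
    for tier, ceiling in TIER_CEILINGS:
        if density <= ceiling:
            return tier
    return "extreme"  # denser workloads still land in the highest tier

# Example: 15,000 IOPS against 10 TB provisioned = 1,500 IOPS/TB -> "performance"
print(place_workload(15_000, 10))

Because the measurements are ongoing rather than one-time, an application whose I/O profile changes can be moved, without disruption, to a more appropriate tier.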


Benefits for IT

Some of the biggest benefits of storage service levels for NetApp IT are:

  • We can better match storage to application requirements and ensure each application gets the right level of performance, neither more nor less than it needs.
  • We can guarantee application performance within the service level without linking it to an actual technology platform. By abstracting the technology from the service level, we can better isolate issues and move applications to other platforms as needed.
  • We can more easily transform our service delivery over time. Our discussions with customers are about guaranteed performance levels, not the technology being used to achieve it.
  • We can benchmark our operations against those of external service providers to see how well we are doing and make adjustments to stay competitive.
  • We have a more predictable cost model for our operation. OnCommand Insight reports give us real-time performance information so that we can adjust within our clusters and eliminate orphan storage.
  • The IOPS/TB performance metric has replaced uptime as our primary measure and gives us a more accurate picture of our storage environment.

Looking into the Crystal Ball

Will storage continue to evolve? Yes. But for now we realize that the service catalog model helps us deliver the performance our business customers demand while controlling costs. By consolidating our services into three levels, we deliver services more efficiently while minimizing risk. Our storage service platform design is flexible enough to accommodate whatever the future brings. Learn more about it in the infographic below.


The NetApp-on-NetApp blog series features advice from NetApp IT subject matter experts who share their real-world experiences using NetApp's industry-leading storage solutions to support business goals. Want to learn more about the program? Visit www.NetAppIT.com.