The NetApp IT storage team is constantly challenged to balance growing data demands against the need to use storage capacity as efficiently as possible. That balancing act has demanded some out-of-the-box thinking about storage, some of which we have described in past blogs.

With each ONTAP release, we fold new ideas into the ONTAP storage efficiency standard that guides our team’s activities. The latest version of the standard takes advantage of recent ONTAP improvements to deduplication, compression, and compaction. Combined with the efficiencies we already gain from thin provisioning and data protection, these features have brought us to a 99.96% storage efficiency rate, an impressive milestone for any IT shop. Even better, we have had to add very little new capacity to absorb organic growth, resulting in significant savings.

Below, we describe how our storage efficiency standard evolved to reach this milestone.

Deduplication, Compression, and Compaction

The latest version of our storage efficiency standard incorporates deduplication, compression, and compaction into our FAS and All Flash FAS (AFF) environments in the following ways (a simplified simulation follows the list):

  • Inline zero-block deduplication eliminates the space wasted by all-zero blocks;
  • Inline compression compresses 8K logical blocks into 4K physical blocks;
  • Inline deduplication deduplicates incoming blocks against blocks already on disk;
  • Adaptive data compaction reduces the waste created by blocks smaller than 4K by combining multiple sub-4K blocks into a single 4K physical block; and
  • Post-process deduplication and compression are also leveraged to gain more savings over time, resulting in more data being stored in less space.
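
To make the stacking concrete, here is a minimal Python sketch of the four inline steps under simplified assumptions. It is our illustration, not ONTAP code: the 4 KB block size matches ONTAP’s physical block geometry, but the toy workload, the zlib compressor, and the packing logic are stand-ins for what ONTAP actually does.

```python
import hashlib
import zlib

BLOCK = 4096  # 4 KB physical block size

def inline_efficiency(blocks):
    """Model inline zero-block dedupe, dedupe, compression, and compaction.

    `blocks` is the logical write stream as a list of 4 KB byte strings.
    Returns (logical_bytes, physical_bytes) for the modeled write.
    """
    seen = set()    # fingerprints of blocks already on "disk"
    leftover = 0    # free bytes in a partially filled physical block
    physical = 0

    for blk in blocks:
        if blk == b"\x00" * BLOCK:
            continue                    # zero-block dedupe: store nothing
        fp = hashlib.sha256(blk).digest()
        if fp in seen:
            continue                    # inline dedupe: share existing block
        seen.add(fp)
        size = min(len(zlib.compress(blk)), BLOCK)   # inline compression
        if size <= leftover:
            leftover -= size            # compaction: pack into partial block
        else:
            physical += BLOCK           # allocate a fresh 4 KB block
            leftover = BLOCK - size
    return len(blocks) * BLOCK, physical

# Toy workload: four all-zero blocks plus four identical, compressible blocks.
stream = [b"\x00" * BLOCK] * 4 + [b"abc" * 100 + b"\x00" * (BLOCK - 300)] * 4
logical, physical = inline_efficiency(stream)
print(f"{logical} B logical -> {physical} B physical "
      f"({1 - physical / logical:.0%} saved)")
```

On this toy stream the zeros and duplicates cost nothing, and the one surviving block compresses into a fraction of 4 KB, so 32 KB of logical writes lands in a single 4 KB physical block.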

Each of these actions is additive, building upon the others to increase efficiency. When applied to the NetApp IT storage environment, our process looks like this:

  • AFF deployments receive both inline and post-process deduplication and adaptive compression, as well as inline compaction, which is the default for AFF systems;
  • Hybrid FAS deployments receive post-process deduplication plus inline and post-process adaptive compression and data compaction; and
  • Our online archive data sets use post-process deduplication, inline and post-process secondary compression, and inline compaction to attain maximum space savings.
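
One way to keep a standard like this enforceable is to encode it as data that automation can consume. The sketch below is a hypothetical illustration of that idea, not NetApp IT’s actual tooling; the tier names and the `EfficiencyPolicy` structure are ours, and real enforcement would go through the ONTAP `volume efficiency` commands or ONTAP’s management APIs, which we omit here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EfficiencyPolicy:
    """One tier of the storage efficiency standard (illustrative only)."""
    inline_dedupe: bool
    postprocess_dedupe: bool
    inline_compression: bool
    postprocess_compression: bool
    inline_compaction: bool
    compression_engine: str  # "adaptive" (8K groups) or "secondary" (32K groups)

# Hypothetical encoding of the three tiers described in the list above.
STANDARD = {
    "aff":     EfficiencyPolicy(True,  True, True, True, True, "adaptive"),
    "hybrid":  EfficiencyPolicy(False, True, True, True, True, "adaptive"),
    "archive": EfficiencyPolicy(False, True, True, True, True, "secondary"),
}

def policy_for(tier: str) -> EfficiencyPolicy:
    """Return the efficiency features a volume in `tier` should receive."""
    return STANDARD[tier]

print(policy_for("hybrid"))  # drives which features automation enables
```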

Our ONTAP clusters have maintained roughly the same average used capacity (in fact, it is trending down) while the number of volumes has grown from just under 18,000 to more than 21,000, an increase of roughly 17%. As more data is written, and as we move volumes using ONTAP’s non-disruptive volume-move feature, these savings should continue to improve.

Thin Provisioning and Backup

Our storage efficiency standard also calls for thin provisioning of all of our volumes and LUNs, which lets us present the full amount of storage at creation time while allocating physical space only as data is actually written. This reduces the amount of consumed capacity. NetApp Snapshot copies and FlexClone cloning for point-in-time backups provide similarly efficient storage utilization by storing only the blocks that have been added or changed.
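
Both ideas are easy to model. The Python sketch below is our simplified illustration, not how WAFL is implemented: a thin volume promises its full size up front but consumes blocks only when they are written, and a snapshot shares blocks with the active file system so that only added or changed blocks cost new space.

```python
class ThinVolume:
    """Toy model: thin provisioning plus block-sharing snapshots."""

    def __init__(self, provisioned_blocks):
        self.provisioned = provisioned_blocks  # full size promised at creation
        self.active = {}                       # block number -> block contents
        self.snapshots = []                    # frozen views of past states

    def write(self, blkno, data):
        self.active[blkno] = data              # space is consumed only on write

    def snapshot(self):
        # Copy the block *map*, not the blocks: unchanged data stays shared.
        self.snapshots.append(dict(self.active))

    def consumed_blocks(self):
        # Physical cost: unique (block number, contents) versions across the
        # active file system and all snapshots; shared blocks count once.
        unique = set()
        for image in (self.active, *self.snapshots):
            unique.update(image.items())
        return len(unique)

vol = ThinVolume(provisioned_blocks=1_000_000)  # looks fully provisioned
vol.write(0, "a")
vol.write(1, "b")
vol.snapshot()                                  # point-in-time backup
vol.write(1, "b2")                              # only this block costs space
print(vol.provisioned, vol.consumed_blocks())   # -> 1000000 3
```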

If we add the efficiencies delivered by thin provisioning, Snapshot copies, and FlexClone to those of deduplication, compression, and compaction, we have saved approximately 141 PB while consuming only 6.96 PB, a roughly 20:1 ratio. In other words, for each 1 TB of data we store, we save 20 TB through efficiency.
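
As a quick sanity check on that arithmetic (the numbers come straight from the paragraph above; the 99.96% figure that System Manager reports below is computed by its own formula, which we do not try to reproduce here):

```python
saved_pb = 141.0     # capacity saved by all efficiency features combined
consumed_pb = 6.96   # physical capacity actually consumed

ratio = saved_pb / consumed_pb       # TB saved for every TB stored
logical_pb = saved_pb + consumed_pb  # what we would need with no efficiencies
print(f"{ratio:.1f}:1 saved-to-stored; {logical_pb:.1f} PB of logical data "
      f"on {consumed_pb} PB of physical storage")
# -> 20.3:1 saved-to-stored; 148.0 PB of logical data on 6.96 PB of physical storage
```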

OnCommand® System Manager shows a 99.96% efficiency rate in our ONTAP 9.1 and later clusters. Our goal is to get as close to 100% as possible, although a 100% efficiency rate is unattainable as long as you store any data at all.

Greater Performance

We’ve also realized other benefits. We are maximizing the capacity of our existing infrastructure as much as we can, and because ONTAP’s performance is so consistent, we can allocate less capacity to overprovisioning. We expect ONTAP 9.2’s cross-volume (aggregate-level) deduplication feature to drive even more savings across our AFF installed base.

As storage architects, we are always concerned about balancing efficiency and performance. It’s part of our job. We’ll be sharing more of our storage experiences in future blogs, so check back here soon for more tips from the NetApp IT storage team.

Read other recent blogs from the storage team:

  • Integrating NetApp Flash into our data centers, resulting in capacity increases with significant space and power reductions;
  • Automating our storage capacity management to streamline thin provisioning; and
  • Automating the configuration of our ONTAP clusters.

This post was published as part of the NetApp-on-NetApp blog series, which features advice from NetApp IT subject matter experts who share their real-world experiences using NetApp’s industry-leading storage solutions to improve IT service delivery.

Ezra Tingler

Ezra Tingler is a Senior Storage Engineer in NetApp’s corporate IT team. In this role, he is a member of Customer-1, which acts as the first adopter of NetApp’s products and services. Since 2011, he has led both the storage ecosystem services and the storage capacity and performance service lines. Ezra is responsible for the automation and management of IT storage capacity, efficiency, and performance, including architecture, procurement, deployment, standards development, and maintenance. He has more than 22 years of IT experience.

Eduardo Rivera

As a senior storage architect, Eduardo drives the strategy for NetApp IT’s storage infrastructure across its global enterprise. He oversees the adoption of NetApp products into IT as part of the Customer-1 program and is an expert in designing storage service levels using OnCommand Insight. He has 15 years of IT experience.