This blog is part 1 of a four-part series that explains how Non-Volatile Memory Express (NVMe), NVMe over Fabrics (NVMe-oF), and new storage-class memory (SCM) are changing the game for data centers. For a deeper dive, download the white paper “New Frontiers in Solid-State Storage”.
Any major new technology development is bound to generate buzz. Sometimes it turns out to be a fad, and everyone wonders what all the fuss was about. Other times, you can look back and realize it was the beginning of a trend that would be with us for years. Such a trend is happening now, as a new wave of solid-state technologies starts to reach the market, and NetApp is making sure that our customers can capitalize on them. While I hesitate to predict the future, these look like long-term trends. Why? I believe that adopting these technologies the right way will redefine how we design and build next-generation data center infrastructure: infrastructure that delivers consistently low latency for a wide range of workloads, at the lowest cost, using a robust ecosystem of best-in-class suppliers.
I’m talking about three innovations:
- NVMe is a protocol that provides fast access to direct-attached flash storage. It is an evolutionary step toward exploiting the parallelism inherent in solid-state drives (SSDs) and other solid-state technologies.
- NVMe-oF extends the advantages of NVMe to a fabric connecting hosts with networked storage. With increasing adoption of low-latency, high-bandwidth network fabrics (Ethernet, InfiniBand, and Fibre Channel), it is possible to build infrastructure that carries the performance advantages of NVMe over standard fabrics to low-latency nonvolatile persistent storage.
- SCM (also known as persistent memory, or PMEM) such as Intel 3D XPoint and Samsung Z-NAND media, connected over NVMe technology or even on the memory bus, enables device latencies about 10 times lower than those provided by today’s NAND-based SSDs.
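To make the parallelism point concrete, here is a conceptual sketch of the NVMe queueing model: each CPU core gets its own submission/completion queue pair, so cores submit I/O without contending on a shared queue. This is a hypothetical Python simulation for illustration, not a real driver:

```python
from dataclasses import dataclass, field
from collections import deque

# Conceptual sketch of the NVMe queueing model (hypothetical simulation,
# not a real driver): each CPU core owns a submission/completion queue
# pair, so cores never contend on a single shared queue, which is the
# source of NVMe's parallelism on multi-core hosts.

@dataclass
class QueuePair:
    core: int
    submission: deque = field(default_factory=deque)
    completion: deque = field(default_factory=deque)

    def submit(self, lba: int) -> None:
        """Enqueue a read command on this core's private submission queue."""
        self.submission.append(("read", lba))

    def process(self) -> None:
        # A real controller services queues in hardware; here we simply
        # move commands across to model completion.
        while self.submission:
            self.completion.append(self.submission.popleft())

# One queue pair per core; commands from different cores never share a queue.
qps = [QueuePair(core=c) for c in range(4)]
for i in range(16):
    qps[i % 4].submit(lba=i)   # each core submits only to its own queue
for qp in qps:
    qp.process()
print([len(qp.completion) for qp in qps])  # -> [4, 4, 4, 4]
```

In real hardware, the NVMe specification allows many deep queue pairs per controller, which is what lets a multi-core host keep a highly parallel SSD busy without lock contention.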
On their own, these technologies mean little without optimized ecosystem software: the operating system or hypervisor, drivers, and protocols. The NVMe software ecosystem is being designed to take advantage of low-latency transports and media. Combining ultrafast media with low-latency access mechanisms, enabled by remote direct memory access (RDMA), makes possible new disaggregated data center architectures that can be deployed on standard fabrics familiar to customers. A class of emerging applications can exploit this ultralow-latency infrastructure, enabling customers to gain significant insights by shifting from postprocess to real-time handling of data.
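As a rough illustration of why both the media and the transport matter, the back-of-the-envelope sketch below adds an assumed media latency to an assumed fabric overhead for a remote read. All numbers are illustrative ballpark figures chosen for the sketch, not measurements or vendor specifications:

```python
# Rough latency budget (microseconds) for a remote read.
# All figures are illustrative assumptions, not measured results.

MEDIA_LATENCY_US = {
    "nand_ssd": 100.0,  # assumed NAND flash SSD read latency
    "scm": 10.0,        # storage-class memory, ~10x lower (per the text)
}

FABRIC_OVERHEAD_US = {
    "iscsi_tcp": 100.0,    # assumed overhead of a SCSI transport over TCP/IP
    "nvme_of_rdma": 10.0,  # assumed overhead of NVMe-oF over an RDMA fabric
}

def end_to_end_latency(media: str, fabric: str) -> float:
    """Sum of media access time and fabric round-trip overhead."""
    return MEDIA_LATENCY_US[media] + FABRIC_OVERHEAD_US[fabric]

for media in MEDIA_LATENCY_US:
    for fabric in FABRIC_OVERHEAD_US:
        total = end_to_end_latency(media, fabric)
        print(f"{media:>8} over {fabric:<12}: {total:6.1f} us")
```

Under these assumed numbers, fast SCM behind a slow transport is still slow end to end; only the combination of low-latency media and a low-latency fabric delivers the full benefit.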
Where Will NVMe and SCM Be Implemented?
From a storage system perspective, NVMe-oF will be deployed in two contexts: front end (from server to storage system) and back end (from storage system to NVMe device). Along with the current Fibre Channel front-end and SAS/SATA back-end choices, many combinations are possible. SCM media will initially be used as a read/write cache to provide significantly lower latency than is available with today’s NAND flash SSDs. As the price of SCM media comes down and it becomes a viable option for more applications, you’ll be able to create a pool of SCM storage. Such storage will deliver consistent low latency that is an order of magnitude faster than today’s shared storage. NetApp will roll out some of these new technologies over time as they mature, while protecting your investment in existing storage technologies.
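The front-end/back-end pairings mentioned above can be enumerated in a few lines. The lists below are illustrative examples of fabric and device choices, not an official NetApp support matrix:

```python
from itertools import product

# Illustrative front-end fabrics and back-end device attachments
# (example values only, not an official support matrix).
front_ends = ["FC", "NVMe-oF/FC", "NVMe-oF/RDMA"]
back_ends = ["SAS", "SATA", "NVMe"]

combinations = list(product(front_ends, back_ends))
for fe, be in combinations:
    print(f"host --{fe}--> array --{be}--> device")
print(len(combinations), "possible pairings")  # -> 9 possible pairings
```

Even with just three options on each side, nine pairings result, which is why storage vendors can phase in NVMe on the back end, the front end, or both without forcing a wholesale replacement.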
What Results Can You Expect?
With these new NVMe, NVMe-oF, and SCM technologies, you can expect significant, tangible cost savings in your data center infrastructure, because they drive down CPU overhead in the I/O stack and allow external storage arrays to operate much more efficiently. This change will allow for better utilization of both server compute and storage resources for a variety of workloads, including business intelligence, analytics, and data warehousing. Fewer CPU cores handling the same set of application workloads will lead to a reduction in application licensing costs and higher application density.
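A quick back-of-the-envelope calculation shows how a leaner I/O stack translates into fewer cores and lower per-core licensing costs. The IOPS-per-core and cost figures below are hypothetical assumptions for illustration only, not benchmarks or published prices:

```python
# Back-of-the-envelope sizing: how many cores does the I/O stack consume?
# IOPS-per-core and licensing figures are hypothetical assumptions.

WORKLOAD_IOPS = 1_000_000          # assumed sustained application workload

IOPS_PER_CORE = {
    "scsi_stack": 100_000,         # assumed legacy SCSI/block-layer rate
    "nvme_stack": 400_000,         # assumed leaner NVMe I/O path rate
}

LICENSE_COST_PER_CORE = 2_000      # hypothetical per-core license cost ($)

def cores_needed(iops: int, per_core: int) -> int:
    """Ceiling division: whole cores required to sustain the workload."""
    return -(-iops // per_core)

for stack, rate in IOPS_PER_CORE.items():
    cores = cores_needed(WORKLOAD_IOPS, rate)
    print(f"{stack}: {cores} cores, ${cores * LICENSE_COST_PER_CORE:,} licensing")
```

Under these assumed rates, the same workload needs 10 cores on the legacy path but only 3 on the NVMe path, so the per-core licensing bill shrinks accordingly and the freed cores can host more application instances.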
Moreover, emerging applications with more stringent service-level objectives will be feasible in enterprise data centers. The next round of real-time analytics, machine learning, and artificial intelligence will demand real-time response from a highly parallel, ultralow-latency data management infrastructure.
It’s an exciting time in storage. At NetApp, we’re playing a lead role in the development of these new technologies. Most importantly, we’re making sure that our customers can capitalize on these technologies without having to rip and replace their infrastructures or sacrifice the NetApp® data management features on which they rely daily. In the future, you’ll be able to use front-end NVMe-oF in NetApp all-flash array products to gain higher throughput at lower latencies and use less CPU at the server and in the storage system. You’ll also be able to use additional NVMe form factors and media types in our All Flash FAS (AFF) family to take advantage of the highest-performance SSD devices.
Keep an eye on this space, as we explore the implications of these new innovations in upcoming blogs between now and August 7.
Visit our booth at Flash Memory Summit, August 8–10, 2017, and attend our keynote, “Creating the Fabric of a New Generation of Enterprise Apps,” on Thursday, August 10, 11:30 a.m. to noon.
Explore the implications of these new innovations in the other three blog posts in this series: