In a previous blog, I examined the forces driving IT towards a hybrid cloud computing environment and how this shift is prompting storage vendors to adapt their offerings in support of a new ecosystem.

 

However, to deliver the full potential of the cloud, storage vendors will need to provide a way to easily manage and move data between multiple clouds (or, to use a term that some prefer, the Intercloud), just as the server virtualization vendors have done.

 

Imagine a cloud environment in which all of the data management capabilities are consistent and connected into a coherent, integrated, and compatible system: in essence, a single virtual cloud with a unified data structure. That is precisely the vision of the data fabric.

 

[Figure: The data fabric]

 

With a unified set of data services spanning multiple clouds, it is no longer necessary to keep applications locked in siloed cloud environments; rather, applications become portable, moving seamlessly (along with their data) between on-premises and off-premises clouds as requirements change.

 

The Three Data Fabric Enablers

 

There are three basic requirements of a data fabric: a consistent data format, software-defined data management, and fast, efficient data transport. Together, these characteristics deliver flexibility across disparate cloud resources. Let’s examine each one more closely:

 

  • Consistent Data Format

Enterprise data storage encompasses a myriad of interfaces, protocols, and formats. FC, iSCSI, and FCoE are popular block storage protocols, while NFS and CIFS (now referred to as SMB) are the dominant file storage protocols. To further complicate things, cloud providers often use object storage APIs based on RESTful interfaces, such as Amazon Web Services’ popular S3 API and the Swift object protocol developed by the OpenStack community.
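
To make the contrast concrete, here is a minimal Python sketch showing how the same kind of data is reached through a file protocol versus an object API: an NFS or SMB share appears as an ordinary mounted path, while S3 data is addressed by bucket and key over HTTP. The mount point, bucket, and key names are hypothetical.

```python
import boto3  # AWS SDK for Python

# File protocol (NFS/SMB): the share is mounted into the local file
# system, so the data is read with ordinary file I/O.
# "/mnt/nfs_share/reports/q1.csv" is a hypothetical mount point.
with open("/mnt/nfs_share/reports/q1.csv", "rb") as f:
    file_bytes = f.read()

# Object protocol (S3 REST API): the data is addressed by bucket and
# key and fetched over HTTP. Bucket and key names are hypothetical.
s3 = boto3.client("s3")
resp = s3.get_object(Bucket="example-bucket", Key="reports/q1.csv")
object_bytes = resp["Body"].read()
```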

 

[Figure: Data storage protocols]

 

To create a data fabric, all of these methods of data storage communication must be understood and translated when necessary. For instance, it should be easy to copy data residing in AWS S3 buckets to on-premises block-based storage systems, or vice versa. Likewise, data residing in one cloud should be able to migrate readily to another cloud provider, regardless of format.
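
As a rough illustration of the cloud-to-cloud case, the Python sketch below copies an object from AWS S3 to a second, S3-compatible object store (many Swift deployments expose such a gateway). The bucket names and endpoint URL are hypothetical, and a real data fabric would also handle the harder block- and file-format translations that a short sketch cannot show.

```python
import boto3

# Source: an AWS S3 bucket (hypothetical name).
src = boto3.client("s3")

# Destination: any S3-compatible object store; many Swift deployments
# expose such a gateway. The endpoint URL is hypothetical.
dst = boto3.client("s3", endpoint_url="https://objects.example-cloud.com")

# Stream the object out of one cloud and into the other.
obj = src.get_object(Bucket="aws-source-bucket", Key="data/volume1.img")
dst.put_object(Bucket="target-bucket", Key="data/volume1.img",
               Body=obj["Body"].read())
```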

 

  • Software-Defined Data Management

Within a data fabric, there needs to be a single control plane that executes and monitors the flow of data throughout the fabric, regardless of endpoint. This management interface must contain policy engines that enforce service-level objectives for data availability and data protection, alerting administrators when a policy falls outside its limits. In addition, the data fabric manager should apply storage efficiency policies such as deduplication, compression, and cloning where desirable; for example, extra efficiencies might be enabled only during backup or archival operations.
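
As a toy illustration of what such a policy engine might look like, the following Python sketch (all names and thresholds are hypothetical) defines a protection policy with a recovery point objective and efficiency toggles, and raises an alert when replication lag falls outside the policy's limit:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ProtectionPolicy:
    """A hypothetical data-protection policy for a fabric endpoint."""
    name: str
    rpo: timedelta        # recovery point objective: max tolerated data loss
    dedupe: bool = False  # storage-efficiency toggles, e.g. for backup jobs
    compress: bool = False

def check_rpo(policy: ProtectionPolicy, last_replication: datetime) -> None:
    """Alert administrators when replication lag exceeds the policy's RPO."""
    lag = datetime.utcnow() - last_replication
    if lag > policy.rpo:
        print(f"ALERT [{policy.name}]: lag {lag} exceeds RPO {policy.rpo}")

# Extra efficiencies enabled only for this backup policy.
backup = ProtectionPolicy("nightly-backup", rpo=timedelta(hours=24),
                          dedupe=True, compress=True)
check_rpo(backup, last_replication=datetime.utcnow() - timedelta(hours=30))
```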

 

[Figure: Cloud storage manager]

 

  • Fast and Efficient Data Transport

A potential barrier to hybrid cloud adoption is the relatively slow data transfer rates offered by public cloud providers compared to on-premises storage arrays. According to recent performance data, the fastest cloud storage providers (using SSD devices) tend to max out around 30,000 IOPS (Input/Output Operations per Second), while high performance on-premises all flash arrays can easily sustain hundreds of thousands of IOPS.
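
To put those figures in perspective, a back-of-the-envelope conversion (assuming, hypothetically, a 4 KiB size for every I/O) shows how the IOPS gap translates into raw throughput:

```python
# Rough throughput implied by the IOPS figures above, assuming a
# (hypothetical) 4 KiB I/O size for every operation.
io_size = 4 * 1024  # bytes per I/O

for label, iops in [("cloud SSD storage", 30_000),
                    ("on-premises all-flash array", 300_000)]:
    mib_per_s = iops * io_size / (1024 * 1024)
    print(f"{label}: {iops:,} IOPS ~= {mib_per_s:,.0f} MiB/s")

# cloud SSD storage: 30,000 IOPS ~= 117 MiB/s
# on-premises all-flash array: 300,000 IOPS ~= 1,172 MiB/s
```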

 

One way to boost data transport speeds within the data fabric is to transfer less data by applying storage efficiencies. Snapshot-based backup and replication, deduplication, and data compression all reduce the actual number of bytes transferred.
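
As a minimal illustration of one such efficiency, the sketch below compresses a hypothetical, highly repetitive payload before it would cross the fabric. Snapshot-based replication and deduplication work differently, sending only changed or unique blocks, but the goal is the same: fewer bytes on the wire.

```python
import zlib

# A hypothetical, highly repetitive payload (e.g., log data). Real-world
# reduction depends entirely on how compressible the data is.
payload = b"2015-01-01,host-a,status=OK\n" * 50_000

compressed = zlib.compress(payload, level=6)
print(f"{len(payload):,} bytes -> {len(compressed):,} bytes "
      f"({len(compressed) / len(payload):.1%} of original)")
```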

 

Another benefit of the data fabric is its ability to match application performance needs with available I/O resources throughout the application lifecycle. An application may begin life as a pilot project in a relatively slow cloud-hosted environment. Moving into production, it may require faster and more secure storage, necessitating a private cloud environment, until it is ready for sunsetting, when it can return to a hosted cloud site. In each case, the application data can be transported seamlessly between fabric endpoints in a fast and efficient manner.

 

Vendor Keys to Success in the Hybrid Cloud Ecosystem

 

Creating a data fabric requires several key capabilities. First, a vendor must have sufficient expertise in cloud-based storage and cloud storage management. Next, it must be able to provide a unified platform that spans existing on-premises infrastructure and the cloud. Finally, the vendor must develop an ecosystem encompassing industry-leading hyperscale cloud services, cloud service providers, and orchestration partners, enabling its customers to take a holistic approach to their hybrid cloud deployments.

 

Additional Resources:

 

In this demo, a unified cloud storage manager is used to view hybrid cloud resources and replicate a data volume from a secure, co-located NetApp Private Storage configuration to a Cloud ONTAP instance running in the AWS cloud.

 

Larry Freeman