This post is part of the Architectural Comparison Series, where we look at what other storage vendors are offering around key infrastructure components. Catch up on another post in this series, “VVols vs. traditional storage.”

 

Without a doubt, OpenStack has been gaining popularity and deployments over the past year, particularly in the enterprise. More and more companies are looking at deploying private clouds and moving to an Infrastructure as a Service (IaaS) approach to IT delivery.

 

Cinder is the project that addresses block storage in OpenStack, and its mission is well defined:

 

“To implement services and libraries to provide on demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.”

 

As of the Kilo release this spring, there are 50 listed Cinder drivers. That means there are 50 unique block storage devices that can be used in an OpenStack deployment. But not all 50 are equal: some lack feature parity, some are slower or harder to configure, and some don’t lend themselves to orchestrated workflows (e.g., installation) as well as others. A Cinder driver in and of itself is not a competitive differentiator; it merely means a particular storage device can be presented as storage to a Nova instance. If your storage has been on the market for years but your first Cinder driver arrived in Juno or Kilo, you’re a bit late to the game.

 

Here at SolidFire, our all-flash architecture and from-the-ground-up approach to full automation through a complete set of APIs have made integration with OpenStack seamless and our Cinder driver feature rich. We released our first Cinder driver with OpenStack Folsom back in 2012, and have spent the intervening three years improving our integrations. A partner recently commented that “SolidFire just works with OpenStack, out of the box.”

 

Here are some of the many ways we’ve ensured that SolidFire “just works” in an OpenStack environment:

 

Configuration check

Configuring the SolidFire Cinder driver takes four lines of configuration. Period. Simply enter the volume driver, SAN IP, SAN login, and SAN password and — voilà — you’re configured. From personal experience, configuring SolidFire is quick and painless, and it requires no additional add-ons or special integration libraries to make a cluster work with OpenStack.
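
In practice, those four lines are the backend definition in cinder.conf. Here’s a minimal sketch (the IP address and credentials below are placeholders):

    volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
    san_ip = 10.0.0.100
    san_login = cluster-admin
    san_password = example-password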

 

Because OpenStack is an orchestrated environment, automated installation is more than just a nice-to-have. When automation is critical, this minimal configuration makes automated installation straightforward, getting you up and running on SolidFire much faster and more easily than systems that require integration libraries or special configuration sauce. With other vendors, when libraries and add-ons come into play, automation becomes more complex, OpenStack versions and libraries must be kept compatible, and more manual intervention is needed.
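
As one hypothetical illustration, a tool like crudini (or any configuration-management system) can lay down the same four options and restart the volume service; the values below are placeholders:

    crudini --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.solidfire.SolidFireDriver
    crudini --set /etc/cinder/cinder.conf DEFAULT san_ip 10.0.0.100
    crudini --set /etc/cinder/cinder.conf DEFAULT san_login cluster-admin
    crudini --set /etc/cinder/cinder.conf DEFAULT san_password example-password
    service cinder-volume restart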

 

Configuring SolidFire for OpenStack takes four lines of configuration.

 

Scalability check
With many OpenStack vendors enabling simple, incremental, and elastic scaling of compute resources, similar growth in your storage is not only complementary, but critical. Start small with particular workloads, such as test/dev, and grow incrementally over time as you add more workloads and move toward deploying in production.

 

SolidFire’s scale-out architecture and RAID-less data protection easily facilitate a granular growth strategy. Because nodes can be added one at a time, non-disruptively, Cinder services don’t have to be stopped and restarted when you need to add capacity or performance. Generational compatibility and support for mixed nodes mean you can deploy SolidFire, grow over time as business needs dictate, move workloads into production when ready by simply adjusting SolidFire’s Quality of Service (QoS), and say goodbye to forklift upgrades forever.
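
For a flavor of what adding a node looks like under the hood, here’s a hedged sketch against the SolidFire Element API’s JSON-RPC interface (the endpoint, API version, credentials, and node ID are all illustrative):

    # List nodes that have joined the network but not yet the cluster,
    # then add one; Cinder keeps running throughout
    curl -k -u admin:example-password https://10.0.0.100/json-rpc/7.0 \
      -d '{"method": "ListPendingNodes", "params": {}, "id": 1}'
    curl -k -u admin:example-password https://10.0.0.100/json-rpc/7.0 \
      -d '{"method": "AddNodes", "params": {"pendingNodes": [42]}, "id": 2}'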

 

Multi-workload/environment check
Whether you’re running test/dev, production applications, or Database as a Service (DBaaS) workloads, our industry-unique granular minimum, maximum, and burst IOPS QoS settings enable you to run them all simultaneously and graduate any of them to production with a simple software change.

 

Unlike many other vendors’ QoS offerings, SolidFire’s QoS has long been available through our driver via Volume Types and Extra Specs, and with the release of Kilo, QoS settings can now be managed from Horizon. Many QoS “features” from other vendors are architected through pools of storage, carved up physically or virtually, to provide predetermined performance tiers. With SolidFire, you can quickly retype a Cinder volume to adjust its IOPS and, when ready, deploy that volume in production by increasing its QoS. It’s all transparent to end users, because data migration – endemic to so many other competitive QoS implementations – is obviated.
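
In practice, that flow can look like the following sketch (the type and volume names are hypothetical; the qos: keys are the extra specs the SolidFire driver consumes):

    # Create a volume type that carries SolidFire QoS settings
    cinder type-create solidfire-prod
    cinder type-key solidfire-prod set qos:minIOPS=2000 qos:maxIOPS=10000 qos:burstIOPS=15000

    # Graduate an existing volume to production performance; on SolidFire
    # this is a software change with no data migration
    cinder retype my-volume solidfire-prod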

 

Critical OpenStack instances check
Instances in OpenStack are numerous and disposable: if one becomes problematic, you simply replace it. This is the central paradigm of cloud-based environments; resiliency resides in the application, not the infrastructure. But sometimes, for a variety of reasons, an instance becomes critical and can’t simply be deleted. You need to give it care and attention, and ensure it lives on.

 

Live Migration has been available in OpenStack for a while, predominantly with shared-storage vendors, but few block storage devices have added support. With SolidFire, the iSCSI target lives with each volume for the life of that volume, so you can transparently live-migrate an OpenStack instance from one OpenStack Compute server to another. If you have legacy applications tied to their instances, or you need to perform maintenance on underlying hypervisors and move instances without interruption, you can give them the care and feeding they need as you migrate and evolve toward more cloud-designed applications.
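
For illustration, a minimal sketch with the Nova CLI (the instance and host names are hypothetical):

    # Move a running, volume-backed instance to another compute host; the
    # SolidFire iSCSI target stays with the volume, so storage connectivity
    # follows the instance transparently
    nova live-migration my-instance compute-02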

 

Does your block storage support Live Migration – moving an OpenStack instance from one OpenStack compute server to another?

 

There are a lot of additional questions you should be asking when looking for OpenStack storage:

  • Can it handle multiple, concurrent API calls? Some storage systems fall over under even light load if their API isn’t architected to handle concurrent calls. (A quick way to test this is sketched after this list.)
  • Open source, roll-your-own storage solutions may be compelling, but what happens if something goes wrong? How are they supported?
  • Cloning Glance images makes booting from image on the storage system much faster, and with inline deduplication those image clones consume almost no additional capacity. But how does your storage choice handle data reduction? Does it offer any at all?
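
Here’s a rough, hypothetical way to exercise the first and third questions from the Cinder CLI (sizes, names, and the image UUID are placeholders):

    # Fire off 20 concurrent volume-create API calls and watch how the
    # backend's API holds up
    for i in $(seq 1 20); do
      cinder create 1 --name api-test-$i &
    done
    wait

    # Create a bootable volume from a Glance image; with fast cloning and
    # inline dedup, repeated creates from the same image stay quick and
    # space-efficient
    cinder create 10 --image-id <glance-image-uuid> --name boot-vol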

Identifying the key use cases for your OpenStack deployment, and then choosing between object storage and block storage, is a critical first step. Once you’ve decided on block, repurposing and retrofitting your legacy storage for an initial OpenStack deployment may seem like an easy fix, but that storage may prove so unwieldy to use well in OpenStack (or lack key capabilities, like Live Migration) that the success of your project suffers.

 

Choosing a vendor is the hardest part; make sure you choose wisely!


Kelly Boeckman