I recently had the pleasure of attending Cisco Live in San Diego. This came right on the heels of the OpenStack Summit in Vancouver. Both events were great opportunities for the team at SolidFire to interact with customers and partners, many of whom are investigating solutions built around OpenStack.
For me, the one thing that linked the two events was the release of the Agile Infrastructure (AI) solution based on Cisco UCS, Red Hat OpenStack, and SolidFire storage.
At its core, the SolidFire AI is a series of reference architecture documents that provide a blueprint for our customers and ecosystem partners to rapidly deploy scalable infrastructure solutions.
Their genesis lies in the daily work of our Technical Solutions team, who spend the majority of their time designing for the infrastructure and application challenges our customers face. For the solution we released at Cisco Live, we replicated a workload environment in which several disparate technologies needed high-performance, scalable storage, along with the ability to provision several distinct workloads and run them concurrently without performance degradation.
To make it even more fun, we gave ourselves a time limit of 90 minutes to deploy, provision, and spin up the entire environment. I know, sounds easy as pie.
In this instance, we implemented the following solutions working in conjunction to simulate a business running a high-transaction web presence, with read- and write-heavy MongoDB instances working in parallel. Second, dozens of three-tier LAMP-stack distributed web application workloads were provisioned, with the goal of quickly doubling these instances to scale and meet the demands of a highly transactional web-based business.
To inject even more disparity into the mix, a series of MySQL instances was also created to stage a production database into a test/dev environment. The goal of having all three of these unique solutions running concurrently was to bear out one of the primary benefits of the SolidFire all-flash storage platform: Quality of Service (QoS).
It is through QoS that SolidFire storage is able to segment the performance of multiple workloads, ensure that each workload's storage performance requirements are met, and strictly enforce storage-based SLAs.
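In an OpenStack deployment, this kind of per-workload guarantee is typically wired up through Cinder QoS specs attached to volume types. A minimal sketch of that flow is below; the tier names and IOPS values are illustrative, and the `minIOPS`/`maxIOPS`/`burstIOPS` keys follow the SolidFire Cinder driver's convention (check the driver documentation for your release before relying on them):

```shell
# Create a QoS spec with SolidFire-style IOPS guarantees and ceilings
# (key names and values are illustrative assumptions)
cinder qos-create mongodb-tier minIOPS=1000 maxIOPS=5000 burstIOPS=8000

# Create a volume type and associate the QoS spec with it
cinder type-create solidfire-mongodb
cinder qos-associate <qos-spec-id> <volume-type-id>

# New volumes of this type now carry the QoS policy automatically
cinder create --volume-type solidfire-mongodb --display-name mongo-data 500
```

Because each volume type maps to its own QoS spec, the MongoDB, LAMP, and MySQL workloads described above can share the same cluster without contending for each other's guaranteed IOPS.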
Cisco and Red Hat are both leaders in their respective spaces, with strong, dedicated teams contributing heavily to the OpenStack Foundation. At SolidFire, we have spent significant time and effort ensuring that our scale-out, all-flash storage platform is turnkey ready for customers looking to build OpenStack-based clouds.
Having been a major contributor to the Cinder project going back to the Folsom release, SolidFire has worked hard to position itself as the de facto standard for flash-based block storage in the OpenStack ecosystem. Working alongside our partners at Red Hat and Cisco, we were able to craft this specific AI solution based on the workloads detailed above and have the entire platform operational in 90 minutes. I know that may sound like pure marketecture, but at SolidFire we like to provide proof points for our claims.
As a point of illustration, deployment of the initial workload instances was a time-consuming process because of the amount of I/O required to provision production images. By using QoS to dynamically grant more performance to the SolidFire volumes, we made real-time adjustments to storage performance that accelerated instance deployment times. This was achieved through a Cinder retype command that adjusts performance requirements on the fly, and it resulted in a drastic reduction in deployment times versus traditional storage solutions where QoS cannot be used.
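The retype step looks roughly like the following; the volume and type names are hypothetical, and both volume types are assumed to already exist and map to different SolidFire QoS specs:

```shell
# Temporarily move a volume to a higher-performance tier for the
# I/O-heavy image provisioning (names are illustrative assumptions)
cinder retype --migration-policy never mongo-data solidfire-deploy-burst

# ...run the instance deployment while the volume has extra IOPS...

# Drop the volume back to its steady-state tier afterwards
cinder retype --migration-policy never mongo-data solidfire-mongodb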
As a closing note, I would like to invite you to join us June 30th for a webinar that will detail how AI enables you to go from Zero to OpenStack Cloud in 90 minutes. Register now to reserve your spot! I’m looking forward to having you join us on the 30th. For a quick teaser, watch this 3-minute video.