We are again at a fascinating point in the ever-accelerating world of IT. Every 7-10 years or so, a combination of IT trends and technologies reaches maturity at the same time, dramatically changing the way IT infrastructure needs to be architected for organizations to stay competitive. In 2006-09, it was the combination of mature second-platform (client-server) applications together with virtual machines, virtualized storage, and converged infrastructure that changed the industry for the next decade. Starting around 2010, the third computing platform emerged, meaning applications began to be developed to take advantage of new social, mobile, cloud, and analytics trends, capabilities, and requirements. A few years on, those applications are becoming ever more business critical. Today it is DevOps, containers, and next-generation storage that are intersecting. Combined with the maturity of cloud services, this is again significantly shaping the choices and investments every IT department needs to make.


The principles behind the DevOps movement have become the organizational catalyst many organizations needed to accelerate their efforts to modernize their entire approach to IT. There are now many documented and published examples of how this team effort across developers and operations results in higher IT performance, faster time to market, and increased employee and customer loyalty. Puppet’s 2016 State of DevOps Report, based on survey responses from 25,000 technical professionals, concludes that high-performing organizations using DevOps principles deploy 200 times more frequently and spend 29% more time on new work. The report also shows that the share of respondents working on DevOps teams rose to 22% in 2016. There is no doubt this trend will eventually affect everyone interested in next-generation IT.


But that’s only the beginning.


At the first European DevOps Enterprise Summit in London this summer, a wide range of transforming digital enterprises presented their DevOps journeys. Most began over the past couple of years with an agile transformation project and are now looking to move to deployment at scale. The results presented were impressive, with every organization benefitting from increased agility, higher quality, and improved innovation. Perhaps most surprising was that those running these projects saw far fewer outages than in their traditional IT environments. Good news all round! At the end of each session, the presenter was asked what they wanted to know more about related to DevOps. The most common requests: How do I run DevOps infrastructure better at web scale? How can I benefit from containers at scale? If you have built or run DevOps teams for several years, those questions are probably no surprise, but where should you look for answers?


Over the past 10 years, as the DevOps movement emerged, the infrastructure designed by architects and built by operations teams was already changing. Thanks to the hypergrowth of VMware, server virtualization is now pervasive within most organizations and data centers. However, just as many developers have become used to the flexibility of the public cloud for their modern applications, they have also become all too aware of the limitations of a traditional, non-automated virtual-machine environment.


Container-based infrastructure removes the need for the hypervisor and is seen by many as the answer. Containers are smaller, faster, and more agile. They are also easier to automate and to share between applications. As a result, they facilitate the move to modern applications built around microservices. They give teams more immediate resource scaling, meaning innovation can move faster from idea to production. Increasing business and customer-service demands will drive more and more organizations to move from VMs to containers, and those investing in DevOps infrastructure as an enabler to their business will move faster.
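To make that automation claim concrete, here is a minimal sketch using the Docker SDK for Python. The image, container names, and port numbers are illustrative assumptions rather than anything prescribed in this article; at real scale an orchestrator would drive these calls instead of a script.

```python
# Minimal sketch: starting and scaling container instances with the Docker SDK
# for Python (pip install docker). Image, names, and ports are illustrative
# assumptions; a production deployment would typically use an orchestrator.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Launch three instances of a stateless web service in seconds -- no hypervisor,
# no VM provisioning step.
containers = [
    client.containers.run(
        "nginx:alpine",              # small, container-native image
        name=f"web-{i}",
        ports={"80/tcp": 8080 + i},  # host ports 8080-8082
        detach=True,
    )
    for i in range(3)
]

# Scaling back down is just as immediate.
for c in containers:
    c.stop()
    c.remove()
```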


If the answer to DevOps at web scale is based on containers, then the next logical question has to be the approach to storing the data those container-based applications will access. Many early virtualization projects were either delayed or simply failed to deliver as they moved toward production. All too many times, the reason was a complete lack of any focus on the storage and data management architecture. As a result, the adoption of VMware ESX and vSphere took longer as teams came to understand that traditional storage could no longer meet the requirements of virtualized infrastructure. In today’s highly competitive world, where IT is the foundation of digital business, organizations cannot afford to make the same mistake again with the Docker ecosystem and containers. As operations teams select which cloud architectures are needed to support their DevOps teams’ ambitions, they must not forget that the storage architecture chosen will either dramatically accelerate or hold back the applications development teams want to build. Next-generation DevOps infrastructure needs containers. Successfully building container-based infrastructure at scale needs a next-generation approach to storage.
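To ground that point, here is a small sketch of how container data is typically wired to storage today: a named volume backed by a storage driver, mounted into a container via the Docker SDK for Python. The volume name, mount path, image, and the "local" driver are assumptions for illustration only; the architectural decision the paragraph above describes is precisely which driver or volume plugin sits behind that volume at scale.

```python
# Sketch: persistent, named storage for a container (Docker SDK for Python).
# Names, paths, and the "local" driver are illustrative assumptions; at scale
# a volume plugin backed by shared, scale-out storage would be substituted so
# data survives container rescheduling.
import docker

client = docker.from_env()

# Create a named volume. Swapping the driver is where the storage
# architecture decision actually lands.
data_volume = client.volumes.create(name="orders-db-data", driver="local")

# Run a stateful service with that volume mounted; the container can be
# replaced while the data stays on the volume.
db = client.containers.run(
    "postgres:15-alpine",
    name="orders-db",
    environment={"POSTGRES_PASSWORD": "example"},  # placeholder credential
    volumes={data_volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    detach=True,
)
```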


John Rollason

John Rollason is Marketing Director, International at SolidFire, responsible for marketing strategy and delivery across Europe, the Middle East and Africa, Asia Pacific, and Japan. John lives in London and speaks regularly at industry events.