
 

By Andrew Sullivan, Technical Marketing Engineer, NetApp

 

Containers continue to occupy an ever-increasing portion of our collective consciousness. Many advocates extol their benefits while breathlessly pushing to have everything built and deployed in a container. However, while containers do offer myriad benefits, they aren’t a tool that should be blindly adopted.

 

Containers are not a magic elixir that makes all of your data center woes go away. Simply placing an application into a container doesn’t mean that it automatically gains the ability to seamlessly scale to hundreds or thousands of instances. This is an important message that gets forgotten by too many container advocates, who are busy preaching their love of Docker and parroting that “Monoliths are BAD!” Docker and containers are not the right solution for every problem, and they don’t have the same features, or even reliability, as virtual machines. Containers may never be a full equivalent for traditional enterprise applications, and that’s OK.

 

Modernizing the Application Lifecycle

My perspective is that there are two primary uses for containers within many organizations:

  1. Facilitating rapid development by standardizing and abstracting the toolchain and libraries into an immutable, portable image which can be instantiated quickly, anywhere, and by anyone (person or machine)
  2. Reducing friction during deployment by decoupling the application (or microservice) from the underlying operating system (OS)

 

What does each of these look like in practice? The first use case is most common among development teams, enabling simple, fast, and portable development, test, and QA environments for their applications. The container is a set of libraries and tools that, when distributed as an image, is immutable. When a container is instantiated from that image, it behaves exactly the same regardless of whether it’s on the developer’s desktop, in an instance created by the continuous integration system, or running in production.
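
As a minimal sketch of what such an image might look like (the base image, file names, and commands here are illustrative assumptions, not a prescription), a Dockerfile bakes the toolchain and libraries into a single immutable image:

    # Pin an exact base image so every build starts from the same bits.
    FROM python:3.9-slim

    # Bake the toolchain and libraries into the image at build time.
    COPY requirements.txt /app/requirements.txt
    RUN pip install --no-cache-dir -r /app/requirements.txt

    # Add the application itself (app.py is a hypothetical placeholder).
    COPY app.py /app/app.py
    WORKDIR /app

    # The same command runs identically on a laptop, a CI runner, or in production.
    CMD ["python", "app.py"]

Anyone, human or CI system, who builds and runs this image gets the same environment every time.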

 

The second use case is focused on optimizing the flow of applications from development to operations. Containers aren’t DevOps, but they are one potential tool for enabling that philosophy. When I give presentations on containers, both internally at NetApp and externally with our customers, I frequently describe the container as a tool that reduces friction between the two teams (dev and ops) by decoupling the application from the host operating system.

 

Removing Contention Between Dev and Ops

One of the most contentious, and easily one of the most complex, parts of deploying any application is resolving dependency and configuration discrepancies between the multitude of environments used during each stage of the application lifecycle. I have referred to this as lobbing a grenade from one phase to the next, with no regard for what the steps before or after actually need. Containers abstract these differences away. The developer wants to use Ubuntu on their laptop, but operations requires Red Hat on the production servers? That’s OK, because the app in the container can’t tell the difference!
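
A quick, hedged sketch of that decoupling (the image name and port are hypothetical):

    # Build once, from the same Dockerfile, on any host with a Docker engine.
    docker build -t myapp:1.0 .

    # Run on Ubuntu, Red Hat, or anything else; the app only sees the
    # libraries and tools inside the image, never the host's.
    docker run -d --name myapp -p 8080:8080 myapp:1.0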

 

So far, most production deployments of containers have happened with applications that already meet the “cloud native” definition. These are applications that may or may not be new, but are hosted on premises on OpenStack or another mature Infrastructure-as-a-Service (IaaS) solution, and already adopt many of the concepts of a 12-factor app. A lack of reliability isn’t a big deal for these applications. Think of Amazon Web Services (AWS), where unreliability of individual instances, their storage, and their network is expected at small scale. Despite this, AWS is hugely popular and hosts some of the Internet’s largest websites, because those sites’ developers have architected around that unreliability and the application is able to compensate.

 

Simplifying Deployment For All Applications

Do you remember the software architecture term Service-Oriented Architecture (SOA)? Possibly the best-known expression of this was the Model-View-Controller (MVC) paradigm. The reality is that microservices are an extension of this concept. SOA predates the term “microservices” by a number of years, so how did we build service-oriented systems before? The answer is simple, and somewhat obvious: a mixture of virtualization and declarative management tools (e.g., Puppet, Chef, Ansible, Salt, and their homegrown equivalents).

 

The problem with these solutions is twofold. First, they are relatively slow, taking anywhere from a few minutes to a couple of hours to fully instantiate a service, depending on its complexity and size. Second, they are themselves complex software that must be constantly maintained. It’s not unusual to have one or more people dedicated full time simply to the upkeep of the various management systems.
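
For a rough sense of the difference, compare the two approaches side by side; the playbook and image names below are assumptions, and actual timings vary widely:

    # Converging a service with configuration management: minutes, as
    # packages are installed and configured against a live OS.
    time ansible-playbook -i inventory site.yml

    # Starting a container from a prebuilt image: seconds, because
    # everything was baked in at build time (myapp:1.0 is hypothetical).
    time docker run -d --name web myapp:1.0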

 

Containers simplify the process of application deployment and configuration compared to using IaaS and declarative configuration management together. Creating a new instance is simple. Remember, a container is nothing more than a process that has been namespaced away from the rest of the system, and the promise is that the application’s requirements travel with the container image. However, containers cannot fix bad application architecture or development practices. They can’t turn an application with only one dimension of scalability into the purported panacea of microservices, with full independent scalability and resiliency. But they do make things easier once you’re there.
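
You can see the “just a process” point for yourself; a hedged sketch (the container and image names are arbitrary):

    # Start a throwaway container.
    docker run -d --name demo nginx

    # The processes Docker reports for the container...
    docker top demo

    # ...appear in the host's process table too; they are ordinary
    # processes, isolated by namespaces rather than by a hypervisor.
    ps -ef | grep nginx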

 

Containerizing Monoliths

None of this means that containers lack benefits for traditional “monolithic” applications. The container abstracts the underlying OS, encapsulates the application, and makes it easier to deploy, and that is a benefit regardless of the application architecture. It can be especially helpful for simplifying configuration management tools like Puppet, Chef, and the rest.

There are some things to keep in mind, though: be sure to resource the container appropriately, and monitoring is still important. Just as with virtual machines, an application needs the correct amount of RAM, CPU, and other system resources, regardless of whether it’s installed directly on the OS or in a container. Additionally, putting the application in a container doesn’t mean it shouldn’t be monitored for critical events, just as it would be anywhere else. Most importantly, test, test, and test again. You never want to shove an application into production without being confident that it is ready, and the same applies when you’re changing the deployment mechanism.
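
As a sketch of what “resourcing the container appropriately” can look like with the Docker CLI (the limits are placeholder values, not recommendations):

    # Cap the container at the CPU and memory the application was given
    # as a VM (legacy-app:1.0 and the values are illustrative only).
    docker run -d --name legacy-app \
      --cpus=2 \
      --memory=4g \
      legacy-app:1.0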

 

Remember to work closely with your dev or ops counterparts. After all, that’s what the DevOps movement is about: the two groups working more closely to enable better outcomes for the business. Containers are a great tool, but they are not the only tool. Likewise, virtual (and physical) machines are still just as valid a solution today as they have been for the last decade. Give an honest assessment of each aspect of your application and determine which solution best meets your needs.

 

If you’re curious about containers or microservices, or want to learn more about how NetApp is addressing the persistent storage challenges associated with this new ecosystem, please reach out at any time via the comments below, or send an email to opensource@netapp.com.

Andrew Sullivan

Andrew has worked in the information technology industry for over 10 years, with a rich history in database development, DevOps, and virtualization. He is currently focused on storage and virtualization automation, and on driving simplicity into everyday workflows.