Containers are one of the most interesting technologies to become available to the development community in many years. They provide the ability to package an application and its dependencies into an easy-to-transport, easy-to-consume package available to anyone who needs it. This technology has started a transformation in development organizations, enabling faster iteration from code creation to code running in production.

Many of the early adopters of this technology are organizations that have embraced the DevOps mentality. If you’re unfamiliar with the term, DevOps is a movement that has been under way for several years, in which developers are either responsible for, or work very closely with, operations for the application under development. This arrangement substantially blurs the line between developer and administrator; it also brings a new perspective to simplifying application deployment strategies. Frequently, the same person who writes the code also pushes it into production after testing has completed, and often those two steps happen within hours of each other.

Historically, an application was developed as a monolithic entity, with all services and their interactions tied together internally. Yes, things such as databases were typically managed externally, but inside the application, data moved through internal mechanisms. Containers facilitate the adoption of microservices, an application design principle in which the internal processes are split into multiple disparate pieces. This approach has many advantages, not least the smaller code bases of individual components (making maintenance and troubleshooting easier), simplified scalability, and easy deployments.

However, one thing that becomes more complex with containers is the use of persistent storage. For a traditional monolithic application, a server or virtual machine is provisioned to host all of the binaries, libraries, and even data (often in the form of a VMDK) for the application. This approach carries high overhead, with resources wasted at the hypervisor level to manage virtual hardware, and it adds complexity to the overall environment: yet another server in the application farm to manage.

Traditional data management techniques likewise assume that an application is deployed to a single server or a very small number of servers. When using NetApp® storage, this means leveraging NetApp Snapshot® copies, SnapMirror® software, FlexClone® technology, and the other features of the NetApp clustered Data ONTAP® operating system to provide application-integrated data protection and data management.

However, when using a container, data resides in what are known as volumes (not to be confused with NetApp FlexVol® volumes). Docker’s container volumes are ephemeral, existing only as long as the container does, and they are stored in a specific location on the host that executes the container: /var/lib/docker.
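
You can see where Docker keeps volume data using the Docker CLI. A quick sketch (the volume name scratch is arbitrary):

    # create a named Docker volume and inspect it
    docker volume create scratch
    # the Mountpoint field in the output points under /var/lib/docker/volumes/ on the host
    docker volume inspect scratch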

NetApp + Docker: Persistent Data for Containers

This ephemerality leads to a new set of data management challenges: How do we protect the application when its data is kept only as long as the container exists? The simplest solution is to connect NetApp NFS exports and LUNs into the container. Using this method, you can take advantage of NetApp data protection and storage efficiency while knowing that the application’s data is stored for as long as needed on highly resilient storage.

Let’s start by creating and exporting a volume on our clustered Data ONTAP system.
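
A minimal sketch from the clustered Data ONTAP CLI follows; the SVM (svm1), aggregate (aggr1), volume name (docker_data), and client network are illustrative and will differ in your environment:

    ::> volume create -vserver svm1 -volume docker_data -aggregate aggr1 -size 10GB -junction-path /docker_data
    ::> vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 192.168.0.0/24 -rorule sys -rwrule sys -superuser sys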

Once the volume is available to the Docker host, you can mount it:
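
Assuming the SVM serves NFS from a data LIF at the illustrative address 192.168.0.50:

    # create a mount point and mount the NFS export on the Docker host
    sudo mkdir -p /mnt/docker_data
    sudo mount -t nfs 192.168.0.50:/docker_data /mnt/docker_data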

Instantiate the container using the -v option to pass the host mount of the NFS export into the container at a specific mount point.
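
For example, with an nginx web server container (the image choice and the container name web are assumptions for illustration):

    docker run -d --name web -p 80:80 -v /mnt/docker_data:/usr/share/nginx/html nginx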

One of the biggest concerns with ephemeral Docker volumes is ensuring that data outlives the container. Here we create some data from inside the container, then destroy the container.
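
A sketch of that sequence, continuing with the hypothetical web container:

    # connect into the running container
    docker exec -it web /bin/bash
    # inside the container: create and populate the page, then exit
    echo "Persistent data on NetApp storage" > /usr/share/nginx/html/index.html
    exit
    # back on the host: verify the content, then destroy the container
    curl http://localhost/
    docker stop web && docker rm web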

You can see that we connected into the container, created a file (index.html), and populated it with some simple content. After exiting the container, we use curl to retrieve that page and verify its contents. Finally, we destroy the container.

Standard Docker volumes are not persistent, which means the data contained in that file would normally be lost. However, in this instance we passed a local file system into the container, and that file system happens to be where the file we created resides. You can see that file in the mounted location:
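
On the Docker host, using the paths assumed above:

    ls -l /mnt/docker_data
    cat /mnt/docker_data/index.html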

To verify that our data has been retained, let’s instantiate a new container and connect it to the same export as before.
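
Reusing the illustrative names from earlier, the new container serves the same index.html that its predecessor created:

    docker run -d --name web2 -p 80:80 -v /mnt/docker_data:/usr/share/nginx/html nginx
    curl http://localhost/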

NetApp + Docker: Data Protection for Applications

Now that we have verified that our data is persistent and stored on our NetApp storage system, we also want to be able to protect the application data. The NetApp OnCommand® Snap Creator® Framework provides an easy-to-use interface for coordinating Snapshot copies on the NetApp storage system with the application running in the container. The simplest implementation relies on Docker’s capability to pause and unpause a container, which Snap Creator can use as delivered, without any special plug-ins.
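
Conceptually, Snap Creator only needs to run two commands around the Snapshot copy, both standard Docker CLI (the container name is an assumption carried over from above):

    # quiesce: suspend all processes in the container
    docker pause web
    # unquiesce: resume the container after the Snapshot copy is taken
    docker unpause web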

Now, when we want to create a storage-based Snapshot copy of the FlexVol volume being used by the container, we can leverage Snap Creator to pause the container, take the Snapshot copy, and then resume the container.
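
Performed by hand, the equivalent sequence looks something like this (the cluster, SVM, and volume names are the illustrative ones from above):

    docker pause web
    ssh admin@cluster1 "volume snapshot create -vserver svm1 -volume docker_data -snapshot web_consistent"
    docker unpause web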

The resulting Snapshot copy is then visible on the storage system:
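
It can be listed from the clustered Data ONTAP CLI:

    ::> volume snapshot show -vserver svm1 -volume docker_data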

The process above shows the successful result of the Snap Creator Framework pausing a container, taking a Snapshot copy of the volume, and then unpausing the container. The volume Snapshot copy is made while no I/O is taking place, which ideally means that the application is quiesced.

There are a number of other ways in which Snap Creator can be used with containers, including:

  • Install the Snap Creator Agent in the application container so that it can be leveraged directly from the Snap Creator Server interface.
  • Use the docker exec command to quiesce the application inside the container. For example, a database could be placed in a consistent state by issuing SQL commands through its CLI, as sketched below.
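
A minimal sketch of the second approach, assuming a PostgreSQL container named pgdb and a PostgreSQL version that provides pg_start_backup (WAL archiving must be configured for this to work):

    # quiesce: put PostgreSQL into backup mode before the Snapshot copy
    docker exec pgdb psql -U postgres -c "SELECT pg_start_backup('snapcreator');"
    # ... Snap Creator triggers the volume Snapshot copy here ...
    # unquiesce: end backup mode after the Snapshot copy completes
    docker exec pgdb psql -U postgres -c "SELECT pg_stop_backup();"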

NetApp + Docker: Enterprise Ready Today

As containers become more popular, they are beginning their transition inside enterprises from a development tool to the deployment mechanism of choice for new applications that follow the microservices design paradigm. This means that operations administrators are becoming aware of containers and want to be sure they can protect data as they always have. By combining NetApp clustered Data ONTAP with Docker deployments, your applications are as robustly protected as they have always been. Further, by leveraging the Snap Creator Framework, managing data protection remains simple, enabling application-consistent backups regardless of the application being deployed.

I hope that this introduction to leveraging NetApp storage for persistent application data used by containers has been informative. For more information about integrating Docker containers with NetApp storage, please use the comments below, or reach out to me using the message system.

Andrew Sullivan

Andrew has worked in the information technology industry for over 10 years, with a rich history of database development, DevOps experience, and virtualization. He is currently focused on storage and virtualization automation, and driving simplicity into everyday workflows.