I was able to sit down with John Griffith recently to talk about Docker and containers and why they are changing the application delivery and development landscape. We recorded the conversation and published it as an Elements of SolidFire podcast.
For those of you who would rather read about it or don’t have time to listen, the following is a paraphrased transcript of the conversation that highlights the most interesting and helpful points. We cover an introduction to Docker and containers, the advantages of this new (but old) technology, and the advantages of combining container technologies with persistent storage.
Aaron: John, give everyone a quick introduction to your role at SolidFire and historically what you’ve done with the company.
John: I’ve been with SolidFire for almost four years now. I came on board mostly to work on OpenStack to help drive the OpenStack project, particularly Cinder. SolidFire was part of the core team that started Cinder and that’s what I’ve been doing pretty much the whole time I’ve been at SolidFire. I’ve also been on the fringe looking at new technologies, and of course Docker is one of those things.
Aaron: Hence the reason why we’re here at DockerCon. We wanted to have a high-level discussion about containers and storage today. There’s probably a lot of people out there that have heard about Docker and containers but they don’t quite understand it. In your view, what are containers?
John: The thing about containers that’s kind of cool is that it’s actually not the newest technology. Back in the day there were things like chroot jails and Solaris Zones. Basically, what Docker and containers have done is take that technology and capitalize on it, and the result is that more and more of those features and that functionality are getting pushed into the Linux kernel. What we’re doing is exposing that and letting you run isolated processes, or, if you prefer to view it that way, an isolated deployment of another Linux OS inside of your existing OS. The other thing that’s really cool is how much easier it makes things. If you’re on a Windows machine or a Mac, you can run a stripped-down VM with just a Linux kernel, then run an Ubuntu system on top of it and work with that.
Aaron: Many will ask: “How is that different from a virtual machine?” A virtual machine is one operating system and one or more applications in one big stack, if you will. Right?
John: With a virtual machine, you’re actually loading up all of the bits, including a kernel. The big difference between a virtual machine and a container is that with a container you’re sharing the host system’s kernel. You’re sharing those resources rather than duplicating them, so you can run multiple containers on a single operating system.
Aaron: For both developers and operators, what are some of the benefits of containers? Why are they so interesting now?
John: The thing that’s really cool about containers is they’re super lightweight. They’re super fast and they’re super easy to implement. That is probably, for me, one of the strongest things. On my Ubuntu machine I can download a Red Hat image, a CentOS image, an Ubuntu image, whatever I want. I can load all of those up and run different applications in them in a matter of seconds. It’s really that simple. It gives me the flexibility to actually run on all these different platforms and test my code. Or even better, NOT test my code on all those platforms, because now I can say, “Don’t install it on your CentOS, don’t install it on your Fedora. Just take this container, which is the bare minimum, pull it down off Docker Hub, and run it just like it is.” I don’t have to worry about the operating system layer anymore.
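The workflow John describes can be sketched with a few Docker CLI commands. This is a minimal sketch assuming a working Docker install; the image tags and the `run_tests.sh` script are illustrative, not from the conversation:

```shell
# Pull minimal base images for several distributions (tags are examples)
docker pull centos:7
docker pull fedora:latest
docker pull ubuntu:14.04

# Run the same hypothetical test script inside each one, without installing
# anything on the host: mount the current directory read-only and execute there.
for image in centos:7 fedora:latest ubuntu:14.04; do
    docker run --rm -v "$PWD":/src:ro -w /src "$image" ./run_tests.sh
done
```

Each `docker run` starts in seconds because only the process starts; the kernel is shared with the host.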
Aaron: Another big benefit I’ve always seen is the concept of starting, stopping, or even rebooting a container. A container restarts in seconds rather than minutes because there’s no full operating system to boot.
John: Yeah. It’s awesome. What’s cool, too, is that this holds even before you bring the whole upper orchestration layer, Docker Swarm or Kubernetes, into the equation. With the container itself you can say, “Hey, launch this container and automatically re-spawn it.” If something does happen and something goes down, it automatically comes back up, and the processes and services you were running in there just automatically restart. It’s pretty cool.
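The automatic re-spawn behavior John mentions maps to Docker’s restart policies (available since Docker 1.2). A sketch, using `redis` purely as an example image and `cache` as an example name:

```shell
# Ask the Docker daemon to bring the container back whenever it exits
docker run -d --restart=always --name cache redis

# Simulate a crash; the daemon re-spawns the container on its own
docker kill cache

# A moment later the container shows as running again
docker ps --filter name=cache
```

No orchestrator is involved here; the local daemon alone handles the restart, which is the “keeping the upper layer out of the equation” point above.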
Aaron: How does storage fit into all of this? You and I have had some discussions previously of ways in which storage interacts with containers and then everything changed yesterday as well. Let’s start with the old ways first.
John: Traditionally, most people were using things like NFS or ZFS for data persistence in containers. When you boot up a Docker container, you can pass in a mapping to that path and use the storage. There are some variations on that: you can also keep the data directly in a container, or set up a data-only storage container, which is an interesting abstraction, if a bit circular. There are advantages to each approach. A lot of people have been doing things more stateless, or keeping all the state in the container itself, and I think we’re going to see a lot of that changing.
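Both patterns John describes, the host-path mapping and the data-only container, look roughly like this. The paths and the `postgres`/`busybox` images are illustrative assumptions:

```shell
# Pattern 1: bind-mount a host path (for example, an NFS mount) into the
# container so the database files survive the container itself
docker run -d --name db1 \
    -v /mnt/nfs/pgdata:/var/lib/postgresql/data postgres

# Pattern 2: a data-only container that exists just to own a volume,
# which other containers then borrow with --volumes-from
docker create -v /var/lib/postgresql/data --name pgdata busybox
docker run -d --name db2 --volumes-from pgdata postgres
```

In the second pattern you can delete and recreate `db2` freely; the data lives with `pgdata`, which is the slightly circular abstraction mentioned above.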
Aaron: When you’re storing state inside a container, I wouldn’t say it ruins scalability, but it certainly makes scaling up and down quickly more difficult, because when you take a container offline you first have to drain that state out of it before you can get rid of it.
John: There’s an interesting distinction there. There’s a difference between the actual state of the machine and when we start talking about doing things like running a database and having your data persist. That’s really the key that I’m talking about.
Aaron: Yesterday Docker announced something interesting called Docker Plugins. Specifically, storage and networking plug-ins. I think you’ve had some time to dig into that a little bit. Give us your impressions.
John: Awesome stuff. Between yesterday and today they’ve announced a ton of things that are pretty exciting and cool. The plug-in architecture in particular, I think, is really huge. There’s now going to be an opportunity for people to start developing their own plugins for storage and networking, and Docker is beefing up support for third-party backends. It’s similar to where we started out with OpenStack, to be honest. The cool thing so far is that it’s going to be simple; Docker’s plan is to stick to a standard interface. It’s going to be cool stuff.
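To make the plugin idea concrete: with a volume plugin installed, a named volume can be provisioned by a third-party storage driver instead of local disk. This sketch uses the `docker volume` syntax that shipped shortly after this announcement (Docker 1.9); the `solidfire` driver name and `mysql` image are hypothetical examples:

```shell
# Create a named volume backed by a third-party driver rather than local disk
docker volume create --driver solidfire --name dbvol

# Attach it to a container by name; the plugin handles mount/unmount
docker run -d --name db -v dbvol:/var/lib/mysql mysql
```

The container’s lifecycle and the volume’s lifecycle are now decoupled, which is exactly the persistence story discussed earlier in the conversation.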
Look for more information about SolidFire and Docker at the upcoming DockerCon EU 2015 event in Barcelona, November 16-17. SolidFire will be a sponsor of the event and we hope to see you there!