I think there are two schools of thought on managing storage with a tool like Puppet. Let’s set the stage first, and then we’ll explore how each plays out and what the ramifications are. The first school of thought keeps the existing storage-centric workflow and simply uses Puppet as a new tool that makes that workflow faster and more documentable. The second school of thought changes to a whole-server (or VM or container) provisioning workflow. In the second case, we erase the line between storage and servers and think about the process holistically.
In many IT shops, storage and server management are separated, sometimes by conscious design, sometimes by politics or happenstance. In our first school of thought, Puppet is a tool for the storage group to manage the storage systems and provision storage on them. The Puppet “device” construct allows a manifest to define the storage environment, and that definition is then pushed to the storage systems. The storage definition manifests can be revision controlled and audited, and by definition they represent the environment at that point in time. Unfortunately, the Puppet device construct is fundamentally asynchronous to the rest of the environment (the servers).
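For the device workflow, Puppet uses a proxy host that holds a `device.conf` entry for each array; running `puppet device` on that proxy fetches and applies a catalog on the array’s behalf. A minimal sketch follows, assuming a hypothetical array hostname and a hypothetical `solidfire` device type and `solidfire_volume` resource type supplied by a vendor module (the actual type and parameter names may differ):

```puppet
# /etc/puppetlabs/puppet/device.conf on the proxy host:
#   [sf-array.example.com]            <- certname the array appears under
#     type solidfire                  <- hypothetical device type
#     url  https://admin:secret@sf-array.example.com/

# site.pp on the Puppet master: a node block matching the device certname
node 'sf-array.example.com' {
  # Hypothetical resource type: declares a volume on the array
  solidfire_volume { 'db_data':
    ensure => present,
    size   => '100G',
  }
}
```

Running `puppet device` on the proxy then pushes this definition to the array, and the manifest itself can live in version control like any other.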
A better way to manage an environment is holistically, from an application or server perspective, which leads to our second school of thought. Here the storage, server, operating system, network, database, applications, and anything else necessary are configured as one chain of manifests. In this case, the framework that applies the manifest to the server also reaches out to the storage infrastructure to provision what it needs as part of the process of building the full stack.
As an example, consider the requirements for provisioning a database server: why should there be a process executed by the storage admin and then some kind of handoff to the server process? In a single process, the database server provisioning can reach out to the storage system, allocate what is needed, attach that storage, configure the logical volumes, create the filesystem, and mount the storage for the database. All of this can be completed in one configuration, and the manifest can then continue with the database installation, configuration, and any applications. One process gets the whole stack stood up. That one process can still be audited and revision controlled with tools like Git, and it eliminates the clumsy handoffs between groups that generally lead to longer provisioning times and a greater likelihood of errors.
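The steps above can be sketched as a single manifest. This is illustrative only: the `solidfire_volume` type and its parameters are hypothetical stand-ins for the vendor module’s actual API, the volume-management types are those provided by the puppetlabs/lvm module, and `mount` is Puppet’s built-in resource type. Device paths and sizes are examples.

```puppet
# Hypothetical: allocate a volume on the array (attached via iSCSI)
solidfire_volume { 'db_data':
  ensure => present,
  size   => '100G',
}

# Carve the attached device into a volume group and logical volume
# (types provided by the puppetlabs/lvm module)
volume_group { 'vg_db':
  ensure           => present,
  physical_volumes => '/dev/sdb',   # device name is illustrative
}

logical_volume { 'lv_data':
  ensure       => present,
  volume_group => 'vg_db',
  size         => '90G',
}

filesystem { '/dev/vg_db/lv_data':
  ensure  => present,
  fs_type => 'ext4',
}

# Built-in Puppet resource: mount the filesystem for the database
mount { '/var/lib/mysql':
  ensure  => mounted,
  device  => '/dev/vg_db/lv_data',
  fstype  => 'ext4',
  options => 'defaults',
}

# The chain would continue with database installation and configuration,
# e.g. package { 'mysql-server': ensure => installed }, and so on.
```

Puppet’s implicit and explicit ordering between these resources is what lets one run stand up the whole storage stack without a handoff between teams.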
A complete Puppet implementation based on a robust API to the storage device allows Puppet manifests to be used from either the “server” or the “device” construct. SolidFire has developed a Puppet module that works with both constructs, and we have provided an example manifest that provisions storage, mounts it, makes a filesystem, and simulates starting a database on that storage without engaging another team.
The asynchronous nature of the Puppet device construct is a challenge not only for storage but for networks too. How can you get the application running if you aren’t sure whether the network ports have been allocated yet? I contend that cloud concepts have already exposed this kind of workflow, whereby the VM, storage, and networking are all provisioned together. Why not provide the same ability for hardware, storage, and networking not managed by a cloud?
Another challenge we had to overcome was providing authentication to the storage subsystem through both the Puppet “apply” and “device” constructs. Several methods are available, and passwords should be properly protected through Hiera or other means when deploying. We’ll continue to look for ways of simplifying this as we get feedback from customers. Stay tuned.
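One common pattern is to keep the array credentials out of the manifest entirely and look them up from Hiera, optionally encrypting the password at rest with a backend such as hiera-eyaml. A minimal sketch, with hypothetical key names (the module’s actual parameter names may differ):

```puppet
# common.yaml (Hiera data) might contain, with illustrative key names:
#   solidfire::mvip:     'sf-array.example.com'
#   solidfire::login:    'admin'
#   solidfire::password: 'ENC[PKCS7,...]'   # encrypted with hiera-eyaml

# In the manifest, look the values up instead of hard-coding them
$sf_login    = hiera('solidfire::login')
$sf_password = hiera('solidfire::password')
```

This keeps secrets out of version-controlled manifests while still letting both the “apply” and “device” workflows reach the same credentials.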
I would encourage Puppet and the Puppet community to continue developing the modules needed for full server provisioning, including iSCSI client control, logical volume control, and filesystem creation; these are lacking today and require more scripting than should be necessary. In the meantime, SolidFire is extending storage configuration to consumers through multiple Puppet constructs, providing both development and operations organizations with the ability to maintain the same level of control they currently have in other areas of system management.
For a demo of the SolidFire Puppet module, check out this video. You can also talk with us at PuppetConf later this year, as well as at Puppet Camp events around the country. In fact, join us at Puppet Camp Washington D.C. on June 15, where Josh Atwell will be presenting on putting storage features at developers’ fingertips. You can register here and see the SolidFire Puppet module in action.