by Bikash Roy Choudhury, Principal Architect, NetApp
Part 2 of 3
Following my first post in this series, I want to explore in more depth the relationship between developers and the operations teams that support them, the key industry trends affecting DevOps, and what we at NetApp offer the DevOps community.
Many of us in the industry have already seen how cloud and open source have significantly changed the way software is developed. A less noticeable shift, however, is how these advancements have inextricably linked the roles of developers and their operations counterparts.
Integrating the Development Process
DevOps can be supported by as many or as few processes and technologies as developers deem necessary to build applications. A single developer can be responsible for every aspect of an application, or the work can be parceled out to specialists, automated, or adopted from existing code pools, both open source and proprietary.
The result is an integrated development process that provides continuous integration, testing, delivery, and deployment of applications. There is a tighter coupling between the developers who consume the infrastructure and the system and platform administrators who maintain and support it, which yields faster time to market and better software quality.
Sometimes thought of as the “crown jewels,” code is normally considered the intellectual property of an organization. But the developer is the “kingmaker” who demands autonomy and flexibility during the development and deployment phases of the DevOps process. Underlying this approach is the ability to provision infrastructure, manage resources, protect data, and provide a stable and reliable environment transparently without burdening the developer.
Cloud, Open Source and DevOps
Several major industry trends have changed how software is developed and have made the roles of developers and their operations counterparts so inextricably linked:
Cloud – Increasingly the default starting point for new application development and a migration alternative for existing applications
Efficiency and Cost – The trade-off between the short-term advantages of starting in the cloud and the long-term costs of staying in the cloud as application and data growth scales
Open Source – Tool and process integrations that enable PaaS as a continuous-deployment offering with few barriers to production
Partner Integrations – Greater homogeneity and standardization through language and platform integrations with the workflow
The DevOps Workflow
The two-part workflow of an ideal DevOps environment (seen below) spans design through application deployment. In part one, termed the "business experience," investments are made in resources, manpower, and infrastructure to achieve specific business goals. Part two, the "customer experience," is when the application is consumed by end users within and outside the organization until it reaches end of life.
Together, the two parts constitute a complete DevOps experience. A standalone integration or an automation in isolation that does not connect the two phases does not define the spirit of DevOps. That would be considered “DevOps washing.” For the true DevOps experience, integrations and automation must happen end to end in the development workflow.
Automating the Infrastructure
Throughout the DevOps practice, code is written continuously, and data is generated and grows as the application is consumed by end users. During this process, developers want to worry as little as possible about configuring and managing infrastructure resources, which has given rise to the term "NoOps." NoOps does not mean that no operations are needed; it simply means that developers should not have to spend time on the operations side of DevOps. The operations team is responsible for integrating the tools and for automating the infrastructure. Automating the infrastructure is a means to an end: system and storage administrators on the operations team gain time to innovate and reshape the infrastructure into a series of services (paid or otherwise) required in the development cycle.
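One way to picture infrastructure turned into a service is a self-service provisioning function that developers call without touching the underlying systems. The `StoragePool` class and its methods below are hypothetical, written only to illustrate the idea; they are not a real NetApp API.

```python
# Hedged sketch of NoOps-style self-service provisioning: operations
# wraps the infrastructure in a function, and capacity policy is
# enforced inside that service rather than by the developer.

class StoragePool:
    """Hypothetical pool of storage that hands out volumes on demand."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}            # name -> size in GB

    def provision(self, name, size_gb):
        """Carve a volume out of the pool, enforcing the capacity limit."""
        used = sum(self.volumes.values())
        if used + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = size_gb
        return name

pool = StoragePool(capacity_gb=1000)
vol = pool.provision("dev-workspace-1", size_gb=50)
```

The developer sees only `provision()`; quota enforcement, placement, and any billing for the service stay on the operations side.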
Where NetApp comes in
NetApp provides a leading data management platform for application development, with unique capabilities that make the process faster, easier, and more efficient. We also provide the stable, standard storage platform that application development requires.
The DevOps phases illustrated in the top half of Figure 1 generate different kinds of workloads. Writing code and running unit tests involve a different workload than running full builds in parallel. The ability to create user workspaces instantaneously enables developers to mitigate risk by testing code changes without affecting the integrity of the main code base. Some workloads are performance sensitive, while others require the capacity to store large volumes of data. Crash-consistent backups enable developers to restore quickly if data loss or corruption occurs.
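The instantaneous workspaces mentioned above rest on a copy-on-write idea: a clone shares the baseline's data until something is changed, so creating one is near-instant and changes never touch the main code base. The in-memory model below only illustrates that concept; it is not an implementation of any NetApp cloning feature.

```python
# Sketch of a copy-on-write workspace clone: reads fall through to the
# shared baseline, while writes land in a private delta, so the clone
# is cheap to create and the baseline stays untouched.

class Workspace:
    """Hypothetical developer workspace cloned from a shared baseline."""

    def __init__(self, base=None):
        self._base = base or {}      # shared, read-only baseline data
        self._delta = {}             # this workspace's private changes

    def write(self, path, data):
        self._delta[path] = data     # change stays local to the clone

    def read(self, path):
        # Private changes win; otherwise fall back to the baseline.
        return self._delta.get(path, self._base.get(path))

main = {"src/app.py": "v1"}
clone = Workspace(base=main)
clone.write("src/app.py", "experimental change")
```

Because only the delta is stored per clone, many developers can each get a full-looking workspace without duplicating the underlying data.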
Data should not be locked in a specific format on heterogeneous storage silos. Data should scale and move beyond the walls of the physical constructs of storage controllers into a storage resource pool that can be provisioned on demand. Once the application is retired, all the artifacts can be archived for regulatory purposes.
NetApp can provide all of these capabilities and more to support the DevOps practice.
In the final part of this series, we will look at transforming to a services-based application development model in the hybrid cloud on which to build the modern DevOps practice.