DevOps is intended to be a faster and more efficient way for enterprises to roll out software. It replaces the traditional stages of planning, development, testing and implementation with a concurrent system in which development, testing and implementation are merged to form a single, seamless whole. It’s a relatively new way of working—the system emerged from an Agile conference in 2008—but it’s taken off quickly, particularly among large firms that could benefit most from the efficiency savings.
In traditional software development, product managers would produce a list of desired features, which they would discuss with the marketing staff and the software engineers. The result would be a release plan. Before each new release, support staff would collect feedback from users that would help to identify bugs for removal, flag up usability issues that could be improved and identify new features that could be discussed, planned and added. The developers would then write the code. That code would be passed on to the QA team, which would look for flaws. The code would return to the developers, who would fix the bugs, and finally the system admins would deploy the code to the servers and oversee its rollout and management.
In practice, things rarely went that smoothly, and when they did go smoothly, they rarely went quickly. Iterations didn’t always end with a production-ready release, and production-ready releases didn’t always end with implementation. The additional stages of code-bundling, product packaging and coordination with day-to-day operations work meant that an iteration didn’t always make it to the servers. Too often, enterprises found that under traditional development an iteration would only reach a staging environment before it was killed off in favour of the next one. As Daniel Greene, Director of Advanced Technology US at 3Pillar Global, put it: “People began to realize that a ‘throw it over the wall’ mentality from development to operations was just as much of a challenge as we used to have with throwing requirements over the wall and waiting for code to come back.”
The solution was to join the coding work of developers with the server-management tasks that make up the operational deployment work of sysadmins, with the testing traditionally performed by QA integrated into the process. Some DevOps experts have described the result as a conveyor belt on which testing and checking are conducted as the product rolls down the line and the factory workers continue building. Any code found not to be up to scratch can be removed, while anything that makes it to the end of the line can be considered ready for use.
One difference between DevOps and older ways of working, then, is that releases are likely to be smaller and contain fewer changes. But they happen more often, with less fanfare and in a constant flow, so that the software is always improving. The risk that a change will crash the system or introduce new bugs is also lower when changes are small. By making deployments consistent, constant and even automated, mistakes, and the time wasted fixing them, are reduced. And because infrastructure issues can be seen during the development cycle, rather than after the software has been written, problems caused by moving from one environment to another can be reduced too.
Implementing a DevOps process requires a toolchain: a set of tools that allows the DevOps team to create and merge code; conduct tests to determine performance; place artifacts in a repository; automate releases; configure the infrastructure; and finally monitor performance.
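What that chain looks like varies from team to team, but the stages themselves are easy to sketch. The Python script below is a minimal, hand-rolled illustration of them, not any particular vendor’s pipeline; a real team would hand these steps to a CI/CD server, and the test path, registry address and server name are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Minimal sketch of the DevOps toolchain stages, run by hand.

A real pipeline would live in a CI/CD server; the artifact name,
registry and deploy target below are hypothetical placeholders.
"""
import subprocess
import sys

def stage(*cmd: str) -> None:
    """Run one pipeline stage and abort the whole pipeline if it fails."""
    print("==>", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"Stage failed: {' '.join(cmd)}")

stage("git", "pull", "--ff-only")                  # create and merge code
stage("pytest", "tests/")                          # conduct tests
stage("docker", "build", "-t",
      "registry.example.com/app:latest", ".")      # package the build
stage("docker", "push",
      "registry.example.com/app:latest")           # place the artifact in a repository
stage("ssh", "deploy@app.example.com",             # automate the release
      "docker pull registry.example.com/app:latest && docker restart app")
# (Configuring the infrastructure itself is usually left to a
#  configuration management tool such as Puppet, covered below.)
stage("curl", "--fail", "https://app.example.com/health")  # monitor: basic health check
```

The point of the sketch is the shape, not the commands: every stage gates the next, so broken code is stopped on the conveyor belt rather than discovered on the servers.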
The most popular tools include Puppet, an open-source configuration management tool that runs on Unix-like systems and Microsoft Windows. Model-driven, it requires little programming knowledge and is used by Wikimedia, Mozilla and the New York Stock Exchange, among others.
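“Model-driven” means that instead of scripting the steps of an installation, you declare the state a machine should end up in and let the tool converge on it. Puppet expresses this in its own declarative DSL; the Python sketch below only illustrates the underlying idempotent pattern, and the package and file are hypothetical examples:

```python
import os
import shutil
import subprocess

def ensure_package(name: str) -> None:
    """Converge: install the package only if it isn't already present."""
    if shutil.which(name) is None:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

def ensure_file(path: str, content: str) -> None:
    """Converge: rewrite the file only if it differs from the desired state."""
    current = open(path).read() if os.path.exists(path) else None
    if current != content:
        with open(path, "w") as f:
            f.write(content)

# Declared desired state: running this twice changes nothing the second
# time, the property that makes model-driven tools safe to re-run.
ensure_package("nginx")
ensure_file("/etc/motd", "This machine is managed automatically.\n")
```

Because the script describes an end state rather than a procedure, it can be applied to a fresh server and an already-configured one alike.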
Vagrant builds virtual development environments. Written in Ruby, it supports projects written in PHP, Java and other languages, and the latest version natively supports Docker.
Docker may be the most important part of the DevOps toolchain. Also open-source, it automates the deployment of applications inside software containers that run within a single Linux instance, so they don’t need entire virtual machines. Docker builds images from a base image on top of which union-filesystem layers are created; each layer records a single action, and because only updated layers have to be propagated, Docker’s images are particularly lightweight. DevOps teams can easily build, deploy and run applications in a way that reduces friction with old-style sysadmins. The result should be an environment that is flexible and easily portable; Docker’s containers can be moved between clouds and servers with little effort.
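That layering is easy to see first-hand. The sketch below builds a tiny, hypothetical demo image with the real docker CLI and then lists its layers:

```python
"""Build a tiny image and list its layers (hypothetical demo app)."""
import os
import subprocess
import tempfile

DOCKERFILE = """\
# The base image is itself a stack of read-only layers.
FROM python:3.12-slim
# Each instruction below adds exactly one new layer on top;
# unchanged layers stay cached and never need re-shipping.
RUN echo "build step" > /build-stamp
COPY main.py /app/main.py
CMD ["python", "/app/main.py"]
"""

with tempfile.TemporaryDirectory() as ctx:
    with open(os.path.join(ctx, "Dockerfile"), "w") as f:
        f.write(DOCKERFILE)
    with open(os.path.join(ctx, "main.py"), "w") as f:
        f.write('print("hello from a container")\n')
    subprocess.run(["docker", "build", "-t", "layer-demo", ctx], check=True)
    # `docker history` prints one row per layer, with per-layer sizes.
    subprocess.run(["docker", "history", "layer-demo"], check=True)
```

Edit main.py and rebuild, and only the COPY layer onwards is recreated; the base image and the RUN layer come straight from the cache, which is what keeps image updates so light.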
DevOps isn’t the only alternative to older, sequential production models. The process is often compared to Agile, which also uses cross-functional teams and flexible development but lacks DevOps’s specific methods. Continuous Delivery likewise breaks updates into smaller, incremental improvements that are released faster and more frequently, but while its deployment process is simple and repeatable, development and deployment don’t have to run concurrently.
Recently, DevOps has run into difficulties, caused by a lack of investment in teams seen as more functional than creative, and by toolmakers who haven’t kept up with the complex demands of developers and sysadmins working on overlapping tasks. The increased use of cloud computing, too, has taken away much of the work from developers. Now that they no longer have to install databases or worry about backups and redundancies, many of the problems that DevOps teams were built to solve have vanished.
The best way to see DevOps, then, and the reason the process remains important, is as a way of thinking. It takes tasks and responsibilities that used to be divided among separate teams and makes them the responsibility of the whole team. It forces different units to see the big picture and to treat the software they’re creating the way customers use it: as a single tool. It creates a new kind of development culture, one in which building, testing and implementation happen organically as well as quickly and automatically.