As enterprises adopt DevOps practices, there is a desire to measure where teams are on their journey. This sometimes leads to the development of maturity models used to assess teams’ levels and progress. While we are not going to pass judgement on what might be useful for a given organization, we believe the real key is to drive continuous improvement through data such as lead time, deployment frequency and mean time to recovery (MTTR), i.e. the key metrics from the State of DevOps Reports.
Any static maturity model will become outdated. What’s more important is teaching teams concepts that help them continuously evaluate what is stopping them from delivering more quickly, so that they can identify areas for improvement. Key to this is the lean concept of value stream analysis. If you walk up to a team and ask why they can’t go faster, they will typically say they are waiting for something. They could be waiting for more work to flow into their backlog. They could be waiting for infrastructure needed for development or testing. Or they could be waiting for another team to develop a service they need to consume.
Many teams don’t spend time in retrospectives talking about how to overcome these blockers or wait states and developing counter-measures as continuous improvement initiatives. One reason is the lack of objective data that can be analyzed to provide insight into them. This is where the lead time metric comes into play. Historically, lead time has been defined as the time from (last) code commit to deployment into production. While this is important, it tells only part of the story, because accelerated delivery and feedback really start with a customer concept: what matters is how long it takes from that concept until it is delivered into production, where feedback can be gathered.
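To make the two definitions concrete, here is a minimal sketch of how both lead times could be computed once the timestamps are available. The record format, field names and dates are hypothetical illustrations, not the schema of any particular tool:

```python
from datetime import datetime

# Hypothetical work-item records; in practice these timestamps would be
# pulled from an agile management system (concept/story dates) and a
# deployment tool (deploy dates).
work_items = [
    {"concept": "2018-03-01", "last_commit": "2018-04-10", "deployed": "2018-04-12"},
    {"concept": "2018-03-15", "last_commit": "2018-04-20", "deployed": "2018-04-25"},
]

def days_between(start: str, end: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

def average(values) -> float:
    values = list(values)
    return sum(values) / len(values)

# The historical definition: last code commit to production deploy.
commit_to_deploy = average(
    days_between(w["last_commit"], w["deployed"]) for w in work_items
)

# The end-to-end definition: customer concept to production deploy.
concept_to_deploy = average(
    days_between(w["concept"], w["deployed"]) for w in work_items
)

print(f"commit-to-deploy lead time: {commit_to_deploy:.1f} days")
print(f"concept-to-deploy lead time: {concept_to_deploy:.1f} days")
```

The gap between the two numbers is exactly what the commit-to-deploy definition hides: all the time a concept spends upstream of the first line of code.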
One challenge in providing this kind of end-to-end metric, which spans the entire delivery value stream, is the lack of an integrated delivery pipeline from which data can be automatically collected. A customer concept typically starts as data entered in some type of Project Portfolio Management system. Once approved, it should result in features being created, which flow into a product backlog. Stories are then created, which the agile team can pull into an iteration or sprint. This information typically lives in an agile management system (e.g. Atlassian Jira or IBM Rational Team Concert). The team then develops and builds the stories, tests them and deploys them into production via a deployment tool like UrbanCode.
An important aspect of value stream analysis is to differentiate the time spent adding value to the product being created (referred to as “process time”) from the time spent in wait states. An example of process time in the IT delivery value stream is the time spent writing stories, coding and testing. Once teams gain more insight into what is slowing them down, they can identify counter-measures to apply. But let’s take a look at how we might implement this.
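As a concrete illustration of the process-time versus wait-time split, here is a minimal sketch; the stage names and day counts are made-up assumptions, not measured data:

```python
# Hypothetical per-stage timings (in days) for one feature, split into
# value-adding "process time" and time spent waiting.
stages = [
    {"stage": "write stories",  "process_days": 2, "wait_days": 10},
    {"stage": "develop & code", "process_days": 8, "wait_days": 4},
    {"stage": "test",           "process_days": 3, "wait_days": 6},
    {"stage": "deploy",         "process_days": 1, "wait_days": 2},
]

process_time = sum(s["process_days"] for s in stages)
wait_time = sum(s["wait_days"] for s in stages)
total_lead_time = process_time + wait_time

# Flow efficiency: the fraction of total lead time spent adding value.
efficiency = process_time / total_lead_time

print(f"process time: {process_time} days, wait time: {wait_time} days")
print(f"flow efficiency: {efficiency:.0%}")
```

Even with these toy numbers, most of the lead time sits in wait states, which is the typical finding that makes counter-measures against waiting more valuable than working faster.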
We’ll look at three different execution approaches. For ease of reference we’ll refer to them as Small, Medium, and Large.
This approach is aimed at organizations wanting the smallest necessary investment in technology. This might be because you are a small organization, or a small group within an organization, perhaps acting as a “pilot” group. Technology will be geared toward a centralized team perspective and will tend toward simple tools and physical representations. This may be suitable, for example, for organizations that are technology averse for any reason.
There are tools that make sense regardless of the size or relative sophistication of your organization. For example, Git and GitHub make sense for source code management whether you have 2 developers or 2,000. The software is free to use and has enterprise-level capabilities. For this approach, 3”x5” cards on a board (or even pinned to a corkboard or cube wall) may be enough. Consider the group, your culture, your budget and how committed you are to making this change. You will send clear messages, intended and otherwise, with these choices.
The core practices of DevOps are source code control, continuous integration and automated testing, leading to continuous delivery. These practices are well documented in references like The DevOps Handbook. It is essential that organizations begin here, implementing practices and tools that provide these capabilities. As noted above, Git for source control, Jenkins or Circle CI for continuous integration, and test automation using Ruby or Cucumber are some of the standard choices for these practices. While there are tools like Jira and LeanKit (among others) for tracking work, this can also be done simply using GitHub issues if the majority of the work is done by a small team with few dependencies.
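The build-breaking flow these practices imply can be sketched as a sequence of stage functions, where the first failure stops the pipeline. The stage names and bodies below are placeholders, not real integrations with any CI tool:

```python
# A minimal sketch of a CI flow: each stage returns True on success,
# and the build is considered broken at the first stage that fails.

def checkout() -> bool:
    # Placeholder for pulling the latest commit from source control (e.g. Git).
    return True

def build() -> bool:
    # Placeholder for compiling/packaging the code.
    return True

def run_automated_tests() -> bool:
    # Placeholder for a test-suite run; False marks the build broken.
    return True

def deploy_to_staging() -> bool:
    # Placeholder for a continuous-delivery deployment step.
    return True

def run_pipeline(stages) -> bool:
    for stage in stages:
        if not stage():
            print(f"build broken at stage: {stage.__name__}")
            return False
        print(f"stage passed: {stage.__name__}")
    return True

success = run_pipeline([checkout, build, run_automated_tests, deploy_to_staging])
```

In practice a CI server such as Jenkins or Circle CI supplies this orchestration; the sketch just shows the fail-fast discipline the practices depend on.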
No matter what size organization you are, you need to focus on code quality and security. An open source tool like SonarQube can be integrated into your pipeline to mark the build as broken when checked-in code fails the quality rules you configure. Likewise, security tools like Fortify or AppScan can be integrated into your pipeline, alongside a number of readily available open source tools that are free (or nearly free).
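The quality-gate idea can be sketched like this; the rule names, thresholds and reported metrics are illustrative assumptions, not SonarQube’s actual configuration format or API:

```python
# Configurable quality rules checked against metrics a code scanner would
# report; any violation breaks the build. Names and thresholds are made up.
quality_rules = {
    "coverage_pct":     lambda v: v >= 80,  # minimum test coverage
    "critical_issues":  lambda v: v == 0,   # no critical findings allowed
    "duplicated_lines": lambda v: v <= 50,  # cap on duplicated lines
}

def gate(metrics: dict) -> list:
    """Return the list of rules the reported metrics violate."""
    return [name for name, ok in quality_rules.items() if not ok(metrics[name])]

# Example report for a checked-in change.
report = {"coverage_pct": 85, "critical_issues": 1, "duplicated_lines": 12}

violations = gate(report)
if violations:
    print(f"build broken, quality gate failed: {violations}")
else:
    print("quality gate passed")
```

The point of wiring such a gate into the pipeline is that quality rules stop being advisory: a violating check-in cannot reach production.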
One tool every organization should consider implementing is Capital One’s open source Hygieia dashboard. The brainchild of Topo Pal, it provides visibility into the status of work, builds, deployments and many other DevOps metrics useful for monitoring team activity and progress.
For small enterprises, the pipeline and deployment capability provided by the CI tools (Jenkins, Circle CI) can be used for deployment. Of course, once the code is deployed, the “Ops” part of DevOps takes over: monitoring the software in production. This is where tools like Splunk and New Relic are used for alerting and for logging events that need to be addressed.