
The Guide to Actionable and Accurate DORA Metrics

source link: https://devm.io/devops/the-guide-to-actionable-and-accurate-dora-metrics

What is DORA?

DORA is a widely used set of four metrics that enable software engineering teams to increase the velocity of software delivery without sacrificing software quality. It’s a great starting point for leaders charged with managing trade-offs, reporting on engineering success to non-technical stakeholders, and establishing more data-driven behaviors. DORA also matters to software developers because it helps create an environment in which their work becomes unblocked, more work reaches production, and the race to go faster is tempered by an equal focus on quality.

The Four DORA Metrics

The four DORA metrics are deployment frequency, mean lead time for changes, mean time to recovery, and change failure rate. These metrics allow teams to determine the speed and stability of their engineering organization. DORA metrics are commonly used by teams starting to measure performance and are often a driver to adopt an engineering management platform.

Let’s dive into each of the four metrics.

Deployment Frequency

Deployment frequency (DF) measures the frequency at which code is successfully deployed to a production environment. It’s the measure of a team’s average throughput over a period of time and can be used to benchmark how often an engineering team is shipping new capabilities.

In agile or DevOps methodologies, engineering teams strive to deploy as frequently as possible to get new features into the hands of users. Some teams deliver smaller deployments more frequently, while others batch everything into a larger release that’s deployed during a fixed window. The standard for high-performing teams is to deploy at least once a week, while teams at the top of their game — peak performers — deploy multiple times per day.
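As a sketch of how this metric is computed, the snippet below derives deployment frequency from a log of successful production deploy dates. The deploy dates and the two-week window are hypothetical examples, not data from the article.

```python
from datetime import date

def deployment_frequency(deploys: list[date], period_days: int) -> float:
    """Average number of successful production deploys per day over a period."""
    return len(deploys) / period_days

# Hypothetical deploy log for a two-week window.
deploys = [date(2024, 1, d) for d in (2, 3, 3, 5, 8, 9, 10, 12)]
print(f"{deployment_frequency(deploys, period_days=14):.2f} deploys/day")  # 0.57
```

A team deploying 8 times in 14 days averages roughly 0.57 deploys per day, i.e. about four per week, which clears the weekly bar for high performers but falls short of the multiple-deploys-per-day pace of peak performers.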

Mean Lead Time for Changes

Mean lead time for changes (MLTC) helps engineering leaders understand the efficiency of their development process from coding to deployment. It measures the average time between the first commit made in a PR and the moment that PR is successfully running in production. High-performing teams have an MLTC of between one day and one week, while elite teams ship changes in under a day.

Mean lead time for changes can exceed the benchmark for a variety of reasons, including batching related features into a single change and ongoing incidents, so it’s important that engineering leaders understand what is driving MLTC. To reduce MLTC, leaders can analyze metrics corresponding to the stages of their development pipeline, such as time to open, time to first review, and time to merge, to identify bottlenecks in their processes.
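The calculation itself is a straightforward average over (first commit, deployed) timestamp pairs, which in practice would be pulled from Git and CI/CD data. The pairs below are illustrative, not real data.

```python
from datetime import datetime
from statistics import mean

def mean_lead_time_hours(changes: list[tuple[datetime, datetime]]) -> float:
    """Mean hours from a PR's first commit to that PR running in production."""
    return mean((deployed - first_commit).total_seconds() / 3600
                for first_commit, deployed in changes)

# Hypothetical (first commit, deployed) pairs.
changes = [
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 3, 9, 0)),    # 24 h
    (datetime(2024, 1, 4, 10, 0), datetime(2024, 1, 6, 10, 0)),  # 48 h
]
print(f"MLTC: {mean_lead_time_hours(changes):.1f} h")  # 36.0 h
```

An MLTC of 36 hours sits inside the one-day-to-one-week band for high performers.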

Mean Time to Recovery

Mean time to recovery (MTTR) measures the time it takes to restore a system to its usual functionality after a failure. Peak-performing teams are usually able to recover in less than one day, while average teams typically recover in under one week.

Failures happen, but the ability to quickly recover from a failure in production is key to the success of DevOps teams. Improving MTTR requires that teams improve their observability so that failures can be identified and resolved quickly.
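MTTR follows the same averaging pattern, this time over (detected, restored) timestamps from incident records. The two incidents below are hypothetical.

```python
from datetime import datetime, timedelta
from statistics import mean

def mttr(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean time from failure detection to service restoration."""
    return timedelta(seconds=mean((restored - detected).total_seconds()
                                  for detected, restored in incidents))

# Hypothetical (detected, restored) pairs from an incident tracker.
incidents = [
    (datetime(2024, 1, 5, 14, 0), datetime(2024, 1, 5, 16, 0)),  # 2 h outage
    (datetime(2024, 1, 9, 9, 0), datetime(2024, 1, 9, 13, 0)),   # 4 h outage
]
print(mttr(incidents))  # 3:00:00
```

Note that an accurate MTTR depends on knowing when a failure was detected, which is exactly why the observability investment mentioned above matters.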

Change Failure Rate

When changes are frequently deployed to production environments, bugs are inevitable. The change failure rate (CFR) is the percentage of deployments causing a failure in production, found by dividing the number of deploys that caused a failure in production by the total number of deployments. It gives leaders insight into the quality of the code being shipped and, by extension, the amount of time the team spends fixing failures. Most DevOps teams can achieve a change failure rate between 0% and 15%.
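The division described above can be sketched as follows; the counts are hypothetical.

```python
def change_failure_rate(failed_deploys: int, total_deploys: int) -> float:
    """Percentage of production deployments that caused a failure."""
    if total_deploys == 0:
        raise ValueError("no deployments in the measured period")
    return 100 * failed_deploys / total_deploys

# Hypothetical: 3 failure-causing deploys out of 40 in the period.
print(change_failure_rate(3, 40))  # 7.5, within the 0-15% target band
```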

This metric is an important counterpoint to the DF and MLTC metrics. A team may be moving quickly, but it is important to ensure they’re delivering quality code. Both stability and throughput are important to successful, high-performing DevOps teams.

How to Make DORA Metrics More Accurate

DORA metrics are a valuable framework for engineering leaders and developers, but calculating them manually can be frustrating and lead to inconsistencies. Instead, leaders should opt for a tool that aggregates and processes information alongside other engineering metrics.

To get the most accurate metrics, it’s best to ingest incident and deploy data via API from existing tools (in addition to Jira) rather than relying on proxy data. Teams that measure DORA in an engineering management platform (EMP) can dive deeper into what’s impacting their DORA metrics by correlating them with non-DORA metrics such as cycle time, unreviewed PRs, and PR size. This highlights performance gaps so teams can take action to improve DevOps processes. Examples of these pairings are shown in Fig. 1 below.

Fig. 1: Metrics pairing examples

How to Take Action on DORA Metrics

To get the most actionable insights, teams should use a platform that surfaces the four metrics in context with non-DORA metrics. This is important because DORA metrics measure outcomes — they help you determine where to make improvements, and where to investigate further. When engineering leaders see this information alongside other metrics, they can see how other key software delivery lifecycle (SDLC) metrics correlate with DORA and pinpoint specific areas for improvement.

For example, viewing metrics in tandem may reveal that when you have a high number of unreviewed PRs, your change failure rate is also higher than usual. With that information, you have a starting point for improving CFR, and can put in place processes for preventing unreviewed PRs from making it to production.

Engineering leaders can coach teams to improve these metrics, like reinforcing good code hygiene and shoring up CI/CD best practices. Conversely, if these metrics comparisons indicate that things are going well for a specific team, you can dig in to figure out where they’re excelling and scale those best practices.

Boosting Engineering Speed, Stability, and Satisfaction

DORA metrics give engineering leaders insight into the health and performance of the SDLC, and are an essential starting point for improving software engineering processes. On their own, they help leaders see the tradeoffs between speed and quality, but to derive even more value, they should be measured in a system that tracks additional engineering metrics like cycle time, revert time, and team health. When combined with non-DORA metrics, leaders can understand and improve their engineering team’s speed, stability, and satisfaction.

Without this insight, developers can face obstacles like rework, unplanned work, bugs, and scope creep, which prevent new features from being deployed. These obstacles can put undue pressure on certain teams or people, and impact engineering culture. When engineering leaders use DORA to derive insights and take action, work becomes unblocked, and more quality code goes to production. This, in turn, boosts morale and developer satisfaction, leading to better engineering and business outcomes.

Madison Unell

Madison Unell is a seasoned senior product manager with over eight years of experience leading cross-functional teams to build and deliver innovative software solutions to complex problems in complex industries. Her expertise lies in identifying customer needs and building user-centric experiences that solve those problems through technology, design, and operations. She has a proven track record of leveraging data-driven insights to inform product decisions and drive growth, especially in early-stage products seeking product-market fit.

