Key Takeaways
- Developer Effectiveness can be optimized by identifying a series of feedback loops that represent the main tasks in engineering; these can be measured and streamlined.
- Feedback loops can be of different sizes; some are micro-feedback loops, small tasks that developers do over and over throughout the day. Removing friction helps developers stay focused, and the gains compound into improvements in the larger measures.
- There is no magic effectiveness metric, but there are low-level metrics that can be used as indicators; these should be applied in the context the team is working in, and not every team should use the same metrics or expect to hit the same levels.
- Quantitative metrics should be used in conjunction with qualitative feedback from developers using the internal environments and tools.
- Effective companies are using platform teams to get economies of scale; these operate like product teams, applying the same principles and customer focus, except the product is a technical capability and the end users are developers.
We can think of engineering as a series of feedback loops: simple tasks that developers do and then validate to get feedback, which might come from a colleague, a system (i.e. an automation), or an end user. Using a framework of feedback loops gives us a way of measuring and prioritizing the improvements needed to optimize developer effectiveness.
Tim Cochran explained this concept in a presentation at QCon London 2020 about Developer Effectiveness: Optimizing Feedback Loops. He also published the article Maximizing Developer Effectiveness in which he examines how some organizations have used feedback loops to improve overall effectiveness and productivity.
In the article he lists out the key feedback loops, such as:
- Validate if a local code change works
- Find root cause for defect
- Validate component integrates with other components
- Validate a change meets non-functional requirements
- Become productive on a new team
- Get answers to an internal technical query
- Launch a new service in production
- Validate a change was useful to the customer
He also introduced the concept of micro-feedback loops: small activities followed by validations that developers do tens or hundreds of times a day, or more.
Cochran gave an example of a micro-feedback loop in which a developer does a first level of validation for their code change:
This would be in their development environment. That means local on their computer, or in a personal VM or cloud environment. The validation the developer desires is that the code "works" as expected. They validate this by compiling the code, deploying it to the application server, and then running it in their client (e.g. browser or API client). They can also validate it by running the unit tests they wrote and the existing tests for that component.
There are other forms of small validations, but these are good ones to optimize, Cochran said. The benefits of optimizing add up quickly; they are also compounded because they keep developers in their state of flow, leaving less chance for disruption, Cochran argued.
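As a rough illustration of such a loop, the sketch below times one pass of a local validation, assuming a component whose unit tests run under pytest; the command, paths, and the ten-second target are illustrative assumptions rather than anything Cochran prescribes.

```python
import subprocess
import time

# Illustrative assumption: the component's unit tests run via pytest.
VALIDATION_COMMAND = ["pytest", "tests/unit", "-q"]
TARGET_SECONDS = 10  # hypothetical target for one pass of the micro-feedback loop

def run_validation_loop() -> float:
    """Run one pass of the local validation and report how long it took."""
    start = time.monotonic()
    result = subprocess.run(VALIDATION_COMMAND)
    elapsed = time.monotonic() - start
    status = "passed" if result.returncode == 0 else "failed"
    print(f"Local validation {status} in {elapsed:.1f}s (target: {TARGET_SECONDS}s)")
    return elapsed

if __name__ == "__main__":
    run_validation_loop()
```

Tracking a number like this over time makes it visible when the loop starts to drift past the point where developers can stay in flow.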
Cochran suggests that empowering developers is an important way to optimize micro-feedback loops. "Typically how micro feedback loops are tackled is through continuous improvement and fixing technical debt," he said. Developers can easily find the friction in their environment, because they spend so much time in it. There are many articles written about the advantage of giving developers time to pay down this debt and make process tweaks, enabling them to come up with creative solutions to improving productivity, Cochran argued.
While he listed some key feedback loops, Cochran suggests finding your own that are important for your context.
You need to know what exactly is happening in the engineering organization. You have to take stock. Map out the complete software developer value chain. Talk to developers, ops, and QA about what they do, how they achieve their outcomes, and their frustrations. This will show you the friction points.
For identifying the feedback loops that need to be optimized, it is important to look holistically across organization structures, as Cochran explained:
Too often we see companies trying to optimize a process that shouldn’t really exist in the first place if the organization was organized better. Resist the urge to dive into solutions and pick tools; often the solution lies in culture, communications and process.
Another gotcha is focusing just on speed and automation. While speed is of course important, a fast process is useless if you can’t understand the results or the results are inaccurate, so it is important to focus on the value you are providing to the developers and other technologists.
For feedback loops that are complicated and span teams and organizational boundaries, Cochran suggested using value stream mapping to identify friction.
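To make the value stream mapping idea concrete, here is a minimal sketch that models a cross-team feedback loop as a sequence of steps with active work time and wait time, then computes lead time and flow efficiency; the step names and durations are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    work_hours: float  # time actively spent on the step
    wait_hours: float  # time spent queued or waiting on another team

# Hypothetical cross-team feedback loop, from code complete to feedback received.
value_stream = [
    Step("code review", work_hours=1, wait_hours=8),
    Step("merge and build", work_hours=0.5, wait_hours=1),
    Step("hand-off to QA team", work_hours=0, wait_hours=24),
    Step("regression testing", work_hours=4, wait_hours=2),
    Step("deploy to staging", work_hours=0.5, wait_hours=6),
]

work = sum(s.work_hours for s in value_stream)
wait = sum(s.wait_hours for s in value_stream)
lead_time = work + wait
flow_efficiency = work / lead_time

print(f"Lead time: {lead_time:.1f}h, flow efficiency: {flow_efficiency:.0%}")
# The steps with the longest waits are the friction points to target first.
for step in sorted(value_stream, key=lambda s: s.wait_hours, reverse=True):
    print(f"  {step.name}: {step.wait_hours}h waiting, {step.work_hours}h working")
```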
InfoQ interviewed Tim Cochran about optimizing micro feedback loops to increase developer effectiveness.
InfoQ: Can you give some examples of micro-feedback loops?
Tim Cochran: Often it is longer validation processes that can be shifted left, broken down, and moved into fast micro-feedback loops. The key here is usefulness and empowerment. If the process is useful, accurate, and easy to interpret, and the team is empowered to optimize, then it will naturally get shortened. Developers will want to run useful validations more often, earlier in the cycle.
Take the example of regression tests that are owned by another, siloed team, that are flaky and slow, run out of cycle, and make it hard to figure out what is wrong when they fail. It is unlikely that they will get optimized, because the developers don’t perceive much value in them; whereas with a test suite based on the test pyramid that is owned by the team, lives in the same code base, and is the gate to deployment to all environments, the team will come up with ways of improvement.
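As an editorial sketch of the kind of team-owned gate Cochran contrasts this with, the script below runs the pyramid's stages from fastest to slowest and stops at the first failure so feedback arrives as early as possible; the pytest commands and directory layout are assumptions for illustration.

```python
import subprocess
import sys

# Hypothetical test-pyramid stages, ordered fastest to slowest so feedback
# arrives as early as possible; the suite lives in the same repo as the code.
STAGES = [
    ("unit", ["pytest", "tests/unit", "-q"]),
    ("integration", ["pytest", "tests/integration", "-q"]),
    ("end-to-end", ["pytest", "tests/e2e", "-q"]),
]

def deployment_gate() -> int:
    for name, command in STAGES:
        print(f"Running {name} tests...")
        if subprocess.run(command).returncode != 0:
            print(f"{name} tests failed; stopping before the slower stages.")
            return 1
    print("All stages passed; change is cleared for deployment.")
    return 0

if __name__ == "__main__":
    sys.exit(deployment_gate())
```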
You can apply the concept of feedback loops at different scales, for example super small loops: when the developer is coding, what feedback can we give them to help and nudge them? How does your IDE inform you that you have made a mistake, or how does it help you find the syntax for the command you are looking for?
When we look at the developer flow, discovering information is a big source of friction.
In the Martin Fowler article I reference case studies at Spotify and Etsy. They are examples of companies that pay attention to the details, to the small things. Spotify created backstage.io, which is an example of an internal tool focused on developer experience that does many things to speed up developer flow. One of the things it really helps with is information discoverability: to provide technical documentation, API definitions and where to find a service owner to get help. They have included a fast contextual search and built-in feedback mechanisms to improve the documentation. I often see at companies that finding this basic information is surprisingly difficult, and can do a lot to distract developers from their main task as they chase down information.
InfoQ: How do you get buy-in on optimizing feedback loops, in particular the less visible micro-feedback loops?
Cochran: If you are having trouble justifying the time for those fixes, to get buy-in we have to raise awareness of the problem. This is where measuring the feedback loops and estimating the outcome of the improvements come in. There are tools that can measure the amount of technical debt. You can also use metrics such as bugs, time spent debugging, or outages; all these are good measures.
The other thing to raise awareness on is duplicative work - if each team is writing code to solve similar problems, or if they are using different third party tools to solve the same thing. Auditing this will allow you to reveal the problems and the opportunities.
What we are seeing now is that when you have platform teams purely dedicated to improving developer productivity, they function like product teams. They should have research and product managers who are laser-focused on the experience of their users – the developers.
InfoQ: What are your suggestions for low-level measurements and how can they help to improve effectiveness?
Cochran: Once you have your list of feedback cycles, they should be measured. Other metrics might be length of time a PR is open, number of commenters on a PR, the amount of dependencies between teams, time to deploy to local development server, and amount of time the code is toggled off. We have to be careful not to index too heavily on one metric as it is the combination that matters. I suggest working with the development teams to find the right metrics for their specific contexts. And we must make sure we are measuring the end value for users, otherwise all the work to speed up development will be for nothing.
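As one concrete example of such a metric, the time a pull request stays open can be read straight from the hosting provider's API; the sketch below does this against the GitHub REST API for merged pull requests, using the median to dampen outliers. The repository name is a placeholder, and unauthenticated requests are rate-limited.

```python
import statistics
from datetime import datetime

import requests

# Placeholder repository; swap in your own.
REPO = "your-org/your-repo"
URL = f"https://api.github.com/repos/{REPO}/pulls"

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Fetch recently closed pull requests.
pulls = requests.get(URL, params={"state": "closed", "per_page": 100}).json()

# Hours from opening a PR to merging it, for merged PRs only.
hours_open = [
    (parse(pr["merged_at"]) - parse(pr["created_at"])).total_seconds() / 3600
    for pr in pulls
    if pr.get("merged_at")
]

if hours_open:
    print(f"Merged PRs sampled: {len(hours_open)}")
    print(f"Median time open: {statistics.median(hours_open):.1f}h")
```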
We often focus on metrics once code is pushed to a server. It is useful to measure the cycles in the developer environment too, pre-commit. The metrics can also be misused. It is important that these measures be just for the benefit of the team; we are not trying to assess individual performance. It is about trying to find out where bottlenecks and churn are, and making improvements.
Qualitative measures are important too. Motivation and frustration are big factors affecting productivity; not everything can be measured with a number. We should trust and listen to developer opinions. Using their experience and intuition to optimize will be a huge enabler.
InfoQ: You mention that the four key metrics and the research in the Accelerate book are a good place to start. How should they be used?
Cochran: I mention that they are a great thing to measure; there is so much advice, so many tools and techniques, that it is good to have a yardstick. The research has shown a correlation between organizational performance and DevOps metrics. This is good ammunition when justifying improvement projects; you know where you are at and where you need to be. If you can show that your initiatives will improve the four metrics, then it’s easier to demonstrate the benefit to the broader business.
We are seeing organizations measuring using the four key metrics and automating the collection. This allows you to track improvements over time. One gotcha I see is making sure you are actually measuring the whole value chain, not just the part of it that is easy to measure. For the lead time metric, it should be from the point a developer checks in all the way to the change being exposed to a user. This can be tricky because oftentimes CI/CD pipelines are set up to promote to a single environment, and you might have toggles in place that gate the user access.
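Here is a minimal sketch of measuring lead time across that whole chain, assuming you can obtain both the commit timestamp and the moment the change is actually exposed to users (for example, when the toggle is switched on); the data below is illustrative and would normally come from the VCS, the CD pipeline, and the feature-toggle system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Change:
    commit_sha: str
    committed_at: datetime         # developer checks the change in
    exposed_to_users_at: datetime  # toggle flipped on, not just "deployed somewhere"

# Illustrative data; in practice these timestamps are collected automatically.
changes = [
    Change("a1b2c3d", datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 2, 15, 0)),
    Change("d4e5f6a", datetime(2021, 3, 1, 11, 30), datetime(2021, 3, 1, 16, 45)),
    Change("b7c8d9e", datetime(2021, 3, 2, 10, 15), datetime(2021, 3, 4, 9, 0)),
]

lead_times = [c.exposed_to_users_at - c.committed_at for c in changes]
median_seconds = median(lt.total_seconds() for lt in lead_times)
print(f"Median lead time for changes: {timedelta(seconds=median_seconds)}")
```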
InfoQ: Following the talk you did at QCon, you continued the research and wrote an article for Martin Fowler on developer effectiveness. How has it been received? What has resonated?
Cochran: The reception has been very positive. The best thing has been the discussion we have had. Many people have been thinking about developer flow, micro-feedback loops and the compounded effects. There is interesting tooling being built in this space. Unfortunately, the low effectiveness example spoke to the current situation for a lot of organizations. But it is still good to hear from many motivated technologists trying to do better. I have heard examples of the article being used to help strengthen a case for a program to remove technical debt or to sponsor wide-ranging developer experience initiatives. This was in part the intention of the article. A lot of people wanted more, more details, more case studies. My collaborators and I will be addressing those in follow up articles.
About the Interviewee
Tim Cochran is a technical director for the US East Market at ThoughtWorks. Cochran has over 19 years of experience leading work across start-ups and large enterprises in various domains such as retail, financial services, and government. He advises organizations on technology strategy and making the right technology investments to enable digital transformation goals. He is a vocal advocate for the developer experience and passionate about using data-driven approaches to improve it.