Tools for assessing teams

How analyzing and understanding a team's flow of work will help you ask better questions.

Jakob Wolman
10 min read · May 15, 2024

Disclaimer

Have you talked to them? That is always my first question when someone asks me how a team is doing. No matter how much data you have, nothing beats sitting down and talking to people. That is how you identify root causes and understand how to best help a team. Looking at data will not give you the answers. Instead, it will help you formulate questions. As a systems thinker, I see trends in data as signals in a system. They are something to be curious about, dig into, and understand better. But as with any data, you should not compare teams or draw conclusions directly from numbers on a dashboard.

In this article, I will share some of my tools for quickly understanding the flow of work for a team. They help me ask better questions.

This article assumes that the team is tracking their work in some way. If they are not, that is a great place to start the conversation.

Flow and process

The first thing I look at is the team board. This gives me an idea of what process the team follows. Are they working in sprints, kanban, or something else? What columns does the board have? A common sight is a generic default board with columns like To do, In progress, and Done.

A board like that tells me the team has not had a proper conversation about their process and flow of work, and it makes it hard to tell how the team is progressing towards a goal.

Another thing I look out for is potential outsourcing of work: a Testing or Ready for release column, for example. Who moves work out of these columns? A dedicated person on the team, or perhaps someone in another team? I also look for waiting stages, or potential queues in the system, such as columns called Waiting for test or Waiting for approval. These stages usually signal that the team has a dependency on another system or department.

Lead time and cycle time

The definition of lead time is the time it takes from a process start to its completion. Cycle time is a measure of how much time you spend working on a specific task. Waiting time is the time a task is in the system, but nobody is working on it. Lead time = cycle time + waiting time.

When looking into team processes I count lead time as the time it takes from a decision to work on a task until that code is shipped and in the hands of a customer.

The best way to measure is to track how long a task spends in each state on the board. This will give you an understanding of how these measurements change over time.
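As a minimal sketch of this kind of measurement: assuming each task records timestamped state transitions (the state names, dates, and data shape below are invented for illustration), you can sum the time spent in each state and split the total into cycle time and waiting time:

```python
from datetime import datetime

# Hypothetical state log for one task: (timestamp, state entered).
# "In progress" counts as active work; everything else is waiting.
transitions = [
    (datetime(2024, 5, 1), "To do"),
    (datetime(2024, 5, 3), "In progress"),
    (datetime(2024, 5, 6), "Waiting for test"),
    (datetime(2024, 5, 9), "In progress"),
    (datetime(2024, 5, 10), "Done"),
]

ACTIVE_STATES = {"In progress"}

def time_per_state(transitions):
    """Sum the days spent in each state between consecutive transitions."""
    totals = {}
    for (start, state), (end, _) in zip(transitions, transitions[1:]):
        totals[state] = totals.get(state, 0) + (end - start).days
    return totals

totals = time_per_state(transitions)
lead_time = sum(totals.values())  # decision to done
cycle_time = sum(d for s, d in totals.items() if s in ACTIVE_STATES)
waiting_time = lead_time - cycle_time  # lead = cycle + waiting

print(lead_time, cycle_time, waiting_time)  # 9 4 5
```

Note that the task moved back from Waiting for test to In progress; summing per state rather than per visit handles that automatically.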

Every increase and decrease in lead time is an opportunity for questions. Did something change in the system? Did the team work on specific tasks that were free of dependencies? Did the team introduce a new practice? The seasonality of changes is also interesting. Can you see peaks with certain intervals? Does the team have a specific process that creates these peaks or valleys? How is the lead time different for different types of tasks? Are bugs different from feature development? How does the team deal with expedited issues?

Finally, the length of the lead time gives you good hints for planning purposes. If a team has an average lead time of, say, 10 days and runs two-week sprints, you know that anything not started by the end of the first week is unlikely to be completed within the sprint.

Throughput

Throughput means the number of tasks completed within a certain time. I look at weekly throughput. As I find little value in estimation and the use of story points, I never look at the throughput of anything other than the number of tasks.

As with most measurements, the actual number of tasks completed is not interesting. A team’s throughput can vary due to the way they slice work, how they use tasks, and their definition of done. What is interesting are the trends, the peaks, and the valleys. Can you identify holidays by looking at the team’s throughput chart?
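For example, a weekly throughput chart needs nothing more than a list of completion dates exported from the tracker (the dates below are invented for illustration), with every task counting as one, no story points:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates exported from a tracker.
completed = [
    date(2024, 5, 2), date(2024, 5, 3), date(2024, 5, 3),
    date(2024, 5, 8), date(2024, 5, 9), date(2024, 5, 10), date(2024, 5, 10),
]

# Bucket by ISO (year, week); each date contributes exactly one task.
weekly = Counter(d.isocalendar()[:2] for d in completed)
for (year, week), n in sorted(weekly.items()):
    print(f"{year}-W{week:02d}: {n} tasks")
```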

If you see a peak, dig into what type of tasks were completed that week. Was it a bug bash or a highly productive week? What allowed them to be more productive? Similarly, look at the valleys. What made it hard to complete a lot of work that week?

A team’s throughput is also a great help when planning. If a team runs two-week sprints and has an average throughput of 10 items per week, and they have planned 30 items for a sprint, you know they are up for a challenge. If they have never completed more than 15 items in a given week, the planning is questionable.

Throughput, WIP, and lead time are intimately related, as expressed through Little’s law: Average lead time = Average WIP / Average throughput.

This means that, with WIP held constant, lead time decreases when throughput goes up. Similarly, if WIP goes up while throughput stays the same, lead time increases.
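A quick worked example of Little’s law (the numbers are made up):

```python
def average_lead_time(avg_wip, avg_throughput):
    """Little's law: lead time = WIP / throughput (same time unit on both)."""
    return avg_wip / avg_throughput

# 12 items in progress, 2 items finished per day -> 6 days average lead time.
print(average_lead_time(12, 2))  # 6.0

# Halving WIP while throughput stays the same halves the lead time.
print(average_lead_time(6, 2))   # 3.0
```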

WIP and aging WIP

WIP stands for work in progress (or process). It is the amount of unrealized value that exists in the system: more simply put, all the work that has been started but not finished. To get an overview of WIP, I count the number of items we have invested work in but not shipped to production. You can look at the WIP for each column on the board, but you should also look at the total WIP for the whole system.

A good rule of thumb is to strive for a WIP that equals the number of people working in the system, or lower. If your WIP is higher, people inevitably have to do some kind of context switching. A high WIP is the root cause of several common symptoms: long lead times (Little’s law), context switching, blocked tasks, and bottlenecks in the system. Seeing high WIP and bottlenecks should raise questions to the team about how they collaborate and communicate when something is stuck or waiting. Are they helping each other? Are people highly specialized, only able to perform tasks in certain parts of the system? Is there any automation that eases the bottlenecks?

Looking at how WIP is changing over time is also helpful, and gives you an idea of how the team is working.

Another view is aging WIP: how long a task has been sitting in a certain state. This will help you identify long-forgotten, abandoned tasks that have been invested in but never completed. These tasks give you a clue about how the team makes decisions and what dependencies they have.
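An aging-WIP report is straightforward to build from the in-progress items and the date each entered its current state (the item keys, states, and 14-day threshold below are invented for illustration):

```python
from datetime import date

# Hypothetical in-progress items: (key, state, date the state was entered).
wip = [
    ("T-101", "In progress", date(2024, 4, 2)),
    ("T-205", "Waiting for test", date(2024, 5, 10)),
    ("T-310", "In progress", date(2024, 5, 13)),
]

def aging_wip(items, today, threshold_days=14):
    """Return items oldest-first, flagging those over the age threshold."""
    aged = [(key, state, (today - entered).days) for key, state, entered in items]
    aged.sort(key=lambda item: item[2], reverse=True)
    return [(k, s, d, d > threshold_days) for k, s, d in aged]

for key, state, days, stale in aging_wip(wip, today=date(2024, 5, 15)):
    flag = "  <-- ask about this one" if stale else ""
    print(f"{key:6} {state:18} {days:3} days{flag}")
```

Sorting oldest-first puts the likely-abandoned tasks at the top of the list, which is where the interesting conversations start.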

There are teams that manage to keep a WIP of one, keeping a single piece flow throughout the whole system. They usually apply methods like ensemble programming to reach this state of focused work.

Delivery cadence

How often is the team shipping to production? The delivery cadence says a lot about a team’s ability to create short feedback loops, their knowledge and ownership of the production environment, and their dependencies on other teams. How long does a deployment take, and how much of it is automated? Some teams will be constrained by an ecosystem. For example, app developers have to wait for approval before they can ship a new version of an app to customers. The delivery cadence also tells you how quickly a team can get user and system feedback. Is the team able to ship single tasks to production, or do they have to batch them together in a release? Do tasks need to sit idle for a long time before being shipped? Is the team utilizing methods like feature flags to release features to customers? How easily and quickly can the team recover from a problem in production?

Backlog

Most teams hold ideas and descriptions of future work in a backlog. It should be somewhere accessible to all team members. The size of the backlog and the age of the items in it will tell you a lot about how the team works with their stakeholders. In many organizations, the backlog is a place where ideas go to die. It quickly becomes a graveyard of “things someone thinks we should work on”.

At what level are the ideas? Are they big and fluffy, or has the team been forced to invest time in breaking down, estimating, and detailing work? If so, you have found a big source of waste. How big is the backlog? How much time is the team spending on grooming and prioritizing the backlog? Does the team know their most pressing issues? How do new items end up in the backlog? At what rate? How do they leave the backlog? At what rate? How much time is the team spending on finding duplicates before putting something new in the backlog? A good practice is to clean out issues that are older than, say, six months. This ensures the team always has a short backlog with the most current issues to work on. If the organization is overly worried about saving every idea and bug, it is a signal that you will find a lot of wasted work on prioritization, grooming, estimating, and breaking down work that will never actually get done.
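The six-month cleanup rule can be sketched like this (the backlog items are invented, and roughly 182 days stands in for six months):

```python
from datetime import date, timedelta

# Hypothetical backlog: (title, date created).
backlog = [
    ("Improve onboarding flow", date(2024, 4, 20)),
    ("Refactor billing module", date(2023, 8, 1)),
    ("Fix tooltip alignment", date(2024, 5, 1)),
]

def prune_backlog(items, today, max_age_days=182):
    """Split the backlog into items to keep and stale items to drop."""
    cutoff = today - timedelta(days=max_age_days)
    kept = [(t, d) for t, d in items if d >= cutoff]
    dropped = [(t, d) for t, d in items if d < cutoff]
    return kept, dropped

kept, dropped = prune_backlog(backlog, today=date(2024, 5, 15))
print(len(kept), len(dropped))  # 2 1
```

If a dropped item really matters, it will come back on its own; that is the bet behind this practice.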

Inflow of bugs

Most teams have a process for dealing with bugs (and if they don’t, that is another great conversation to have). These can be bugs reported by the team themselves during development, bugs reported by employees, or bugs reported by customers. Bugs are usually found in some sort of backlog. At what rate are new bugs created? At what rate are bugs fixed? How is it decided when a bug should be fixed? How big is the bug backlog? If it is ever-growing, the team risks ending up in a situation where they have thousands of reported issues, making it impossible for them to prioritize or understand where they have bigger, systemic problems. The inflow of bugs and how long they spend in a backlog will also tell you how effective the team is in dealing with the quality of their product. A high inflow of bugs is also a signal to ask about how much time the team is spending just dealing with the flow (understanding the cause of a bug, communicating with customers or other departments, estimating and planning the work) before fixing the issue. In my experience, this can be a big source of waste as the team will have to context switch to investigate issues and communicate around them. Dealing with the inflow of bugs is also work that increases WIP and context switching.
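A simple way to see whether the bug backlog is growing is to compare weekly inflow against the fix rate (the weekly counts below are invented for illustration):

```python
# Hypothetical weekly counts exported from the bug backlog.
bugs_reported = [4, 6, 5, 8, 7]  # new bugs per week
bugs_fixed    = [3, 4, 5, 4, 5]  # bugs closed per week

# Positive numbers mean the backlog grew that week.
backlog_growth = [r - f for r, f in zip(bugs_reported, bugs_fixed)]
net = sum(backlog_growth)

print(backlog_growth)  # [1, 2, 0, 4, 2]
print(net)             # 9 -- the bug backlog grew by 9 in five weeks
```

A consistently positive growth series is the signal to dig into: is the inflow unusually high, or is fixing bugs being crowded out by other work?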

Cumulative flow

Cumulative flow diagrams will give you an overview of how the team has been doing historically. They are not useful for finding current issues in the system but will give you a picture of how the situation has changed over time. A cumulative flow diagram plots, for each day, the cumulative number of tasks that have reached each state, drawn as stacked bands: the vertical distance between two bands is the WIP in that state, and the horizontal distance approximates the lead time.

A cumulative flow diagram will give a good overview of how the size of the backlog grows, how sprints are planned, how WIP, lead time, and cycle time vary, and the queues being built up in the system.
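As an illustration of how such a diagram is built (the task data is invented), for each day you count how many tasks have reached each state by that day; plotting those counts as stacked bands gives the cumulative flow diagram:

```python
from datetime import date, timedelta

# Hypothetical per-task state-entry dates (None = state not reached yet).
tasks = [
    {"To do": date(2024, 5, 1), "In progress": date(2024, 5, 2), "Done": date(2024, 5, 4)},
    {"To do": date(2024, 5, 1), "In progress": date(2024, 5, 3), "Done": None},
    {"To do": date(2024, 5, 2), "In progress": None, "Done": None},
]
STATES = ["To do", "In progress", "Done"]

def cumulative_flow(tasks, start, days):
    """For each day, count how many tasks have reached each state by that day."""
    rows = []
    for offset in range(days):
        day = start + timedelta(days=offset)
        counts = {
            state: sum(1 for t in tasks if t[state] is not None and t[state] <= day)
            for state in STATES
        }
        rows.append((day, counts))
    return rows

for day, counts in cumulative_flow(tasks, date(2024, 5, 1), 4):
    print(day, counts)
```

Each row is one vertical slice of the diagram; the difference between adjacent state counts is the number of tasks currently sitting in that state.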

Tools

Good tooling makes this type of analysis a lot easier. But getting to most of these insights is possible by exporting data from a tracking system, importing it to a spreadsheet, doing some basic data wrangling, and data visualization.

Many trackers include basic analysis tools you can configure to show what you need. The best tool I have come across so far has been ActionableAgile, which has an integration with Jira. If your company is running Google Workspace you probably have access to Looker Studio (formerly known as Data Studio). If you are running Microsoft products you probably have access to Power BI. My favorite data analysis tool is Qlik, and perhaps your company already holds licenses. There is no shortage of tools out there.

The most important thing is not to get lost in tooling. You want something quick and dirty for this kind of analysis. Even if it is great to build dashboards you can revisit over time, you will want to do a deeper analysis of how the team is working, and what they need, before investing in building something for their specific needs.

What are your favorite tools for looking at team workflows? I would love to hear what you have found useful when performing this kind of analysis.


Jakob Wolman

Systems thinker and agile coach turned manager. Learn by sharing and discussing. Passionate about knowledge sharing.