A data-driven retrospective

Sérgio Vinícius de Sá Lucena
8 min read · Sep 17, 2020

Recently I was looking for a way to help my team reflect on how we are performing and how we could do things better.

In every good team, it's very common to find people who think everything looks fine. Many times you ask the team members what they think could be improved, and they don't have an answer because, for them, everything is already going well. Believe me… Over the years I've heard things like:

I think we don't need retrospectives because we're all doing well.

or even

We don't need to do retrospectives at the end of each cycle.

As I always say, the retrospective is the most important meeting for keeping the spirit of continuous improvement alive in the team.

Let's be honest: if someone thinks everything is fine, that in itself is very likely an issue in the team, and something for the Scrum Master to explore.

So, how do we approach this?

Well… there are many ways, but today I want to focus on a data-driven approach. It's very straightforward: it's all about extracting valuable metrics about the team itself, making them accessible to everyone, and bringing awareness to the team.

If you bring the metrics and create a culture of showing the numbers to the team in every retrospective, they'll always be able to compare themselves with past cycles and discuss their performance.

Self-reflection :)

A quick example could be using velocity. Let's say your team usually ships 40 story points per sprint on average, but in one sprint they delivered 28. Why did that happen? Maybe because there were too many bugs to fix (which you don't estimate, so they don't carry story points)? Maybe some team members were on holiday and capacity was lower? Did this affect the sprint goals? How? Why wasn't it planned for in advance?
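If you want to make that kind of check systematic, a tiny script can flag the sprints worth asking about. This is only an illustrative sketch: the velocities and the 20% threshold are made-up numbers, not a recommendation.

```python
# Illustrative sketch: flag sprints whose velocity drops well below the
# running average. All numbers here are hypothetical.
velocities = [38, 42, 41, 39, 40, 28]  # story points delivered per sprint

for i, current in enumerate(velocities[1:], start=1):
    average_so_far = sum(velocities[:i]) / i
    if current < 0.8 * average_so_far:  # assumed threshold: 20% below average
        drop = (1 - current / average_so_far) * 100
        print(f"Sprint {i + 1}: {current} points, "
              f"{drop:.0f}% below the running average of {average_so_far:.0f}")
```

For the numbers above, it prints only the last sprint (28 points, 30% below a running average of 40), which is exactly the kind of deviation you'd want to bring into the retrospective.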

This was just a quick example, and I'm personally not a fan of story points as a metric. Currently, I'm more inclined towards cycle time, basically because I'm using Kanban and my team has a really nice CI/CD pipeline that allows us to ship things as soon as the tickets are approved. Since we're using JIRA, it automatically generates this data for me, and it looks like the chart below:

Control chart sample

The chart above shows details of each ticket. I filter it to measure each ticket from the moment it is moved to In Progress until it is shipped to production and moved to Done, as per our team's Definition of Done, over a 2-week cycle (this is how we measure our cadences).
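If you want the same numbers outside of JIRA (for the preparation step further down, for example), you can pull them from each issue's changelog. Here's a minimal sketch assuming the JIRA Cloud REST API with an API token; the instance URL, credentials, ticket key and status names are placeholders you'd replace with your own.

```python
# Minimal sketch: pull a ticket's changelog from the JIRA Cloud REST API and
# measure how long it spent in each status. The URL, credentials, ticket key
# and status names are placeholders for your own instance and workflow.
from datetime import datetime
import requests

JIRA_URL = "https://your-company.atlassian.net"  # hypothetical instance
AUTH = ("you@example.com", "your-api-token")     # JIRA Cloud email + API token

def time_in_status(issue_key):
    """Return {status name: seconds spent in it} based on the changelog."""
    response = requests.get(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
        params={"expand": "changelog"},
        auth=AUTH,
    )
    response.raise_for_status()
    histories = response.json()["changelog"]["histories"]

    durations = {}
    current_status, since = None, None
    for history in sorted(histories, key=lambda h: h["created"]):
        when = datetime.strptime(history["created"], "%Y-%m-%dT%H:%M:%S.%f%z")
        for item in history["items"]:
            if item["field"] != "status":
                continue
            if current_status is not None:
                durations[current_status] = (
                    durations.get(current_status, 0.0)
                    + (when - since).total_seconds()
                )
            current_status, since = item["toString"], when
    return durations

# Cycle time = time spent in the "working" statuses, mirroring the
# In Progress -> Done window the control chart uses (status names assumed).
WORK_STATUSES = {"In Progress", "In Review", "In Testing"}
breakdown = time_in_status("TEAM-123")  # hypothetical ticket key
cycle_time_hours = sum(breakdown.get(s, 0.0) for s in WORK_STATUSES) / 3600
```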

As you can see at the top, there are outlier tickets, and those are the good ones to discuss, because they took much more time to ship than the team's average. Why did this happen? In the example above, you can see that the ticket took 5h to be implemented but stayed in review for 1 week. Why did it stay so long in review? Maybe the team is not focusing enough on reviewing? Maybe they're starting too many tickets and having trouble finishing them? Maybe they should adjust the Work In Progress (WIP) limit? Maybe there were too many external dependencies or blockers? What can we do to reduce that? Was there anything they could have done to ship this ticket faster? Could they have split it into smaller tickets?

As you can see, there are many angles to explore. So how do you organize for it?

In short, the idea is to help the team reflect on the data. To run a data-driven retrospective, we're going to split the team into small groups, give each group some outlier tickets to discuss, and ask them to bring back action items.

Preparation

Go to the JIRA control chart (in the Reports section) and check the outlier tickets. Depending on your team's size, avoid taking too many tickets; I wouldn't recommend more than 4.
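JIRA already highlights the outliers visually in the control chart, but if you prefer to script the selection (say, from the cycle times you computed or exported earlier), a rough rule like "more than twice the median" works well enough. The factor and the ticket keys below are assumptions for illustration.

```python
# Hypothetical helper: pick up to max_tickets tickets whose cycle time is
# well above the team's typical value. The "2x the median" rule is just an
# assumption; tune it to your own data.
from statistics import median

def pick_outliers(cycle_times, factor=2, max_tickets=4):
    typical = median(cycle_times.values())
    slow = {key: t for key, t in cycle_times.items() if t > factor * typical}
    return sorted(slow, key=slow.get, reverse=True)[:max_tickets]

cycle_hours = {"TEAM-101": 18, "TEAM-102": 22, "TEAM-103": 130,
               "TEAM-104": 25, "TEAM-105": 210, "TEAM-106": 30}
print(pick_outliers(cycle_hours))  # -> ['TEAM-105', 'TEAM-103']
```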

Collect the metrics for these tickets. Something like:

<TicketID> <Ticket Title>: <Author> (this is just to help you organize the groups later)

  • In Development: 2d 6h 12m
  • In Review: 1w 2d 13h 38m
  • In Testing: 3d 9h 18m
  • Cycle time: 2w 1d 5h 9m
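To paste those breakdowns into the document in the same "1w 2d 13h 38m" style, a small formatter helps. This is a sketch that assumes calendar weeks and days; adjust it if you prefer working-day units.

```python
# Turn a duration in seconds into the "1w 2d 13h 38m" style used above
# (calendar units assumed, i.e. 1w = 7d and 1d = 24h).
def humanize(seconds):
    units = [("w", 7 * 24 * 3600), ("d", 24 * 3600), ("h", 3600), ("m", 60)]
    parts = []
    for label, size in units:
        value, seconds = divmod(int(seconds), size)
        if value:
            parts.append(f"{value}{label}")
    return " ".join(parts) or "0m"

print(humanize(1_000_000))  # -> "1w 4d 13h 46m"
```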

Split the team. Let's say you picked 4 tickets to be discussed.

You can decide on how to split them:

  1. 4 groups, each with 1 ticket
  2. 2 groups, each with 2 tickets
  3. 2 groups, each with the same tickets
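If you like, you can even script the split: shuffle the team and deal the tickets round-robin across the groups. The names, ticket keys and group count below are examples only.

```python
# Example split: shuffle the team and deal tickets round-robin across groups.
# Team members, ticket keys and the number of groups are hypothetical.
import random

team = ["Ana", "Bruno", "Carla", "Davi", "Elisa", "Fabio", "Gabi", "Hugo"]
tickets = ["TEAM-103", "TEAM-105", "TEAM-110", "TEAM-118"]
n_groups = 2

random.shuffle(team)
groups = [team[i::n_groups] for i in range(n_groups)]
for i, members in enumerate(groups):
    assigned = tickets[i::n_groups]
    print(f"Group {i + 1}: {', '.join(members)} -> {', '.join(assigned)}")
```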

It helps a bit if the person who worked on the ticket is in the group where that ticket will be discussed, but it's not mandatory. If people say they can't understand why the ticket took so long just because they didn't work on it directly, maybe they weren't paying attention in the daily meetings 🤔 (a-há! 😉, or maybe they were on vacation… who knows?). Leave them free to find the root causes: they can investigate the merge requests, the comments on the JIRA ticket, and use as much information as they can find.

Create a Google document with the tickets' data, without mentioning the name of the person who worked on each one (we are not blaming anyone, and that should be clear!). It can be as simple as this template.

And you should be ready to start now.

Set up an ice breaker (15min)

An ice breaker is always a good idea to set the stage and help the team relax.

Here's a fun idea I learned from a friend and fellow Scrum Master that works really well for remote teams:

  1. Ask the team to send you a picture of their workstation.
  2. Share the images in the team's channel and ask everyone to guess who owns each workstation.

This was pretty fun and the team loved it. The interactions were pretty good.

Prime directive (3min)

Since we're exposing some data about work done by some of us, it's always important to remember that no one is blaming anyone and the main purpose is to improve the way we work. In this context, the prime directive is very helpful:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

I also reminded the team that we always want to bring value to our customers as fast as possible, which is exactly why we should always look for ways to reduce the cycle time.

Retrospective

Explain the dynamics. Tell the team that you've picked some outlier tickets from a recent period, that you've split them across the groups, and that you want each group to discuss and find the root cause of why those tickets took longer than average. Ask them to bring back 1–3 concrete actions the team could take to prevent this from happening again. If you use the template I provided, you can just share the links with them.

I used breakout rooms in Zoom (as we were doing it remotely), so as host/co-host we were able to set up rooms for the specific groups, and I could jump from one room to another to observe the discussions. I liked what I saw.

I suggest 2 tickets per group and 30m for discussions (it can be reduced depending on the size of the groups).

When the time is up, bring them all together in the same room (or virtual room), and ask one person from each group to present their findings.

Make sure everyone respects each group's conclusions, even if someone thinks there are other root causes.

Next, group the findings, organize the action items, and make commitments.

I suggest 30m for this part, as there might be good discussions while summarizing the action items, and that's it 👌.

My experience

My team is currently working remotely, and we are a bit bigger than usual:

  • 2 technical leads
  • 1 PO
  • 1 QA
  • 1 Designer
  • 5 Software Engineers
  • 1 Scrum Master

We did the ice breaker as I mentioned before, but 15m was not enough time (you know… we are a big team 🙈).

I split them into 2 groups, and we used Zoom breakout rooms as I mentioned earlier. I tried to arrange people in a way that would encourage them to participate, mixing people based on their characteristics.

I picked 4 tickets and gave the same ones to both groups to analyze, without telling them they'd be analyzing the same tickets.

Once they started, I was able to join the breakout rooms and observe the discussions. I found it quite good, as I could even see how people's agile mindset had improved.

It was also easier for me to moderate and control the dedicated time for it.

When we were back together to discuss the results, they hadn't had time to analyze all the tickets (4 was too many), but still, the discussions they had were meaningful.

They presented the root causes and action items. I had planned 20m for this step, but since 4 tickets were too many for my team, unfortunately we didn't have time to go through all the action items. After they presented, I took 5m to summarize the items, grouping them under a What and listing the actions as How. Something like:

What: Code review process

How:

  • Improve ticket clarity (clear definition of scope)
  • Avoid assigning tickets to specific people, leaving them open to volunteers who can start working on them quickly

At the end of the retro, I got some great feedback from the team, and I'm planning to run this retro format at least once per quarter (maybe twice, let's see).

An improvement idea for next time is to suggest that the person who participated least in the group discussion should be the one to present at the end, when we bring the groups back together. It's just a way to get people engaged.

Conclusion

Using data is extremely important to help the team reflect on how they are performing.

We don't always need to run data-driven retrospectives, and the right frequency really depends on the team. However, it's good practice to show at least the team metrics in every retrospective meeting.

Be careful with the metric you choose, and never compare internal metrics with other teams. This is why you should always show the numbers to the team: so that they can compare themselves against their own past cycles/sprints, but never against other teams, as each team has its own reality.

We should always reinforce that we know that everyone did the best they could given the current situation (prime directive), and the main purpose is to keep constantly improving the way we work so we can, as a team, achieve our goals and bring more value to the customers faster.

Lastly, make sure you end up with at least 3 concrete action items. If your team spends too much time on discussions, choose fewer outlier tickets so that you can really get something out of the discussion.

Have you tried any other data-driven retrospective format? What metrics/data do you use? I'd be happy to know! Share your thoughts in the comments. 😄
