Maturity tracker with Ritual Dissent
15 February 2023
This is an example of a maturity assessment from one of our assignments. The reasons for not going with a massive survey were explained in our previous article.
The overall process
Process and part of the slides used for initial onboarding
Each team evaluates itself on given criteria, provided as a starting point and then evolved over time. Each team may end up with different criteria.
Self-assessment is a team evaluation, not an individual one. We ask people, as a team, to join an initial workshop to set the benchmark for future improvements. It cannot be a one-to-one or an individual answer to a question: self-evaluation needs to be shared within the team.
Metrics update/new metrics
Each metric gets updated regularly, usually during the team's retrospectives. Teams may decide to create new metrics, update existing ones, or discard what no longer helps.
We offer our view of the team, based on experience, and share it with everyone. This is useful for giving the team different angles or perspectives.
Impediment board (or pain points)
What needs to be resolved is usually a team-only action. Sometimes leaders need to be involved in some actions. This is made possible by sharing an impediment board (a normal kanban board, a report, whatever fits the need for transparency and accountability at all levels without adding unnecessary control over the teams).
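An impediment board like the one described can be modeled very simply. The following is a minimal sketch, assuming a three-column kanban flow and an explicit "needs leaders" flag; all names and fields here are illustrative, not part of the original method.

```python
# Minimal sketch of an impediment board (assumed structure, for illustration).
from dataclasses import dataclass, field

COLUMNS = ("To do", "In progress", "Done")

@dataclass
class Impediment:
    title: str
    needs_leaders: bool = False   # True when the team cannot resolve it alone
    column: str = "To do"

@dataclass
class ImpedimentBoard:
    items: list = field(default_factory=list)

    def add(self, title, needs_leaders=False):
        self.items.append(Impediment(title, needs_leaders))

    def move(self, title, column):
        # Move a card across the (assumed) kanban columns.
        assert column in COLUMNS
        next(i for i in self.items if i.title == title).column = column

    def for_leaders(self):
        # The transparent view leaders see: only items that need their help.
        return [i.title for i in self.items if i.needs_leaders]

board = ImpedimentBoard()
board.add("Flaky CI environment")
board.add("Budget approval for test devices", needs_leaders=True)
```

The point of `for_leaders` is the one made in the text: leaders see what they are asked to act on, without gaining extra control over the team's own items.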
The starting point
In order to start the process, we defined a reference for the teams: a set of statements and an evaluation guideline to kick off the conversation. We defined four areas of assessment: Team, Product ownership, Practices and Agility. We decided to keep the statements in each area as far as possible from silly questions like "do you use story points?" and similar.
Some references to practices had to remain to match the current level of maturity.
This stage was one of the most important; take your time to define your own questions, and ask yourselves why you want to ask each one.
- Team is balanced and cross-functional
- Team works in an environment that fosters collaboration
- Team is able to tackle interdependencies and constraints in a proactive manner
- Team reinforces and evolves its working agreements
- Squad's mission is clearly articulated to team members
- Product owner attends team events regularly or is available as needed
- Business KPIs are clear and measured
- Work is clearly linked to business KPIs
- Team defines, estimates, and selects its own work (stories, tasks, anything)
- Key metrics are reviewed and captured during each retrospective
- Team takes appropriate actions based on whatever comes up in retrospectives
- Each iteration has a clear goal
- Backlog refinement is performed regularly; more than enough stories are ready before plannings
- A strong, clear and comprehensive Definition of Ready is agreed on and used
- A strong, clear and comprehensive Definition of Done is agreed on and used
- Work is not added by the Product Owner during the iteration
- Different work item types are visualized
- The workflow shows the actual process
- Kanban WIP limits are set and monitored
Why statements instead of questions? We wanted to spark a conversation. Possible evaluations were on a scale similar, in concept, to the Fibonacci series: the idea was that each step up the scale requires increasing commitment and increasing conversation.
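The scale idea can be made concrete with a few lines of code. This is a sketch only: the exact values are an assumption (use whatever your teams agree on); the point is that the gap between steps grows, so higher scores demand more discussion.

```python
# Illustrative Fibonacci-like evaluation scale (values are assumed).
SCALE = (1, 2, 3, 5, 8, 13)

def step_cost(score: int) -> int:
    """Extra 'commitment' needed to move from the previous step to `score`."""
    i = SCALE.index(score)
    return SCALE[i] - (SCALE[i - 1] if i > 0 else 0)

# The jump to 13 (cost 5) demands far more conversation than the jump to 2 (cost 1).
print(step_cost(13), step_cost(2))
```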
The evaluation scale we have been using
All statements we used, alongside the evaluation scale, are available for download.
First workshop, ritual dissent
How did the self-evaluation actually happen? We based our workshop on the Ritual Dissent facilitation technique, created by Dave Snowden. You can find all the details on the workshop, and credits, on the official Cynefin page.
Example of a workshop script
Important: ask people to take notes of all dissents; they will be useful for the final reporting. Notes may be written on cards, one for each statement:
Example of card with areas for dissent (attack), use what fits your context
Outcomes at team level
A typical result of the first workshop is shown in the following picture. All statements have been evaluated, discussed, and attacked through ritual dissent. Each card (printed and given to participants at the beginning of the workshop) carries a self-evaluation that represents the team's view of itself, through the lens of the chosen statements (or metrics).
The results were hung on the wall (the team's decision) and are transparent to everyone. The message was: this is what we think of ourselves, and this is where we plan to start improving.
Team, Product Ownership, Practices and Agility areas
The improvement process started immediately, autonomously from management, and in the way the team itself considered important. So every team had different improvement actions on different metrics.
Team chose (dot voting) to start working on interdependencies
An email from a Scrum Master
Reporting at leadership level
What about managers and the rest of the company?
The first level of reporting was the cards on the walls themselves, which showed that teams were working on their self-improvement and highlighted any updates to the self-evaluation (in other words, a continuously updated status report without any PMO effort and no need to align on slides).
The second level of reporting, more formal, was carried out by the coaches, leveraging more than just numbers. We added context by collecting, categorising and showing the ritual dissents, in order to give meaning to the numbers and guide leaders in taking, or not taking, action.
We then added, in full agreement with the teams, the coaches' point of view on the same metrics.
In the download you may find an example of possible attacks and an example of how we created categories.
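The categorization step can be sketched in a few lines. The categories and notes below are invented for illustration; the actual ones came from the teams' dissent cards.

```python
# Sketch: grouping dissent notes into categories to give context to the numbers.
from collections import Counter

# (statement, category) pairs; both are hypothetical examples.
dissents = [
    ("Team is balanced and cross-functional", "missing skills"),
    ("Work is clearly linked to business KPIs", "unclear KPIs"),
    ("Kanban WIP limits are set and monitored", "missing skills"),
]

by_category = Counter(category for _, category in dissents)
# Leaders see "missing skills: 2" alongside the underlying notes,
# not a bare score with no story behind it.
print(by_category)
```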
The data was then shown to the leaders (the HR Director and Head of Product in this case) with context and help to understand it.
Red area: actions from managers are needed
Green area: team is autonomous, “don’t worry, they can handle it”
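The red/green split can be expressed as a one-line rule. This is a minimal sketch under one assumption drawn from the text: a metric lands in the red area only when the team explicitly asked for leaders' help, never because of the score alone.

```python
# Sketch of the red/green reporting split (assumed rule, for illustration).
def area(score_pct: int, help_requested: bool) -> str:
    # score_pct is deliberately ignored: numbers are for the team's own
    # improvement, not a control mechanism for managers.
    return "red" if help_requested else "green"

# A squad at 20% on practices with no help requested stays green.
print(area(20, help_requested=False))
```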
Context, what’s behind numbers and how senior leaders can help
When reporting is done with context and, as in our case, in person, we can convey messages like the one in the next picture: the team is not at 100% in its self-evaluation, but that's perfectly fine and managers don't need to worry about it. The numbers are for the team's self-improvement or for asking leaders for help. If no help is needed or requested, we are fine whatever the numbers may be.
Reporting example: this squad is not at 100% but it is autonomous => no action from leaders is needed; 20% on practices is just fine
Much more reporting was produced over the course of about 10 months. The maturity tracker was piloted with 4 teams, then extended to more than 40, and eventually it helped give meaning to the existing massive survey we could not get rid of.
This was an effort of the coaches' team, not counting the time made available by all the people involved.
The effort was usually well received by people because they could find value both in the process and in the improvement actions: they were about them. To give an example, one of the issues the company was facing with the massive survey was that... it didn't get filled in. Our workshops, by contrast, always had full participation.
We designed them to take only a fraction of people's time and to embed them, for example, in retrospectives.
The reporting was effective because it was done in person, without sharing slides before the meeting, in order to educate managers on a correct interpretation of the results. This is a key factor in avoiding misunderstandings.
A lot (a lot) of effort had to be spent to get senior leaders on board and to counter the stance of one of the big consultancy firms, which just wanted to apply the massive survey and move on.
This part is probably worth another article.