About maturity – an intro
15 February 2023
Over the years, working on Agile transformations, we have been asked the same question by sponsors, on a monthly basis: how Agile are we? A first answer could be given with more questions: are we faster in answering market needs? Are any of the business indicators improving?
Such answers, made of further questions, are anyway a bit naive, for a number of reasons:
- A direct link between teams and overall business results is hard to make: teams are more often than not strictly connected to, or dependent on, each other, or have general constraints that cannot be avoided;
- The elapsed time necessary to get a real benefit from Agility doesn’t match, at the beginning of the transformation, the timing of quarterly reports;
- External suppliers in the teams are a fundamental factor that makes teams’ progress hard to evaluate (people move in or out, budget pressures, etc.);
- The very question (how Agile are we?) gets asked by shareholders, or by people on their behalf, who are too far from the teams for a conversation to take place.
That said, the question is legitimate. Money gets spent, and having a way to measure improvement is mandatory. The real question is: was all the investment worthwhile?
To start answering that question, we need to find a way to measure improvement.
Let’s keep it in the field of Agility. We have become used to large-scale surveys, for example: tools that allow us to gather answers from many people and that scale very well, even to thousands of people. Such tools offer a wide range of reporting summaries, beautiful graphical aggregations, and several science-grounded algorithms to extract information from data.
On top of the tool we may find a series of Maturity Models with a predefined set of questions.
Everything is so attractive to top managers because… it makes total sense. There’s a process, a tool, and reporting techniques aimed at giving answers.
However, such tools and frameworks come with a few risks.
The school report risk
A large-scale assessment could be perceived as a way to evaluate people. After all, it’s about answering questions, isn’t it?
Biases are difficult to avoid in such a context, and you can’t really address individual issues. Spikes or glitches are simply filtered away as anomalies. People are likely to think carefully about their answers because they might reason in terms of right or wrong answers.
When you know you will be judged, and ranked, you might wonder what the “right” answers look like.
Even more evolved ways of asking questions, which introduce shades instead of yes/no answers (e.g. Likert scales), don’t completely eliminate biases or the need for post-production of answers.
The self-fulfilling prophecy risk
A large-scale assessment can be turned into comparisons among people or teams. One of the most notorious comparisons is the use of story points to compare good teams with bad (or slow) ones.
What is likely to happen in this case is that, as soon as the measure is used to compare people on performance… performances will tend to align. We experienced several times that, when teams realised story points were being used by management to make comparisons, all teams aligned to the same level of delivered story points within a couple of sprints. Not because of increased productivity, but mainly because of a different starting point: teams get to the same number of delivered story points by changing their reference unit.
So, if you measure in order to control and compare, you might end up with the same values from everyone. And no information.
The distance and delay risk
In an assessment, numbers are usually read by people who are far from the teams, at a different moment in time, and with a lack of context. Red flags might call for fast action, or not, depending on the context at team level.
Even when an action is needed, it might come too late with respect to the moment the answers were collected (usually once a quarter, or less often).
Getting details on the situation on the ground is not that easy from outer space
Remember the initial question? We stated that it’s legitimate and that we need to answer it while avoiding or minimising the risks. To do that, we need to start using frameworks, processes and a mindset which are genuinely aimed to:
- Keep the measurement local to be useful to teams for their self improvement (we are talking Agile after all)
- Reject the “averages” logic of big assessments
- Add a way for managers to interpret data with the correct context
- Ensure that the people who evaluate themselves are the same people who take action
Below you will find two different examples of how we tackled the issue and managed to answer shareholders’ questions while, at the same time, keeping the maturity assessment first as a tool for the team’s own self-improvement and, second, as a tool for the organisation to help the transformation.
Maturity tracking and ritual dissent will take you through a way of measuring based on people’s interactions, with coaches’ help to extract meaning from data at top-management level.
Principles maturity is an evolution entirely based on Agile principles.