Start by listening. The transformation assessment.
25 January 2022

In many areas of life, whenever we need a professional’s help, we expect them to come to our place, take a look at the work and then tell us what is doable and, usually, at what cost.
When clients hire us, this is a crucial step: the initial moments in which we get in touch with the company in order to:
- Define what the goals are
- Understand reality
- Define possible options and refine goals
Together, these steps are usually called an assessment.
The bigger the goals or the system, the more complex the assessment. Assessing a team in order to help it improve, say, its delivery is very different from assessing a whole company that wants to start a transformation towards Business Agility.
These are two different scales, and they require different approaches. This article focuses on understanding reality and defining possible options, assuming the goals have already been defined. We will illustrate both scales with two examples from our experience.
How an assessment works
We will look at the assessment process from two different angles: expected outputs (outcomes) and execution modes.
Expected outputs (outcomes)
An assessment process is supposed to deliver a precise outcome: a guide to the change or to the next steps, a list of possibilities based on data. The outcome, in other words, is a choice. This is usually delivered through reports, visualisations and all sorts of reporting techniques that can support decisions. The output’s content, though, varies with the nature of the relationship between supplier and client.
- In the case of a consultancy company, an intervention plan is produced together with milestones, supported by best practices and industry benchmarks, to reach the desired condition.
- In the case of a coaching contract, a person-centred development plan might be produced, but it is the person being coached who manages the agenda and the objectives.
- Finally, in the case of an agile coaching company, what you get is a hybrid, somewhat precarious balance: a mix of consulting and coaching. The output will contain a list of options and even some advice based on experience. What you will not find, usually, is a list of milestones: options are presented as a backlog.
But how does an assessment take place?
We need to talk to people, of course, read documents where needed, and look at the company’s structure and processes. Here we focus on the relationship between us and the people of the organisation we are assessing.
A common pattern: the massive survey
The massive survey is the most frequently used technique. Everyone has probably been asked to answer a list of questions sent via email or through more sophisticated software (360-degree tools or similar). It consists of a set of questions, usually drawn from a wider catalogue, that investigate specific areas of the company’s reality.
The pros are quite easy to understand:
- they provide a scientific basis and a data-oriented way of synthesising results;
- they are very useful (indispensable?) when large numbers are involved;
- they offer many dimensions of analysis and even some intelligence on the data;
- they give decision makers a lot of evidence to build actions on.
There are a few cons:
- surveys are cold and engagement of people is at risk (“nooo, one more questionnaire”);
- the degree of concentration of people is doubtful;
- avoiding bias is extremely complicated (normalisation efforts are necessary);
- we lose the “colour” of the answers, the mood, what people are saying at different levels (listening levels 2 and 3).
The “colour” of a company is based on relationships and feelings
“Colour” plays an important role in a more complete evaluation of a department or an entire organisation: it lets us understand the relationships between people in the same environment; it helps us better understand the domain; and it warns us about activities or paths of change that would clearly meet more resistance than others.
Our approach to the assessment: conversations and observation
At Agile Reloaded we have experimented with and adopted an assessment done “as the coach would do it”. As an alternative to questionnaires, and instead of depending on the structure of the client’s company, we prefer to start conversations and listen to the surroundings (the interview pattern), or to observe the context directly (the observation pattern).
We favour conversation in order to capture the colour of the answers, the elements that allow us to frame the answers within the context. This choice has a price that we have consciously decided to pay: giving up the pros of massive surveys and limiting the number of interviews. The mix of conversations and observation is chosen according to the customer’s domain and requirements.
Case study 1
A fintech inside a larger banking group asked for help to “move towards Business Agility”, with particular reference to the delivery area.
After an initial conversation, we understood that the company’s need was, more technically, to improve the system’s ability to adapt to change, make decisions and release value quickly.
We wanted to help the client understand its own reality and how the stated goals could match people’s needs and visions. We created a reference schema as follows:
- Define people to interview and why
- Define set of questions and interview scheme
- Validate with sponsors (to some extent)
- Execute interviews
- Elaborate results and return them
- Make hypotheses on the transformation backlog
1. Define people to interview and why
We held the initial goal-setting sessions with the CEO and his top managers. We sensed that some misalignment amongst departments could be at play, so we asked to interview every C-level executive and every head of department, plus a list of key people to add to the interviews.
We grew the list further by asking each interviewee to nominate another person to interview and to state why. This way, starting from a dozen people, we ended up talking to about 40 employees (out of about 150).
2. Define set of questions and interview scheme
Starting from the stated goal, we focused on interviewing the system to understand to what degree it could support that goal. Since the system was made up of the client, its suppliers and the holding company, we added people from all of its parts: suppliers and holding included.
We then applied a few areas of enquiry to the system: Strategy, Decision-making, Goals and measurements, Speed of change, Supplier relationships, Delivery, Culture, People and engagement.
For each area we defined a set of statements with a Likert-like scale, in order to minimise bias and avoid highlighting right and wrong answers. The statements are not presented visually (so there is no perception of red/green areas).
Designing the assessment
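As an illustration only, here is a minimal sketch of how such a statement catalogue and the Likert-style answers could be organised and aggregated into per-area scores; the area names come from the list above, while the statements and scores are invented for this example:

```python
from statistics import mean

# Illustrative sketch: how statements and Likert answers could be organised.
# Area names come from the article; the statements themselves are invented.
statements = {
    "Decision-making": [
        "Decisions are made at the level where the information is",
        "Teams know who decides what",
    ],
    "Delivery": [
        "We can release value at the pace the business needs",
        "Feedback from releases reaches the teams quickly",
    ],
}

# One interviewee's answers on a 1-5 Likert-like scale, keyed by statement.
answers = {
    "Decisions are made at the level where the information is": 2,
    "Teams know who decides what": 4,
    "We can release value at the pace the business needs": 3,
    "Feedback from releases reaches the teams quickly": 3,
}

# Aggregate per area: the average score later feeds the radar chart.
area_scores = {
    area: mean(answers[s] for s in stmts)
    for area, stmts in statements.items()
}
print(area_scores)  # e.g. {'Decision-making': 3, 'Delivery': 3}
```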
3. Validate with sponsors (to some extent)
Sponsors were themselves part of the interview process, so we could not share the statements in advance. What we shared instead were the system to be assessed (who), the areas to assess (what) and the execution mode (one2ones), and we managed to get agreement on those.
4. Execute interviews
Three coaches worked for about 15 days to run the one2ones, gather the data and elaborate the results. How was a single session structured?
At the beginning we agree on confidentiality, and we ask about expectations, together with an appreciative opening.
We do not interpret inconsistencies in answers but show them back to the person for their awareness. Each coach could, and did, ask more questions in order to clarify answers.
The stance here is a coaching one, with one small difference: the goal of the session is set by the interviewer. At the end of the interviews, a radar chart is produced, alongside a set of qualitative notes (which the interviewees may amend).
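As a purely illustrative sketch, a radar like the one we produce can be drawn from the per-area averages; this minimal matplotlib example uses the areas of enquiry listed above, with invented scores:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: per-area averages from the interviews (scores invented).
areas = ["Strategy", "Decision-making", "Goals and measurements",
         "Speed of change", "Supplier relationships", "Delivery",
         "Culture", "People and engagement"]
scores = [3.2, 2.5, 3.0, 2.1, 3.8, 2.9, 3.4, 3.1]  # 1-5 Likert-like scale

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(areas), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(areas, fontsize=8)
ax.set_ylim(0, 5)
plt.show()
```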
5. Elaborate results and return them
The return of the results is the most sensitive moment, the one with the highest risk of judgement; the conversation must be protected. Results need to be introduced in a facilitated conversation. In no case should they be shared via email, which doesn’t give enough context and, again, doesn’t return the “colour” of the conversations.
Overall results with some (possible) areas of improvement
6. Make hypotheses on the transformation backlog
Once the areas of intervention have been identified, the assessment ends with the most consultative part: the first coaching backlog is drawn up.
The initial transformation backlog, options only and not yet a plan
Options may or may not be developed. Each selected option is then enriched with details to better understand its impact and to define MVP-like initiatives.
Lessons learnt
- Conversations allow you to empathise with people and gain different points of view.
Tip 1: use conversation first and then evaluation.
Tip 2: ask respondents whom else they would reasonably include in the process, and why.
Tip 3: use a follow-up mechanism for the interviews (do you agree with it? what would you change?).
- The return of results should be handled carefully, to prevent people from feeling judged for their work.
Tip 1: use words carefully (“they told us”, not “we pointed out”).
Tip 2: filter and normalise to avoid bias.
Tip 3: don’t do it alone; other coaches will greatly help you avoid biases.
Case study 2
The tech department of a utility company: a system made of six teams spanning three time zones. The customer’s request was “let’s do Scrum better”. A strange request, perhaps, but it meant the process should match Scrum by the book: hold every meeting, use all the Scrum practices and, above all, “don’t waste people’s time”. What was needed was a sort of benchmark to start improving from.
We decided to base the whole assessment on the following schema:
- Create a benchmark
- Observe and return observations
- Identify an action plan
1. Create a benchmark
If “good” Scrum was the goal, we needed to define what good looks like, and we decided to do so in two ways: by following the Scrum Guide (the letter) and by keeping in mind its values and principles (the spirit). This way we could report observations with some context, like “the daily standup is not technically great, but the principles are respected”, or the other way around. We knew that without such context we would have found ourselves telling people what to do, which is never a great stance if you want to help a client.
For each team, we asked to join all the expected Scrum events as observers only, and we took notes. We used two levels of reading for each meeting: a checklist of the Scrum event and the behaviour of the team in relation to the spirit of Scrum. The information obtained, together with notes and memos, is shared on a board everyone can access, to foster a climate of transparency. We kept a small one-to-one track only with Product Owners and Scrum Masters, guided by a short interview schedule on Scrum concepts.
Example of canvas for a Sprint Planning meeting
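To make the two levels of reading more concrete, here is a minimal, purely illustrative sketch of how a single observation note could be recorded; the structure and the checklist items are invented for this example, not a tool we used on the engagement:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a two-level observation note: the "letter"
# (a checklist of the event) and the "spirit" (behaviour against Scrum
# values and principles). The items below are invented examples.
@dataclass
class EventObservation:
    team: str
    event: str                                    # e.g. "Sprint Planning"
    letter: dict = field(default_factory=dict)    # checklist item -> observed?
    spirit: list = field(default_factory=list)    # free-form behavioural notes

note = EventObservation(
    team="Team A",
    event="Daily Scrum",
    letter={
        "time-boxed to 15 minutes": True,
        "whole team present": False,
    },
    spirit=[
        "Plan for the day emerges from the team, not from the Scrum Master",
    ],
)
```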
2. Observe and return observations
Beyond the always-accessible board, insights were shared more formally through three channels: status slides with managers, one2ones with Product Owners, Scrum Masters and Tech Leads, and, when we had the chance to facilitate them, a few retrospective meetings.
The information shared didn’t differ amongst the channels, though some adaptation to the roles was needed: an executive-style summary for managers (to let them know whether they needed to do anything), the full board in one2ones for a broader conversation, and selected details of the board in retrospectives for better focus.
Managers’ reporting examples
3. Identify an action plan
The action plan was pretty straightforward. With the observations always available to everyone and a fast (weekly) feedback cycle, we could set up an experiment board, including impediments, for the managers, while leaving the teams autonomous on their own improvement actions.
Managers’ board with impediments (need action from managers) and running experiments
Lessons learnt
- If the observation is done respectfully, people make you welcome;
- Sharing your considerations immediately and transparently (on the board) creates commitment;
- The pure observation mode is slow, tied to the rhythm of the sprints.
Tip 1: it makes sense to use a partial, iterative coaching backlog and to start before the observation ends.
Tip 2: clarify with the sponsors the pace it will be possible to keep.
The (internal) evolution of assessment
Our assessment methods are constantly being refined. Over the last year we carried out about ten such assessments, which allowed us to test, change approach and experiment.
Currently, an engineering effort is underway, organised and managed by an internal guild, which involves planning different scenarios and building a catalogue of questions. The result is a set of potential areas to investigate, from which you can start to create your own assessment and find the right questions for your context.
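As a hint of what such a catalogue could look like in practice, here is a minimal illustrative sketch; the entries and the filtering helper are invented for this example:

```python
# Illustrative sketch of a question catalogue: each entry is tagged with
# an area of enquiry, so a context-specific assessment can be assembled
# by filtering. Areas come from the article; the statements are invented.
catalogue = [
    {"area": "Delivery", "statement": "Releases reach users at a sustainable pace"},
    {"area": "Delivery", "statement": "Feedback from production informs planning"},
    {"area": "Culture", "statement": "Mistakes are treated as learning opportunities"},
    {"area": "Strategy", "statement": "Teams can relate daily work to company goals"},
]

def build_assessment(areas_of_interest):
    """Select the catalogue entries relevant to this client's context."""
    return [q for q in catalogue if q["area"] in areas_of_interest]

# e.g. a delivery-focused assessment like the one in case study 1
print(build_assessment({"Delivery", "Strategy"}))
```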