Defeating the Misuse of 5 Whys


I’ve tried using the 5 whys technique on many projects, and with teams and individuals in a variety of situations.

It’s generally used for root cause analysis but often (I think) misused for other situations. Something about it has always bothered me.

When you see it described in theory it makes so much sense, but most examples are already solved (and, I suspect, refactored for post-rationalization). When you use 5 whys in practice it never quite hits the mark.

In most cases where I’d have previously considered a 5 whys line of questioning for understanding cause I’ve been inclined to use an Ishikawa (fishbone) diagram instead. This allows us to dig into multiple lines of questioning and link across related areas.

[Image: 19th July 2013 – output from the retrospective]

But this still only works well in understanding problems, not goals and reasoning.

Thanks to a great session from Paul Field at the Cambridge Agile Coaches Camp last year – I’ve finally been given a 5 whys alternative that actually works properly outside the context of examining root causes.

Paul’s own article goes into the full detail on the lines of questioning and the techniques involved so I strongly encourage you to give this a read.

A large part of the improvement here is in replacing the question “why” with directed alternatives.

“Why” is seen as a dangerous question in psychotherapy (and some coaching circles). From a purely human perspective it can be confrontational and come across as judgmental. A good therapist or coach will instead ask very specific, context-based questions that encourage consideration and introspection in a safe and well-judged manner.

Much like asking a child “Why did you punch little Davey in the face?”, asking “why” – particularly in challenging, negative, or political situations – is unlikely to yield a well-reasoned answer. You’ll get a knee-jerk and/or self-justifying response.

Put another way: repeatedly asking “why” is like using a sledgehammer to open a jar of candies. You’ll open the jar, but I wouldn’t recommend putting the results in anyone’s mouth – and you’ll still have to fish through the damaged remains to find anything of real value afterward.

Next time you’re considering 5 whys, try asking “and what will X get from that?” instead.

Shy? Try Exercising Your Asking Muscles


Some Friday morning amateur social philosophy.

One of my current team members is as shy as I am. We both agree that “we’d rather not deal with people”. It’s somewhat funny that, even from the first day of my degree, I’ve had to work in teams and interact with external stakeholders. My ability to do so is my strength (but my dirty secret is that it’s also my biggest fear).

I’ve read and re-read loads of material on being an introvert, on being shy, and assorted other musings for the “socially challenged”. I’m not socially awkward – I might be a bit of an over-sharer, but once I’m up and running I can be the life and soul of the corner of the kitchen at a party. I enjoy sharing and I enjoy interacting with people, but I’m like a dynamo – I need some winding up to reach that state.

I think my favourite term comes from Doc List, who describes himself as an “ambivert”. I think his observations closely describe me as well.

Most people that I interact with in public or at work would never believe I’m shy. It’s one of the joys of the agile/conference scene – you know most people are just like you. They wear a mantle of bravery and a mask of confidence but after a week of conferences will happily spend the weekend recovering under the covers. Over the years those people become friends and the interactions become easier.

I gain short-term energy from talking to people but I’m exhausted afterwards. I find nothing more draining than leading planning sessions, demos and retrospectives, but most people will never see it. In some camps that counts as introverted, in others extroverted – let’s stick with ambivert.

Anyway. A challenge with being shy, introverted or even ambivert is asking: asking for help or information, or even asking someone to do something you know it’s their responsibility – or even their pleasure – to do for you.

As a project manager or leader this is a bit of a crazy situation. Every day I need to ask my team or external stakeholders for an assortment of things. The emotional effort needed to ask can vary from easy to terrifying, sliding into the kinds of experiences that are often addressed with Cognitive Behavioural Therapy:

  • You believe it’s unreasonable to ask busy people for more of their time to meet needs you think are yours.
  • You feel (or have been told) you should ask someone for help or input.
  • You’re trying to make a difference, you know other people care but you don’t know how to approach the subject.
  • The list of doubts, “shoulds” and “oughts” goes on.

From my experience I’ve discovered the ability to ask is like any other mental muscle. If you use it regularly it gets stronger. If you don’t use it, it wastes away.

Unfortunately that shy moment freezes you up. It means you’re more inclined not to ask until it’s critical, so you end up making life harder for yourself and for the person you need to ask. And all the while you’re procrastinating, you’re expending mental cycles on stress and doing nothing.

But there’s more to it than that.

Each person you need to ask something of is a different muscle.

When I first started working as a PM in our DevOps group a few years ago I was new to the company, had few existing working relationships, and had a number of key stakeholders I needed to learn to interact with.

How did I build up my asking muscles?

I reviewed my past working relationships to understand what I’d found easy and what had been more difficult. I found – even with my own team – that after about 3 useful interactions in a short period of time things became significantly easier, so I decided to try an experiment.

There’s a neat idea in software development known as the “rule of three” – once you’ve needed to use the same thing 3 times then it’s probably time to make it generic and standardize it.

I bought a small (A3) sized whiteboard and placed it on my desk. On the board I drew a grid. Each horizontal lane represented a person I needed to learn to interact with. I wrote each name in the left-hand column. Each column after that was a placeholder where I’d mark an X every time I had an asking or collaborative interaction with that person. My goal was to get 3 X’s in each row over the space of about a month.
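
For the programmers among you, here’s the same grid as a toy sketch in code. The whiteboard was the real tool; the names below are made up.

```python
# A toy sketch of the whiteboard grid as code: one row per person,
# one X per asking/collaborative interaction, aiming for three each.
GOAL = 3
interactions = {"Alice": 0, "Bob": 0, "Priya": 0}

def record_interaction(person):
    """Mark one asking/collaborative interaction with this person."""
    interactions[person] = interactions.get(person, 0) + 1

record_interaction("Alice")
record_interaction("Alice")
record_interaction("Bob")

for person, count in interactions.items():
    status = "done!" if count >= GOAL else f"{GOAL - count} more to go"
    print(f"{person:<6} {'X' * count:<3} {status}")
```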

It worked – when there were only one or two X’s I’d figure out how I could build another interaction with that person that didn’t have to involve asking for anything significant.

One of those people came over to my desk and spotted the board – they asked what the “X’s” next to their name were. I explained, and they immediately empathized. I wasn’t alone. That conversation counted as yet another X – it became easier to ask!

The tricky thing is that once you’ve built those muscles up, you need to keep them running.

If you neglect your asking skills for any time you need to rebuild them again (sometimes the second time is easier). If like me your projects and roles change a lot over the years you’ll probably find that by the time you’ve just got comfortable with everyone you need to ask that the world changes around you and you need to start again.

This is a good thing – it keeps us exercising our asking muscles.

Who do you want to ask something of today?

Go talk to them.

Have a great weekend

Simon

The Pitfalls of Measuring “Agility”


This post expands on one of the experiences I mentioned in “Rapunzel’s Ivory Tower”.

I presented these lessons and the story at Agile Cambridge back in 2010. It’s taken nearly 5 years for them to see the light of day in writing here. I hope it’s not too late to make a difference.

My team and I hadn’t been in our roles long, and we’d been given a challenge: our executives wanted to know “which teams are agile and which aren’t” (see the Rapunzel post for more). We managed to re-educate them and gain acceptance of a more detailed measurement approach (they were all Six Sigma certified – these people loved measurement), and I’d been furiously pulling the pieces together so that when we had time to work face to face we could walk away with something production-ready.

Verging on quitting my job, I asked James Lewis from Thoughtworks for a pint at The Old Spring. I was building a measurement system that asked the right questions, but I could see no path through it that would prevent it being used to penalize and criticise hard-working teams. This was a vital assessment for the company: it clearly defined the roadmap we’d set out, took a baseline measure of where we were, and allowed us and the teams to determine where to focus.

My greatest frustration was that many of the areas teams would score badly were beyond their immediate control – yet I knew senior management would have little time to review anything but the numbers.

The question James left me with was:

“How do you make it safe for teams to send a ‘help’ message to management instead?”

I returned to my desk fuelled by a fresh pair of eyes and a pint of cider. I had it!

At the time many agility assessments had two major flaws:

1 – They only have a positive scale – they mask binary answers, making them look qualitative when they’re not.

2 – They assume they’re right – authoritative: “we wrote the assessment, we know the right answers better than you…”

So I turned both assumptions on their head:

  • What if the scale went up to 11 (metaphorically)? How could teams beat the (measurement) system?
  • And what if 0 wasn’t the lowest you could score? What would that mean?

The assessment was built from a combination of a simpler, smaller agility assessment provided to us by Rally, the “Scrum Checklist” developed by Henrik Kniberg, the “Nokia Test”, the XP “rules”, and my own specific experiences around lightweight design that weren’t captured by any of these. As we learned more, we adapted the assessment to bake in our new knowledge. This was 2009/2010; the agile field was moving really fast and we were adding new ideas weekly.

The results were inspired – a 220 question survey covering everything we knew. Radar charts, organizational heat maps, the works.

The final version of the assessment (version 27!) covered 12 categories with an average of about 18 questions to score in each category:

  1. Shared responsibility & accountability
  2. Requirements
  3. Collaboration & communication
  4. Planning, estimation & tracking
  5. Governance & Assurance
  6. Scrum Master
  7. Product Owner
  8. Build and Configuration Management
  9. Testing
  10. Use of tools (in particular Rally)
  11. Stakeholder trust, delivery & commitment
  12. Design

The most valuable part was the scale:

  • -3 We have major systemic and/or organizational impediments preventing this (beyond the team’s control)
  • -2 We have impediments that require significant effort/resource/time to address before this will be possible (the team needs support to address)
  • -1 We have minor/moderate impediments that we need to resolve before this is possible (within the team’s control)
  • 0 We don’t consider or do this / this doesn’t happen (whether deliberate or not)
  • 1 We sometimes achieve this
  • 2 We usually achieve this
  • 3 We always achieve this (*always*)
  • 4 We have a better alternative (provide details in comments)

[Image: a radar chart from Excel showing each category from the assessment]
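
The negative half of the scale was the answer to James’ question: impediment scores read as “help” messages to management rather than as failures. The real roll-up lived in Excel, but as a minimal sketch (the categories and scores below are invented), per-question answers could be aggregated into the radar-chart categories like this:

```python
# Roll per-question scores on the -3..+4 scale up into category
# averages for a radar chart, counting impediment ("help") answers.
from statistics import mean

SCALE = range(-3, 5)  # -3 systemic impediment .. +4 "we have a better alternative"

# One team's answers: category -> per-question scores (illustrative)
answers = {
    "Requirements": [2, 1, 3, -1, 2],
    "Testing": [1, -2, 0, 1, 1],
    "Design": [3, 2, 2, 4, 3],
}

def summarize(answers):
    summary = {}
    for category, scores in answers.items():
        assert all(s in SCALE for s in scores), "score outside the -3..4 scale"
        summary[category] = {
            "average": round(mean(scores), 2),
            # negative answers are the 'help' signal to management
            "impediments": sum(1 for s in scores if s < 0),
        }
    return summary

for category, stats in summarize(answers).items():
    print(f"{category:<12} avg={stats['average']:+.2f} impediments={stats['impediments']}")
```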

The assessment was designed as a half-day shared learning experience. For any score less than 3 or 4 we would consider and discuss what should be done and when: what the priorities were, where the team needed support, what the teams could drive themselves, and what the impediments were. Teams could also highlight any items they disagreed with so these could be explored.

Actions were classified as:

  • Important but requires management support / organizational change to achieve
  • Useful and low effort, but requires more change support than “low-hanging fruit”
  • Potential “low-hanging fruit”: easy wins, usually a change in practice or communication
  • Important but requires significant sustained effort and support to improve

As a coaching team we completed one entire round of assessments across 14 sites around the globe and many teams then continued to self-assess after the baseline activity.

Our executive team actually did get what they needed – a really clear view of the state of their worldwide agile transformation. It wasn’t what they’d originally asked for, but through the journey we’d been able to educate them about the non-binary nature of “being agile”.

But the cost, the delays, the iterative approach to developing the assessment, the cultural differences and the sheer scale of work involved weren’t sustainable. An assessment took anything from an hour to two days! We discovered that every question we asked was like a mini lesson in one more subtle aspect of agile. Fortunately the sessions got quicker after the teams had been through them once.

By the time we’d finished we’d started to see and learn more about the value in Kanban approaches and were applying our prior Lean experience and training rather than simply Scrum & XP + Culture. We’d have to face restructuring the assessment to accommodate even more new knowledge and realized this would never end. Surely that couldn’t be right.

Amongst the lessons from the assessments themselves, the cultural differences were probably my favourite.

  • Teams in the US took the assessment at face-value and good faith and gave an accurate representation of the state of play (I was expecting signs of the “hero” culture to come through but they didn’t materialize).
  • The teams in India were consistently getting higher marks without supporting evidence or outcomes.
  • Teams in England were cynical about the entire thing (the 2-day session was one of the first in England. Every question was turned into a debate).
  • The teams in Scotland consistently marked themselves badly on everything despite being some of our most experienced teams.

In hindsight this is probably a reflection on the level of actual knowledge & experience of each site.

Partway through the baseline assessments, after a great conversation with one of the BA team in Cambridge (who, sadly for us, has since retired), we added another category: “trust”. His point was that all the practices in the world were meaningless without mutual trust, reliability and respect.

It seemed obvious to us, but at one particular site there was so much toxic politics between business leadership and development – politics nobody could safely tackle – that he had an entirely valid point. I can’t remember if we were ever brave enough to publish the trust results – somewhat telling, perhaps? (The root cause “left to pursue other opportunities” in a political power struggle not long before I left.)

Despite all this, the baselining activities worked and we identified a common issue on almost all teams: business engagement.

We were implementing Scrum & XP within a stage-gate process. Historically the gate at which work was handed over from the business to development was a one-way trip. Product managers would complete their requirements, then move on to market-facing activities and leave the team to deliver. If a team failed to deliver all their requirements it was historically “development’s fault” that the business’ numbers fell short. We were breaking down that wall, and the increased accountability and interaction was loved by some and loathed by others.

We shifted our focus to the team/business relationship and eventually stopped doing the major assessments. We replaced them with a 10-question per-sprint stakeholder survey where every team member could anonymously provide input and the product manager’s view could be overlaid on a graph. This was simpler, more focused and much more locally and immediately actionable. It highlighted disconnects in views and enabled collaborative resolution (there’s a small sketch of the overlay idea after the survey below).

Here’s the 10 question survey.

Using a scale of -5 to +5, indicate how strongly you agree or disagree with each of the following statements (where -5 is strongly disagree, 0 is neutral and +5 is strongly agree):

Sprint

  • The iteration had clear agreement on what would be delivered
  • The iteration delivered what was agreed
  • Accepted stories met the agreed definition of done
  • What was delivered is usable by the customer
  • I am proud of what was delivered

Release

  • I am confident that the project will successfully meet the release commitments
  • Technical debt is being kept to a minimum or is being reduced

General

  • Impediments that cannot be resolved by the team alone are addressed promptly
  • The team and product manager are working well together
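
Since the original overlay was just a chart, here’s a minimal sketch in code of the underlying idea: compare the team’s anonymous scores with the product manager’s per statement and surface the biggest disconnects first. All statements and numbers below are invented for illustration.

```python
# Flag the survey statements where the team and product manager
# disagree most, using the -5..+5 agreement scale.
from statistics import mean

team_scores = {
    "The iteration delivered what was agreed": [1, 0, 2, -1],
    "What was delivered is usable by the customer": [3, 4, 2, 3],
    "The team and product manager are working well together": [4, 3, 5, 4],
}
pm_scores = {
    "The iteration delivered what was agreed": 4,
    "What was delivered is usable by the customer": 3,
    "The team and product manager are working well together": 3,
}

def disconnects(team_scores, pm_scores):
    """Statements ordered by the gap between team average and PM view."""
    gaps = []
    for statement, scores in team_scores.items():
        team_avg = mean(scores)
        gaps.append((abs(pm_scores[statement] - team_avg), team_avg, statement))
    return sorted(gaps, reverse=True)

for gap, team_avg, statement in disconnects(team_scores, pm_scores):
    print(f"gap={gap:3.1f} team={team_avg:+.2f}  {statement}")
```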

If you’re ever inclined to do an “agile assessment” of any type, get a really good understanding of what questions you’re trying to answer and what problems you’re trying to solve. Try to avoid methodology bias, keep it simple and focused and make sure it’s serving the right people in the right ways.

Oh – if you’re after a copy of the assessment, I’m afraid it’s one of the few things I can’t share. Those who attended the Agile Cambridge workshop have a paper copy (sharing it was approved by the company I was at at the time), but I don’t have the rights to share the full assessment now that I’m no longer there. I also feel quite strongly that this type of assessment can be used for bad things – it’s a dangerous tool in the wrong circumstances.

Thanks – as always – for reading.

2 Visual Management Tools to Help Evaluate Your Options


Back in December 2012 I moved to a new team – from what was DevOps to .NET (same 3 roles as before – Dev Manager, Project Manager, Head of Project Management).

As part of my ramp-up on the team, products, customers, market and current challenges, I was briefed by my new Product Manager and Division Head on the areas they’d like me to focus on for the next few months – and was asked to deliver some draft plans for each of these.

I scratched around online to find a suitable A3 problem-solving template to use as a thinking tool and struggled to find anything that was quite right. The majority of A3 problem-solving tools are aimed at defects/quality and root-cause analysis.

There’s a lot of wisdom in performing an RCA for a problem, but sometimes you just need to move forward (see my “stopping the line” article). In my particular case the change of teams would not be confirmed until I could satisfy my new potential boss that I was capable of covering their needs. To prevent confusion and disruption, this meant “going it alone” on my first iteration of a plan, without interviewing the team on their thoughts. That makes performing an RCA problematic.

Here’s the A3 template I used.

Click here for a link to the PPT version

This got me far enough to complete the review – a win!

After discussion, I moved on to deciding next steps – moving from vision and strategy to plans and actions – and found that whilst I had a set of options, my timing was bad. The following week was “Down Tools Week” for all our development teams. Nobody was freely available to hassle about current projects and products, and I didn’t want to interrupt the amazing things everyone was doing.

Rather than waste a week, I moved on to priority number 2 – increasing the team’s customer understanding. This needed less input but was also less well-formed in terms of approach, options and actions. My original plan had suggested one possible option but during review it was felt I needed to provide and evaluate some alternatives.

I had the information to achieve this but hit a wall.

After half a day of thrashing with a blank slate I approached my new Division Head and asked for his help. I explained I was blocked. I had the right information but couldn’t move things forward. We spent 10 minutes discussing the problem with him essentially working in “Rubber Duck” mode.

After talking through the blockage we established a way forward. We discussed the (now much longer) list of options and an initial set of evaluation criteria for these. And I agreed to turn this into “some kind of evaluation matrix”.

As soon as I returned to my desk things fell into place…

We regularly borrow a few visual management techniques from Lean (we had a former Toyota UK guru spend a day a week here for a couple of years consulting with us on a wide variety of management topics).

One particular technique that has proven exceptionally effective is the visual representation of “Good/OK/Bad” using a colored circle, triangle and cross. When we were first introduced to this notation we were sceptical. Now, 3 years later, we have a common visual form that all managers understand, with patterns that can be seen at a glance, and the approach is part of our routine reporting across all areas of the business. Even the monthly roll-up report for all projects I track across the entire development portfolio uses this same notation.

I took the list of options we’d been discussing and both positive & negative evaluation criteria, applied them to a matrix and used the (then new) notation to capture my thoughts.

Here’s the outcome.

Without any reading at all, the 3 strongest options stood out. Better still, this technique didn’t need anything more than expertise and instinct. I’m free to follow up with further research and evidence if needed but at that point we were after a first-cut and a quick narrowing of options to focus our efforts.

There’s an important subtlety in this notation to be aware of. A yellow triangle means “OK”. It’s not a problem but we think we can do better. A green circle means “Good” – we mean really good – not just “fine”. This ties back to an article I wrote back in 2011.

When we report something as “good” on a project status, we expect supporting evidence. When something is “ok”, we ask “how can we help or improve this”.

What this evaluation is saying is that we have a couple of really strong options that would help us achieve our goal. They’re strong rather than just the best of what we have.
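
If you want to experiment with the notation away from a whiteboard, here’s a toy sketch of such a matrix in code. The options and criteria are hypothetical stand-ins, not the ones from my actual review:

```python
# A toy text rendering of the Good/OK/Bad matrix. Ours lived on a
# whiteboard with coloured circles, triangles and crosses.
GOOD, OK, BAD = "O", "^", "x"  # circle, triangle, cross stand-ins

criteria = ["Impact", "Effort", "Risk", "Enthusiasm"]
options = {
    "Customer site visits": [GOOD, OK, OK, GOOD],
    "Support shadowing": [GOOD, GOOD, OK, OK],
    "User survey": [OK, GOOD, GOOD, BAD],
}

width = max(len(name) for name in options)
print("".ljust(width), *(c.ljust(10) for c in criteria))
for name, marks in options.items():
    print(name.ljust(width), *(m.ljust(10) for m in marks))

# First-cut ranking: the strongest options are those with real "Good"
# marks, not just an absence of "Bad".
for name, marks in sorted(options.items(), key=lambda kv: -kv[1].count(GOOD)):
    print(f"{name}: {marks.count(GOOD)} good, {marks.count(BAD)} bad")
```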

Next time you’re trying to figure out what to do, try these 2 visual management and thinking tools as a way of unpicking and reassembling your options into something simpler and more cohesive. If you’re lacking options that have enough “good” points, try some more alternatives – don’t just pick the best of a bad bunch.

(In the end I spent a year working with the .NET team and shared a sample of the results here.)

 

200 Questions to help bridge the Product Owner divide


A year or two before I first started writing publicly, I read an interesting, varied and quite long essay by Jeff Patton on the Product Owner role in Agile development (particularly Scrum):
http://www.agileproductdesign.com/blog/2009/product_owner_and_problem_shaped_hole.html

Whilst there were some areas I agreed with, the discussion on “heroics” left me cold. In my opinion, great software teams don’t have space for individual heroics, and (in my opinion again) the sustainable pace ethos generally plays down team heroics too (but that’s a debate for another time).

There was a comment in Jeff’s article that resonated with me, and I made a note of it at the time. Years later it’s still highly relevant, and I wanted to re-raise it this week in the context of recent challenges I’ve been facing.

“For the product you’re working on right now, how will your company benefit after the product has shipped? If you understand how, do others?”

Although this question was aimed at Product Owners it’s something I’d ask all development team members to really think hard about.

Now go further. Are you solving technical problems and a series of requirements to ship a product release or do you really understand the value of the release you’re working on, its reason for being developed and its impact on specific customers, the market or the company?

How many times have you or your team examined the reason for delivering a release with a specific set of features, understood their aim, questioned the backlog and vision and – better still – properly understood what you’re trying to deliver in terms of solving the needs of the user?

When I say “understood”, I mean really understood – not just prettied up to follow one or another user story format.

Consider this an opportunity to pause for thought.

Back in 2013 I attended a Retrospective Facilitators Gathering and was introduced to an amazing concept over lunch by Willem Larsen that he described as “200 Questions”. He explained an idea that comes from animal tracking.

When tracking an animal you explore the environment in a way that an everyday person doesn’t even consider. You look at every twig, leaf and blade of grass and ask how it came to be in its given state (and why) in order to determine whether you’re on the right path.

My translation of this to my own work is to try to develop a genuine and deep curiosity for every item you look at (even if you don’t have time to answer all your questions).

At the same lunch, we tried it with a salt cellar:

  • How many people have handled it since it was made?
  • Who designed it?
  • What inspired the design?
  • What’s it made of?
  • Where did the salt come from?
  • What process did the salt go through?

(We lost count of how many questions we asked.)

Applying this same healthy curiosity to incoming product features or requests is incredibly powerful.

You know things are wrong when every request coming in is essentially a specific solution being proposed.

And you know how translating feature requests back to user stories in anything except a shopping cart or finance trading application seems to be really poorly explained in the agile community?

Try this as an experiment for getting unstuck (there’s a small code sketch of the idea after the list)…

  • Set yourself a target of asking 200 questions about the next big item coming in to your team.
  • Try different “lenses” for asking the questions (much like 6 thinking hats)
    • The “User” lens
    • The “Investment” lens
    • The “Functional” lens
    • The “Market” lens
  • What questions logically follow from those first “level 0” questions?
  • Decide which of those are important and valuable to actually answer in order to further your understanding towards your goal.
  • Work on answering them – collaboratively.
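
As a trivial sketch of the bookkeeping side of that experiment (the value is in the thinking, not the tooling – the lenses are from the list above, and the example questions are placeholders):

```python
# Collect questions per lens and track progress toward the 200 target.
from collections import defaultdict

LENSES = ("User", "Investment", "Functional", "Market")
TARGET = 200
questions = defaultdict(list)

def ask(lens, question):
    """Record a question under a lens; return how many remain to hit 200."""
    if lens not in LENSES:
        raise ValueError(f"unknown lens: {lens}")
    questions[lens].append(question)
    return TARGET - sum(len(qs) for qs in questions.values())

ask("User", "What is the user actually trying to achieve?")
ask("Market", "Is there timing sensitivity around this?")
left = ask("Functional", "What's the smallest thing that could possibly work?")
print(f"{left} questions to go")
```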

In no particular order (but with some logical grouping around lenses), and in the very rough form I’ve been actively using in my current team, here are over 50 of the “level 0” questions I’ve started experimenting with.

Feature Info

  • Name – what do we call it?
  • What is it? (2-3 sentence user-facing description *and* 2-3 sentence product description)

User info

  • How would a user do this now? (without new code)
  • What types of users does this help?
  • How does it help them? (what does it enable, unblock, simplify etc.)
  • What is a user trying to actually achieve when they do this?
  • Why?
  • What constraints are they facing at that moment?
  • Who can we talk to about their goals and needs for this? (a selection of people willing to talk to us more)
  • How complex/accessible (to the user) is it?
  • How many/what proportion of users will benefit from this?
  • How frequently might we expect different types of users to use this?
  • How might this detract from the product for other users?

Investment/Product/goal alignment

  • How does this improve the product?
  • How big/expensive is it?
  • How much time do we want to invest in initial investigation?
  • If an initial investigation is inconclusive, what next?
  • How much are we willing to invest in this? (and how flexible is that decision)
  • Is there anything obviously more important/valuable/beneficial to do before this?
  • Would we be fools not to do this in the next 90 days? – why/why not?
  • If this were the only thing we did in the next 30-90 days, is this the right thing to do?
  • Why is this valuable to the team/company?
  • How does it align to current product/company goals?
  • How does it further our journey toward those goals?
  • Would you describe this as a tactical or strategic change?
  • Why?
  • Does this really fit within the product we have already?
  • (how) will it help differentiate the product from the competition?
  • Could this be a new product?
  • Are there any other products it may integrate with or impact? (ours or elsewhere)
  • Does this align more closely to another product than this one?
  • Do any of our competitors already do this? (and how)

Functional info

  • What solution options exist already?
  • Can you give an example of how that would work?
  • Under what situations might that not be suitable/successful?
  • What additional solution options can we think of?
  • What’s the smallest simplest thing that could possibly work?
  • What’s “good enough” to meet customer needs?
  • Is there an 80/20 solution option for this?
  • How can we make this more discoverable, useful and valuable to users?
  • Is there an ingeniously simple solution for users?
  • What would it take to make a user say “wow!” (and is it worth the extra effort?)
  • What is the best solution in terms of cost/functionality/usability/quality balance?
  • What performance and quality criteria do we have for this?
  • What is explicitly out of scope?
  • Are there any cross-cutting or performance risks?
  • Are there any other significant risks to be investigated?
  • What dependencies exist?
  • If we ship an incomplete solution, what is the follow-on plan and forecast cost for completion?

Market

  • What is the market demand for this?
  • Is there any market or customer sensitivity around timing for this?
  • What marketing is planned or expected for this feature (if any)?
  • How do we expect our competitors to respond to this?
  • For what we’re planning to deliver, what do we expect from our users on social media?
  • How will we launch this feature (alpha/beta/freq update/big bang etc)?
  • How will this impact our position in the market relative to our competitors?
  • What impact are we expecting on sales/renewals by announcing / releasing this feature?
  • What impact are we expecting on our reputation by announcing / releasing this feature?
  • How will we sell / price this?

It’s a lot like writing 10 minute test plans – so quick and so powerful.

Give 200 questions a try on your next big feature request.