2 Visual Management Tools to Help Evaluate Your Options

Reading time ~ 4 minutes

Back in December 2012 I moved to a new team – from what was DevOps to .NET (same 3 roles as before – Dev Manager, Project Manager, Head of Project Management).

As part of my ramp-up on the team, products, customers, market and current challenges, I was briefed by my new Product Manager and Division Head on the areas they’d like me to focus on for the next few months – and asked to deliver some draft plans for each of these.

I scratched around online for a suitable A3 problem-solving template to use as a thinking tool and struggled to find anything that quite fit. The majority of A3 problem-solving tools are built around defects/quality and root-cause analysis.

There’s a lot of wisdom in performing an RCA for a problem but sometimes you just need to move forward (see my “stopping the line” article).  In my particular case the change of teams would not be confirmed until I could satisfy my new potential boss that I was capable of covering their needs. In order to prevent confusion and disruption, this meant “going it alone” on my first iteration of a plan without interviewing the team on their thoughts. That makes performing an RCA problematic.

Here’s the A3 template I used.

Click here for a link to the PPT version

This got me far enough to complete the review – a win!

After discussion, I moved on to deciding next steps – moving from vision and strategy to plans and actions – and found that whilst I had a set of options, my timing was bad. The following week was “Down Tools Week” for all our development teams. Nobody was freely available to hassle about current projects and products and I didn’t want to interrupt the amazing things everyone was doing.

Rather than waste a week, I moved on to priority number 2 – increasing the team’s customer understanding. This needed less input but was also less well-formed in terms of approach, options and actions. My original plan had suggested one possible option but during review, it was felt I needed to provide and evaluate some alternatives.

I had the information to achieve this but hit a wall.

After half a day of thrashing with a blank slate I approached my new Division Head and asked for his help. I explained I was blocked. I had the right information but couldn’t move things forward. We spent 10 minutes discussing the problem with him essentially working in “Rubber Duck” mode.

After talking through the blockage we established a way forward. We discussed the (now much longer) list of options and an initial set of evaluation criteria for these. And I agreed to turn this into “some kind of evaluation matrix”.

As soon as I returned to my desk things fell into place…

We regularly borrow a few visual management techniques from Lean (we had a former Toyota UK guru spend a day a week here for a couple of years consulting with us on a wide variety of management topics).

One particular technique that has proven exceptionally effective is the visual representation of “Good/OK/Bad” using a colored circle, triangle and cross. When we were first introduced to this notation we were sceptical. Now, 3 years later, we have a common visual form that all managers understand, with patterns that can be seen at a glance, and the approach is part of our routine reporting across all areas of the business. Even the monthly roll-up report for all projects I track across the entire development portfolio uses this same notation.

I took the list of options we’d been discussing and both positive & negative evaluation criteria, applied them to a matrix and used the (then new) notation to capture my thoughts.

Here’s the outcome.

Without any reading at all, the 3 strongest options stood out. Better still, this technique didn’t need anything more than expertise and instinct. I’m free to follow up with further research and evidence if needed but at that point we were after a first-cut and a quick narrowing of options to focus our efforts.

There’s an important subtlety in this notation to be aware of. A yellow triangle means “OK”. It’s not a problem but we think we can do better. A green circle means “Good” – we mean really good – not just “fine”. This ties back to an article I wrote back in 2011.

When we report something as “good” on a project status, we expect supporting evidence. When something is “ok”, we ask “how can we help or improve this?”.

What this evaluation is saying is that we have a couple of really strong options that would help us achieve our goal. They’re strong rather than just the best of what we have.
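To make the matrix idea concrete, here’s a minimal sketch in Python. The options, criteria and ratings below are invented for illustration (the real matrix came out of our team discussions), but the mechanics are the same: rate every option against every criterion as Good/OK/Bad and let the strongest rows stand out.

```python
# Minimal sketch of a Good/OK/Bad evaluation matrix.
# Options, criteria and ratings are invented for illustration.

GOOD, OK, BAD = "o", "^", "x"   # stand-ins for circle, triangle, cross
SCORE = {GOOD: 2, OK: 1, BAD: 0}

criteria = ["Cust. impact", "Cost", "Speed", "Risk"]

options = {
    "Run customer interviews": [GOOD, OK, OK, GOOD],
    "Ship instrumented beta":  [GOOD, BAD, OK, OK],
    "Buy market research":     [OK, BAD, GOOD, GOOD],
}

# Rank by total score so the strongest options float to the top.
ranked = sorted(options.items(),
                key=lambda item: sum(SCORE[m] for m in item[1]),
                reverse=True)

print(f"{'Option':<26}" + "".join(f"{c:>14}" for c in criteria))
for name, marks in ranked:
    print(f"{name:<26}" + "".join(f"{m:>14}" for m in marks))
```

The scoring is only there to confirm what the symbols already show – as above, the strongest options stood out without any reading at all.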

Next time you’re trying to figure out what to do, try these 2 visual management and thinking tools as a way of unpicking and reassembling your options into something simpler and more cohesive. If you’re lacking options that have enough “good” points, try some more alternatives – don’t just pick the best of a bad bunch.

(In the end I spent a year working with the .NET team and shared a sample of the results here.)

 

200 Questions to help bridge the Product Owner divide

Reading time ~ 5 minutes

A year or two before I first started writing publicly, I read an interesting, varied and quite long essay by Jeff Patton on the Product Owner role in Agile development (particularly Scrum):
http://www.agileproductdesign.com/blog/2009/product_owner_and_problem_shaped_hole.html

Whilst there were some areas I agreed with, the discussion on “heroics” left me cold. In my opinion, great software teams don’t have space for individual heroics and (again, in my opinion) the sustainable pace ethos generally also plays down team heroics (but that’s a debate for another time).

There was a comment in Jeff’s article that resonated with me, and I made a note of it at the time. Years later it’s still highly relevant and I wanted to re-raise it this week in the context of recent challenges I’ve been facing.

“For the product you’re working on right now, how will your company benefit after the product has shipped? If you understand how, do others?”

Although this question was aimed at Product Owners it’s something I’d ask all development team members to really think hard about.

Now go further. Are you solving technical problems and working through a series of requirements to ship a product release, or do you really understand the value of the release you’re working on, its reason for being developed and its impact on specific customers, the market or the company?

How many times have you or your team examined the reason for delivering a release with a specific set of features and understood their aim, questioned the backlog and vision and better still, properly understood what you’re trying to deliver in terms of solving the needs of the user?

When I say “understood”, I mean really understood – not just prettied up to follow one or another user story format.

Consider this an opportunity to pause for thought.

Back in 2013 I attended a Retrospective Facilitators Gathering and was introduced to an amazing concept over lunch by Willem Larsen that he described as “200 Questions”. He explained an idea that comes from animal tracking.

When tracking an animal you explore the environment in a way that an everyday person doesn’t even consider. You look at every twig, leaf and blade of grass and ask how (and why) it came to be in its given state, in order to determine whether you’re on the right path.

My translation of this to my own work is to try and develop a genuine and deep curiosity for every item you look at. (Even if you don’t have time to answer all your questions)

At the same lunch, we tried it with a salt cellar:

How many people have handled it since it was made?

Who designed it?

What inspired the design?

What’s it made of?

Where did the salt come from?

What process did the salt go through?

(we lost count of how many questions we asked)

Applying this same healthy curiosity to incoming product features or requests is incredibly powerful.

You know things are wrong when every request coming in is essentially a specific solution being proposed.

And you know how translating feature requests back to user stories in anything except a shopping cart or finance trading application seems to be really poorly explained in the agile community?

Try this as an experiment for getting unstuck (there’s a small tracking sketch after the list)…

  • Set yourself a target of asking 200 questions about the next big item coming in to your team.
  • Try different “lenses” for asking the questions (much like Six Thinking Hats):
    • The “User” lens
    • The “Investment” lens
    • The “Functional” lens
    • The “Market” lens
  • What questions logically follow from those first “level 0” questions?
  • Decide which of those are important and valuable to actually answer in order to further your understanding towards your goal.
  • Work on answering them – collaboratively.
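If you want to keep the experiment honest, the bookkeeping is trivial. Here’s a toy sketch in Python – the lenses come from the list above; the function and its behaviour are my own invention:

```python
# Toy tracker for the "200 Questions" experiment.
# The lenses mirror the list above; everything else is illustrative.
from collections import defaultdict

TARGET = 200
questions = defaultdict(list)   # lens -> questions asked through that lens

def ask(lens: str, question: str) -> None:
    """Record a question under a lens and show progress toward the target."""
    questions[lens].append(question)
    total = sum(len(qs) for qs in questions.values())
    print(f"[{total:>3}/{TARGET}] ({lens}) {question}")

ask("User", "What is the user actually trying to achieve here?")
ask("Market", "Is there any sensitivity around timing for this?")
ask("Functional", "What's the smallest, simplest thing that could work?")
```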

In no particular order (but with some logical grouping around lenses) and in the very rough form I’ve been actively using in my current team, here are over 50 of the “level 0” questions I’ve started experimenting with.

Feature Info

  • Name – what do we call it?
  • What is it? (2-3 sentence user-facing description *and* 2-3 sentence product description)

User info

  • How would a user do this now? (without new code)
  • What types of users does this help?
  • How does it help them? (what does it enable, unblock, simplify etc.)
  • What is a user trying to actually achieve when they do this?
  • Why?
  • What constraints are they facing at that moment?
  • Who can we talk to about their goals and needs for this? (a selection of people willing to talk to us more)
  • How complex/accessible (to the user) is it?
  • How many/what proportion of users will benefit from this?
  • How frequently might we expect different types of users to use this?
  • How might this detract from the product for other users?

Investment/Product/goal alignment

  • How does this improve the product?
  • How big/expensive is it?
  • How much time do we want to invest in initial investigation?
  • If an initial investigation is inconclusive, what next?
  • How much are we willing to invest in this? (and how flexible is that decision)
  • Is there anything obviously more important/valuable/beneficial to do before this?
  • Would we be fools not to do this in the next 90 days? – why/why not?
  • If this were the only thing we did in the next 30-90 days, is this the right thing to do?
  • Why is this valuable to the team/company?
  • How does it align to current product/company goals?
  • How does it further our journey toward those goals?
  • Would you describe this as a tactical or strategic change?
  • Why?
  • Does this really fit within the product we have already?
  • (how) will it help differentiate the product from the competition?
  • Could this be a new product?
  • Are there any other products it may integrate with or impact? (ours or elsewhere)
  • Does this align more closely to another product than this one?
  • Do any of our competitors already do this? (and how)

Functional info

  • What solution options exist already?
  • Can you give an example of how that would work?
  • Under what situations might that not be suitable/successful?
  • What additional solution options can we think of?
  • What’s the smallest simplest thing that could possibly work?
  • What’s “good enough” to meet customer needs?
  • Is there an 80/20 solution option for this?
  • How can we make this more discoverable, useful and valuable to users?
  • Is there an ingeniously simple solution for users?
  • What would it take to make a user say “wow!” (and is it worth the extra effort?)
  • What is the best solution in terms of cost/functionality/usability/quality balance?
  • What performance and quality criteria do we have for this?
  • What is explicitly out of scope?
  • Are there any cross-cutting or performance risks?
  • Are there any other significant risks to be investigated?
  • What dependencies exist?
  • If we ship an incomplete solution, what is the follow-on plan and forecast cost for completion?

Market

  • What is the market demand for this?
  • Is there any market or customer sensitivity around timing for this?
  • What marketing is planned or expected for this feature (if any)?
  • How do we expect our competitors to respond to this?
  • For what we’re planning to deliver, what do we expect from our users on social media?
  • How will we launch this feature (alpha/beta/freq update/big bang etc)?
  • How will this impact our position in the market relative to our competitors?
  • What impact are we expecting on sales/renewals by announcing / releasing this feature?
  • What impact are we expecting on our reputation by announcing / releasing this feature?
  • How will we sell / price this?

It’s a lot like writing 10-minute test plans – so quick and so powerful.

Give 200 questions a try on your next big feature request.

Take It Off-Site (Part 1) – The Bad and The Ugly

Reading time ~ 3 minutes

Offsite meetings have a checkered past. Back in 2006 a colleague shared this painful article with me. To quote from the main article:

“…only about 10 percent of executives consider offsites truly valuable. Half said they aren’t worth the time or money.”

Over the last 7 years I’ve seen the exceptionally high value offsite meetings can bring but also the stress and pain that often leads up to them.

At my last employer, the entire global executive and senior management team would have a 6-monthly operations review offsite that ran for 3 days. We booked out significant portions of a large hotel in the US and flew the entire team together from over a dozen sites around the world. Over 3 days, the leaders from each site would present their statuses, finances, roadmaps, portfolios, visions, strategies and plans to a group of about 50 of us. We’d put them under the grill and review their work in depth, both as bosses and peers.

We’d take time to focus on specific strategic issues that benefitted from focused, face-to-face collaboration. (When your management team is distributed around the world, a few intense days of co-location often deliver far more decisions and clarifications than 6 months of project work and conference calls.)

The general approach for these was: Prepare, Pitch, Review, Collaborate, Revise, Re-pitch.

Sounds pretty sensible? – It is… …if you know what you’re in for.

The offsite workouts were usually brutal but exceptionally valuable. By the end of the week everyone’s plans were in far better shape; we all understood what each other was doing, found areas for shared benefit and made a lot of difficult decisions.

Better still, we spent a few days working, eating and drinking together. 3 days of deliberate convergence in order to stem the behaviour issues associated with extreme divergence kept us functioning as a successful management team.

Whilst the outcomes were great, they took their toll. The pain behind the scenes generally involved a bunch of us working 2 weeks of long evenings and weekends to pull all the data, presentations and spreadsheets together. ($100m of software projects contains a lot of moving parts & data.) Often we’d still be revising our work as sessions started, and whoever went first bore the brunt of painful lessons and questions whilst everyone else frantically updated their slide decks to accommodate the new knowledge.

And here’s the thing – the lines of questioning…

Our Senior Exec at that employer was an exceptionally sharp, experienced, bright guy. After working through multiple iterations and joining the operations team on the organizing side of the offsites, I saw what made him tick. He had a brain full of powerful questions that he’d learned from experience and was constantly adding to. He knew what to ask, and when, to lift the lid on just the right barrel to find a body (if there were bodies to be found).

As a spectator, sometimes it’s satisfying seeing the blood on the floor when someone you think is an ass-kisser takes a roasting. It’s not so fun when it’s your own team.

I even helped develop a set of “difficult questions” for our Head of Operations over the weekend prior to one of these offsites. They were great questions to ask. The thought and effort put into developing them was a lot like spending an hour writing test cases up-front. We learned quickly what we really needed and wanted to know (and why). We were confident we could get under the covers of even the most prettied-up project report.

But we could have made the whole thing so much less brutal.

Much like writing test plans, if we’d just put the questions out for participants to review in advance they’d have adjusted their research and reporting to accommodate them rather than playing project question battleship on the day.

So here’s an action for you.

Next time you’re reviewing a project or a piece of work, capture the questions you regularly ask.

Now…

Share them

Share them with the people you’ll be asking beforehand.

Now they can answer those questions sensibly first and you’ll all have time to dig into more valuable insights.

I have a few more posts planned around “critical questioning”, plenty around running offsites and I’ll be sharing where I’ve been for most of the last year so keep your eyes open for more soon.

 

A Year of Whiteboard Evolution

Reading time ~ 2 minutes

Back in December last year I started supporting Red Gate’s .NET Developer Tools Division. As of this month, we’ve restructured the company and from next week, the old division will be no more (although the team are still in place in their new home).

When I joined the team things were going OK, but they had the potential to be so much more, so I paired up with Dom, their project manager, and we set to work.

The ANTS Performance Profiler 8.0 project was well under way already and the team had a basic Scrum-like process in place (without retrospectives), a simple whiteboard and a wall for sharing the “big picture”.

I spent the first week on the team simply getting to know everyone, how things worked and observing the board, the standups and the team activities.

We learned some time ago here at Red Gate that when you ask a team to talk you through their whiteboard, they tell the story of their overall process and how it works. Our whiteboards capture a huge amount about what we do and how we do it.

I attempted to document and capture at least some key parts of the journey we’ve had over the year, in which we released over a dozen large product updates across our whole suite of tools. This post is picture-heavy with quite limited narrative but I hope you’ll enjoy the process voyeurism 🙂 If there are any specifics you have questions about, please ask and I’ll expand.

Next time, I think I’ll get a fixed camera and take daily photos!

The end result? Multiple releases of all 5 of our .NET tools including a startup and quality overhaul for our 2 most popular products, support for a bunch of new database platforms, full VS2013 support (before VS2013 was publicly released), Windows 8 and 8.1 compatibility and a huge boost for the morale of the team.  See for yourself if you’re interested!

Of course this is just one aspect of what I’ve been up to. You might notice the time between photos over the summer grew a little. See my last post for more insights into what happens at Red Gate Towers.

Kipling and Keogh on Requirements

Reading time ~ < 1 minute

A short thought today…

From age 11 to 16, written on the wall of my school sports hall was the following quote.

I keep six honest serving-men (They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

It wasn’t credited; nobody spoke about it or referenced it. I never really thought about its source – I wasn’t a reader of much classic literature or poetry (preferring fantasy and comics) – and I didn’t really consider its value, but it stuck with me.

Some years later I looked it up.

Here’s the full version:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
I send them over land and sea,
I send them east and west;
But after they have worked for me,
I give them all a rest.

I let them rest from nine till five,
For I am busy then,
As well as breakfast, lunch, and tea,
For they are hungry men.

But different folk have different views;
I know a person small –
She keeps ten million serving-men,
Who get no rest at all!

She sends ’em abroad on her own affairs,
From the second she opens her eyes –
One million Hows, two million Wheres,
And seven million Whys!

 

The next time you’re looking at requirements, use cases, user stories or similar, consider…

Who is wanting to achieve what and why?

Where would they normally do this, when is the right time and how would they do so?

Take this a step further. Ask “why?” just once more.

And if you’re still writing user stories as “In order to … I want… so that…”, then take a read of Liz Keogh’s recent post.

Cracking Big Rocks

Reading time ~ 3 minutes

Bramber castle (side perspective view)

Things have been rather busy over the last couple of months. I’ve joined a new division at my current company and have the basis for another dozen articles slowly developing but that’s just the start. I have some really exciting news!

After over a year of development with the exceptionally talented Johanna Hunt (@joh) – through field testing, workshops, paper prototypes, conferences, conversations and peer reviews – I’m very pleased to announce the launch of crackingbigrocks.com.

The Concept

In trying to solve problems of our own, we found challenges that we now identify as “Big Rock” problems – those things that when faced alone cause us sleepless nights and illogical stress. The mental load associated with Big Rock problems can be so taxing that every time we go to tackle them we find ourselves procrastinating and avoiding or exhausted through trying. A deep breath, a step back, a few pointers, a second brain and some support can help us get back on track.

We’ve faced these problems in both our personal lives and professional careers. For example the series of articles I wrote on “The Oubliette” describes the multiple strategies I used for reducing a major defect backlog.

(I’ve spent the last 2 weeks reusing the oubliette strategies along with a few new ones to help my new team get our quality back under control and keep the motivation to improve high.)

 

Simple Patterns & Coaching Cards

Between us we’ve taken the “Simple Patterns” concept and our combined experience of solving large, difficult problems and developed a set of over 50 simple problem-solving patterns. In particular, we’ve captured the essence of big mental hurdles and how these can be overcome, in ways that have resonated with almost everyone we’ve shared them with. (Of the 150 or so people who’ve explored the concepts so far, only two didn’t find they had “Big Rock” problems of their own.)

We’ve produced a limited first edition set (only 50 decks) of high-quality coaching cards containing 45 patterns. (Within a week of the box being delivered, with no marketing at all, we’re already down to just 35 decks left!)

As of about June 2013, the first edition has sold out; however, the expanded second edition is now available on Amazon.

It turns out the ideas and concepts we’ve captured and the formats we’ve chosen are significantly more popular than we expected. Back in August 2012 after a weekend of hand-trimming and cutting (causing a few injuries and RSI) we produced nearly 40 paper prototype decks with unedited wording and far fewer patterns. We gave all of these away and left a few attendees disappointed after running a hugely successful workshop to a packed-out room of nearly 80 attendees at Agile 2012. A month later we had to produce a few extras for a re-run at Agile Cambridge (and took the opportunity for some edits whilst we were at it).

We delivered our first bulk-order for nearly 200 decks for attendees at Agile Cambridge in 2013 and ran another packed-out session. Our good friend Olaf regularly takes a few sets out with him to Play4Agile in Germany.

Even whilst developing and trimming the original prototype decks, we actually used the patterns within the decks to “keep rolling” – the idea of having to manually cut out and collate over 1500 paper playing cards is quite a “Big Rock” in itself. We measured our throughput rate, adjusted our cutting process, optimised our flow and working practices, banked our results in suitable sized batches and made sure our pace was (mostly) sustainable!

Here’s a quick preview of what’s in the box…

Photo of cracking big rocks cards

The majority of patterns in the deck are unique to us but there are a couple that are better known (such as the “Rubber Duck” shown above). What’s unique here is the format, approach, style and, most intriguingly, the community we’re aiming to build. We want to share the ideas and experiences everyone has using these cards as coaching and problem-solving tools.

If you’d like to find out more, head on over to crackingbigrocks.com and take a look.

OK. Marketing over – I generally find selling or marketing what we do a little crass so I hope you as readers don’t mind too much!

Stopping The Line

Reading time ~ 4 minutes

A few weeks ago the company I work for celebrated its 13th birthday. As part of the celebrations we were each given the latest copy of the “BoRG” – the company book. On reading through the pages I found that one of the teams I’m responsible for had received an award.

This isn’t quite as positive as it sounds and I’m pretty sure the incident will become legendary within the company. The lessons learned, what we did afterward and the forward-thinking attitude of our senior management are, however, truly worth celebrating.

The (rough) Story

Over the summer one of our teams was working on some updates to our product deployment tools (we deploy upwards of 50 new releases across our product portfolio every month). Part of the automated process involves uploading a packaged installer for our software to a download location and updating our web site to point to the update.

Due to a mix-up between environments and configurations, one of our internal tests made it into the outside world. The problem was spotted and resolved fast but something was clearly wrong for this to have been possible.

This alone would have been rather embarrassing; however, this was roughly the sixth or seventh significant incident to come from our operations teams in as many weeks. We’d recently restructured team ownership of parts of the codebase and were making a large number of significant infrastructure, library, test and build changes to our systems – mostly legacy code (code without sufficient tests). Moreover, we’d added a whole new team onto the codebase with a very different remit in terms of approach and pace, and the volume of churn in the code had massively increased.

This was all during the height of holiday and conference season so many of us weren’t fully aware of the inner carnage that had been occurring. Handing over from one manager to the next repeatedly meant we’d not seen the bigger picture.

I returned from Agile 2012 to a couple of mails from my boss (who was now on holiday) filling me in on what had happened, with the (paraphrased) words: “Can I leave this safe in your hands?”

Over the first 2 hours of my return I was briefed by managers and team members on the situation. Everything was sorted, no problems were in the wild but the team’s credibility had taken a beating.

I’d seen similar things happen in other companies and had always been certain of the right course of action. This was the first time I actually felt safe to lead what I knew was right.

I donned my “Lean” hat and started my nemawashi campaign with our senior managers.

I spoke to each manager individually – they were already well aware of the problems, which made things much easier. I simply said:

“These problems can’t continue, we’re going to ‘stop the line’. All projects are going to stop until we’re confident that we can progress again safely.”

I went a step further and set expectations on timescales.

We’d be stopping development work for nearly 20 staff for at least a week. We’d monitor progress daily, and approval to continue would be on the condition that we were confident problems would not reoccur.

By lunchtime I had unanimous support. It was described as a “brave” thing to do by our CEO but all agreed it was right.

A side-benefit of Lean is the shared language it provides. In every case when I approached our management team and explained that I wanted to “stop the line” they immediately understood what I meant plus the impact, value and message behind such an action.

Now of course you can’t prevent new problems with hindsight but you can identify patterns of failure and address these.  In our case I had a good understanding of what had been going on.

Initially I was strongly against performing a full root-cause analysis.  There were half a dozen independent incidents and a strong chance of finger-pointing if we’d gone through these. I was already “pretty sure” where our problems lay. The increased pace had led to a fall in technical discipline coupled with an increased pressure to deliver faster and a lack of sufficient safety net (insufficient smoke tests).

I divided the group into 3 teams to focus on 3 areas.

  • “before release” – technical practices
  • “at the point of release” – smoke tests
  • “after release” – system monitoring

With an initial briefing and idea workshop I stepped back and left the 3 teams to deliver.

The technical practices team developed a team “technical charter”.  We brought all participants together for a review, revised and then published this. Individuals have since signed up to follow this charter and we review it regularly to ensure it’s working.

The smoke testing team developed a battery of smoke tests for the most critical customer-facing areas (shopping cart, downloads etc). These are live and running daily.

The monitoring team developed a digital dashboard (that I can still see from my desk every day). This shows the status of the last run of smoke tests (and history), build status, system performance metrics and a series of alerts for key business metrics that would indicate a potential problem with the site – e.g. a tail-off in volume of downloads or invoices.

They also implemented some server-side status monitoring and alerts that we subscribe to via email.
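The business-metric alerts are conceptually simple too: compare today’s figure with recent history and shout when it tails off. A rough sketch, with invented numbers and an arbitrary threshold:

```python
# Sketch of a "tail-off" alert for a key business metric such as
# daily downloads. Figures and threshold are invented for illustration.

def tails_off(history: list[int], today: int,
              window: int = 7, threshold: float = 0.6) -> bool:
    """True if today's count falls below a fraction of the recent average."""
    recent = history[-window:]
    average = sum(recent) / len(recent)
    return today < threshold * average

downloads = [410, 395, 402, 388, 420, 415, 399]   # last 7 days (invented)
if tails_off(downloads, today=180):
    print("ALERT: downloads well below recent average - check the site!")
```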

Since these have been in place we *have* had a couple more incidents, but in every case we’ve spotted and resolved them early.

Subsequently, a couple of the teams have self-selected to perform a root-cause analysis on a couple of issues. This is exactly the behaviour I love to see: it wasn’t a management push, they simply wanted to ensure we’d pinned things down and done the right thing. Moreover, they published the results to the whole company.

The award…

Project Envisioning with Six Stickies

Reading time ~ 5 minutes

This is an approach I developed a couple of years ago whilst working with a team in a difficult (but quite common) situation at my previous company.

Bizarrely, about a week after I started drafting this article (a long while ago now), another blogger independently described an almost identical tool and approach – I just wish I could find it.

The Context

The Product Manager for the team I was supporting was responsible for a small portfolio of products of varying age and quality and like many Product Managers I’ve met, he had insufficient budget, time or delivery capacity for “everything”.

One product in particular was causing pain. Commitments had already been made to at least one large customer that a “radically overhauled” version of the product would be delivered to address all their reported complaints.

(You know – the kind of commitment made when you’re on a customer site and they’re tearing you a new one over a multitude of frustrations and you just want to make them happy, right?)

This commitment had been made on the spot prior to any investigation and without consulting the development team.

Sound familiar?

Did I mention how much I now enjoy working for a company that only sells software that already actually exists?

This overhaul wasn’t just being developed for that one customer. They wanted to re-launch the product as an advanced graphical web-client with full multi-user query/read/write/print features. (The original was a read-only system)

When the team started to explore the “product”, they realized just how customized the customer’s version was and how unloved, limited and just plain sick the original was.

  • The UI sucked
  • The underlying code was the result of years of consulting engagements strapped together
  • Consultants had to wire the internals together to make it a running product (out of the box, it didn’t work)

It was basically a collection of disparate tools in a very jumbled toolbox.

Despite all of its ugliness, many customers were actually using it. It was typically thrown in free with most enterprise licenses and was far easier and cheaper to roll out to the majority of users than the seriously advanced thick-client CAD-like system.

The team had their work cut out for them…

All they had to start from was the usual laundry list of features – 1-liners. Fortunately this was enough to make it clear that they were never going to deliver “everything” (or possibly anything) on time.

Happily, their Product Manager was a great guy: masses of domain and customer knowledge, supportive, collaborative, made time to be available regularly despite being on the far side of the world, and experienced enough with software development to know when to listen to his technical team.

The team highlighted the impending scope & schedule disaster as soon as it reached their shores and within a week the Product Manager was in the UK ready to hit the reset button.

The Goal

We needed to re-frame the project, determine what scope could be cut or postponed, what could be delivered sooner, what was a priority and how the team would approach delivery.

The Process

We arranged 3 days of workshops with the product manager and full development team in the room together.

Background

Before we dived into details, we worked with the product manager to get the real story on why the product existed from our business perspective:

  • What revenue did it provide?
  • How else did it support our sales?
  • What was the longer term direction for the product?

This was restricted to 1 sticky per question.

We then captured (on 2 stickies) what the users normally did with it – what were the most commonly used functions and what was the frequency and duration of these activities?

Vision

To start things out with the best possible foundation, we spent the first morning on “Envisioning” (or perhaps “Re-Visioning”).

I’m not convinced the Product Manager was particularly sold on the value of the session but he was willing to play ball – he trusted the team and me to do the right thing (and, after all, this was just 2 hours of warming up for 3 days of workshops).

As we started, it became apparent from team conversations that they were struggling to establish exactly what the product they were working on was really expected to do, why, and for whom (pretty fundamental stuff).

I led the team through a series of facilitated discussions around 6 areas of the project/product. The order in which we approached these, and the constraints used for documenting them, were a critical part of the process. We constrained the documentation to the smallest, simplest thing that could possibly work: one sticky per conversation theme (there’s a toy sketch of capturing the output after the list).

  • Needs – What top priority business needs is the product/process addressing?
  • Wants – What do the end users want from the product/process , what specific tasks are they trying to achieve?
  • Business Process – What business process are we aiming to model, support or improve?
  • Scope – What’s the explicit planned scope (features & fixes)  for this release? (limiting scope to a single sticky is a great way of starting out small!)
  • Immediate Goals – What is the top priority single goal of this particular project/release?
  • First User – Who will be the first actual consumer of what we’re delivering and when do they need it?
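If you want to capture the workshop output somewhere afterwards, the six stickies map onto a deliberately tiny record – one short answer per theme. A toy sketch, where the character cap is my own stand-in for “fits on one sticky”:

```python
# Toy record of the six-stickies output: one short answer per theme.
# The 200-character cap is an arbitrary stand-in for "fits on one sticky".
STICKY_LIMIT = 200
THEMES = ["Needs", "Wants", "Business Process",
          "Scope", "Immediate Goals", "First User"]

def record_envisioning(answers: dict[str, str]) -> dict[str, str]:
    """Check that every theme has exactly one sticky-sized answer."""
    for theme in THEMES:
        text = answers.get(theme, "").strip()
        if not text:
            raise ValueError(f"Missing sticky for theme: {theme}")
        if len(text) > STICKY_LIMIT:
            raise ValueError(f"'{theme}' won't fit on one sticky - trim it!")
    return answers
```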

Scope

We knew the product manager already had an idea of what scope he wanted and that we were going to need some kind of polite reset conversation.

Our approach was to collaboratively define what the “minimum marketable feature set” (MMF)** for this release would be.

**At this company, MMF was loosely defined as “the bare minimum set of functionality that would be added/improved in order to deliver incremental customer value and be worth announcing to the market.”

Through scope exploration, we established that the team needed to deliver a lot to make the product viable. We were, however, able to trim a number of very high complexity/risk items in order to bring things in sooner. Better still, we’d had a powerful scoping conversation with the product manager to set clear expectations on what would be highly unlikely to be delivered, and had sequentially prioritized every other feature in his original release wish list.

Putting each item on a sticky, and having an explicit MMF marker was a great way of visually defining the backlog. (This also kept the number of items in the backlog small enough to summarise quickly)

Acceptance

The final piece of the puzzle…

Having defined context, scope and vision for the release, we defined what “acceptance” of a new release would entail for the end user. The power of this was that we could correlate current and later scoping conversations around the fundamental user acceptance expectations without our Product Manager getting emotionally attached to the scope we’d originally talked through. Once again, we constrained the acceptance to keep things small and simple (one sticky was plenty).

What next?

After this initial few hours, the team spent the remaining 2 and a half days working hard with the product manager to flesh out all the top-priority features/epics in sufficient detail that they could start de-risking and developing by the end of the week. They didn’t need my help for those bits 🙂

 

Note: the quality of writing in this article may not be up to my usual standards. It has been blocking my backlog for quite some time and I needed to get the story written from my rather hazy memory and move on more rapidly than normal. I hope it’s still valuable.

Why You Should Stick To Using Whiteboards & Stickies

Reading time ~ 4 minutes

If you aim to improve, inspect and adapt on a frequent basis in a highly unconstrained way, stick to a whiteboard (as large as possible) and stickies – plus possibly scissors, tape, card, paper and pens (did I mention that many Agile coaches I’ve met have an addiction to stationery that stems back to their childhood? :)). Your process will adapt to the project significantly faster with a manual board.

As an example, here’s the original board used on my current project (it’s now cleared down as we migrated to a better space):

a blank scrum board
(Thanks to Andrzej for the rather disturbing portrait)

 The board was too small and constrained (much like many electronic tools) so we switched to something better. We reused the board layout and approach from a previous project (see 5S your Scrum board) as a kick-start but less than a week later we had already moved forward significantly from where we’d started. Our needs on this project were different enough that we had to adapt.

Here’s what the current board looks like this week:

a highly adapted scrum/kanban board
(Thanks to Ellie for the donated parrot)

 The mass of stickies across the bottom of the board is where we cut scope for this sprint as the result of an over-commitment. This was spotted as soon as we migrated to this board and I started plotting additional information for the team around the edges.

Admittedly what we have here could potentially be implemented electronically as a board and a series of “widgets” but that needs development skills and time – this would slow down our speed of adapting.

In our example, adding avatars was a 15 minute job with scissors & tape, adding a capacity planning check took 2 minutes and adding new charts and graphs took 10 minutes. Better still – an unplanned adjustment – when we have a success story from our users, one of the team will bring the evidence along and tag it to the board in whatever format they wish.

There are some further changes needed to our process this week. One of the horizontal streams of work is a (roughly) repetitive series of activities, so we’re going to start tracking cycle time on these and moving to a Kanban model, as we need to start setting expectations with our users for these areas. In parallel, the less predictable work will continue Scrum-style for the development team. We’ll be ensuring the board captures these stats for us to see every day.
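Cycle time itself needs nothing fancy to track: timestamp each item as it starts and finishes, then look at the distribution. A minimal sketch with invented card data:

```python
# Minimal cycle-time tracking for the repetitive stream of work.
# The card dates are invented for illustration.
from datetime import date
from statistics import mean, median

# (started, finished) pairs for completed cards
cards = [
    (date(2013, 11, 4), date(2013, 11, 6)),
    (date(2013, 11, 5), date(2013, 11, 11)),
    (date(2013, 11, 7), date(2013, 11, 8)),
]

cycle_times = [(done - start).days for start, done in cards]
print(f"mean: {mean(cycle_times):.1f} days, median: {median(cycle_times)} days")
```

With enough samples it’s the distribution, not just the average, that lets you set honest expectations with users.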

As soon as you start using electronic tools there’s an immediate speed barrier to the changes you want to make, plus a set of constraints: a small screen (or investment in a large one), how to add related information in meaningful ways without underlying data-model support, user experience, data entry, and the ability of non-technical team members (my current team is 50% sales & marketing staff) to make changes.

Don’t get me wrong, when you have a globally distributed team, you’ll almost certainly need an electronic tool as a single point of truth but it’s just not tactile or flexible enough to support the level of interaction and adjustment that a constantly evolving project and process needs. Many companies adopting electronic tools push for standardisation to keep processes consistent, costs under control and sustain support for reporting aggregation. This really stifles making adjustments to the process to suit project and environment context.

I’m not entirely down on electronic tools. I’m actually quite a fan of Trello at the moment and use it for sharing our bigger picture with the spectrum of stakeholders we have around the business who cannot be co-located. At least it’s a tool aimed at users (rather than the management reporting that many commercial electronic boards target instead); however, for now Trello is limited to swimlanes, a constrained card format and the need for a screen. It’s not quite tactile or ubiquitous enough. Extending it requires time, technical skills and screen real estate rather than simply a process gap and a creative team member.

In my time at a very large US corporation we did a great job with the constraints we had. We used giant smart boards with virtual card walls, high-spec videoconferencing and large TV screens. At the time it really was state-of-the-art stuff but it still limited our visual management capabilities. We only ever really had a shared basic card wall (the reporting and metrics weren’t particularly visible to the teams). All the other peripheral information you can get from a great board during your standups wasn’t visible.

The teams actually developed and maintained physical boards in each location and ended up using the electronic tools as a synchronization point.

Whilst we could virtually move cards around on a giant touch-screen, changing information on the cards themselves required reverting to a keyboard, detaching from ongoing conversations and manually editing within the tracking tool. It worked but it really was a compromise.

Contrast this to our current board – if something needs adding or updating, the active conversation continues whilst a team member grabs a pen and starts writing. If our process changes, we update the board format the same day.

I just had a passing chat with my colleague David (another of our DevOps team) about electronic vs physical boards. He summed it up brilliantly: “I don’t know why… …but it’s just not the same”.

We also tailed off into the value of an entirely co-located team. A rarity for many these days but a real game-changer in the performance of your teams – I’ll cover this another day.

So in summary, even where you have distributed teams, work with a physical board for as long as possible to allow your processes to adapt and develop to the context and project around you. When you start using electronic tools you’ll find the pace of process improvement will significantly decrease.

If you’re hunting for more whiteboard examples, you may also want to take a look at “A Year of Whiteboard Evolution” and “5S Your Scrum Board”.

Telling Vs Coaching

Reading time ~ 4 minutes

Before I start, a thanks to @fatherjack for being the first person to request a topic from the backlog. If any of you want more of the same, just shout!

The Story

Up to a certain point in my career, my success was defined largely by my ability to find creative and often tangential solutions to difficult problems. For anyone that’s completed a Belbin assessment in the past, I’m mostly classified as a “plant”. My major strength has changed very little in 15 years although my complementary strengths and views have all shifted a lot (I might discuss this in more depth another time).

With this strength in mind, I found that when working with others I often jumped past a lot of the detail and rapidly offered solutions and alternatives. The sheer volume of options I could provide meant many did stick and work. However, this approach risked those seeking support or assistance becoming dependent on my problem-solving rather than developing knowledge and learning to solve problems themselves.

When I became responsible for other staff I recognised that many of the strengths that got me to that point were not appropriate to leading or coaching others.

I spent a little time learning basic coaching skills, the GROW model, coaching through questioning and other simple tips. Pat Kua also steered me toward the Dreyfus model of skill acquisition (which in hindsight for me is an important missing link for coaching) but I found that my instinct to solve and help often overrode my learned coaching practices. Coaching others is hard! (or at least it is when you’re normally a problem solver)

Having led a number of teams, managed a full spectrum of technical staff, implemented organizational change programs and, most recently, been responsible for a company-wide community of practitioners, my coaching skills have become more and more critical to my role. Coaching dojos have helped significantly – using coaching tools repeatedly as a deliberate practice – but there’s still something not quite right. I still have those problem-solving skills going to waste; there must be something I can do with them.

The Lesson

So here’s the thing. Just because you’re coaching doesn’t mean you should only ask questions, it doesn’t mean you shouldn’t direct or tell and it doesn’t mean you shouldn’t get to have the fun of solving problems for (or with) others. You just need to understand more clearly when it’s appropriate to do so and when it’s not.

Learn to spot when you’re “telling” when you should be “coaching” and vice-versa. This can be really tricky to achieve when you have all the answers and ideas.

Fortunately for me, my current employer really invests in their staff. All managers are trained and encouraged in doing just this…

The Tools

The coaching and leadership model we use is “Situational Leadership” – in particular “SLII”. (Here’s the explanation of why it’s II.)

I can’t cover the full depth of the model in a blog but here’s the basic conceptual framework – this should be plenty to help you recognise when to coach and when to “tell”.

There’s a direct correlation between the style of leadership you (as a coach/leader/mentor/manager/team member/person) use and the development level of the coachee/seeker/mentee/staff/team member/person/team.

Important note – this applies just as much when leading or coaching teams, not just individuals.

We model this as four “Development” levels (D1-D4) and four corresponding “Styles” (S1-S4). This might seem a bit jargon-y but putting it into practice really does work (see the diagram below).

The suggested style to use is based on a composite of the motivation of the individual and their competency level.

As an analogy, consider learning to drive a car. Most new drivers are really keen, think it’s going to be easy and can’t wait to be out under their own steam (Level D1). As an instructor, you need to let this play out and give them the space to try and succeed (or, more often, fail), but you do need to be quite prescriptive in what they do for their own safety (and that of others) (Style S1). When things get hard and motivation wanes (Level D2), you continue to tell them what to do but in a coaching style (S2). As competency develops (D3), your style will need to follow. Eventually they will (hopefully) become self-sufficient (D4).
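As a thinking aid only – this is my paraphrase of the correspondence described above, not an official rendering of the model – the core mapping fits in a few lines:

```python
# My paraphrase of the SLII development-level -> leadership-style mapping.
# A thinking aid only, not an official representation of the model.
STYLE_FOR_LEVEL = {
    "D1": ("S1", "be prescriptive; give them space to try, succeed or fail safely"),
    "D2": ("S2", "still direct the task, but coach and support waning motivation"),
    "D3": ("S3", "they make the decisions; encourage and validate"),
    "D4": ("S4", "self-sufficient; delegate and stay available"),
}

def suggest_style(development_level: str) -> str:
    style, advice = STYLE_FOR_LEVEL[development_level]
    return f"{style}: {advice}"

print(suggest_style("D2"))   # e.g. the learner driver whose motivation wanes
```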

Our regular trainer actually talks us through a lot more than the textbook model. The diagram below is my interpretation of the model with the additional tips we’ve learned.

SLII on a page

Extended representation of Situational Leadership II

There’s a few really important points that help us use this as a thinking tool.

  1. The model applies to each specific task. If a person has never performed that specific task before, re-assess their development level. Some complementary skills may apply but don’t assume competence in one area translates directly to the task at hand.
  2. Watch for transitions in motivation as a guide to levels of support to offer. When individual motivation is low, the coach/leader must be more supportive – more guiding and questioning. When motivation is high, less support is needed.
  3. When individual competency in the specific task is low, the coach/leader should be making the decisions on the course of action (even if leading through questioning). When individual competency is high, the coachee makes the decisions but may still occasionally want to validate these with the coach.
  4. A mismatch between leadership style and development level can be harmful. The wider the gap, the more dissonant the leadership style will be.

Extensions:

There are a couple of important extensions to the model that need consideration.

In many work environments, there are times when a person may have high expertise in an area but not be motivated to actually work in it. Similarly, someone who reached a high level of competence in an area but is ignored may lose motivation. In these instances, they have actually regressed around the model (from D4 to D3). Your leadership style needs to change!

In other situations, you may have someone with little or no motivation to work on a new task and little or no competency. Rather than starting at development level 1 (D1), you’re actually starting at D2. You need to work with the other person to build motivation and competence. At this point they either develop to “D3” or first to “D1” and then back through the cycle.

And Finally

Like all frameworks, this is a tool only. Use with caution. The more you understand how to use this, the better you’ll manage with it. If you’re interested, get trained properly, don’t just rely on what I’ve presented here.