2 Visual Management Tools to Help Evaluate Your Options

Reading time ~ 4 minutes

Back in December 2012 I moved to a new team – from what was DevOps to .NET (same 3 roles as before – Dev Manager, Project Manager, Head of Project Management).

As part of my ramp-up on the team, products, customers, market and current challenges, I was briefed by my new Product Manager and Division Head on the areas they’d like me to focus on for the next few months – and I was asked to deliver some draft plans for each of these.

I scratched around online for an A3 problem-solving template to use as a thinking tool and struggled to find anything that quite fit. The majority of A3 problem-solving tools revolve around defects/quality and root-cause analysis.

There’s a lot of wisdom in performing an RCA for a problem but sometimes you just need to move forward (see my “stopping the line” article).  In my particular case the change of teams would not be confirmed until I could satisfy my new potential boss that I was capable of covering their needs. In order to prevent confusion and disruption, this meant “going it alone” on my first iteration of a plan without interviewing the team on their thoughts. That makes performing an RCA problematic.

Here’s the A3 template I used.

Click here for a link to the PPT version

This got me far enough to complete the review – a win!

After discussion, I moved on to deciding next steps – from vision and strategy to plans and actions – and found that whilst I had a set of options, my timing was bad. The following week was “Down Tools Week” for all our development teams. Nobody was freely available to hassle about current projects and products and I didn’t want to interrupt the amazing things everyone was doing.

Rather than waste a week, I moved on to priority number 2 – increasing the team’s customer understanding. This needed less input but was also less well-formed in terms of approach, options and actions. My original plan had suggested one possible option but during review, it was felt I needed to provide and evaluate some alternatives.

I had the information to achieve this but hit a wall.

After half a day of thrashing with a blank slate I approached my new Division Head and asked for his help. I explained I was blocked. I had the right information but couldn’t move things forward. We spent 10 minutes discussing the problem with him essentially working in “Rubber Duck” mode.

After talking through the blockage we established a way forward. We discussed the (now much longer) list of options and an initial set of evaluation criteria for these. And I agreed to turn this into “some kind of evaluation matrix”.

As soon as I returned to my desk things fell into place…

We regularly borrow a few visual management techniques from Lean (we had a former Toyota UK guru spend a day a week here for a couple of years consulting with us on a wide variety of management topics).

One particular technique that has proven exceptionally effective is the visual representation of “Good/OK/Bad” using a colored circle, triangle and cross. When we were first introduced to this notation we were sceptical. Now, 3 years later we have a common visual form that all managers understand with patterns that can be seen at a glance and the approach is part of our routine reporting across all areas of the business. Even the monthly roll-up report for all projects I track across the entire development portfolio uses this same notation.

I took the list of options we’d been discussing, along with both positive & negative evaluation criteria, applied them to a matrix and used the (then new) notation to capture my thoughts.

Here’s the outcome.

Without any reading at all, the 3 strongest options stood out. Better still, this technique didn’t need anything more than expertise and instinct. I’m free to follow up with further research and evidence if needed but at that point we were after a first-cut and a quick narrowing of options to focus our efforts.

There’s an important subtlety in this notation to be aware of. A yellow triangle means “OK”. It’s not a problem but we think we can do better. A green circle means “Good” – we mean really good – not just “fine”. This ties back to an article I wrote back in 2011.

When we report something as “good” on a project status, we expect supporting evidence. When something is “ok”, we ask “how can we help or improve this”.

What this evaluation is saying is that we have a couple of really strong options that would help us achieve our goal. They’re strong rather than just the best of what we have.
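If you want to try the same technique away from a whiteboard, here’s a minimal sketch in Python. The options, criteria and ratings are hypothetical placeholders, not the ones from my review – substitute your own and rate each cell on expertise and instinct, just as you would on the wall.

```python
# A minimal sketch of the Good/OK/Bad evaluation matrix described above.
# All options, criteria and ratings below are hypothetical placeholders.

GOOD, OK, BAD = "●", "▲", "✗"   # green circle, yellow triangle, red cross

criteria = ["Customer contact", "Cost", "Ramp-up time", "Disruption"]

ratings = {
    "Site visits":       [GOOD, BAD, GOOD, OK],
    "Support shadowing": [GOOD, GOOD, OK, GOOD],
    "Customer survey":   [OK, GOOD, OK, GOOD],
    "Usage analytics":   [BAD, OK, BAD, GOOD],
}

# Print the matrix with a "number of Goods" tally per option so the
# strongest options stand out at a glance, just like on the whiteboard.
print(f"{'Option':<18}" + "".join(f"{c:>18}" for c in criteria) + f"{'Goods':>8}")
for option, row in ratings.items():
    print(f"{option:<18}" + "".join(f"{mark:>18}" for mark in row) + f"{row.count(GOOD):>8}")
```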

Next time you’re trying to figure out what to do, try these 2 visual management and thinking tools as a way of unpicking and reassembling your options into something simpler and more cohesive. If you’re lacking options that have enough “good” points, try some more alternatives – don’t just pick the best of a bad bunch.

(In the end I spent a year working with the .NET team and shared a sample of the results here.)

 

You Are Not A Lathe Operator

Reading time ~ 3 minutes

This weekend my replacement copy of “The Goal” arrived. (My last copy escaped).

I wrote the following article in mid-2009 – some time before I read Goldratt’s “The Goal”. This was eventually published in 2010 internally at the company I was working with at the time. Today felt like a good day to refresh it and share more widely.

________________________________________

If you haven’t yet done so, read up on the theory of constraints. In fact I recommend that every manager in any size of organization obtain and read a copy of “The Goal”, discuss it with your peers & teams and make copies available for your staff.

In Spring 2009 I had the good fortune to attend a training course in Florence, Italy – a hub for European manufacturing training for a large Oil & Gas company.

One member of my working group during the week was in the process of implementing a lean production line for the local factory floor. The layout and travel spaces were all mapped out, but the reason for him attending the training was to cover the human aspects of such a radical change.
He had factory floor staff who had been performing the same expert role for 20-30 years or more. Now they were being asked to radically change their working practices and he was feeling the pain.

An operator producing bearings may traditionally be measured on their production volume (and quality).  Their single-minded goal is to maximise their own output. The more bearings produced, the better an operator they are. In some cases, particularly where waste is not well managed, volume may even trump quality.

In a production line, operators are contributing not to the production of bearings but to the production of complete working units as part of a team.
They have to look both up and down-stream and assess the state of flow through the entire team. If there’s a bottleneck in the line, they’re expected to adjust or stop their own activity to help out and “level the load” in order to maximize the performance and throughput of the entire assemblies they’re working on in the factory.

The challenge here was that some operators had learned over years of training and measurement to sub-optimize around their own roles and personal output even though the real unit of value was in the whole team’s output.

His challenge was not unique and for this particular company, lean manufacturing was already a major focus. The striking synergies between lean & agile meant I was able to share insights from my own experiences with him.

Now let’s take his example and consider some quotes often heard during software development.

  • “It’ll be far more efficient if I do all the coding now in one go.”
  • “Seeing as I’m already doing some surgery here it makes sense to add these few other things that I’m pretty sure we’ll need”.

Often these are suffixed with “it’ll be ready when it’s ready”.
I’ll raise my hand and admit to this behavior in the past – particularly when in the midst of a major refactoring exercise.
But…

  • What about everyone upstream and downstream of my code?
  • What’s the impact on the testers of me delivering 3 months’ worth of code rather than 2 days’ worth?
  • Do I have people waiting for work downstream?
  • If I changed my delivery practices would I get feedback sooner?
  • If I got feedback sooner would those bugs be less of a pain to context switch and fix?
  • Would the overall throughput of the team be greater?
  • If requirements change how much of my work would be wasted?

This is where agile and manufacturing meet.

Admittedly, in manufacturing the concept of a multi-skilled operator is far more prevalent than in software; however, the software industry equivalent is often described as the “Generalizing Specialist” – that is: whilst a member of the team may primarily be role X, they are able to add support and value in role Y – even if that’s not their job title.

This may be a simplification of reality but some skills are exportable or importable across roles and allow us to level the load by having others do the heavy lifting.

It is the responsibility of every member of a development team to consider the entire value-stream, the activities of all skills and roles that contribute to a working product increment and ask questions like:

  • “Am I working on our top priority item? – if not, should I contribute to it in some way?”
  • “Could we be delivering more finished work sooner if I helped out elsewhere?”
  • “I’d really appreciate someone covering these areas around the edges of what I’m working on”
  • “Here’s a problem that keeps slowing us down, let’s figure out how to remove the bottleneck.”

Think about your current role in a development team.

  • Are there opportunities for members of the team covering different roles to support each other better?
  • Is anyone over-producing in comparison to the capacity of other areas and causing headaches downstream?
  • Could some conversations be had sooner, be much shorter and save pain?

Take some time out this week and consider how your teams and individuals could improve their cross-role collaboration, share some of the heavy lifting and deliver as a more cohesive unit.

Improving Your Retrospectives

Reading time ~ 3 minutes

This post was inspired by Carl Bruiners, who described many unsuccessful retrospectives as “Happy Clappy” or “Toothless”.

– I love the term “happy clappy”

Last weekend I hosted/facilitated a short session at the UK Agile Coaches Gathering on “How To Improve Retrospectives” – a valuable hour or so of shared challenges and lessons. Here are the highlights I was able to capture.

Let the Team Vent

  • If you have a very frustrated team with a succession of issues, give them time for catharsis. 2 or 3 “venting” sessions to let off steam – rather than retrospectives needing concrete actions – may actually be necessary. After the sore points have been drained, circle the team back around to actually solving some issues
  • Facilitate the venting to keep it constructive, not personal and ensure they are actually draining issues, not fuelling flames

Think Lean

  • Prevent batching and waiting by raising issues sooner with the potential to address improvements rapidly before a retrospective even happens
  • Capture your data points and lessons as soon as they come up – on the board for the team to see every day. Discuss them as part of the stand-up
  • Track activities, lessons and problems on a timeline during the project or sprint, not at the end. There will be less head-scratching and fewer forgotten issues in the retrospective. It’ll be faster to get moving, you’ll have fixed more and people will feel the time spent has been more productive
  • Consider setting a threshold / count of the number of issues or lessons raised as a trigger for a retrospective rather than just sticking to a sprint rhythm

Figure out what to do with “stuck” actions

  • If there are recurring problems beyond the team’s control, take them off the table. There’s no point in self-flagellation. Raise them in a forum other than retrospectives
  • Define a clear, workable escalation path for when a team needs support to resolve a retrospective action
  • Work with “management” to clear at least 1 stuck retrospective action in order to build team trust in both the management and process – prove it’s working, worthwhile and supported

Keep it interesting & fresh

  • Once a team is used to the general structure of retrospectives and using these relatively effectively, use variety to keep things fresh and interesting – see the “Agile Retrospectives” book and associated mailing list for ideas
  • Retrospect on your retrospectives – e.g. using start/stop/continue

Beware of US/UK social differences

  • Thanking and congratulating each other is not a common cultural behavior in the UK. Get teams comfortable with other activities and working as a team before approaching some of the “softer” areas
  • Recognise your team members may not be happy talking about how they “feel” about projects, each other, values or any other “hippy” concepts

One of the attendees at the coaches gathering described a painful retrospective experience where a single member of a team effectively shot down a retrospective by refusing to participate in sharing feelings. I’ve had similar derailing experiences where a single team member didn’t want to take ownership of any actions or decisions and pulled the team with them. In both cases we just called a halt to that part of the retrospective and moved on. There’s always next time.

Deciding What To Do

  • Capture actions for the team and actions for “management” independently
  • Use dot voting or similar consensus-building activities for selecting actions
  • Encourage people to vote for what they’re motivated to address and capable of resolving, not just items they think are important
  • Limit the number of actions to commit to & address, you can always come back for more
  • If you complete all the committed items, consider pulling one more new action through

Agree When & How actions will be covered

  • Some items are “just go do”, others have more cost. Address the “just go do” items as they come up
  • Consider using stories for larger actions so they go into the backlog and prioritization
  • Try “sprint tasks” to keep “customer” activity/velocity independent of “non-customer” actions and improvements
  • Identify owners for taking management actions to the management (typically a coach, team lead or scrum master)

Thanks to Sophie Manton, Mike Pearce and other attendees for the insights & lessons they shared at the Coaches Gathering.

5S Your Scrum Board – (Part 2) – In depth

Reading time ~ 7 minutes

I’m a big fan of Goldratt’s Theory of Constraints for getting important improvements made in the most suitable order for a team. Our current scrum board is the result of focusing on the most evident, soluble problem each day for a couple of weeks. No grand plan, just good emergent design.

Note: this is a very long article, as some readers of part 1 asked for all the details as soon as I could get them written – today’s the first time I’ve been near my blog in a week. I’ll switch back to shorter posts after this for a while.

Here’s an in depth look at each of the areas of our board…

[Image: scrum board with placeholders – using the peripheral space for useful stuff]

Feedback

A single sticky with a quote regarding some feedback. For the last 3 sprints we’ve left one up there that says “It shouldn’t be all that hard”. A reminder not to generalize (“it” was much harder).

Theme

One sticky. Our current theme is “Contact Management”; the next will be “Dashboarding” or similar. Just a little reminder of what the overall focus of the sprint or sprints is. This helps when clarifying or controlling scope, questions, debt & defects.

Capacity

During sprint planning we look at who’s in, when and what other commitments they have (beyond the usual overheads and maintenance activity). Capacity is calculated in half-day (4-hour) increments for each of us, totalled up as a whole team in hours. We then subtract 40% for overheads & support.

We track a second capacity number under the first which is the actual total effort hours of tasks completed in the last sprint. Generally this is a fair bit lower than the forecast.

The differences between these two figures are useful input for understanding our capabilities & overheads.

We limit tasks during sprint planning to around 70% of our maximum delivery capacity, or close to our previous delivery capability, in order to leave space for unknown / discovered activities and hurdles. If planning another story would take us over this level, we hold off until we’ve actually been able to deliver the first – there’s no point in breaking stories down into tasks if they’re highly unlikely to happen in the next 2 weeks.

If we really do well, we can always reconvene.
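As a rough illustration of that arithmetic, here’s a minimal sketch – the names, availability figures and totals below are invented, not our real numbers:

```python
# A rough sketch of the capacity arithmetic described above.
# Names and availability figures are invented examples.

HALF_DAY_HOURS = 4        # capacity counted in half-day (4-hour) increments
OVERHEADS = 0.40          # subtract 40% for overheads & support
PLANNING_LIMIT = 0.70     # plan to ~70% of maximum delivery capacity

# half-days each person is available this sprint (after holidays and other commitments)
availability = {"Alice": 18, "Bob": 20, "Chris": 12}

raw_hours = sum(half_days * HALF_DAY_HOURS for half_days in availability.values())
delivery_capacity = raw_hours * (1 - OVERHEADS)
planning_ceiling = delivery_capacity * PLANNING_LIMIT

print(f"Raw hours:         {raw_hours}")
print(f"Delivery capacity: {delivery_capacity:.0f} (after {OVERHEADS:.0%} overheads)")
print(f"Planning ceiling:  {planning_ceiling:.0f} (~{PLANNING_LIMIT:.0%} of capacity)")
```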

Tags

At the moment we have 2 sets of tags on the board; “Blocked” and “Please Test”. The team no longer uses the “please test” tags so they’re likely to be removed this week. We assume coding tasks include developer-led testing activity and any other testing is planned in as scheduled tasks. This has reduced dev-test handover as the whole team focuses on the set of tasks required to complete a story, not their individual tasks.

I’m likely to add “red tags” soon for tasks that need particular focus or expediting. We’ll limit the use of red tags to one WIP item at a time and only when needed.

Key

A color key for our tasks. Currently we have:

  • blue : code & test
  • green : design & ux
  • yellow : test
  • orange : management noise
  • pink : support & bugs

Stories

The top horizontal swimlane of our board contains stories and story activities. The left-most column of this contains story cards in priority order from top to bottom for stories planned for this sprint. The space here is deliberately small. For the last 2 sprints we had only 2 stories listed; in the next sprint we’re actually reducing it to 1 and having the whole team pull single stories through. (Update – we ended up pulling single stories as a flow after this point right to the end of the project.)

“Extra”

A placeholder for “extra” stories.

If we really do clear all the stories on the board, this area is a placeholder in rough priority order for the next 1 or 2 stories we might play. These are not broken down into tasks unless there’s certainty that we’ll actually work on them in this sprint.

Physical Holders

A more traditional lean technique. We have an individual holder for 1-2 pads of each color of stickies. We can quickly see if any are running low and replenish stock. We also have a holder for story cards although that’s rarely used (we generally don’t add stories when stood at the board).

There’s also a holder for pens. We ensure there are enough marker pens in the holder for everyone to have one during our stand-up, and that there’s one of each whiteboard pen color needed for drawing graphs and totals on the board.

Hours (on stories)

This is the initial estimate in effort hours of all tasks we identified prior to & during sprint planning that went onto the board at the start of the sprint. We compare these with total hours completed at the end of the sprint to learn from and improve our tasking and estimation.

Over time we hope to gather enough data to see the typical ranges of hours spent on stories of each size (we expect a bit of overlap).
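For anyone curious what that analysis might look like once the data exists, here’s a minimal sketch – the story sizes and hours are invented purely to show the shape of it:

```python
# A minimal sketch of the analysis we're after: for each story size,
# what range of actual effort hours do we typically see?
# All data points below are invented for illustration.
from collections import defaultdict

# (story size, estimated hours at planning, actual hours completed)
history = [
    ("S", 12, 15), ("S", 10, 9), ("S", 14, 18),
    ("M", 30, 42), ("M", 28, 33),
    ("L", 60, 85), ("L", 55, 70),
]

actuals_by_size = defaultdict(list)
for size, _estimate, actual in history:
    actuals_by_size[size].append(actual)

for size in ("S", "M", "L"):
    actuals = actuals_by_size[size]
    print(f"{size}: {min(actuals)}-{max(actuals)} hours across {len(actuals)} stories")
```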

Sprint Tasks

Our second main horizontal swimlane is “sprint tasks”. These are all the activities required by us as a team in order to be able to complete the sprint and usually deploy our working software to production.  We also tend to add extra technical debt cleanup items into this swimlane.

This approach seems better at the moment than having specific (and artificial) stories for these activities.

It’s up to us how much trade-off we have between sprint tasks and story tasks in a sprint. Since creating this swimlane we’re finding 40-60% of our effort is going into this area; however, this is because we’ve been prioritising some technical debt and deployment improvement activities. That rate will adjust over the next couple of sprints.

Support/Bugs

The bottom swimlane on our board is for support issues and bugs. Right now we’re not using this very much as our support and old bug fixing activity has been relatively low.

We’ll see how useful this lane is longer-term.

Note, bugs related to work we’re actually doing are either fixed immediately and not added to the board at all or are added into the stories swimlane relevant to the story they’re found in.

Todo/Next/WIP/Done

Todo, WIP and Done are the typical status lanes for tasks you see on most boards these days. “Next” is an addition we added in very early on and has been a bit of a game-changer in managing the flow of tasks effectively – here’s a link to the full article on the “next” column.

Retrospective Input

I talked a little bit about this at the UK Agile Coaches Gathering this weekend and I’ll expand more in a standalone article shortly. Put simply, we found that getting potential retrospective input visible every day during the daily standup has been more effective in driving continuous improvement and makes it easier to collate retrospective data at the end of the sprint. Mike Pearce also recommended something similar over the weekend – maintain your release and sprint timeline with pitfalls and lessons so that there’s less head-scratching at the end.

Graphs

We’re tracking burn-down of task hours for both sprint tasks and story tasks in parallel (different colors). Then on the same graph we’re also tracking the burn-up of total capacity. It’s working okay for us at the moment but I can’t help thinking it’s still not quite right so I’m sure we’ll revisit this.

Beneath the burn-up/down we’re tracking per-sprint velocity as a basic bar chart. Straightforward, simple, easy to complete and easy to understand.

Sprint Schedule / Countdown boxes

As of the end of the last sprint we removed the sprint schedule; the list of historic dates wasn’t adding anything for us. We have the current sprint dates at the top-left of the board instead and have used this space for a set of countdown blocks (10..1) with effort hours to do and forecast remaining each day. This gives us immediate feedback as to whether we may have a problem in meeting the end of the sprint with the current task scope. I’ll write up the experience with the countdown boxes separately once we’ve been using them a while.
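To make the countdown idea concrete, here’s a minimal sketch of the arithmetic we scribble in the boxes – it assumes a simple straight-line forecast, and all the numbers are illustrative:

```python
# A sketch of the countdown-box arithmetic: after each stand-up we note the
# effort hours still to do next to a simple straight-line forecast.
# Numbers are illustrative only.

sprint_days = 10
initial_hours = 200                    # total tasked hours at the start of the sprint
actual_to_do = [200, 185, 170, 162]    # hours still on the board after each stand-up so far

for day, remaining in enumerate(actual_to_do, start=1):
    days_left = sprint_days - day
    forecast = initial_hours * days_left / sprint_days   # even burn-down forecast
    flag = "  <-- behind forecast" if remaining > forecast else ""
    print(f"Day {day:2d}: to do {remaining:3d}h, forecast {forecast:5.1f}h{flag}")
```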

“Done Done”

This serves a couple of subtle purposes…

There’s something very satisfying about taking tasks from the board and dumping them in the “done done” box.

We clear down the board into “done done” at the end of each sprint. This makes it clear that we’re not really done until we’ve completed everything and cleared the decks.

Although the box itself is basically a trash holder, it serves as a constant reminder that we’re not done until we’re done 🙂

We’ve added a few more things since I drew the original picture of our board…

Running Totals

After the daily stand up we sum up the total effort hours for each horizontal/vertical swimlane square/block and scribble those on the board. We can then transpose these into tracking graphs.

Right now we only track todo by horizontal lane and overall done but we have the data available for cumulative flow tracking if we feel the need.
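Here’s a minimal sketch of that running-totals step, with a handful of made-up stickies, to show how the same daily snapshots would also feed a cumulative flow diagram later:

```python
# A minimal sketch of the running totals: sum the effort hours in each
# swimlane/status block after stand-up. The stickies below are made up.
from collections import defaultdict

# (swimlane, status, remaining effort hours) for each sticky on the board
stickies = [
    ("stories", "todo", 8), ("stories", "next", 4), ("stories", "wip", 6),
    ("stories", "done", 12),
    ("sprint tasks", "todo", 10), ("sprint tasks", "wip", 3),
    ("support/bugs", "done", 2),
]

totals = defaultdict(int)
for lane, status, hours in stickies:
    totals[(lane, status)] += hours

for (lane, status), hours in sorted(totals.items()):
    print(f"{lane:<14} {status:<5} {hours:3d}h")

# a daily snapshot of these totals is all you need to plot a
# cumulative flow diagram later, if the team decides it's worth tracking
```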

Debt

We’ve added a very small space for newly accumulated or discovered debt related to our current work. It’s deliberately small to keep things minimal. When it fills up we’ll need to do something with it (if not sooner). It gives us one more line of slack if things go unexpectedly wrong in a sprint whilst we still focus on deploying working software. This is still a running experiment but the intention is to treat this like credit card debt – it’s a short-term postponement mechanism, not a backlog.

Next Sprint Tasks

In much the same way as we have “extra” stories, we’ve added a placeholder for tasks we need to do in the following sprint but don’t have capacity for right now. This stops us overloading the current sprint tasks lane once things start moving without forgetting important future activities.

An Update (November 2013)

I’ve been rooting through my old photos and found a picture of the board toward the end of the project (September 2012) so you can see what it *actually* looked like as it evolved beyond what I’ve described above.

[Image: DevOps Email Marketing Project Board, Sept 2012]

If you’re interested in even more whiteboard designs (and how they evolved over time) take a look at “A Year of Whiteboard Evolution”.

Closing Comments…

Don’t be afraid to adjust your working area if things don’t seem to be flowing right.

This isn’t the end of our board adjustments – what we’re using will certainly be different in a few weeks’ time – however today it serves our needs well and it must remain simple enough for new team members to understand.

I strongly recommend reading Henrik Kniberg’s recent book. I read the review draft last week (July 2011) and saw close similarities between the board structures the PUST teams at RPS were using and what we have here. I also share the same view as Henrik that each team – even within the same group – should be free to choose their own board style.

Try taking a few of these ideas away to experiment with but you probably won’t need them all.

5S Your Scrum Board – (Part 1) – A Place for Everything

Reading time ~ 3 minutes

Some weeks ago I joined a new team. I don’t like causing disruption but I do like to get the pace of improvement going on my teams. Here’s how we started out together…

During my first week I observed my new team’s activities. They were trying hard and doing pretty well but hitting some hurdles. Their most obvious challenge was the impact of covering support activities and cleanup tasks in addition to new story delivery.

Other than the loss of expected delivery capacity, this unplanned work was causing problems during stand-ups. The flow and continuity of useful conversation was being lost as the team stopped to hunt around for the right coloured super-stickies and a working marker pen, and to figure out where to put the tasks. (But… they were aiming to get these tasks visible!)

So…

Borrowing some 5S tricks from Lean, I started work…

  • Seiri (Sort); eliminate all unnecessary tools, parts, and instructions.
  • Seiton (Straighten); a place for everything and everything in its place.
  • Seiso (Shine); clean the workspace, and keep it clean.
  • Shitsuke (Sustain); don’t revert to old ways, continue looking for better.

I skipped Seiketsu (Standardize).

Each team I’m working with is establishing their own natural, effective way of working within a common framework. I don’t believe seeking further standardization is something that will help them right now. (Although I’ll be setting up a community of practice for sharing successful and failed ideas shortly).

We added a small new change to our board, stand-up and/or tracking every day over a week – generally this involved scissors, tape, markers and old cereal boxes!

We reviewed & cemented the successes and added a couple of bigger changes during our retrospective.

Here’s the current result – I’ll post a photo in future (I’m writing this at home on the weekend).

[Image: scrum board with placeholders – using the peripheral space for useful stuff]

Clearing the decks, having specific right-sized places for consumables (colored sticky pads, pens, story cards) and swim lanes for support items & cleanup tasks solved a number of problems quickly…

  1. The support/new work split is clearly visible and we’re able to balance it better.
  2. For new work, we’ve reduced the number of planned stories on the board to a minimum and have a clear priority order.
  3. The team are now in the habit of making all new tasks visible on the board without interrupting the flow of the stand up – it’s not such an effort to add new tasks any more.
  4. There’s always enough of each coloured sticky pad and some pens right by the board. If the holders are looking empty, one of us restocks.
  5. Progress and status are clearly visible at a glance so we know if we have a problem and whether to adjust.
  6. The team felt like things were under more control, allowing them to focus on doing their best work and to try other improvement & collaboration activities.

When the team started their next round of sprint planning we had another 5S lesson – Seiri…

The great thing with this team is that all stakeholders are on-site and easily accessible. Having faced a painful hurdle on their previous sprint with a story that turned out to be significantly harder than expected, they discussed and adjusted priorities with their sponsor.

They pulled the part-completed story off the board, moved it right down the backlog and picked up something cheaper.

The team completely reset their board at the end of the sprint – even with work started or planned but unfinished – just to remove all the noise.

Interesting (but obvious) side-note – Perceived value is influenced by perceived cost. If something looks too expensive, previously less-valuable items become worth discussing and implementing.

If the team were going to continue unfinished work, they would re-evaluate the remaining and completed tasks, understand why they didn’t finish, learn and re-plan appropriately rather than assuming what’s on the board was correct.

Longer-term, I’m expecting this approach to also encourage more flow/pull-scheduling of stories. I’m aiming for us to only pull new stories onto the board when there’s a to-do slot free.

Finally I stepped back for a while to let the team self-organize and see if they could sustain the changes (a trick from my friend Carl).  About 80% of the changes have stuck with no support needed. Progress tracking and clearing the board down have needed some revisits. There were specific causes for these falling over but they’ve been recognised, addressed and are back on-track again.

In Part 2 I go into more depth on each of the areas of the board, what they do and most importantly why use them.

Agile Is Just A Means To An End

Reading time ~ < 1 minute

A couple of months ago I posted that software is just a means to an end.

Here’s an equally commonly lost point – in fact it’s almost identical.

Agile (or Lean, TOC, whatever) is a means, not a solution.

Our customers, users and stakeholders don’t want “agile”, they want “success”. Once they have success they’d quite like a means of making that success more repeatable but ultimately they simply want success.

We seek to promote our way of working (one of our goals as an agile community) but risk missing the actual goals of our stakeholders.

Our conversations should move away from Agile by name and onto:

  • how do we best attain our stakeholders’ goals?
  • how do we effectively identify those goals?
  • how do we attain consensus on what those goals are?
  • what do “success”, “good” and “OK” look like for everyone involved?

If we step back, agile is just a marketing term – a simple pattern for a collection of mostly proven ways in which we believe we can work effectively. Where we need that marketing or verbal anchor, let’s use it (much like we’ll use whatever agile practices and culture we know are useful in attaining our stakeholders’ goals) – but let’s ensure we’re not having methodology and culture conversations for the sake of methodology and culture alone.

Before diving into “agile” discussions, step back and (re-)establish what success should look like for your customers and users from their perspective.

SMART Goals and The Elephant Test

Reading time ~ 2 minutes

Just under 4 years ago I set myself a goal to “socialize the concept of technical debt” within my organization. I had a strategy but no visible means of measurement. When I’d achieved my goal it was obvious that I’d succeeded, but I had no direct evidence to prove it – and why bother? I succeeded.

Thanks to Luke Morgan (Agile Muze) for the inspiration of the “Elephant Test” – I’d never heard of it until last week.

For years everyone I know has been indoctrinated into using “SMART” goals (as defined in the early 1980s). As a line manager, employee of multiple large corporations and one-time domain expert in learning and performance management systems I too bought into and supported the “SMART” mnemonic.

Here’s a challenge – try running a 5 whys exercise on each of the attributes of SMART.

  • Specific
  • Measurable
  • Attainable
  • Relevant
  • Timely

I can develop valuable meaningful responses (not excuses) to most of these except measurable.

I’ve spent enough years working for corporations that love to measure to have a very good handle on the values and dangers of measurement. But until my inspiration from Luke, I never had an alternative.

Today I do!

Why measure something when you implicitly know, trust and recognize what you’re looking at?

The Elephant Test “is hard to describe, but instantly recognizable when spotted”.

A big leap in agile management is trust. Trust your teams to do the right thing.

  • If you trust your team to set and accomplish their own reasonable goals, you must also trust their judgement.
  • If you trust their judgement, they must be able to recognize when they’ve achieved a goal.

Measurement is the most brute force way of recognizing something – but not the only way.

Software development and management is a knowledge activity. We tacitly know what’s right and wrong and we openly share that recognition. Occasionally we choose to measure but much of the time we trust our judgement and that of our teams.

So if the team says they saw an Elephant, chances are they saw an elephant.

Or don’t you trust them?

If that Elephant happens to be that your team believes they’ve met their goal and their stakeholders agree, why must that goal be explicitly measurable?

I know when I’ve made a difference and those around me know when I’ve done a good job. That team consensus is far more rewarding and more trustworthy than preparing measurable evidence – it’s also a lot harder to game, and harder to sub-optimize your behavior around group perception than around numbers.

Next time you’re looking at goal setting, don’t go overboard on making them all measurable. If they can pass the elephant test, that should be more than sufficient.

Try starting out with a clear simple vision, good direction, a suitable time window, a strategy and some commitment to do the right thing and work from there. You’ll know amongst yourselves when elephant-testable goals have been achieved (and delivered in good faith). These may also be some of the most valuable impacts your team can have.

Your Leaders Are Not Gods

Reading time ~ < 1 minute

It’s lonely at the top, but who makes it that way?

In large companies there seems to be a myth – often perpetuated at the middle tier – that senior leaders are somehow “gods” who cannot be spoken to, or at least not in the same way as mere mortals.

The business leaders that I’ve had the pleasure of talking to have been very smart, politically astute, personable, socially aware and most of all they care what people have to say. Admittedly they’re strapped for free time but they’re still human beings.

A casual conversation, sharing of thoughts and opinions, or a mail exchange should be possible at any level. In fact that no-nonsense, relaxed, open and honest communication is a breath of fresh air compared with the political games and data feeds they face most of the day.

In any organization that claims to be lean or agile, isolating communication with our leaders to single PowerPoint slides and 2 minute bursts of data defeats the entire point of a true lean corporate culture.

“Go see” also means listen, share, learn, coach, mentor, teach, act, support and most critically interact.

Most leaders understand this (they all started out somewhere) but you may have to cut through a layer of defense to get there and re-educate along the way.

When your leaders do go see, make sure they really see and understand. It’s not all a parade no matter how your local glitterati might want to make it one.

Remember no matter where you are in the food chain, a truly agile organization values individuals and interactions.

Stop Working With Blunt Tools

Reading time ~ 2 minutes

Clarke Ching introduced me to a story of 2 woodcutters – one worked furiously but finished late whilst another stopped frequently to sharpen his tools and finished early. (He tells it better than I do.) I think it’s actually based on an Abraham Lincoln quote:

“If I had eight hours to chop down a tree, I’d spend six hours sharpening my ax.”

Let’s think more about developing software the Lincoln way.

What tool sharpening should you do before you start chopping?

If you paid off or prevented some of the debt you were facing before new work started, would it enable your project to run faster, more smoothly, with reduced risk, a lower chance of defects or a lower maintenance cost?

A previous employer had a great strategy. Every new release of the product had 2 top priority named features on the priority list for “cleaning up” and “levelling up”.

Cleaning up:

  1. Get all unit and regression tests passing (and keep them there).
  2. Address all build failures and warnings (and keep them under control).
  3. Delete all functionality, code and tests that will have been deprecated for more than 3 releases. (and add alerts to functionality that will be removed in the next release)
  4. Fix all defects that put us below releasable quality before we’ve even started (and keep them there).

Levelling up:

  1. Raise the tools we use to those that are best supported, newest in the market or offer improvements to our working conditions.
  2. Raise the libraries we use to the latest supported versions and address any issues.
  3. Raise the platform versions we’ll support when the product ships and address any issues. (and remove support for obsolete or out of date platforms).

No major release started full-tilt on functional work until these were cleared.

Like all good practices, this isn’t new thinking; it correlates to 3 components of the lean 5S strategy: sort (seiri), straighten (seiton) and shine (seiso). Rather than just describing what was done, here are some tangible benefits of the clean up & level up approach…

  1. There were no unpleasant surprises for our customers on a new release. We had a standard platform, support and deprecation policy and kept to it. Our customers liked it when we did predictable things.
  2. For the development teams, levelling up was a common source of risk. Addressing this at the beginning of a project was a valuable de-risking activity. Where we hit critical problems, we could make clear, early upgrade decisions, and where there were fewer issues we would develop full-time on updated versions throughout the project, with regression on older supported platforms available from prior development cycles.
  3. Removing old parts of the product made life significantly easier for the teams. The reduced testing, regression and maintenance load allowed us to speed up development – much like scraping barnacles off the hull of a boat to help it run faster. Cleaning up also allowed us to take some sensible baseline code metrics before any new work started.
  4. Giving teams space and time to clean and level up before starting functional work had a positive impact on morale. We felt that we were trusted to “do the right thing” rather than “just ship it”. This empowered us to continue doing the right thing throughout the rest of our work.

Give your teams some time out to sharpen their tools and sort, straighten & shine the workshop before new work starts. It will make a difference to the performance of your team and the quality of the end result.