Swimlane Sizing – Complete & Fast Backlog Estimation

Reading time ~ 5 minutes

Thanks to Adrian Wible for sharing this tip when I was in Nevada a couple of years ago. It’s now one of the most powerful simple tools I have at my disposal. I use it frequently with teams to rapidly size their backlogs and recalibrate their sizing.

This can be used with teams at any experience level but works with least pain if teams aren’t used to story points beforehand.

Teams new to story points (or even to working together) often get hung up on the numbers themselves and instinctively want to start converting to hours, ideal days, elapsed days or similar. The value of story points during estimation sessions lies in the relative sizing, not the actual size.

This approach keeps numbers entirely out of the mix until the end and allows participants to focus solely on the relative sizing of stories.

Once we’re over the relative sizing hurdle, adding numbers becomes a straightforward activity.

I’ve done this same activity with electronic tools but I’ll describe the manual version here…

Preparation:

Get your deck of incomplete user stories on readable cards or stickies.

If you also have a couple of stories you’ve already completed, these make useful triangulation points but aren’t mandatory.

In a room with your team, get yourself a large, flat, clear surface. I recommend tabletops in the middle of a space, as you can swarm your whole team around all sides – this isn’t so simple with a wall.

Using string or sticky tape, mark out 8 “swim-lanes” on the table (this requires 7 lines).

Your table should look something like this:

Don’t put any labels on the swim lanes.

Activity:

Divide the deck of stories amongst your team and ask members to spend a maximum of 5 minutes silently placing their stories in the swim lanes, with stories increasing in size from left to right (e.g. the smallest stories in the left-most lane).

You might want to lead the team through the first 1 or 2 random stories to give them some anchors or triangulation points to work from.

Don’t say anything about story points or numbers; if anyone asks, just reiterate that we’re looking only at the size of stories relative to each other.

If needed, guidance on completely unknown stories at this point should be to assume they are “very large”. The next round of sequencing will allow others to move them if they feel confident in doing so.

Once all stories are placed, you’ll probably have some “clumping” of stories, as in the table here (assume the X’s are story cards).

Give the team 5-10 minutes to again silently move stories between lanes where they feel a story’s size relative to the others aligns it more closely with those in another lane (stories must sit wholly in a single lane – no overlaps).

Encourage team members to quickly look at and consider the location of each story on the table.

A team may occasionally hit a deadlock where a story is “stuck” moving back and forth between lanes. You may intervene and pull the card out for discussion, or allow the time box to encourage the card into a single location. Generally, if there’s uncertainty I choose the larger option, but the last step of the process will resolve these anyway.

Your table should now look something like the example here.

Once all the cards are placed and reviewed, ask the team to rate their satisfaction/confidence in the relative sizing and placement of the cards (use a scale of 1-5). For any team members indicating a low satisfaction level (1 or 2), ask them to describe their concern and have the team decide how to act (either move a card or validate its current position). Time-box this to 2 minutes per team member if possible. Use this opportunity to re-place any deadlocked stories – if a team really can’t decide, be cautious and go for the larger option.

Once relative sizing is agreed with a reasonable level of confidence across the whole team, it’s time for a couple more questions.

For all the stories in the left-most column, do you think you’ll be working on anything smaller?

If the answer is “yes”, call this column 3 or 5; if the answer is “no”, call this column 1 or 2 (a single number from these options – use your judgement and instinct).

Now assign the remaining numbers across the tops of the columns using a subset of the common modified Fibonacci sequence used for story-point estimation: (0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100, !/?)

This will give you a table of stories much like one of the following…

(I usually prefer that the far-right column remains unsized, to be broken down further – it makes clear that this is a coarse exercise and cannot be used for commitments.)
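If it helps to see how mechanical the numbering step is once the ordering is agreed, here’s a throwaway sketch – the class, values and lane count are purely illustrative, the real exercise is done with cards, tape and a marker pen. Given the value the team chose for the left-most lane, each remaining lane simply takes the next value from the modified Fibonacci subset, and the far-right lane stays unsized.

    import java.util.ArrayList;
    import java.util.List;

    public class SwimlaneLabels {

        // The modified Fibonacci subset quoted above (the "!/?" lane is handled below).
        private static final List<String> SEQUENCE = List.of(
                "0", "0.5", "1", "2", "3", "5", "8", "13", "20", "40", "100");

        // Label the lanes starting from the value chosen for the left-most lane;
        // the far-right lane deliberately stays unsized ("?").
        static List<String> labelLanes(String leftMostValue, int laneCount) {
            int start = SEQUENCE.indexOf(leftMostValue);
            List<String> labels = new ArrayList<>();
            for (int i = 0; i < laneCount; i++) {
                boolean unsized = i == laneCount - 1 || start + i >= SEQUENCE.size();
                labels.add(unsized ? "?" : SEQUENCE.get(start + i));
            }
            return labels;
        }

        public static void main(String[] args) {
            // The team answered "no, nothing smaller", so the left-most lane is a 1:
            System.out.println(labelLanes("1", 8));   // [1, 2, 3, 5, 8, 13, 20, ?]
        }
    }

The point, of course, isn’t the code – it’s that once the relative ordering is agreed, attaching the numbers is a trivial afterthought.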

That’s it! You’ve just estimated & sized (using story points) your backlog in well under an hour.

It’s coarse, quick and dirty, but chances are that even when you learn more, most of these won’t change relative to those in other columns.

More Depth:

The use of “silent sorting” is a powerful practice for agile teams. It removes debate and ensures focused thinking. In this example, its use coupled with the lack of numbers avoids a common problem seen with inexperienced estimation teams during planning poker exercises, known as “anchoring”: an individual speaks up with their view of what size a story should be before the rest of the team have formed their own views, thereby influencing – anchoring – everyone else’s estimates for that story to a given point.

If you reuse this approach for additional rounds of backlog items I recommend preserving some of the older stories in the swim lanes to provide triangulation points for newer stories. Without these anchors your team’s velocity and estimation may see large shifts and greater unpredictability based on the shape of the new backlog.

Teams will quickly get used to which numbers match which lanes. This practice is still valuable; however, you may need to encourage teams to remain focused on relative sizes and not the numbers.

I generally hold the following views (and have found these actually do transfer well across teams):

  • 5 is typically a “medium” sized story for a sprint
  • Anything greater than 8-13 requires further decomposition
  • Seriously consider also decomposing stories of size 8-13
  • Anything greater than 20 is in reality an unknown needing further investigation and decomposition before it can be sized sensibly.

These views can in themselves anchor others’ estimates towards specific numbers. As a facilitator, an awareness of the range of numbers used and their meanings is useful, but be careful that this information does not influence the outcome of the team’s process.

Update: March 2015 – 4 years after posting this article it’s still the second-most viewed page on the site. I’ve now added a second article offering some more theory to help explain how swim lanes and story points hang together with traditional estimation concepts.

Escaping the Oubliette (Part 1a) – Debt Prevention

Reading time ~ 2 minutes

This is a partial re-post of Escaping the Oubliette (Part 1). I’ve split the article into smaller readable components.

Great, I’ve got my incoming defect strategy nailed.

Now how do I prevent defects and debt in new code?

In 5 words…

Continuous attention to technical excellence.

Here’s my top 7 (there are plenty more):

  1. Acceptance Criteria – Be really disciplined on your acceptance criteria & acceptance tests, team up with Analysts, Testers, Product Owners if you have them and attack your stories from every angle. A good approach to this is a “story kick-off” where the whole team dismantles a story before starting.
  2. Thinking Time – don’t just start coding right away; task things out, try the 10-minute test plan, discuss your approach with your peers, and for more complex or large items try the “just enough design” approach.
  3. TDD – It’s hard to start but has an immense impact. I’ve just seen a team complete their first project using TDD. Three weeks into their final round of post-feature-complete testing, their defect run-rate hasn’t shown the testing spike seen on prior projects. In fact they’re keeping on top of all new incoming defects and have time to start paying down the historic backlog.
  4. Pair Programming – Do it in half-day trial chunks if you don’t have the stomach for going full-tilt. I’ve performed remote pair-programming with colleagues across the Atlantic using decent phone headsets and online collaboration tools for hours at a time. The net result of 2 days of remote pairing was finding and fixing about 10 extra defects in a thousand lines of code that neither of us would have found coding alone.
  5. Peer reviews – there is still a huge space for these in agile teams. But here’s the thing. Be really tough. A peer review is not a hurdle. It’s a shared learning exercise. Functional correctness is actually the smallest component of a peer review. You should trust your developers that far. But there’s a whole series of other aspects to review. See the joy of peer reviews.
  6. Small tasks – I once worked with an outsourced team who, when taking work, would disappear into a hole for 2 weeks and return with a single task in our configuration management system containing edits to 200+ files and multiple condensed edits per file. My rule of thumb is one reviewable task per activity. If you’re going to add new functionality and refactor, that’s 2 independent tasks that can be identified and reviewed separately. This means you should be able to easily deliver 2 reviewable, closable tasks per day.
  7. Fast Builds – make it insanely simple for a developer to perform an incremental build that validates new code against the latest main code line (small tasks are a big help here). This includes the right subset of unit and functional tests. Aim for a target of a 30-second response time or less between hitting the button and seeing the first results – see the sketch just after this list.
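On that last point, one way to keep the button-to-first-result loop short is to tag the quickest checks so the incremental build runs only those. This is a minimal sketch assuming JUnit 5 and a hypothetical OrderValidator class – neither is prescribed above, it’s just one way to carve out a fast subset:

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Hypothetical class under test - stands in for whatever you're really building.
    class OrderValidator {
        boolean isValid(String reference, int quantity) {
            return reference != null && !reference.isBlank() && quantity > 0;
        }
    }

    // Tagged "fast" so a pre-commit/incremental build can run only this subset,
    // e.g. via Maven Surefire's <groups>fast</groups> or Gradle's
    // useJUnitPlatform { includeTags("fast") }.
    @Tag("fast")
    class OrderValidatorFastTest {

        @Test
        void acceptsAWellFormedOrder() {
            assertTrue(new OrderValidator().isValid("ORDER-123", 1));
        }

        @Test
        void rejectsAZeroQuantity() {
            assertFalse(new OrderValidator().isValid("ORDER-123", 0));
        }
    }

The slower functional and integration tests keep their own tags and still run on the full build; the aim is simply that the 30-second loop stays honest.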

In the next article in this series I’ll focus on “Tailing” – How do you start reducing the old defects.

Breaking The Seal (Part 2)

Reading time ~ 2 minutes

In my first article on “breaking the seal” I described how this pattern applies to managing WIP on teams. There’s also a work/social concept that fits the same name with a different pattern…

Name: “Breaking the Seal”, “The Lid is Off” etc.

Analogy: When you open a new pack of good coffee there’s that great smell that comes out – suddenly everyone wants a brew.

Concept: Socially, many people are unwilling to speak up in a crowd or be the exception, in either positive or negative situations. Fortunately, on experienced agile teams the social norm of staying silent has often already been disrupted, but you’ll need to break it back open once in a while, and as a coach you’ll need to find ways to introduce it.

Being the first to speak up breaks the team’s inertia; suddenly others’ voices will also be found.

How many times have you sat in a meeting where someone uses a term or concept you have no idea about, but you don’t speak up? How many other people in the room also have no clue but stay silent? This might be inertia, it might be fear, or just an unwillingness to appear stupid. Particularly on technical teams where your career goal may be “technical guru”, being seen as wrong or not clued up may feel like a sign of weakness. Having a “wise fool” on the team breaks the seal on this but needs some caution applied.

In some organizational cultures it may not be socially acceptable to question more senior staff. This really gets to me. In fact, I’ll write another post dedicated to this.

Occasionally speaking up might be risky, particularly if there are obvious management issues. (Nobody likes to talk about the elephant in the room when the elephant is in the room.) In these cases, arrange with a few like-minded peers to take it in turns to be the one who raises issues, so it’s not always you. This also ensures you’re not speaking alone, with others relying on you to take the risk every time.

On the more positive side, the “red cards” tool relies on this same concept for group self-facilitation. Once it becomes socially acceptable to halt a problem or challenge others, the team’s self-organizing capability steps up another notch.

Practice this in your own teams – challenge yourself and your peers to ask a dumb question or plug a rat-hole at least once a week.

Breaking The Seal (Part 1)

Reading time ~ < 1 minute

Following on from my last post, “Communicating in Patterns”, here’s the first of my regularly used concepts – alluded to in “Don’t Open More Barrels Than You Can Consume”.

Name: “Breaking The Seal” or “Cracking Open” etc.

Analogy: (This one makes me think of the campfire scene in Blazing Saddles even though it’s only partially relevant)…

You only open the lid on a new can of beans when there isn’t enough in the current can to feed the family.

Underlying Concept: One of the key ways of delivering maximum throughput on teams is to limit WIP (work in progress/process). Teams inexperienced at this tend to start additional items or “break the seal” on new work when blocked or when a team member has completed their last personal task. We need the team to take a hard look at the work at hand, consider swarming around a given item or story and only open the lid on a new item if there really is no additional value to be gained from another member of the team helping out on the current top priority item.

This works on many levels – here’s a few…

  • Every time you open a new can you risk not finishing it all and having to throw the leftovers away.
  • Opening too many cans and forcing the family to eat them all causes bloating.
  • Eating excess beans takes longer and leaves no room for dessert.
  • Unfinished cans in the refrigerator tend to get pushed to the back and go moldy.
  • Very few people like cold beans for leftovers. (actually – sometimes I do)

10 Minute Test Plans

Reading time ~ 2 minutes

Struggling to introduce TDD or Acceptance Testing? Here’s a really powerful, simple first step that you can perform immediately – a way to get your development teams “thinking” about tests, and about how to test, with little or no negative impact and only a few seconds’ setup time.

This assumes you’ve already broken out your story/feature/activity into developer-sized tasks.

  1. Push the keyboard away and get a sheet of A4 or Letter paper and a pen or pencil.
  2. Draw 2 lines to divide the paper into 3 roughly similar-sized areas.
  3. Pair up with the developer who’s going to work on a task.
  4. Start a timer –  you have 10 minutes.
  5. Spend 3 minutes listing out as many “pass” cases as you can think of between you – all those simple “happy path” things.
  6. Spend 3 minutes listing out as many “fail” cases as you can think of – bad inputs, exceptions etc.
  7. Spend 4 minutes listing all the boundary cases you can think of – e.g. date/time/time zone, rounding, location etc. Often these boundaries depend on your problem domain.
  8. That’s it – you’re done!

OK, this isn’t going to give you perfect code and it does take a bit of practice. Try not to agonize too much about whether these are unit, functional or acceptance tests for now. We’re still learning to fly here.

Much as standing up and collaborating at a whiteboard uses a different part of your brain to coding, pairing up, brainstorming and writing with a good old pencil & paper does a similar job. The discipline of not touching the code for a short period of time and thinking about how you’re going to test – and what could go right or wrong – will significantly improve the quality of what you produce. It usually makes the coding easier too.

Better still – you should now have your first unit tests to write! Pick the simplest ones you have on there and get them working. (I usually start with input validation. I know it’s not directly related to end-user value, but once I get a good grasp on inputs the rest fits in my head more easily.)
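To make that last step concrete, here’s a minimal sketch of what a few of the scribbled cases might turn into – assuming JUnit 5 and a hypothetical DiscountCalculator invented purely for illustration; the names and rules aren’t part of the technique:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    // Hypothetical class under test - invented purely to show the sheet-to-test step.
    class DiscountCalculator {
        double apply(double price, int discountPercent) {
            if (discountPercent < 0 || discountPercent > 100) {
                throw new IllegalArgumentException("Discount must be between 0 and 100");
            }
            return price * (100 - discountPercent) / 100.0;
        }
    }

    class DiscountCalculatorTest {

        // A "pass" case from the first third of the sheet.
        @Test
        void appliesTenPercentDiscountToAStandardOrder() {
            assertEquals(90.0, new DiscountCalculator().apply(100.0, 10), 0.001);
        }

        // A "fail" case (bad input) from the middle third.
        @Test
        void rejectsANegativeDiscount() {
            assertThrows(IllegalArgumentException.class,
                    () -> new DiscountCalculator().apply(100.0, -5));
        }

        // A boundary case from the final third - 100% is allowed, 101% is not.
        @Test
        void allowsAFullDiscountButNothingBeyond() {
            assertEquals(0.0, new DiscountCalculator().apply(100.0, 100), 0.001);
            assertThrows(IllegalArgumentException.class,
                    () -> new DiscountCalculator().apply(100.0, 101));
        }
    }

Each case on the sheet maps naturally onto one small test like these.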

What’s on Your Radar?

Reading time ~ 2 minutes

This is a great tool that I first saw used by Thoughtworks for showing changes in technology trends over time…  http://www.thoughtworks.com/radar/ (I don’t know who invented it). The TW example is very busy – there’s a mountain of new & changing technology trends out there!

But. This is a fantastic, simple tool for tracking changes in your own domain or environment. And it doesn’t have to be just technology – this could be customers, prospects, types of work, focus areas, anything.

Here’s a sample I’ve put together for agile coaching. The arrows show a change in focus since the last review (typically quarterly). This is deliberately not an exhaustive sample but gives an idea of what you can achieve. It’s a great clarifying tool for both your coaching teams and your stakeholders.

Detail on the numbered items in this example:

1: TDD – Introduce, train & coach TDD practices. Ensure teams have the tools available to do so and the space in their schedules to do a decent job. Performance will turn the corner after an initial productivity dip so this needs a lot of care & attention.

2: Code Smells – Train teams on identifying code smells and when to act. We need to back this up with being polite & positive – perhaps some collaborative walk-throughs.

3: Refactoring – With TDD & code smells, teach the “right” level of refactoring. There’s the natural refactoring needed during development and then there’s the open-heart surgery of bad legacy code. We could refactor entire products and move nowhere (@see Netscape). We need to make sure this is taught pragmatically.

4: Embedded coaching – with the major increase in XP and technical practices over Scrum, we need greater embedded technical coaching capacity.

5: No new code without tests – Make it socially unacceptable to check-in without tests unless there’s a real reason. (“It’s not testable” is usually an excuse, not a reason)

6: Shared code ownership – It might have been your baby once but it’s time for others to see how ugly it is and help you pretty it up. Nobody “owns” code any more, no matter how much of their creative heart & soul is invested.

7: Zero defects – We still have a crazy defect backlog. Let’s stop the bleeding this year and get it under control. Longer term we’re looking to get down to a stable level of entitlement.

8: Feature Teams – we’re working as product delivery teams and communities of practice right now, which is OK, but we need to get to a point where we can deliver fully working features through the product suite as a cohesive team without handovers.

9: Scrum – teams are all now using Scrum as their overall operating framework, observing the “rituals” etc. We still need to watch and adjust, but the main effort is over; this is now normal operation for the teams.

10: Agile Metrics – We’ve taught the teams and managers how to understand the new data they’re seeing, use it to their advantage, make reasonable forecasts and highlight problems early. Again, this will stay just over the horizon; it’s not going away, but it’s not something we plan to revisit for a while.

Just Enough Design

Reading time ~ 3 minutes

Some years ago at a prior employer I had the luxury of working with a team delivering a large green-field Java & Oracle project. The requirements were complex and the interfaces, APIs and business logic all needed some pretty exotic thinking to make everything work.

Prior to that project we’d delivered plenty of relatively simple work and been through requirements, design, code, unit test, integration, system test, documentation etc many times. We were generally a “pretty good” team.

We hired a new member – a very experienced and hands-on architect. He brought to the team a whole load of knowledge we were looking for, and more…

After being on board about 2 weeks he called a meeting with the entire team. Hauled us into a room and pointed out just how poor we were at proper design. Moreover he took control of the situation, developed a series of design spec templates, guidance and examples, got the team fully ramped up on UML, capturing design decisions, practices, patterns – the works.

Using our new design knowledge and tools, we moved onto the first critical phase of our green-field project in 2 groups.

Group 1 had to get a working proof of concept to the customer in a matter of weeks; group 2 needed to start designing the far more complex second round of features.

For group 1 (a small pilot team of 2), one of the team did about a week’s research, wrote up the basics and hit the ground running (no real design). Group 2 were not allowed to touch a line of code until the designs were complete!

From memory, start to finish; that first phase took about 3 months.

After the initial work was completed, both groups 1 & 2 progressed onto the next round of features based on the design efforts group 2 had completed.

After about 2 weeks we realized we were having to sacrifice one of the team (our feature lead!) almost full-time to maintain the designs. Coding was completed in a total of about 6 weeks – the fastest coding turnaround we’d ever had for something of this scale, and the functionality was far harder than in the first round of work.

After our crash-course in the pain of full-on software design,  our architect reconvened the team to lead a design practices brainstorming session.

“OK, now you know how to do proper software design; of the tools, practices and documents you used, which do you want to keep and which do you want to ditch?”

Our management seemed to have had the foresight to allow our architect this social experiment knowing full-well that the net result would be a major overall team improvement (the same manager also helped us develop successful business cases for major refactoring efforts – a pretty forward thinking guy).

So what did we keep and what was our philosophy?

Philosophy first:

The greatest value in design after the fact is not in what was implemented but why we chose to do it that way (and why not another way).

The second greatest value in design after the fact is for team members (especially new joiners or maintainers) to get a foothold into the codebase and be able to navigate around safely.

With these cornerstones in mind we kept a few things…

1: High level architecture – a verbal or pictorial summary of the general concept and approach, often just a photo of some legible whiteboard sketches – our first foothold.

2: Top level flow – a sequence diagram defining the overall flow of responsibility between actors – our main “ladder” into the codebase.

3: Design decisions and rejections (in a wiki/threaded discussion) – why did we choose to do things, and why did we choose not to do others. Since learning the “why & why not” approach we’ve saved days of ramp-up and maintenance pain on projects.

4: Complex algorithm annotations – for the really gnarly bits. (Avoid this where possible) – draw pictures for these where you can.

5: Public interfaces – as peer- and tech-author-reviewed Javadoc, post-implementation. I like public interfaces – they’re a great long-term commitment to communicate in a given, stable way. In this case they were also a commitment to our customer. Doing a decent job on these saved a world of support pain later (there’s a small illustrative sketch after this list).

6: Unit & functional tests – yes, these are design too!
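To illustrate the public-interfaces point (item 5), here’s a small hypothetical sketch – not the actual project’s API – of the style we aimed for, where the Javadoc carries the “why” and the stability commitment rather than restating the signature:

    /**
     * Looks up customer accounts for the billing subsystem (hypothetical example).
     *
     * Why an interface: callers integrate against this contract, so the
     * implementation can be swapped (cache, remote service, test stub) without
     * breaking them. This is a published API - treat changes as breaking.
     */
    public interface AccountLookup {

        /** Immutable view of an account as exposed to callers. */
        record Account(String reference, String displayName) { }

        /**
         * Thrown when no account matches - an expected business outcome for
         * batch imports, not a programming error, hence a checked exception.
         */
        class AccountNotFoundException extends Exception {
            public AccountNotFoundException(String reference) {
                super("No account with reference " + reference);
            }
        }

        /**
         * Finds an account by its external reference.
         * Why here: this is the one lookup our customer integrations depend on,
         * so it is the part of the contract we commit to keeping stable.
         */
        Account findByReference(String reference) throws AccountNotFoundException;
    }

The exact shape doesn’t matter; what we kept was the habit of recording the why alongside the contract we were committing to.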

That’s it! – we ditched a whole world of class diagram hell and parameter definitions. We could still sketch out basic class diagrams when needed but not the level of depth needed to generate code from a CASE tool. We ditched all the noise and blurb and we made it clear why the product was written and behaved the way it did.

So – give your teams an easy leg-up into your code and then explain why it does things rather than telling people what it should do.