Recovering The Ability to Self-Organize

Reading time ~11 minutes

At the beginning of this year the team I’ve been supporting hit a wall. After a deeply troubled project last year they’d started researching a “new” feature in December with a plan to get rolling in January.

After the holiday break, a couple of weeks into January we had a near-mutiny and hit the reset button.

Things are now back on track, with the team working in a radically different way. Here’s the story.


Early August – Halfway through a deeply troubled “rewrite” project. One of the paired project managers left and I was asked to support the remaining PM.

Mid August – Working with the remaining PM, we visualised the entire scope of the UI on the team’s whiteboard and immediately excluded all dialogs and rarely used parts of the UI from the project – essentially halving the scope of work.

Late August – The second PM moved back to a development role elsewhere and I joined the team full-time. I started digging deeper into the scope and data. After reconciling the stories the team had on the wall with those in the electronic backlog we discovered as much additional scope as had been removed 2 weeks earlier – literally “hiding in the walls”.

Early September – Having reviewed the state of the project I talked to our engineering manager and product owner and said “this project is f***ed, do you want to kill it?”  The response was “No, we believe this is the most important and valuable thing we can be doing right now.”

Late September – After the first attempts at recovery still hadn’t achieved enough I asked the same question again – the reply was the same.

Mid-October – And again. All the while the team are still working away trying to deliver.

Late October – Our Product Manager (not the Product Owner) finally decided the project needed a reset. The forecasts had finally tipped him over the edge and the total cost wasn’t acceptable. (This coincided with our push to get an Alpha release into the hands of users for concrete feedback.)

Mid-November – We shipped an alpha release to validate our new UI technology and started building plans and options for the finished product release.

Mid-November – We sought input from the team on options on how to exit the project gracefully. (there’s an entire article on this experience waiting to be written)

End November – We killed the project and I started building a plan to move on – leaving the team with a new manager and having them successfully working on a new release by the end of January.

December – A general lull. Recovery time. We had Down-tools week, holiday period, some bug fixing and a couple of the team started investigating the next feature. My replacement joined the team and we started on a gentle 2-3 month handover.

January – When the team returned from holidays we decided to start on the “new” feature (actually finishing a feature started 2 years earlier and never finished). One of my team had the task of reviewing current behaviour and delivering a demo to the team. This was eye-opening. The scope that had been implied from PO conversations didn’t address the fact that the feature was barely usable.

Mid-January – In the 3 weeks since this new feature had started (as a “3 month” project), scope had ballooned again to include rewriting a fundamental aspect of how data would be stored.

Mid-January – Team mutiny (during a retrospective). The feedback was honest and painful but, most importantly, it was actionable and it cemented the worries I already had – things were broken again and we needed to hit the reset button a second time. Although he was already on board, this was not the time to hand over to my replacement.

End January – Reset time. Feedback in hand, we had to act fast. I redesigned the entire way work flowed into and through the team, gave some direct and difficult feedback to our new product manager, engaged him in the reset and briefed the team on changes.

Mid February – With the reset well under way and team morale already on the increase I could safely hand ownership over without feeling guilty about the state of the project.

But what went wrong?

There was a catalogue of problems raised by the team. Here are the points they raised…

  • We talk about “quick wins” but never do them
  • Poor control of scope for the last 2 major projects
  • A team of 10 with 4 managers is consistently underperforming another team of 3 with no management – why is that?
  • The project backlog is only technical tasks, not user stories
  • We still don’t know what the focus of the next product version is
  • We’re deeply worried about the quality of the next feature (based on current scope)
  • We’re not bought into the new feature work – we’re repeatedly told users don’t need it and it’s just a checkbox
  • We want to be more reactive to support requests
  • We’re not delivering quickly to users or responding to their feedback
  • We want to feel proud of the product and what we’re delivering
  • The work isn’t structured to be delivered iteratively
  • There’s a complete lack of user perspective on upcoming work
  • Despite 4 managers, we have no leadership.

Brutal – but completely valid.

For the last 6-12 months we’d been entirely focused on helping the team deliver what the business had been asking for. This was a good team. Sure they’d had a lot of technical problems on the previous project but the new one should have been different.

Both projects started with a technical requirement – rewrite X, finish Y.

This built some odd behaviour into the team. They were at the mercy of the backlog. They couldn’t self-organize, they couldn’t deliver iteratively, they couldn’t manage scope, they couldn’t even decide the best solution for our users. Everything was pre-defined.

I hadn’t spotted the problem for months. I’d been so focused on delivery of what we’d been given that I failed to identify the biggest issue was the incoming work itself!

After killing this second project, the next feature on the list was called “Implement Z” – where Z was a half-written product (again).

I now have two “stop” phrases added to my list of dangerous project warning signs:

  • “it’s just…”
  • “3 months”

These are gross generalisation phrases, much like “finish X” or “Implement Z” – they’re the smell of unbounded requirements. Taking them at face value and accepting the work into the team is a recipe for failure. Both failed features started out on the same two flawed premises but beyond that, take another look at the list of issues raised by the team.

We’d let the team down. We were managing poorly, not leading and the features were not user-facing. They might once have been but not by the time they reached the team.

How do things look now?

First we clarified who in the management team was responsible for what. Who was “in charge” from the business, who was “supporting” and who was “managing”.

Then we restructured our view and approach to incoming work into 4 balanced streams with clear constraints, policies and ownership for each.
This is probably best explained using the presentation I ran with the team…

We asked the team during the presentation who would be willing to take on the role of “feature lead” for upcoming features. Every hand in the room went up!

We’re just reaching the 2 month checkpoint and things are significantly better.

  • The fixes and achievable streams are generally running smoothly and shipping frequent product updates.
  • The team have learned what sizes of work and level of understanding do/don’t work well as “achievable” items.
  • The feature lead role and relationship with the business is successfully flushing out real user needs.
  • The analysis/discovery stream took longer than the team were hoping for but development of the first major feature off the back of the research work has just started.
  • The team completed their first user story mapping session to plan a genuinely iterative set of potential releases.
  • The first iteration of a new major feature is (optimistically) planned to finish in 2 sprints.
  • At any point after the initial release we could stop and move on to something else.
  • After the first release the team will have the ability to run UX sessions, gather feedback and adjust plans based on real user responses.
  • The team’s new project manager has been able to take his sabbatical and leave the running of the project to the team for 6 weeks! (with occasional check-ins and support from me)

Better still, Nick (my successor) has been gathering measurable feedback from the team since we started the reset. Here’s the data.


From Architectural Decoupling to a Complete Rewrite

For a lesson in what not to do, here’s the gory detail on the first project that was killed.

The project goal was to “make it significantly easier to change the UI in future” – delivering value to internal stakeholders in the form of efficiency and quality savings.

This was conceived as a “3 month” project back in April but ran into trouble over the summer. The PMs involved at the start of the project had assembled enough project data to establish “This could take anything up to another 1-2 years to complete”.

The project scope had crept from architectural decoupling to an internal technology refresh, and from a technology refresh to a pilot for a new (unfinished) product style guide, a complete UI replacement, implementing a set of untried technologies with a team that had little or no experience of using them, and improving the user experience of the UI along the way.

Essentially they were going for a complete unbounded UI rewrite.

The initial idea was to introduce the new UI as an MVP alongside the old UI with limited functionality and support for end-to-end workflows.

Somewhere in the course of time the “end-to-end” part of this had been lost.

By the time I joined, the implementation approach had become replacing whole tabs of the UI, developing multiple tabs in parallel to improve and validate code reuse and completing these widget-by-widget rather than by workflow.

On the surface, technically this seemed sensible. It reduced rework and kept architectural quality high.

I made things worse by pointing out that an MVP wasn’t viable for a product whose broad set of features was in daily use by over 1,000 customers. Thus it became a “like for like” replacement project – the kind so often doomed by a big bang approach and repeated delays as scope is discovered!

Shipping a new release with significantly cut-down functionality is a big gamble and usually only one you can take if you have something else of significant value to offer to the customer.

We didn’t.

In fact, the most commonly requested feature from our customers – a rewrite of some particularly nasty functionality to support additional use cases and improve existing ones – wasn’t being worked on. We’d actually decided to remove the poorer version of that feature and not replace it until a future release.

In hindsight and from the outside this all seemed a bit nuts.

Before I joined the team, in my oversight role across all projects in the company, I was aware of a nagging worry about the state of the project but I couldn’t put my finger on why. The reports and whiteboards seemed “ok” or “meh” but not fundamentally bad.

This is now another warning sign to us when reviewing project status: month after month of “ok” (not “good”) status across an entire series of checks is a smell (see “Yellow is the new Green”).

I repeatedly asked about any deadlines, due dates and what was going on and received reassurance that everything was under control but there weren’t any real deadlines so everything was still “OK”.

As “Head of Project Management” I have to admit it’s pretty embarrassing not to have tackled this wave of trouble. I and others in the leadership teams saw it coming but trusted the feedback we were seeing. We continued to treat everyone as knowing what needed to be done and as safe to do it (“S4” in Situational Leadership terms) when actually they were in trouble and needed help (“D2” level).

Things came to a head at the beginning of August with the project on the rocks, one PM left and another wanted to go back to a development role.

Determined not to face a “Netscape project” I joined the team to get things back on an even keel. Until now, I’d never had a project I couldn’t salvage quickly – usually in under a couple of weeks. This one was definitely much more of a challenge.

Fortunately the team had a backlog and historic size & velocity data, so we at least had a baseline to work from (though the outgoing PM pointed out that the estimates were wildly inconsistent and hard to trust).

Step 1 – We visualised the entire scope of the UI and discovered some 30+ dialogues and little-used screens that could be postponed. We took a scythe to the scope listed in the backlog based on this visualisation and managed to improve the forecast delivery dates.

Step 2 – I was unhappy that the partial backlog on the wall didn’t offer easy traceability to the full digital backlog so I tagged up all the stories and reconciled the wall with the backlog in a spreadsheet and discovered another 50% of scope hiding in the walls.
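(For the curious, the mechanics of that reconciliation are trivial once every story carries a tag – conceptually it’s just a set difference. Here’s a toy sketch; all the story IDs are hypothetical.)

```python
# Toy reconciliation of the wall against the digital backlog.
# All story IDs are hypothetical examples.
wall = {"UI-001", "UI-002", "UI-007", "UI-010"}        # tags copied from cards on the wall
digital = {"UI-001", "UI-002", "UI-003", "UI-004",
           "UI-007", "UI-010"}                          # tags exported from the tool

print("In the digital backlog but not on the wall:", sorted(digital - wall))
print("On the wall but not in the digital backlog:", sorted(wall - digital))
```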

At this point the project had ballooned beyond all recognition of the original. We’d fallen into the sunk cost trap, and the approach to delivery had removed our ability to deliver iteratively.

Every sprint that passed uncovered more technical problems and more missed scope. Some major performance issues with the new technology tripped the team up for weeks. Even after shutting the project down and taking what was delivered as an Alpha in November, salvaging the most complete parts and getting them shippable involved 3 more months of stabilising the technology – insanity!

Fortunately, the salvaged tech finally paid off. After releasing the main work, the team were able to deliver a series of small enhancements near weekly for the next month.

They’re through the worst now and I’m moving onto another team.

The team did a great job staying sane in spite of the craziness around them and both they and “management” have learned a lot. What we have in place now goes beyond preventing a repeat of all these problems – it resets the whole way they work to focus on customer needs, customer problems and customer value. It gives them space and freedom to self-organise whilst balancing the needs of the business when necessary.

I’m sad it took nearly 9 months to get here but glad we’re finally on the right track.

We’re also using the lessons from here to ensure other projects that threaten a rewrite focus on incrementally delivering end user value by only rewriting areas that directly improve a team’s ability to deliver on the next customer-facing changes and features.

Update – 24th April 2015:

I had a chat with one of the team who reviewed this for me. Other than some tweaks to the early timeline, he mentioned that as of a couple of weeks ago the team have now deleted about 2/3 of the UI code that was developed but never finished. That’s a really brave move given the effort they all put in, but they found it was holding them back in adding new features (using the new UI technology in smaller increments).

He also mentioned that the first new feature to run through the updated team analysis process is flying – a way better start than the last!

They’re still a great team and, by the looks of things, they’ve really turned the corner.

On Story Points and Distributions

Reading time ~6 minutes

If you’ve been reading my posts for a long while, you might remember this curve in relation to fixing bugs – a right-skewed beta distribution. Today I’m resurrecting it for other reasons. First of all, I’ll admit I used to be a bit of an estimation geek. I loved the subject. (Really!)

Last summer I attended my first ever #noEstimates session, at Agile 2014. What I heard made complete sense to me, and a few teams here have stopped estimating (they still measure and forecast throughput), but I still see value in trying to estimate. I’ve seen enough examples over the last decade of work being traded out of scope because a team could weigh its effort against previous work.

I love Arlo Belshee’s concept of Naked Planning – in that if you remove all estimation information and focus purely on what the most valuable and important thing is, you’ll do the most valuable and important thing.

The trouble is, in many commercial product situations, it’s not actually that clear what the most valuable thing really is – and value generally has a trade-off of cost. (Or at least “how much am I willing to spend on this”).

I also like Chris Matts’ Options thinking alternative to estimates – “How much are you willing to spend to get a better understanding of this?”. This ties in with a fundamental part of estimation theory: there is a trade-off (and decreasing value) in spending additional time understanding a problem in order to better estimate the outcome.

Finally, whilst many teams I’ve worked with have used story point estimates for stories and hour estimates for tasks, we don’t go in blind. We have triangulation and reference points from prior work wherever possible.

I used to run half-day training courses on software estimation (and still see that course as valuable – primarily because the theory is transferrable). These days, if I have a team that has never had a real conversation about what estimates are, why we do them and how, I trim that half-day of content down to about 15-30 minutes on the most important bits for story and task estimation.

I’ve been using the same explanations on estimation for about 7 years now and although some of my assertions on why we estimate are finally wavering, the information on how is still useful – and – as far as I know, nobody else has explained it this way.

Back in 2011 I wrote an article on Swimlane Sizing that was subsequently referenced and made popular by Alexey Krivitsky in his Scrum Simulation With Lego Bricks paper.

What I want to share today is how all this hangs together, based on a very simple concept from the 1950s.

The PERT technique came from the Polaris missile project and, in my very simple terms, is essentially a collection of tools and techniques based around probability distributions.

When examining the probability of completing any task there’s a great rule of thumb to start with:

There’s a limit to how well things can go but no limit to how bad they could get!

With this thinking in mind, essentially the completion time and/or effort for a given task can be represented by a probability distribution. A right-skewed beta distribution.

Furthermore, when adding a series of these together, you have a collection of averages. (cue a #noEstimates discussion).
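To make that concrete, here’s a minimal simulation sketch – the 1-10 day range and the Beta(2, 5) shape are illustrative assumptions, not measured data. Individually each task is skewed; the total across many tasks clusters usefully around the sum of the averages.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each task's duration modelled as a right-skewed beta distribution,
# scaled onto an assumed range of 1 (optimistic) to 10 (pessimistic) days.
optimistic, pessimistic = 1.0, 10.0
n_tasks, n_simulations = 30, 10_000

# Beta(2, 5) is right-skewed: most mass near the optimistic end, a long right tail.
samples = rng.beta(2, 5, size=(n_simulations, n_tasks))
durations = optimistic + (pessimistic - optimistic) * samples
project_totals = durations.sum(axis=1)

print(f"Single task: mean {durations.mean():.1f} days, median {np.median(durations):.1f} days")
print(f"{n_tasks}-task total: mean {project_totals.mean():.0f} days, "
      f"90% of runs between {np.percentile(project_totals, 5):.0f} "
      f"and {np.percentile(project_totals, 95):.0f} days")
```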

If you ask someone to estimate how long a bug will take to fix, without historic data you’ll get a “how long is a piece of string” type answer. Everyone remembers the big statistical outliers but if you strip these away, you can forecast quite accurately with about 95% confidence. (This is one of the foundations behind using data for service level agreements in Kanban).

Some items will take longer than average, some will be faster but based on those averages, you can get a “good enough” idea on durations.
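As a sketch of that Kanban-style use of data (the lead times below are invented for illustration), the service level falls straight out of the percentiles:

```python
import numpy as np

# Invented historic lead times in days - note the one big outlier.
lead_times = np.array([2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 12, 14, 21, 45])

print(f"Mean: {lead_times.mean():.1f} days (dragged upward by the 45-day outlier)")
print(f"85th percentile: {np.percentile(lead_times, 85):.0f} days")
print(f"95th percentile: {np.percentile(lead_times, 95):.0f} days")
# An SLA such as "95% of bugs fixed within N days" comes straight
# from that last percentile - no per-item estimate required.
```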

With PERT, this beta distribution is simplified further to 3 points – “optimistic”, “most likely” and “pessimistic”. The math is simple but at least for today’s point, not important.
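(For the curious, that simple math is only a couple of lines – the classic PERT weighted mean and standard deviation. The 1/2/4 numbers are just the weeks example used later in this post.)

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> tuple[float, float]:
    """Classic PERT weighted mean and standard deviation for a 3-point estimate."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

expected, std_dev = pert_estimate(1, 2, 4)  # optimistic 1 week, likely 2, pessimistic 4
print(f"Expected: {expected:.1f} weeks (std dev {std_dev:.1f})")  # ~2.2 weeks, 0.5
```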

Here’s the bits I care about.

[Figure: PERT summary]

Now here’s a neat thing (and the point of this post).

With the introduction of story points, we’ve moved away from the amazing power of 3-point range estimation back to single points.

Once you’re in the realm of single point estimates, people start seeing a falsely implied precision. Unless you’ve actually had training on use of story points (many senior managers probably won’t have done) then you’ll start building all those same human inferences that used to occur with estimates like “It’ll take about 2 weeks”.

When we hear “2 weeks”, we leap to a precise assumption and start making commitments based on it. In range estimation we’d say “It’ll take 1 to 4 weeks”, and in PERT we’d say, “Optimistically it’ll take a week, most likely 2 and pessimistically 4. Therefore we’ll plan on 2 but offer a contingency of up to a further 2 weeks.”

(Of course in reality, wishful hearing means you might still end up with a 1 or 2 week commitment but hopefully the theory is making sense).


It’s also worth examining the size of the gaps between optimistic -> most likely and most-likely -> pessimistic (in particular, the latter of these 2). These offer a powerful window into the relative levels of risk and uncertainty.

By moving back to single point estimates – at least at the single story level – things start feeling a lot like precise and accurate commitments**. Our knee-jerk reaction may then be to avoid providing estimates again.

But here’s the missing link…

Every story point estimate is in fact a REPRESENTATION of a range estimate!

We can take this thinking even further…

The greater a story point estimate, the less precise it is.

Obvious right? Let’s take one more step…

 If a story point estimate is a representation of a range then a larger number implies a WIDER range.

Let’s take that back to the swimlane sizing diagram and (crudely) overlay some beta distributions…


Look carefully at how those ranges fall.

  • Some 2 point stories take as long as a 3 point story.
  • Some 5 point stories may be the size of 3 point stories; in rare cases, others may end up being 13 or even 20 points.
  • And some 13 point stories may go off the scale.

And that’s all entirely acceptable!

On average a 5-point story will take X amount of effort.

That’s enough to forecast and that’s enough to start building commitments and service level agreements around (should you need to).
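If you want to see how that turns into a forecast, here’s a minimal Monte Carlo sketch. The mapping of points to day-ranges is entirely made up for illustration – the shape (wider ranges for bigger numbers) is the point, not the values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical mapping of story points to (optimistic, most likely, pessimistic)
# days of effort - note how the range widens as the points grow.
point_ranges = {2: (1, 2, 4), 3: (1, 3, 6), 5: (2, 5, 13), 8: (3, 8, 21)}
planned_stories = [2, 3, 3, 5, 5, 8]  # the planned release, in story points

def sample_story(points):
    opt, likely, pess = point_ranges[points]
    # Triangular as a cheap stand-in for the right-skewed beta.
    return rng.triangular(opt, likely, pess)

totals = [sum(sample_story(p) for p in planned_stories) for _ in range(10_000)]
print(f"Median forecast: {np.median(totals):.0f} days")
print(f"85% confidence: done within {np.percentile(totals, 85):.0f} days")
```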


Time to start thinking about what distributions your story point estimates really represent.

If you’re getting wild variation, how might you capture some useful (but lightweight) data to help you, your team and your management understand and improve?

You might be in a fortunate place where you simply don’t need to produce estimates at all. That’s great – I’d assert there’s still a lot to learn by simply trying to estimate, even if you don’t use the results for anything. But for the majority of us who need to do at least some sane forecasting, this thinking might just make estimation a bit safer and a bit more scientific again.

If you’re interested in more on this, take a look at the human side of estimation in “Seeing the Value in Task Estimates”.

**As an aside: a while ago the Scrum Guide was updated and replaced “commitment” with “forecast”. That’s a big change and tricky to retrain into those that saw Scrum as the answer to their predictability problems. (Many managers needed guarantees!) For those of you facing continuing problems here, it’s worth gathering and reviewing story data so you can build service level agreements with known levels of confidence as an alternative.

200 Questions to help bridge the Product Owner divide

Reading time ~6 minutes

A year or two before I first started writing publicly, I read an interesting, varied and quite long essay by Jeff Patton on the Product Owner role in Agile development (particularly Scrum).

Whilst there are some areas I agreed with, the discussion on “heroics” was an area that left me cold. In my opinion, great software teams don’t have space for individual heroics and (in my opinion again) the sustainable pace ethos generally also plays down team heroics (but that’s a debate for another time).

There was a comment in Jeff’s article that resonated with me that I made a note of at the time. Years later it’s still highly relevant and I wanted to re-raise it this week in the context of recent challenges I’ve been facing.

“For the product you’re working on right now, how will your company benefit after the product has shipped? If you understand how, do others?”

Although this question was aimed at Product Owners it’s something I’d ask all development team members to really think hard about.

Now go further. Are you solving technical problems and a series of requirements to ship a product release or do you really understand the value of the release you’re working on, its reason for being developed and its impact on specific customers, the market or the company?

How many times have you or your team examined the reason for delivering a release with a specific set of features and understood their aim, questioned the backlog and vision and better still, properly understood what you’re trying to deliver in terms of solving the needs of the user?

When I say “understood”, I mean really understood – not just prettied up to follow one or another user story format.

Consider this an opportunity to pause for thought.

Back in 2013 I attended a Retrospective Facilitators Gathering and was introduced to an amazing concept over lunch by Willem Larsen that he described as “200 Questions”. He explained an idea that comes from animal tracking.

When tracking an animal you explore the environment in a way that an everyday person doesn’t even consider. You look at every twig, leaf and blade of grass and ask how it came to be in its given state (and why) in order to determine whether you’re on the right path.

My translation of this to my own work is to try and develop a genuine and deep curiosity for every item you look at. (Even if you don’t have time to answer all your questions)

At the same lunch, we tried it with a salt cellar:

How many people have handled it since it was made?

Who designed it?

What inspired the design?

What’s it made of?

Where did the salt come from?

What process did the salt go through?

(we lost count of how many questions we asked)

Applying this same healthy curiosity to incoming product features or requests is incredibly powerful.

You know things are wrong when every request coming in is essentially a specific solution being proposed.

And you know how translating feature requests back to user stories in anything except a shopping cart or finance trading application seems to be really poorly explained in the agile community?

Try this as an experiment for getting unstuck…

  • Set yourself a target of asking 200 questions about the next big item coming in to your team.
  • Try different “lenses” for asking the questions (much like 6 thinking hats)
    • The “User” lens
    • The “Investment” lens
    • The “Functional” lens
    • The “Market” lens
  • What questions logically follow from those first “level 0” questions?
  • Decide which of those are important and valuable to actually answer in order to further your understanding towards your goal.
  • Work on answering them – collaboratively.

In no particular order (but with some logical grouping around lenses), and in the very rough form I’ve been actively using in my current team, here are over 50 of the “level 0” questions I’ve started experimenting with.

Feature Info

  • Name – what do we call it?
  • What is it? (2-3 sentence user-facing description *and* 2-3 sentence product description)

User info

  • Describe how a user would do this now? (without new code)
  • What types of users does this help?
  • How does it help them? (what does it enable, unblock, simplify etc.)
  • What is a user trying to actually achieve when they do this?
  • Why?
  • What constraints are they facing at that moment?
  • Who can we talk to about their goals and needs for this? (a selection of people willing to talk to us more)
  • How complex/accessible (to the user) is it?
  • How many/what proportion of users will benefit from this?
  • How frequently might we expect different types of users to use this?
  • How might this detract from the product for other users?

Investment/Product/goal alignment

  • How does this improve the product?
  • How big/expensive is it?
  • How much time do we want to invest in initial investigation?
  • If an initial investigation is inconclusive, what next?
  • How much are we willing to invest in this? (and how flexible is that decision)
  • Is there anything obviously more important/valuable/beneficial to do before this?
  • Would we be fools not to do this in the next 90 days? – why/why not?
  • If this were the only thing we did in the next 30-90 days, is this the right thing to do?
  • Why is this valuable to the team/company?
  • How does it align to current product/company goals?
  • How does it further our journey toward those goals?
  • Would you describe this as a tactical or strategic change?
  • Why?
  • Does this really fit within the product we have already?
  • (how) will it help differentiate the product from the competition?
  • Could this be a new product?
  • Are there any other products it may integrate with or impact? (ours or elsewhere)
  • Does this align more closely to another product than this one?
  • Do any of our competitors already do this? (and how)

Functional info

  • What solution options exist already?
  • Can you give an example of how that would work?
  • Under what situations might that not be suitable/successful?
  • What additional solution options can we think of?
  • What’s the smallest simplest thing that could possibly work?
  • What’s “good enough” to meet customer needs?
  • Is there an 80/20 solution option for this?
  • How can we make this more discoverable, useful and valuable to users?
  • Is there an ingeniously simple solution for users?
  • What would it take to make a user say “wow!” (and is it worth the extra effort?)
  • What is the best solution in terms of cost/functionality/usability/quality balance?
  • What performance and quality criteria do we have for this?
  • What is explicitly out of scope?
  • Are there any cross-cutting or performance risks?
  • Are there any other significant risks to be investigated?
  • What dependencies exist?
  • If we ship an incomplete solution, what is the follow-on plan and forecast cost for completion?

Market info

  • What is the market demand for this?
  • Is there any market or customer sensitivity around timing for this?
  • What marketing is planned or expected for this feature (if any)?
  • How do we expect our competitors to respond to this?
  • For what we’re planning to deliver, what do we expect from our users on social media?
  • How will we launch this feature (alpha/beta/freq update/big bang etc)?
  • How will this impact our position in the market relative to our competitors?
  • What impact are we expecting on sales/renewals by announcing / releasing this feature?
  • What impact are we expecting on our reputation by announcing / releasing this feature?
  • How will we sell / price this?

It’s a lot like writing 10 minute test plans – so quick and so powerful.

Give 200 questions a try on your next big feature request.

Take It Off-Site (Part 1) – The Bad and The Ugly

Reading time ~3 minutes

Offsite meetings have a chequered past. Back in 2006 a colleague shared this painful article with me. To quote from the main article:

“…only about 10 percent of executives consider offsites truly valuable. Half said they aren’t worth the time or money.”

Over the last 7 years I’ve seen the exceptionally high value offsite meetings can bring but also the stress and pain that often leads up to them.

At my last employer, the entire global executive and senior management team would have a 6-monthly operations review offsite that ran for 3 days. We booked out significant portions of a large hotel in the US and flew the entire team together from over a dozen sites around the world. Over 3 days, the leaders from each site would present their statuses, finances, roadmaps, portfolios, visions, strategies and plans to a group of about 50 of us. We’d put them under the grill and review their work in depth, both as bosses and peers.

We’d take time to focus on specific strategic issues that benefitted from face-to-face focused collaboration. (When your management team is distributed around the world, a few intense days of co-location often delivers far more decisions and clarifications than 6 months of project work and conference calls.)

The general approach for these was: Prepare, Pitch, Review, Collaborate, Revise, Re-pitch.

Sounds pretty sensible? – It is… if you know what you’re in for.

The offsite workouts were usually brutal but exceptionally valuable. By the end of the week everyone’s plans were in far better shape; we all understood what each other was doing, found areas for shared benefit and made a lot of difficult decisions.

Better still, we spent a few days working, eating and drinking together. 3 days of deliberate convergence, to stem the behaviour issues associated with extreme divergence, kept us functioning as a successful management team.

Whilst the outcomes were great, they took their toll. The pain behind the scenes generally involved a bunch of us working 2 weeks of long evenings and weekends to pull all the data, presentations and spreadsheets together ($100m of software projects contain a lot of moving parts & data). Often we’d still be revising our work as sessions started, and whoever went first bore the brunt of painful lessons and questions whilst everyone else frantically updated their slide decks to accommodate the new knowledge.

And here’s the thing – the lines of questioning…

Our Senior Exec at that employer was an exceptionally sharp, experienced, bright guy. After working through multiple iterations and having joined the operations team on the organizing side of the offsites I saw what made him tick. He had a brain full of powerful questions that he’d learned from experience and was constantly adding to them. He knew what to ask and when that would lift the lid on just the right barrel to find a body (if there were bodies to be found).

As a spectator, sometimes it’s satisfying seeing the blood on the floor when someone you think is an ass-kisser takes a roasting. It’s not so fun when it’s your own team.

I even helped develop a set of “difficult questions” for our Head of Operations in the weekend prior to one of these offsites. They were great questions to ask. The thought and effort put into developing them was a lot like spending an hour writing test cases up-front. We learned quickly what we really needed and wanted to know (and why). We were confident we could get under the covers of even the most prettied-up project report.

But we could have made the whole thing so much less brutal.

Much like writing test plans, if we’d just put the questions out for participants to review in advance they’d have adjusted their research and reporting to accommodate them rather than playing project question battleship on the day.

So here’s an action for you.

Next time you’re reviewing a project or a piece of work, capture the questions you regularly ask.


Share them.

Share them with the people you’ll be asking beforehand.

Now they can answer those questions sensibly first and you’ll all have time to dig into more valuable insights.

I have a few more posts planned around “critical questioning”, plenty around running offsites and I’ll be sharing where I’ve been for most of the last year so keep your eyes open for more soon.


A Year of Whiteboard Evolution

Reading time ~2 minutes

Back in December last year I started supporting Red Gate’s .NET Developer Tools Division. As of this month, we’ve restructured the company and from next week, the old division will be no more (although the team are still in place in their new home).

When I joined the team, things were going OK but they had the potential to be so much more, so I paired up with Dom, their project manager, and we set to work.

The ANTS Performance Profiler 8.0 project was well under way already and the team had a basic Scrum-like process in place (without retrospectives), a simple whiteboard and a wall for sharing the “big picture”.

I spent the first week on the team simply getting to know everyone, how things worked and observing the board, the standups and the team activities.

We learned some time ago here at Red Gate that when you ask a team to talk you through their whiteboard, they tell the story of their overall process and how it works. Our whiteboards capture a huge amount about what we do and how we do it.

I attempted to document and capture at least some key parts of the journey we’ve had over the year, in which we released over a dozen large product updates across our whole suite of tools. This post is picture heavy with quite limited narrative, but I hope you’ll enjoy the process voyeurism :) If there are any specifics you have questions about, please ask and I’ll expand.

Next time, I think I’ll get a fixed camera and take daily photos!

The end result? Multiple releases of all 5 of our .NET tools including a startup and quality overhaul for our 2 most popular products, support for a bunch of new database platforms, full VS2013 support (before VS2013 was publicly released), Windows 8 and 8.1 compatibility and a huge boost for the morale of the team.  See for yourself if you’re interested!

Of course this is just one aspect of what I’ve been up to. You might notice the time between photos over the summer grew a little. See my last post for more insights into what happens at Red Gate Towers.