Take It Off-Site (Part 1) – The Bad and The Ugly


Offsite meetings have a checkered past. Back in 2006, a colleague shared this painful article with me. To quote from it:

“…only about 10 percent of executives consider offsites truly valuable. Half said they aren’t worth the time or money.”

Over the last 7 years I’ve seen the exceptionally high value offsite meetings can bring, but also the stress and pain that often lead up to them.

At my last employer, the entire global executive and senior management team would have a 6-monthly operations review offsite that ran for 3 days. We booked out significant portions of a large hotel in the US and flew the entire team in from over a dozen sites around the world. Over 3 days, the leaders from each site would present their statuses, finances, roadmaps, portfolios, visions, strategies and plans to a group of about 50 of us. We’d put them under the grill and review their work in depth, both as bosses and peers.

We’d take time to focus on specific strategic issues that benefitted from face-to-face focused collaboration. (When your management team is distributed around the world, a few intense days of co-location often deliver far more decisions and clarifications than 6 months of project work and conference calls.)

The general approach for these was: Prepare, Pitch, Review, Collaborate, Revise, Re-pitch.

Sounds pretty sensible? It is… if you know what you’re in for.

The offsite workouts were usually brutal but exceptionally valuable. By the end of the week everyone’s plans were in far better shape; we all understood what each other was doing, had found areas of shared benefit and had made a lot of difficult decisions.

Better still, we spent a few days working, eating and drinking together. Three days of deliberate convergence in order to stem the behaviour issues associated with extreme divergence kept us functioning as a successful management team.

Whilst the outcomes were great, they took their toll. The pain behind the scenes generally involved a bunch of us working 2 weeks of long evenings and weekends to pull all the data, presentations and spreadsheets together ($100m of software projects contains a lot of moving parts and data). Often we’d still be revising our work as the sessions started, and whoever went first bore the brunt of painful lessons and questions whilst everyone else frantically updated their slide decks to accommodate the new knowledge.

And here’s the thing – the lines of questioning…

Our Senior Exec at that employer was an exceptionally sharp, experienced, bright guy. After working through multiple iterations and joining the operations team on the organizing side of the offsites, I saw what made him tick. He had a brain full of powerful questions that he’d learned from experience and was constantly adding to. He knew what to ask, and when, to lift the lid on just the right barrel and find a body (if there were bodies to be found).

As a spectator, sometimes it’s satisfying seeing the blood on the floor when someone you think is an ass-kisser takes a roasting. It’s not so fun when it’s your own team.

I even helped develop a set of “difficult questions” for our Head of Operations over the weekend prior to one of these offsites. They were great questions to ask. The thought and effort put into developing them was a lot like spending an hour writing test cases up-front. We learned quickly what we really needed and wanted to know (and why). We were confident we could get under the covers of even the most prettied-up project report.

But we could have made the whole thing so much less brutal.

Much like writing test plans, if we’d just put the questions out for participants to review in advance they’d have adjusted their research and reporting to accommodate them rather than playing project question battleship on the day.

So here’s an action for you.

Next time you’re reviewing a project or a piece of work, capture the questions you regularly ask.

Now…

Share them

Share them with the people you’ll be asking beforehand.

Now they can answer those questions sensibly first and you’ll all have time to dig into more valuable insights.

I have a few more posts planned around “critical questioning”, plenty around running offsites, and I’ll be sharing where I’ve been for most of the last year, so keep your eyes open for more soon.

 

A Year of Whiteboard Evolution


Back in December last year I started supporting Red Gate’s .NET Developer Tools Division. As of this month, we’ve restructured the company and from next week, the old division will be no more (although the team are still in place in their new home).

When I joined the team, things were going OK but had the potential to be so much more, so I paired up with Dom, their project manager, and we set to work.

The ANTS Performance Profiler 8.0 project was well under way already and the team had a basic Scrum-like process in place (without retrospectives), a simple whiteboard and a wall for sharing the “big picture”.

I spent the first week on the team simply getting to know everyone, how things worked and observing the board, the standups and the team activities.

We learned some time ago here at Red Gate that when you ask a team to talk you through their whiteboard, they tell the story of their overall process and how it works. Our whiteboards capture a huge amount about what we do and how we do it.

I’ve attempted to document and capture at least some key parts of the journey we’ve had over the year, in which we released over a dozen large product updates across our whole suite of tools. This post is picture-heavy with quite limited narrative, but I hope you’ll enjoy the process voyeurism :) If there are any specifics you have questions about, please ask and I’ll expand.

Next time, I think I’ll get a fixed camera and take daily photos!

The end result? Multiple releases of all 5 of our .NET tools, including a startup and quality overhaul for our 2 most popular products, support for a bunch of new database platforms, full VS2013 support (before VS2013 was publicly released), Windows 8 and 8.1 compatibility, and a huge boost to the team’s morale. See for yourself if you’re interested!

Of course this is just one aspect of what I’ve been up to. You might notice the time between photos over the summer grew a little. See my last post for more insights into what happens at Red Gate Towers.

Kipling and Keogh on Requirements


A short thought today…

From age 11 to 16, the following quote was written on the wall of my school sports hall.

I keep six honest serving-men (They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

It wasn’t credited; nobody spoke about it or referenced it. I never really thought about its source (I wasn’t a reader of much classic literature or poetry, preferring fantasy and comics) and I didn’t really consider its value, but it stuck with me.

Some years later I looked it up.

Here’s the full version:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
I send them over land and sea,
I send them east and west;
But after they have worked for me,
I give them all a rest.

I let them rest from nine till five,
For I am busy then,
As well as breakfast, lunch, and tea,
For they are hungry men.
But different folk have different views;
I know a person small –
She keeps ten million serving-men,
Who get no rest at all!

She sends ’em abroad on her own affairs,
From the second she opens her eyes –
One million Hows, two million Wheres,
And seven million Whys!

 

The next time you’re looking at requirements, use cases, user stories or similar, consider…

Who wants to achieve what, and why?

Where would they normally do this, when is the right time and how would they do so?

Take this a step further. Ask “why?” just once more.

And if you’re still writing user stories as “In order to … I want… so that…”, then take a read of Liz Keogh’s recent post.

Cracking Big Rocks


Bramber castle (side perspective view)

Things have been rather busy over the last couple of months. I’ve joined a new division at my current company and have the basis for another dozen articles slowly developing but that’s just the start. I have some really exciting news!

After over a year of development with the exceptionally talented Johanna Hunt (@joh), through field testing, workshops, paper prototypes, conferences, conversations and peer reviews, I’m very pleased to announce the launch of crackingbigrocks.com.

The Concept

In trying to solve problems of our own, we found challenges that we now identify as “Big Rock” problems – those things that, when faced alone, cause us sleepless nights and illogical stress. The mental load associated with Big Rock problems can be so taxing that every time we go to tackle them we find ourselves procrastinating and avoiding them, or exhausted through trying. A deep breath, a step back, a few pointers, a second brain and some support can help us get back on track.

We’ve faced these problems in both our personal lives and professional careers. For example the series of articles I wrote on “The Oubliette” describes the multiple strategies I used for reducing a major defect backlog.

(I’ve spent the last 2 weeks reusing the oubliette strategies along with a few new ones to help my new team get our quality back under control and keep the motivation to improve high.)

 

Simple Patterns & Coaching Cards

Between us, we’ve taken the “Simple Patterns” concept and our combined experience of solving large, difficult problems and developed a set of over 50 simple problem-solving patterns. In particular, we’ve captured the essence of big mental hurdles and how these can be overcome in ways that have resonated with almost everyone we’ve shared them with. (Of the 150 or so people who’ve explored the concepts so far, only two didn’t find they had “Big Rock” problems of their own.)

We produced a limited first-edition set (only 50 decks) of high-quality coaching cards containing 45 patterns. (Within a week of the box being delivered, with no marketing at all, we were already down to just 35 decks left!)

As of about June 2013, the first edition has sold out; however, the expanded second edition is now available on Amazon.

It turns out the ideas and concepts we’ve captured and the formats we’ve chosen are significantly more popular than we expected. Back in August 2012, after a weekend of hand-trimming and cutting (causing a few injuries and RSI), we produced nearly 40 paper prototype decks with unedited wording and far fewer patterns. We gave all of these away, and still left a few attendees disappointed, after running a hugely successful workshop to a packed-out room of nearly 80 attendees at Agile 2012. A month later we had to produce a few extras for a re-run at Agile Cambridge (and took the opportunity for some edits whilst we were at it).

We delivered our first bulk order of nearly 200 decks for attendees at Agile Cambridge in 2013 and ran another packed-out session. Our good friend Olaf regularly takes a few sets out with him to Play4Agile in Germany.

Even whilst developing and trimming the original prototype decks, we actually used the patterns within the decks to “keep rolling” – the idea of having to manually cut out and collate over 1,500 paper playing cards is quite a “Big Rock” in itself. We measured our throughput rate, adjusted our cutting process, optimised our flow and working practices, banked our results in suitably sized batches and made sure our pace was (mostly) sustainable!

Here’s a quick preview of what’s in the box…

Photo of cracking big rocks cards

The majority of patterns in the deck are unique to us, but there are a couple that are better known (such as the “Rubber Duck” shown above). What’s unique here is the format, approach, style and, most intriguingly, the community we’re aiming to build. We want to share the ideas and experiences everyone has using these cards as coaching and problem-solving tools.

If you’d like to find out more, head on over to crackingbigrocks.com and take a look.

OK. Marketing over – I generally find selling or marketing what we do a little crass so I hope you as readers don’t mind too much!

Stopping The Line


A few weeks ago the company I work for celebrated its 13th birthday. As part of the celebrations we were each given the latest copy of the “BoRG” – the company book. On reading through the pages I found that one of the teams I’m responsible for had received an award.

This isn’t quite as positive as it sounds and I’m pretty sure the incident will become legendary within the company. The lessons learned, what we did afterward and the forward-thinking attitude of our senior management are, however, truly worth celebrating.

The (rough) Story

Over the summer one of our teams was working on some updates to our product deployment tools (we deploy upwards of 50 new releases across our product portfolio every month). Part of the automated process involves uploading a packaged installer for our software to a download location and updating our web site to point to the update.

Due to a mix-up between environments and configurations, one of our internal tests made it into the outside world. The problem was spotted and resolved fast but something was clearly wrong for this to have been possible.

This alone would have been rather embarrassing; however, this was about the sixth or seventh significant incident that had come from our operations teams in as many weeks. We’d recently restructured team ownership of parts of the codebase and were making a large number of significant infrastructure, library, test and build changes to our systems – mostly legacy code (code without sufficient tests). Moreover, we’d added a whole new team onto the codebase with a very different remit in terms of approach and pace, and the volume of churn in the code had increased massively.

This was all during the height of holiday and conference season, so many of us weren’t fully aware of the inner carnage that had been occurring. Repeated handovers from one manager to the next meant we’d not seen the bigger picture.

I returned from Agile 2012 to a couple of mails from my boss (who was now on holiday) filling me in on what had happened, with the (paraphrased) words: “Can I leave this safe in your hands?”

Over the first 2 hours of my return I was briefed by managers and team members on the situation. Everything was sorted and no problems were in the wild, but the team’s credibility had taken a beating.

I’d seen similar things happen in other companies and had always been certain of the right course of action. This was the first time I actually felt safe to lead what I knew was right.

I donned my “Lean” hat and started my nemawashi campaign with our senior managers.

I spoke to each manager individually – they were already well aware of the problems, which made things much easier. I simply said:

“These problems can’t continue; we’re going to ‘stop the line’. All projects are going to stop until we’re confident that we can progress again safely.”

I went a step further and set expectations on timescales.

We’d be stopping development work for nearly 20 staff for at least a week. We’d monitor progress daily, and approval to continue would be on the condition that we were confident problems would not recur.

By lunchtime I had unanimous support. It was described as a “brave” thing to do by our CEO but all agreed it was right.

A side-benefit of Lean is the shared language it provides. In every case when I approached our management team and explained that I wanted to “stop the line” they immediately understood what I meant plus the impact, value and message behind such an action.

Now, of course, you can’t prevent new problems with hindsight, but you can identify patterns of failure and address these. In our case I had a good understanding of what had been going on.

Initially I was strongly against performing a full root-cause analysis. There were half a dozen independent incidents and a strong chance of finger-pointing if we’d gone through them. I was already “pretty sure” where our problems lay: the increased pace had led to a fall in technical discipline, coupled with increased pressure to deliver faster and the lack of a sufficient safety net (insufficient smoke tests).

I divided the group into 3 teams to focus on 3 areas:

  • “before release” – technical practices
  • “at the point of release” – smoke tests
  • “after release” – system monitoring

After an initial briefing and idea workshop, I stepped back and left the 3 teams to deliver.

The technical practices team developed a team “technical charter”. We brought all participants together for a review, revised it and then published it. Individuals have since signed up to follow this charter and we review it regularly to ensure it’s working.

The smoke testing team developed a battery of smoke tests for the most critical customer-facing areas (shopping cart, downloads, etc.). These are live and running daily.
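
To give a feel for the kind of check involved, here’s a minimal sketch of a daily smoke test in Python. The URLs, page names and exit-code convention below are hypothetical stand-ins for illustration, not the team’s actual test suite.

```python
# A minimal sketch of a customer-facing smoke test. The URLs are
# hypothetical placeholders; the real checks covered the shopping cart,
# downloads and other critical pages.
import sys
import urllib.request

CHECKS = {
    "homepage": "https://example.com/",
    "downloads page": "https://example.com/downloads",
}

def check(name, url):
    """Return True if the page responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            ok = response.status == 200
    except Exception as error:
        print(f"FAIL {name}: {error}")
        return False
    print(("PASS" if ok else "FAIL") + f" {name} ({url})")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in CHECKS.items()]
    # A scheduled daily job treats a non-zero exit code as a failed run.
    sys.exit(0 if all(results) else 1)
```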

The monitoring team developed a digital dashboard (that I can still see from my desk every day). This shows the status of the last run of smoke tests (and history), build status, system performance metrics and a series of alerts for key business metrics that would indicate a potential problem with the site – e.g. a tail-off in volume of downloads or invoices.
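
For a feel of how one of those business-metric alerts might work, here’s a minimal sketch of a “tail-off” check, assuming a simple list of daily download counts; the 60% threshold and the sample numbers are made up for illustration, not the dashboard’s real values.

```python
# A minimal sketch of a download tail-off alert. Assumes we already record
# one download count per day; the threshold is an illustrative choice.
from statistics import mean

def tail_off_alert(daily_downloads, threshold=0.6):
    """Alert if the latest day's downloads fall below a fraction of the
    trailing 7-day average."""
    *history, today = daily_downloads[-8:]
    baseline = mean(history)
    return today < threshold * baseline

# e.g. a sudden drop from ~500/day to 180 would trigger the alert
recent = [510, 495, 520, 505, 498, 530, 515, 180]
if tail_off_alert(recent):
    print("ALERT: download volume has tailed off - check the site")
```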

They also implemented some server-side status monitoring and alerts that we subscribe to via email.

Since these have been in place we *have* had a couple more incidents, but in every case we’ve spotted and resolved them early.

Subsequently, a couple of the teams have self-selected to perform a root-cause analysis on a couple of issues. This is exactly the behaviour I love to see: it wasn’t a management push; they simply wanted to ensure we’d pinned things down and done the right thing. Moreover, they published the results to the whole company.

The award…