About The Captain

Captain Crom started programming and debugging games from magazines on his brother’s BBC as a small boy in the early 1980s. With early qualifications in both computer science & art and a love of live music it became clear he was destined for bad things. His tyrannical ways commenced with a degree in Computing & Informatics at Plymouth and, from the mid-1990s, a career in the software industry.

After formative years as "The Scourge of the Thames Valley" between Reading and Bracknell, with occasional raids on the San Francisco Bay Area, since 2004 he has been seen sailing stretches of the A10 north and south of the Isle of Ely, with his raids targeted primarily around Cambridge. Sightings have also been rumored as far afield as Scotland, Norway, India, Nevada, Florida and Georgia.

The Captain has served in companies ranging from successful startups and ailing dot-coms to global corporations, spanning roles from IT, consulting, support, development and management through to agile coaching. The common thread in each of his roles is that he has always chosen to join software product groups – usually large-scale enterprise software. His large-scale product and organizational focus differentiates him from the more common textbook agile captains. (Other differentiators include his distinctive hoop earrings and love of spiced rum.)

The Captain’s agile experience started with a blend of FDD and XP in what he describes as "the most disciplined team he had ever served with". He subsequently moved on to using Scrum and XP blended with Theory of Constraints, Kanban and Lean philosophies to improve software delivery techniques in other organizations. He believes every member of a delivery team should spend time with customers supporting the product they produced. “Sitting at the dirty end of a product (or cutlass) completely changes the way you think about business processes and write software for the rest of your softwarefaring career!”

Dark for the last few months?

Reading time ~3 minutes

photo looking up a medieval chimney

There are some bright lights ahead…

Over the last year I’ve changed roles (twice – but still in the same company) so things have been pretty quiet on here as I cram a mountain of new information, ideas, thoughts and challenges into my brain and try to make sense of it in ways that are insightful or useful to anyone that reads these posts.

I picked up and then put down development of my text adventure after a few months – I learned more than enough to meet my stated goals when I started but re-discovered how addictive coding something you really care about can be. I got to the point where I was head-down coding 6+ hours a day on top of a full time job. This was stealing time away from learning more important and valuable (but probably less fun) things for me and not leaving me enough head space for much else. (It was also mentally exhausting doing both)

As some context, last spring I moved on from my Head of Project Management role here to work with our Sales Operations team for a couple of months. This was a combination of me wanting to get my head back into the commercial side of the software business and a pull from the team to have some support and a tune-up around their capacity, flow of work and goals.

I’ll write a short post on this as it was useful for both me and the team but in a nutshell it was just under 3 months of applying lean, agile & theory of constraints principles to an ops team and setting them up for future success.

Since the Summer I’ve had a new long-term challenge. It’s a far cry from leading agile teams but still uses my broader industry skills & experience.

I’m now heading up Product Design here.

So far I’m loving the move. It’s full of new challenges. I’ve picked up support for our Technical Communications, User Experience and Product Management communities, bootstrapped a major product design initiative and (as always) I’m interviewing like crazy for new recruits for all these roles.

I’m now working directly for our CTO as part of a team of 6 amazingly talented people covering our innovation, strategy, engineering and product design capabilities. We’ve a load of work to cover and our roles intersect enough that we can collaborate on some bigger things affecting the way we operate.

At the moment we’re in the middle of rebooting our communities of practice, a portfolio-wide strategic review, a push toward design-led product thinking, building a strong learning and development foundation for our product organization and significant market and innovation research.

After feeling that much of the new focus in agile (around scaling) was in contexts that no longer applied to me, it’s good to be back into researching and studying in depth around strategy, design and innovation and finally feeling these ideas all coming together. I’m thoroughly enjoying the break from running software teams directly and working hard on my influencing, sharing and leading again.

So… Time & motivation-permitting I’ll start posting a few new lessons and experiences over the coming weeks and months. They’ll still have relevance to agile and software but will be much more focused on the business end of things; product design, strategy, recruitment, organizational capabilities and innovation.

Thanks for sticking with me this far.



Shy? Try Exercising Your Asking Muscles

Reading time ~4 minutes

Some Friday morning amateur social philosophy.

One of my current team members is as shy as I am. We both agree that “we’d rather not deal with people”. It’s somewhat funny that even from the first day of my degree I had to work in teams and interact with external stakeholders. My ability to do so is my strength (but my dirty secret is that it’s also my biggest fear).

I’ve read and re-read loads of stuff on being an introvert, on being shy and assorted other musings for the “socially challenged”. I’m not socially awkward – I might be a bit of an over-sharer, but once I’m up and running I can be the life and soul of the corner of the kitchen at a party. I enjoy sharing and I enjoy interacting with people, but I’m like a dynamo – I need some winding up to reach that state.

I think my favorite term comes from Doc List who describes himself as an “Ambivert”. I think his observations closely define me as well.

Most people that I interact with in public or at work would never believe I’m shy. It’s one of the joys of the agile/conference scene – you know most people are just like you. They wear a mantle of bravery and a mask of confidence but after a week of conferences will happily spend the weekend recovering under the covers. Over the years those people become friends and the interactions become easier.

I gain short-term energy from talking to people but I’m exhausted afterwards. I find nothing more draining than leading planning sessions, demos and retrospectives but most people will never see it. In some camps that counts as introverted, in others extroverted – Let’s stick with ambivert.

Anyway. A challenge with being shy/introverted or even an ambivert is asking. Asking for help or information, or asking someone to do something you know is their responsibility – or even their pleasure – to do for you.

As a project manager or leader this is a bit of a crazy situation. Every day I need to ask my team or external stakeholders for an assortment of things. The emotional effort needed to ask varies from easy to terrifying, descending into territory often addressed with Cognitive Behavior Therapy:

  • You believe it’s unreasonable to ask busy people for more of their time to meet needs you think are yours.
  • You feel (or have been told) you should ask someone for help or input.
  • You’re trying to make a difference, you know other people care but you don’t know how to approach the subject.
  • The list of doubts, “shoulds” and “oughts” goes on.

From my experiences I’ve discovered the ability to ask is like any other mental muscle. If you use it regularly it gets better. If you don’t use it, it wastes away.

Unfortunately that shy moment freezes you up. It means you’re more inclined to not ask until it’s critical so you end up making life harder for yourself and the person you need to ask. All the while you’re procrastinating you’re expending mental cycles on stress and doing nothing.

But there’s more to it than that.

Each person you need to ask something of is a different muscle.

When I first started working as a PM in our DevOps group a few years ago I was new to the company, I had few existing working relationships and a number of key stakeholders I needed to learn to interact with.

How did I build up my asking muscles?

I reviewed my working relationships in the past to understand what I found easy and what was more difficult.  I found – even with my own team – that after about 3 useful interactions in a short period of time things became significantly easier so I decided to try an experiment.

There’s a neat idea in software development known as the “rule of three” – once you’ve needed to use the same thing 3 times then it’s probably time to make it generic and standardize it.

I bought a small (A3-sized) whiteboard and placed it on my desk. On the board I drew a grid. Each horizontal lane represented a person I needed to learn to interact with. I wrote each name in the left-hand column. Each column after that was a placeholder where I’d mark an X every time I had an asking or collaborative interaction with that person. My goal was to get 3 X’s in each row over the space of about a month.
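The whiteboard is simple enough to sketch as a tiny tracker – a hypothetical illustration of the idea (the names and helper functions are mine, the real board was just pen and ink):

```python
# A minimal sketch of the "asking muscles" whiteboard:
# one row per person, one X per asking or collaborative interaction.
TARGET = 3  # roughly 3 useful interactions before asking gets easier

# Hypothetical names standing in for the real stakeholders.
board = {name: 0 for name in ["Alice", "Bob", "Carol"]}

def record_interaction(name):
    """Mark another X in that person's row."""
    board[name] += 1

def still_needs_practice():
    """People with fewer than TARGET X's in their row."""
    return [name for name, xs in board.items() if xs < TARGET]

record_interaction("Alice")
record_interaction("Alice")
print(still_needs_practice())  # Alice still has only 2 X's
```

The value, of course, was never in the data structure – it was in the visible prompt on the desk nudging me to engineer one more low-stakes interaction with anyone short of three X’s.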

It worked – when there were only one or two X’s I’d figure out how I could build another interaction with that person that didn’t have to involve asking for anything significant.

One of those people came over to my desk and spotted my board – they asked what the “X’s” next to their name were. I explained and they immediately empathized. I wasn’t alone. That conversation counted as yet another X – It became easier to ask!

The tricky thing is once you’ve built those muscles up you need to keep them running.

If you neglect your asking skills for any time you need to rebuild them again (sometimes the second time is easier). If like me your projects and roles change a lot over the years you’ll probably find that by the time you’ve just got comfortable with everyone you need to ask that the world changes around you and you need to start again.

This is a good thing – it keeps us exercising our asking muscles.

Who do you want to ask something of today?

Go talk to them.

Have a great weekend


The Pitfalls of Measuring “Agility”

Reading time ~7 minutes

This post expands on one of the experiences I mentioned in “Rapunzel’s Ivory Tower”.

I presented these lessons and the story at Agile Cambridge back in 2010.  It’s taken nearly 5 years to see the light of day in writing on here. I hope it’s not too late to make a difference.

My team and I hadn’t been in our roles long. We’d been given a challenge. Our executives wanted to know “which teams are agile and which aren’t” (see the Rapunzel post for more). We managed to re-educate them and gain acceptance of a more detailed measurement approach (they were all Six Sigma certified – these people loved measurement) and I’d been furiously pulling the pieces together so that when we had the time to work face to face we could walk away with something production ready.

Verging on quitting my job I asked James Lewis from Thoughtworks for a pint at The Old Spring. I was building a measurement system that was asking the right questions but there was no way I could see a path through it that would prevent it being used to penalize and criticise hard-working teams. This was a vital assessment for the company. It defined clearly the roadmap we’d set out, took a baseline measure of where we were and allowed us and teams to determine where to focus.

My greatest frustration was that many of the areas teams would score badly were beyond their immediate control – yet I knew senior management would have little time to review anything but the numbers.

James’ question he left me with was:

“How do you make it safe for teams to send a ‘help’ message to management instead?”

I returned to my desk fuelled by a fresh pair of eyes and a pint of cider. I had it!

At the time many agility assessments had two major flaws.

1 – They only had a positive scale – they masked binary answers, making them look qualitative when they weren’t.

2 – They assumed they were right – authoritative: “we wrote the assessment, we know the right answers better than you…”

  • What if the scale went up to 11 (metaphorically)? How could teams beat the (measurement) system?
  • And what if 0 wasn’t the lowest you could score? What would that mean?

The assessment was built using a combination of a simpler and smaller agility assessment provided to us by Rally plus the “Scrum Checklist” developed by Henrik Kniberg, the “Nokia Test”, the XP “rules” and my own specific experiences around lightweight design that weren’t captured by any of these. As we learned more, we adapted the assessment to bake in our new knowledge. This was 2009/2010, the agile field was moving really fast and we were adding new ideas weekly.

The results were inspired – a 220 question survey covering everything we knew. Radar charts, organizational heat maps, the works.

The final version of the assessment (version 27!) covered 12 categories with an average of about 18 questions to score in each category:

  1. Shared responsibility & accountability
  2. Requirements
  3. Collaboration & communication
  4. Planning, estimation & tracking
  5. Governance & Assurance
  6. Scrum Master
  7. Product Owner
  8. Build and Configuration Management
  9. Testing
  10. Use of tools (in particular Rally)
  11. Stakeholder trust, delivery & commitment
  12. Design

The most valuable part was the scale:

  • -3 We have major systemic and/or organizational impediments preventing this (beyond the team’s control)
  • -2 We have impediments that require significant effort/resource/time to address before this will be possible (the team needs support to address)
  • -1 We have minor/moderate impediments that we need to resolve before this is possible (within the team’s control)
  • 0 We don’t consider or do this / this doesn’t happen (either deliberate or not)
  • 1 We sometimes achieve this
  • 2 We usually achieve this
  • 3 We always achieve this (*always*)
  • 4 We have a better alternative (provide details in comments)

a radar chart from excel showing each category from the assessment

The assessment was designed as a half-day shared learning experience. For any score less than 3 or 4, we would consider & discuss what should be done and when, what were the priorities, where did the team need support, what could teams drive themselves and what were the impediments. Teams could also highlight any items they disagreed with that should be explored.

Actions were classified as:

  • Important but requires management support / organizational change to achieve
  • Useful, low effort required but requires more change support than low hanging fruit
  • Potential “low hanging fruit”, easy wins, usually a change in practice or communication
  • Important but requires significant sustained effort and support to improve

As a coaching team we completed one entire round of assessments across 14 sites around the globe and many teams then continued to self-assess after the baseline activity.

Our executive team actually did get what they needed – a really clear view on the state of their worldwide agile transformation. It wasn’t what they’d originally asked for but through the journey we’d been able to educate them about the non-binary nature of “being agile”.

But the cost, the delays, the iterative approach to developing the assessment, the cultural differences and the sheer scale of work involved weren’t sustainable. An assessment took anything from an hour to two days! We discovered that every question we asked was like a mini lesson in one more subtle aspect of agile.  Fortunately they got quicker after the teams had been through them once.

By the time we’d finished we’d started to see and learn more about the value in Kanban approaches and were applying our prior Lean experience and training rather than simply Scrum & XP + Culture. We’d have to face restructuring the assessment to accommodate even more new knowledge and realized this would never end. Surely that couldn’t be right.

Amongst the lessons from the assessments themselves, the cultural differences were probably my favourite.

  • Teams in the US took the assessment at face-value and good faith and gave an accurate representation of the state of play (I was expecting signs of the “hero” culture to come through but they didn’t materialize).
  • The teams in India were consistently getting higher marks without supporting evidence or outcomes.
  • Teams in England were cynical about the entire thing (the 2-day session was one of the first in England. Every question was turned into a debate).
  • The teams in Scotland consistently marked themselves badly on everything despite being some of our most experienced teams.

In hindsight this is probably a reflection on the level of actual knowledge & experience of each site.

Partway through the baseline assessments, after a great conversation with one of the BA team in Cambridge (who sadly for us has since retired), we added another category – “trust”. His point was that all the practices in the world were meaningless without mutual trust, reliability and respect.

It seemed obvious to us, but he had an entirely valid point: at one particular site there was so much toxic politics between business leadership and development that nobody could safely tackle it. I can’t remember if we were ever brave enough to publish the trust results – somewhat telling perhaps? (Although the root cause “left to pursue other opportunities” in a political power struggle not long before I left.)

Despite all this the baselining activities worked and we identified a common issue on almost all teams. Business engagement.

We were implementing Scrum & XP within a stage-gate process. Historically the gate at which work was handed over from the business to development was a one-way trip. Product managers would complete their requirements and then move on to market-facing activities and leave the team to deliver. If a team failed to deliver all their requirements it was historically “development’s fault” that the business’ numbers fell short. We were breaking down that wall and the increased accountability and interaction was loved by some and loathed by others.

We shifted our focus to the team/business relationship and eventually stopped doing the major assessments. We replaced them with a 10 question per-sprint stakeholder survey where every team member could anonymously provide input and the product manager’s view could be overlaid on a graph. This was simpler, focused and much more locally & immediately actionable. It highlighted disconnects in views and enabled collaborative resolution.

Here’s the 10 question survey.

Using a scale of -5 to +5, indicate how strongly you agree or disagree with each of the following statements (where -5 is strongly disagree, 0 is neutral and +5 is strongly agree):


  • The iteration had clear agreement on what would be delivered
  • The iteration delivered what was agreed
  • Accepted stories met the agreed definition of done
  • What was delivered is usable by the customer
  • I am proud of what was delivered


  • I am confident that the project will successfully meet the release commitments
  • Technical debt is being kept to a minimum or is being reduced


  • Impediments that cannot be resolved by the team alone are addressed promptly
  • The team and product manager are working well together

If you’re ever inclined to do an “agile assessment” of any type, get a really good understanding of what questions you’re trying to answer and what problems you’re trying to solve. Try to avoid methodology bias, keep it simple and focused and make sure it’s serving the right people in the right ways.

Oh – if you’re after a copy of the assessment, I’m afraid it’s one of the few things I can’t share. Those that attended the Agile Cambridge workshop have a paper copy (and this was approved by the company I was at at the time) but I don’t have the rights to share the full assessment now I’m no longer there. I also feel quite strongly that this type of assessment can be used for bad things – it’s a dangerous tool in the wrong circumstances.

Thanks – as always – for reading.

Seeing the Value in Task Estimates

Reading time ~5 minutes

a list of task estimate sizes with beta curves overlaid

You might be aware of the ongoing discussions around the #noEstimates movement right now. I have the luxury here of rarely needing to use estimates as commitments to management but I usually (not always) still ask my teams to estimate their tasks.

My consistently positive experiences so far mean I’m unlikely to stop any time soon.

3 weeks ago I joined a new team. I decided I wanted to get back into the commercial side of the business for a while so I’ve joined our Sales Operations team. (Think DevOps but for sales admin, systems, reporting, targeting & metrics).

Fortunately for me the current manager of the team who took the role on a month or so earlier is amazing. She has so much sales domain knowledge, an instinct for what’s going on and deeply understands what’s needed by our customers (the sales teams).

I’d been working with her informally for a while getting her up to speed on agile project management, so by the time I joined, the team already had a basic whiteboard in place, were having effective daily standups and were tracking tasks.

The big problem with an ops team is balancing strategic and tactical work. Right now the work is all tactical; urgent items come in daily at the cost of important but less urgent work.

We’re also facing capacity issues with the team, and much of the work is flowing to a single domain expert who’s due to go on leave for a few months this Summer – again a common problem in ops teams.

I observed the movement of tasks on the team board for a week to understand how things were running, spot what was flowing well and what was blocked. As I observed I noted challenges being faced and possible improvements to make. By the end of the week I started implementing a series of near-daily changes – My approach was very similar to that taken in “a year of whiteboard evolution“.

Since the start of April we’ve made 17 “tweaks” to the way the team works and have a backlog of nearly 30 more.

Last week we started adding estimates to tasks.

I trained the team on task estimation – it took less than 10 minutes to explain after one of our standups. The technical details on how I teach this are in my post on story points. But there’s more than just the technical aspect. (In fact the technicalities are secondary to be honest)

Here’s the human side of task estimation…

  • Tasks are estimated in what I describe as “day fragments” – essentially an effort hours equivalent of story points. These are periods of time “small enough to fit in your head”.
  • The distribution scale for task estimates I recommend is always the same. 0.5, 1, 2, 4, 8, 16, 24 hours. (the last 3 are 1, 2 and 3 days) – It’s rare to see a task with a “24” on it. This offers the same kind of declining precision we see with Fibonacci-based story point estimates.
  • For the level of accuracy & precision we’re after I recommend spending less than 30 seconds to provide an estimate for any task. (Usually more like 5-10)
  • If you can’t provide an estimate then you’re missing a task somewhere on understanding what’s needed.
  • Any task of size 8 or more is probably more than one task.
  • Simply having an estimate on a task makes it easier to start work on – especially if the estimate is small (this is one of the tactics in the Cracking Big Rocks card deck)
  • By having an estimate, you have a better idea of when you’ll be done based on other commitments and activities, this means you can manage expectations better.
  • The estimates don’t need to be accurate but the more often you estimate, the better you get at it.
  • When a task is “done”, we re-check the estimate but we only change the number if the result is wildly off. E.g. if a 1 day task takes just an hour or vice versa. And we only do this to learn, understand and improve, not to worry or blame.
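The scale and the “size 8 or more is probably more than one task” rule above can be sketched in a few lines (the helper names are my reconstruction for illustration, not the team’s actual tooling):

```python
import bisect

# The recommended task-estimate scale, in effort hours
# (the last three values are 1, 2 and 3 days).
SCALE = [0.5, 1, 2, 4, 8, 16, 24]
SPLIT_THRESHOLD = 8  # tasks this size or bigger are probably more than one task

def to_bucket(raw_hours):
    """Round a raw guess up to the next value on the scale."""
    i = bisect.bisect_left(SCALE, raw_hours)
    return SCALE[min(i, len(SCALE) - 1)]

def should_split(estimate):
    return estimate >= SPLIT_THRESHOLD

est = to_bucket(3)  # a "roughly three hours" guess becomes 4
print(est, should_split(est))
```

Rounding up rather than to the nearest value mirrors the declining precision of the scale: the point is a fast, good-enough bucket in 5–10 seconds, not an accurate forecast.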

So why is this worth doing?

Within a day we were already seeing improvements to our flow of work and after a week we had results to show for it.

  • The majority of tasks fell into the 0.5 or 1 hour buckets – a sign of lots of reactive small items.
  • Tasks with estimates of 8 hours or more (1 day’s effort) were consistently “stuck”.
  • We spotted many small tasks jumping the queue ahead of larger more important items despite not being urgent. (Because they were easier to deliver and well-understood)
  • Vague tasks that had been hanging around for weeks were pulled off the board and replaced with a series of smaller, more concrete actions. (I didn’t even have to do any prompting)
  • Tasks that still couldn’t be estimated spawned 0.5 or 1 hour tasks to figure out what needed to be done.
  • Large blocked items started moving again.
  • Team members were more confident in what could be achieved and when.
  • We can start capacity planning and gathering data for defining service level agreements and planning more strategic work.

I’m not saying you have to estimate tasks but I strongly believe in the benefits they provide internally to a team.

If you’re not doing so already, try a little simple education with your teams and then run an experiment for a while. You can always stop if it’s not working for you.



A quick update – Janne Sinivirta pointed out that “none of the benefits seem to be from estimates, rather about task breakdown and understanding the tasks.”

He’s got a good point. This is a key thing for me about task estimation. It highlights quickly what you do & don’t understand. The value is at least partially in estimating, not estimates. (Much like the act of planning vs following a plan). Although by adding the estimates to tasks on the wall we could quickly see patterns in flow of tasks that were less clear before and act sooner.

As we move from tactical to strategic work I expect we’ll still need those numbers to help inform how much of our time we need to spend on reactive work. (In most teams I’ve worked in it’s historically about 20% but it’s looking like much more than that here so far).

Martin Burns also highlighted that understanding and breaking down tasks is where much of the work lies. The equivalent of that in this team is in recognising what needs investigation and discussion with users and what doesn’t and adding tasks for those items.

Ship Early – Why New Software Sucks

Reading time ~4 minutes

I’ve been working in software product development companies for nearly 20 years.

Until 4 years ago I’d always been involved in “enterprise” software.

You know – the monolithic systems with great reporting capabilities that sell well to managers and (at least historically) poor UX for the real users. Those same products that promise the moon during slick demos, require 6-12 month sales cycles where complex pricing structures are worked through and year-long implementations are agreed.

I’ve helped set up demos, I’ve watched amazing presales engineers work all night to rewrite chunks of an application to demonstrate to a prospect that it’ll meet their unique requests.

And then during implementation you discover some of it just doesn’t work.

I implemented one company’s products internally. Dogfooding from the day V1.0 was released. In 6 months I raised over 50 showstopper bugs (almost all were found by their first real customer not long after us). On a visit to HQ in San Francisco, after a fair bit of wine at a great Italian restaurant in Burlingame (the House of Garlic if I remember correctly) I challenged the then VP of Development for that application on why they shipped blatantly unfinished software.

His answer was simple, logical and – for a young, inexperienced graduate analyst programmer – my first window into the commercial realities of product development. He said:

“Market timing”

“If we released when the software was actually finished, we’d be beaten to market by our competitors.”

“It’s acceptable business practice because it takes 6 months to sell and we can’t start selling until the product is released and in our price books.”

“Even after the sale it takes months to implement so by the time users are ready to go live we’ve fixed all the major issues because you guys are dogfooding it for us”.

It made a whole lot of sense but it wasn’t something they ever told us on the project!

The thing is, everyone else is on the same bandwagon and the escalation games start rolling.

Products are brought to market earlier and earlier in their maturity with the knowledge that “nobody trusts a v1.0 version of a product anyway.” It continues today. Many large banks won’t touch an x.0 version of a product until the first major maintenance release is shipped.

My experience was the same. In most companies I worked for, when a new release of the DB platform shipped we’d plan on adopting it sometime after the first 6 months out in the wild, once we knew it was stable.

The game has moved on. It’s no longer just products that take a year to sell and implement so our exposure to early releases is increasing and in general so is the entire industry tolerance (there are obvious exceptions in safety-critical systems).

In my time at Oracle in the late ’90s I saw them try to change this world. They had a vision known as “Gray Iron”. It seemed brilliant – for business, development teams and customers. The idea was that based on a playbook of “best practice” business processes a customer could buy a server entirely preconfigured and working out of the box to support their entire financial, ERP and CRM set of processes. They could simplify the pain of users, ensure quality was high and radically reduce implementation times. Obviously this also offers a great potential competitive advantage.

Sadly it didn’t take off. Competitors were still playing the “ship early” escalation game so we had to play too – the poor system fed itself.

The surge of change brought about by Eric Ries’ book The Lean Startup has fanned the flames of this mentality. Ship early, get customer feedback and adjust.

The sad thing is this used to be only in the enterprise market but the world has turned. Consumer electronic devices now face the same market timing escalation issues as this article from December on Forbes highlights.

It’s now common for us to expect our phones to need a reboot or crash and even for consumer software to have significant glitches.

It’s great for the pace of innovation but spare a thought for the end users and customer support teams.

It’s a commercial reality we can’t avoid but let’s make sure we’re not sacrificing the end user experience. If you must use your customers as lab-rats, let them opt in or out of your experiments. Let them decide if they want “bleeding edge”, “new” or “stable” and honor their wishes.

If your product is even remotely valid you probably have enough early adopters willing to validate the bleeding edge without sacrificing laggards on the altar of possible futures.

What’s your “ship early” policy like?