The Pitfalls of Measuring “Agility”

Reading time ~ 6 minutes

This post expands on one of the experiences I mentioned in "Rapunzel's Ivory Tower".

I presented these lessons and the story at Agile Cambridge back in 2010.  It’s taken nearly 5 years to see the light of day in writing on here. I hope it’s not too late to make a difference.

I and my team hadn't been in our roles long. We'd been given a challenge. Our executives wanted to know "which teams are agile and which aren't" (see the Rapunzel post for more). We managed to re-educate them and gain acceptance of a more detailed measurement approach (they were all Six Sigma certified – these people loved measurement) and I'd been furiously pulling the pieces together so that when we had the time to work face to face we could walk away with something production-ready.

Verging on quitting my job, I asked James Lewis from Thoughtworks for a pint at The Old Spring. I was building a measurement system that asked the right questions, but I couldn't see a path through it that would prevent it being used to penalize and criticise hard-working teams. This was a vital assessment for the company: it clearly defined the roadmap we'd set out, took a baseline measure of where we were and allowed us and the teams to determine where to focus.

My greatest frustration was that many of the areas where teams would score badly were beyond their immediate control – yet I knew senior management would have little time to review anything but the numbers.

The question James left me with was:

“How do you make it safe for teams to send a ‘help’ message to management instead?”

I returned to my desk fuelled by a fresh pair of eyes and a pint of cider. I had it!

At the time many agility assessments had two major flaws.

1 – They only have a positive scale – they mask binary answers, making them look qualitative when they're not.

2 – They assume they’re right – authoritative, “we wrote the assessment, we know the right answers better than you…”

  • What if the scale went up to 11 (metaphorically)? How could teams beat the (measurement) system?
  • And what if 0 wasn't the lowest you could score? What would that mean?

The assessment was built using a combination of a simpler and smaller agility assessment provided to us by Rally, plus the "Scrum Checklist" developed by Henrik Kniberg, the "Nokia Test", the XP "rules" and my own specific experiences around lightweight design that weren't captured by any of these. As we learned more, so we adapted the assessment to bake in our new knowledge. This was 2009/2010, the agile field was moving really fast and we were adding new ideas weekly.

The results were inspired – a 220-question survey covering everything we knew. Radar charts, organizational heat maps, the works.

The final version of the assessment (version 27!) covered 12 categories with an average of about 18 questions to score in each category:

  1. Shared responsibility & accountability
  2. Requirements
  3. Collaboration & communication
  4. Planning, estimation & tracking
  5. Governance & Assurance
  6. Scrum Master
  7. Product Owner
  8. Build and Configuration Management
  9. Testing
  10. Use of tools (in particular Rally)
  11. Stakeholder trust, delivery & commitment
  12. Design

The most valuable part was the scale:

  • -3 We have major systemic and/or organizational impediments preventing this (beyond the team's control)
  • -2 We have impediments that require significant effort/resource/time to address before this will be possible (the team needs support to address)
  • -1 We have minor/moderate impediments that we need to resolve before this is possible (within the team’s control)
  • 0 We don’t consider or do this / this doesn’t happen (either deliberate or not)
  • 1 We sometimes achieve this
  • 2 We usually achieve this
  • 3 We always achieve this (*always*)
  • 4 We have a better alternative (provide details in comments)
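
As a rough illustration (not the real assessment, which I can't share – the categories, questions and scores below are invented), here's how answers on that scale might be rolled up so that the negative scores surface as "help" messages to management rather than disappearing into a category average:

```python
from collections import defaultdict

# Illustrative scores only - the categories and questions here are made up.
# Scale: -3..-1 = impediments, 0 = not done, 1..3 = achieved to some degree,
# 4 = "we have a better alternative".
scores = [
    ("Testing", "Automated regression tests run on every build", 2),
    ("Testing", "Defects are fixed before new work starts", -1),
    ("Requirements", "Stories have clear acceptance criteria", 1),
    ("Governance & Assurance", "Release sign-off is lightweight", -3),
]

by_category = defaultdict(list)
help_needed = []

for category, question, score in scores:
    by_category[category].append(score)
    if score <= -2:
        # -2 and -3 are beyond the team alone: these are the "help" messages
        help_needed.append((category, question, score))

for category, values in by_category.items():
    print(f"{category}: average {sum(values) / len(values):.1f}")

print("\nNeeds management support:")
for category, question, score in help_needed:
    print(f"  [{score}] {category}: {question}")
```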

The assessment was designed as a half-day shared learning experience. For any score less than 3 or 4, we would consider & discuss what should be done and when, what were the priorities, where did the team need support, what could teams drive themselves and what were the impediments. Teams could also highlight any items they disagreed with that should be explored.

Actions were classified as:

  • Important but requires management support / organizational change to achieve
  • Useful, low effort required but requires more change support than low hanging fruit
  • Potential “low hanging fruit”, easy wins, usually a change in practice or communication
  • Important but requires significant sustained effort and support to improve

As a coaching team we completed one entire round of assessments across 14 sites around the globe and many teams then continued to self-assess after the baseline activity.

Our executive team actually did get what they needed – a really clear view of the state of their worldwide agile transformation. It wasn't what they'd originally asked for, but through the journey we'd been able to educate them about the non-binary nature of "being agile".

But the cost, the delays, the iterative approach to developing the assessment, the cultural differences and the sheer scale of work involved weren’t sustainable. An assessment took anything from an hour to two days! We discovered that every question we asked was like a mini lesson in one more subtle aspect of agile.  Fortunately they got quicker after the teams had been through them once.

By the time we’d finished we’d started to see and learn more about the value in Kanban approaches and were applying our prior Lean experience and training rather than simply Scrum & XP + Culture. We’d have to face restructuring the assessment to accommodate even more new knowledge and realized this would never end. Surely that couldn’t be right.

Amongst the lessons from the assessments themselves, the cultural differences were probably my favourite.

  • Teams in the US took the assessment at face-value and good faith and gave an accurate representation of the state of play (I was expecting signs of the “hero” culture to come through but they didn’t materialize).
  • The teams in India were consistently getting higher marks without supporting evidence or outcomes.
  • Teams in England were cynical about the entire thing (the 2-day session was one of the first in England. Every question was turned into a debate).
  • The teams in Scotland consistently marked themselves badly on everything despite being some of our most experienced teams.

In hindsight this is probably a reflection on the level of actual knowledge & experience of each site.

Partway through the baseline assessments, after a great conversation with one of the BA team in Cambridge (who sadly for us has since retired), we added another category – "trust". His point was that all the practices in the world were meaningless without mutual trust, reliability and respect.

It seemed obvious to us, but for one particular site there was so much toxic politics between the business leadership and development – politics that nobody could safely tackle – that he had an entirely valid point. I can't remember if we were ever brave enough to publish the trust results – somewhat telling, perhaps? (Although the root cause "left to pursue other opportunities" in a political power struggle not long before I left.)

Despite all this, the baselining activities worked and we identified a common issue on almost all teams: business engagement.

We were implementing Scrum & XP within a stage-gate process. Historically, the gate at which work was handed over from the business to development was a one-way trip. Product managers would complete their requirements, then move on to market-facing activities and leave the team to deliver. If a team failed to deliver all their requirements it was historically "development's fault" that the business' numbers fell short. We were breaking down that wall, and the increased accountability and interaction was loved by some and loathed by others.

We shifted our focus to the team/business relationship and eventually stopped doing the major assessments. We replaced them with a 10-question per-sprint stakeholder survey where every team member could anonymously provide input and the product manager's view could be overlaid on a graph. This was simpler, focused and much more locally & immediately actionable. It highlighted disconnects in views and enabled collaborative resolution.

Here's the 10-question survey (and, after the questions, a rough sketch of how the team and product manager views might be compared).

Using a scale of -5 to +5, indicate how strongly you agree or disagree with each of the following statements (where -5 is strongly disagree, 0 is neutral and +5 is strongly agree).

SPRINT

  • The iteration had clear agreement on what would be delivered
  • The iteration delivered what was agreed
  • Accepted stories met the agreed definition of done
  • What was delivered is usable by the customer
  • I am proud of what was delivered

RELEASE

  • I am confident that the project will successfully meet the release commitments
  • Technical debt is being kept to a minimum or is being reduced

GENERAL

  • Impediments that cannot be resolved by the team alone are addressed promptly
  • The team and product manager are working well together
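
The value was in comparing views rather than in the absolute numbers. As a rough sketch (the statements are abridged from the survey above; the scores are invented), overlaying the team's anonymous averages against the product manager's answers makes the disconnects jump out:

```python
# Invented example data: average team response vs product manager response
# for each statement, on the -5..+5 scale described above.
statements = [
    "Clear agreement on what would be delivered",
    "Delivered what was agreed",
    "Accepted stories met the definition of done",
    "Delivered work is usable by the customer",
    "Proud of what was delivered",
]
team_avg = [3.2, 2.8, 1.5, -0.5, 2.0]
pm_score = [4.0, 4.0, 3.0, 3.5, 3.0]

# Flag any statement where the two views differ by more than a couple of points -
# those are the conversations worth having at the next opportunity.
THRESHOLD = 2.0
for statement, team, pm in zip(statements, team_avg, pm_score):
    gap = pm - team
    marker = "  <-- disconnect" if abs(gap) > THRESHOLD else ""
    print(f"{statement:<45} team {team:+.1f}  PM {pm:+.1f}{marker}")
```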

If you’re ever inclined to do an “agile assessment” of any type, get a really good understanding of what questions you’re trying to answer and what problems you’re trying to solve. Try to avoid methodology bias, keep it simple and focused and make sure it’s serving the right people in the right ways.

Oh – if you’re after a copy of the assessment, I’m afraid it’s one of the few things I can’t share. Those that attended the Agile Cambridge workshop have a paper copy (and this was approved by the company I was at at the time) but I don’t have the rights to share the full assessment now I’m no longer there. I also feel quite strongly that this type of assessment can be used for bad things – it’s a dangerous tool in the wrong circumstances.

Thanks – as always – for reading.

Seeing the Value in Task Estimates

Reading time ~ 4 minutes

[Image: a list of task estimate sizes with beta curves overlaid]

You might be aware of the ongoing discussions around the #noEstimates movement right now. I have the luxury here of rarely needing to use estimates as commitments to management but I usually (not always) still ask my teams to estimate their tasks.

My consistently positive experiences so far mean I’m unlikely to stop any time soon.

3 weeks ago I joined a new team. I decided I wanted to get back into the commercial side of the business for a while so I’ve joined our Sales Operations team. (Think DevOps but for sales admin, systems, reporting, targeting & metrics).

Fortunately for me, the current manager of the team, who took the role on a month or so earlier, is amazing. She has so much sales domain knowledge, an instinct for what's going on, and she deeply understands what's needed by our customers (the sales teams).

I'd been working with her informally for a while, getting her up to speed on agile project management, so by the time I joined, the team already had a basic whiteboard in place, were having effective daily standups and were tracking tasks.

The big problem with an ops team is balancing strategic and tactical work. Right now the work is all tactical: urgent items come in daily at the cost of important but less urgent work.

We're also facing capacity issues with the team, and much of the work is flowing to a single domain expert who's due to go on leave for a few months this summer – again, a common problem in ops teams.

I observed the movement of tasks on the team board for a week to understand how things were running, spot what was flowing well and what was blocked. As I observed I noted challenges being faced and possible improvements to make. By the end of the week I started implementing a series of near-daily changes – my approach was very similar to that taken in "a year of whiteboard evolution".

Since the start of April we’ve made 17 “tweaks” to the way the team works and have a backlog of nearly 30 more.

Last week we started adding estimates to tasks.

I trained the team on task estimation – it took less than 10 minutes to explain after one of our standups. The technical details on how I teach this are in my post on story points. But there's more than just the technical aspect. (In fact, the technicalities are secondary, to be honest.)

Here’s the human side of task estimation…

  • Tasks are estimated in what I describe as “day fragments” – essentially an effort hours equivalent of story points. These are periods of time “small enough to fit in your head”.
  • The distribution scale for task estimates I recommend is always the same: 0.5, 1, 2, 4, 8, 16, 24 hours (the last 3 being 1, 2 and 3 days) – see the sketch after this list. It's rare to see a task with a "24" on it. This offers the same kind of declining precision we see with Fibonacci-based story point estimates.
  • For the level of accuracy & precision we’re after I recommend spending less than 30 seconds to provide an estimate for any task. (Usually more like 5-10)
  • If you can't provide an estimate then you're missing a task somewhere to understand what's needed.
  • Any task of size 8 or more is probably more than one task.
  • Simply having an estimate on a task makes it easier to start work on – especially if the estimate is small (this is one of the tactics in the Cracking Big Rocks card deck)
  • By having an estimate, you have a better idea of when you'll be done based on other commitments and activities, which means you can manage expectations better.
  • The estimates don’t need to be accurate but the more often you estimate, the better you get at it.
  • When a task is “done”, we re-check the estimate but we only change the number if the result is wildly off. E.g. if a 1 day task takes just an hour or vice versa. And we only do this to learn, understand and improve, not to worry or blame.
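
As promised above, here's a minimal sketch of the mechanics (the buckets are the ones listed; the helper functions are purely illustrative):

```python
# Allowed task estimate buckets, in hours (8/16/24 are 1, 2 and 3 days).
BUCKETS = [0.5, 1, 2, 4, 8, 16, 24]

def snap_to_bucket(raw_hours: float) -> float:
    """Snap a gut-feel estimate to the nearest allowed bucket."""
    return min(BUCKETS, key=lambda b: abs(b - raw_hours))

def needs_splitting(estimate: float) -> bool:
    """Anything estimated at 8 hours or more is probably more than one task."""
    return estimate >= 8

for guess in [0.75, 3, 6, 20]:
    estimate = snap_to_bucket(guess)
    note = " - consider splitting" if needs_splitting(estimate) else ""
    print(f"gut feel {guess}h -> estimate {estimate}h{note}")
```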

So why is this worth doing?

Within a day we were already seeing improvements to our flow of work and after a week we had results to show for it.

  • The majority of tasks fell into the 0.5 or 1 hour buckets – a sign of lots of reactive small items.
  • Tasks with estimates of 8 hours or more (1 day’s effort) were consistently “stuck”.
  • We spotted many small tasks jumping the queue ahead of larger more important items despite not being urgent. (Because they were easier to deliver and well-understood)
  • Vague tasks that had been hanging around for weeks were pulled off the board and replaced with a series of more concrete, smaller actions. (I didn't even have to do any prompting.)
  • Tasks that still couldn’t be estimated spawned 0.5 or 1 hour tasks to figure out what needed to be done.
  • Large blocked items started moving again.
  • Team members were more confident in what could be achieved and when.
  • We can start capacity planning and gathering data for defining service level agreements and planning more strategic work.

I’m not saying you have to estimate tasks but I strongly believe in the benefits they provide internally to a team.

If you’re not doing so already, try a little simple education with your teams and then run an experiment for a while. You can always stop if it’s not working for you.


A quick update – Janne Sinivirta pointed out that “none of the benefits seem to be from estimates, rather about task breakdown and understanding the tasks.”

He's got a good point. This is a key thing for me about task estimation: it quickly highlights what you do & don't understand. The value is at least partially in estimating, not estimates (much like the act of planning vs following a plan). Although by adding the estimates to tasks on the wall we could quickly see patterns in the flow of tasks that were less clear before and act sooner.

As we move from tactical to strategic work I expect we’ll still need those numbers to help inform how much of our time we need to spend on reactive work. (In most teams I’ve worked in it’s historically about 20% but it’s looking like much more than that here so far).

Martin Burns also highlighted that understanding and breaking down tasks is where much of the work lies. The equivalent of that in this team is in recognising what needs investigation and discussion with users and what doesn’t and adding tasks for those items.

Suboptimising Around Data

Reading time ~ 3 minutes

I originally wrote this article back in about 2009/2010 whilst working at a large corporation with a very strong measurement culture.

As more teams and companies are adopting lean concepts, and with the strong influence of the Lean Startup movement (which, to reduce confusion, is not the same as lean), this post feels relevant to finally publish publicly…

“Data is of course important in manufacturing, but I place the greatest emphasis on facts.” – Taiichi Ohno

There's a great lean principle known as "Genchi Genbutsu" – "actual place, actual thing". Generally we interpret this as "Go and See at the source".

When this critical pillar of lean thinking is eroded by a proxy we open ourselves to some painful problems.

Where organizations become too measurement focused, we risk our proxy for Genchi Genbutsu becoming data.

Lean and agile processes both rely on data but these are indicators only. Particularly in agile, there is a strong emphasis on data being used by the team for internal diagnostics – in fact very little agile material talks about data support for external management and there are good reasons for this.

Even where managers are fully aware that data are not the whole story, external or imposed measurement drives strange behaviors.

Talk to any individual or team that is measured on something that managers at one time or another thought was "reasonable" and chances are there will be a range of emotions from bemusement or cynicism to fear. All these responses will drive negative alterations in behavior. Occasionally there are good measures, but they're pretty rare.

You’ll find individuals and teams that are working in the “expected” way will be absolutely fine. But their behavior will now be constrained by the metrics and their capacity to improve will be limited.

Those that are aware of their constraints (and are often limited in what they can actually influence to solve their problems) will at best sub-optimize around their own goals and at worst “game” the system in order to preserve themselves. This is a natural self-preservation response.

I’ve seen the most extreme example of this using game theory in simulated product management (in a session run by Rich Mironov back in 2009).

A team of 4 product managers were each given a personal $1M sales target, a set of delivery resources and a product. Performance against their target was measured on a quarterly basis. In the example, the game was deliberately rigged: it was impossible for all product managers to meet their personal targets with the limited resources they were given. However, it was possible to achieve well above the total $4M target if product managers collaborated and in some cases were actually willing to sacrifice their own products, releases and resources in order to fund cycles on better-performing products.

Data may also only tell you how tools are being used. If a team is constantly inspecting and adapting, I would expect their tool usage to change. It may then not reflect expectations or worse, they may not be able to adapt for fear of damaging “the numbers”.

Here’s a great example of this from (of all places) a hair Salon… http://lssacademy.com/2007/08/05/bad-metrics-in-a-hair-salon/

For a closer to home example try this experiment…

If you’re not already doing so, start measuring story cycle time (from commencement of work to acceptance and ideally delivery).
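
A minimal sketch of that measurement, assuming you can capture a start date and an acceptance date for each story (the stories and dates below are invented):

```python
from datetime import datetime

# Invented example: (story, work started, accepted)
stories = [
    ("Story A", "2010-03-01", "2010-03-05"),
    ("Story B", "2010-03-02", "2010-03-16"),
    ("Story C", "2010-03-08", "2010-03-10"),
]

cycle_times = []
for name, started, accepted in stories:
    # Cycle time in calendar days from commencement of work to acceptance
    days = (datetime.fromisoformat(accepted) - datetime.fromisoformat(started)).days
    cycle_times.append(days)
    print(f"{name}: {days} days")

print(f"average cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")
```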

Now try the following:

  1. Measure cycle times without giving feedback to the team. What are you observing in the data? What can you infer from this? What can’t you see?
  2. Continue measuring but start reporting the numbers to the team and discussing observations. Ask the team what they’re observing, what they can infer and what can’t they see. What would they change?
  3. Ask your teams if they’re willing to report their data to “management”. What responses do you get?
  4. If the teams are willing, start reporting the data to management. Ask them what they're observing and what they can infer. What can't they see? What would they change?
  5. Consider the level of trust in your organization. How would the experiment above change behavior if trust was significantly higher or lower than its current level?

Food for thought.

Stopping The Line

Reading time ~ 4 minutes

A few weeks ago the company I work for celebrated its 13th birthday. As part of the celebrations we were each given the latest copy of the "BoRG" – the company book. On reading through the pages I found that one of the teams I'm responsible for had received an award.

This isn't quite as positive as it sounds and I'm pretty sure the incident will become legendary within the company. The lessons learned, what we did afterward and the forward-thinking attitude of our senior management are, however, truly worth celebrating.

The (rough) Story

Over the summer one of our teams was working on some updates to our product deployment tools (we deploy upwards of 50 new releases across our product portfolio every month). Part of the automated process involves uploading a packaged installer for our software to a download location and updating our web site to point to the update.

Due to a mix-up between environments and configurations, one of our internal tests made it into the outside world. The problem was spotted and resolved fast but something was clearly wrong for this to have been possible.

This alone would have been rather embarrassing, however this was about the 6th or 7th significant incident that had come from our operations teams in as many weeks. We'd recently restructured team ownership of parts of the codebase and were making a large number of significant infrastructure, library, test and build changes to our systems – mostly legacy code (code without sufficient tests). Moreover, we'd added a whole new team onto the codebase with a very different remit in terms of approach and pace, and the volume of churn in the code had massively increased.

This was all during the height of holiday and conference season, so many of us weren't fully aware of the inner carnage that had been occurring. Repeated handovers from one manager to the next meant we'd not seen the bigger picture.

I returned from Agile 2012 to a couple of mails from my boss (who was now on holiday) to fill me in on what had happened, with the (paraphrased) words: "Can I leave this safe in your hands?"

Over the first 2 hours of my return I was briefed by managers and team members on the situation. Everything was sorted, no problems were in the wild but the team’s credibility had taken a beating.

I’d seen similar things happen in other companies and had always been certain of the right course of action. This was the first time I actually felt safe to lead what I knew was right.

I donned my “Lean” hat and started my nemawashi campaign with our senior managers.

I spoke to each manager individually – they were already well aware of the problems, which made things much easier. I simply said:

“These problems can’t continue, we’re going to ‘stop the line’. All projects are going to stop until we’re confident that we can progress again safely.”

I went a step further and set expectations on timescales.

We’d be stopping development work for nearly 20 staff for at least a week. We’d monitor progress daily, and approval to continue would be on the condition that we were confident problems would not reoccur.

By lunchtime I had unanimous support. It was described as a “brave” thing to do by our CEO but all agreed it was right.

A side-benefit of Lean is the shared language it provides. In every case when I approached our management team and explained that I wanted to “stop the line” they immediately understood what I meant plus the impact, value and message behind such an action.

Now of course you can't prevent new problems with hindsight, but you can identify patterns of failure and address these. In our case I had a good understanding of what had been going on.

Initially I was strongly against performing a full root-cause analysis.  There were half a dozen independent incidents and a strong chance of finger-pointing if we’d gone through these. I was already “pretty sure” where our problems lay. The increased pace had led to a fall in technical discipline coupled with an increased pressure to deliver faster and a lack of sufficient safety net (insufficient smoke tests).

I divided the group into 3 teams to focus on 3 areas.

  • “before release” – technical practices
  • “at the point of release” – smoke tests
  • “after release” – system monitoring

After an initial briefing and idea workshop, I stepped back and left the 3 teams to deliver.

The technical practices team developed a team “technical charter”.  We brought all participants together for a review, revised and then published this. Individuals have since signed up to follow this charter and we review it regularly to ensure it’s working.

The smoke testing team developed a battery of smoke tests for the most critical customer-facing areas (shopping cart, downloads etc). These are live and running daily.
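
To give a flavour of how simple these can be (this isn't our actual suite – just an illustrative sketch assuming the Python requests library and made-up URLs):

```python
import requests

# Hypothetical URLs - substitute your own critical customer-facing pages.
CRITICAL_PAGES = [
    "https://www.example.com/downloads",
    "https://www.example.com/store/cart",
]

def smoke_test(url: str) -> bool:
    """Return True if the page responds successfully within a few seconds."""
    try:
        response = requests.get(url, timeout=10)
        return response.status_code == 200
    except requests.RequestException:
        return False

for url in CRITICAL_PAGES:
    status = "OK" if smoke_test(url) else "FAILED"
    print(f"{status}: {url}")
```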

The monitoring team developed a digital dashboard (that I can still see from my desk every day). This shows the status of the last run of smoke tests (and history), build status, system performance metrics and a series of alerts for key business metrics that would indicate a potential problem with the site – e.g. a tail-off in volume of downloads or invoices.
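
The "tail-off" alerts are conceptually simple: compare today's count against a recent baseline and shout if it drops too far. A rough sketch (the numbers and the threshold are invented, not what we actually used):

```python
# Invented daily download counts, oldest first; the last value is "today".
daily_downloads = [410, 395, 402, 388, 420, 405, 190]

baseline = sum(daily_downloads[:-1]) / len(daily_downloads[:-1])
today = daily_downloads[-1]

# Alert if today's volume is less than 60% of the recent average -
# the threshold is a made-up example, not a recommendation.
if today < 0.6 * baseline:
    print(f"ALERT: downloads tailing off ({today} vs ~{baseline:.0f}/day baseline)")
else:
    print(f"OK: {today} downloads vs ~{baseline:.0f}/day baseline")
```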

They also implemented some server-side status monitoring and alerts that we subscribe to via email.

Since these have been in place we *have* had a couple more incidents, but in every case we've spotted and resolved them early.

Subsequently a couple of the teams have self-selected to perform a root-cause analysis on a couple of issues. This is exactly the behaviour I love to see, it’s not a management push, they simply wanted to ensure we’d pinned things down and done the right thing. Moreover, they published the results to the whole company.

The award…


Rapunzel’s Ivory Tower

Reading time ~ 3 minutes

“If the problem exists on the shopfloor then it needs to be understood and solved at the shopfloor” – Wikipedia

Some years ago I took on an agile coaching role at a very large corporation. Like many stereotypical large corporations, they were seen as data-driven, process and documentation-heavy. Management culture was perceived as measurement-focused, command & control and low-trust.

They had a very well established set of Lean practices and managers promoted strong values around empowerment. Despite Lean training for all staff, there was still a very limited “Go See” culture.  Above a certain level it was still traditional management-by-numbers and standardization – mostly by apparent necessity through scale.

James Lewis recognized some of these challenges (though he was perhaps more brutal than my insider view).

At the start of the transformation the leaders wanted to know “who’s agile and who isn’t”.

Disturbing as the thought might seem, their motivation was sound. We’d all put our careers on the line to “go agile” in order to turn around a struggling group. The last thing needed was a disaster project with “Agile” being labelled as the cause of failure.

(Nearly 2 years further down the line, we managed to have at least one project fail early and be recognised as saving over a million dollars).

We developed an extensive “agility assessment” in order to teach all those involved that “being agile” wasn’t a binary question and wasn’t just about Scrum practices.

The measurement system for the assessment acknowledged that whilst there may be “good” answers, there are no “right” answers or “best practices” – teams could actually beat the system. (If there were “one true way” of developing software, the industry would be very dull).

Beyond measurement, the big challenge I and my team faced was the pressure to “operationalize” agile. To develop common standards, procedures, work instructions, measurements and tools worldwide. The Quality Management System (QMS) culture from our manufacturing background meant that interpretation of ISO accreditation needs was incredibly stringent and was required in order to do business with many customers.

Ironically that requirement kept us almost entirely away from the teams delivering software!

Operationalization was what our managers were asking for and it was very difficult to say “no”. Traditional corporate culture defined this as the way things should be.

So from stepping into a role where we expected the gloves to come off, where we could get out of the management bubble and start making a real difference with teams; within a few months my entire team found themselves unwittingly captive in an ivory tower.

We saw it coming and felt powerless to stop it but as permanent employees fresh into our very high-profile roles, those painful home-truths could not be comfortably raised.

I and my team spent that first period doing what was asked of us and helping teams out for the odd few days at a time wherever we could.

Fortunately all was not lost. At the same time, we invested in a highly experienced external group to engage on each of our sites and drive some of the changes we needed to achieve from within the teams.

Was the value I’d hoped to add in my role lost? – Actually no.

The managers got what they wanted – heavily seeded with our own more balanced agile/lean understanding and experience.

We weren’t perfect but made a significant series of improvements. The teams actually delivering products had far more experienced consultants supporting them, who as contractors could take the right risks that permanent staff could not have done at the time.

This 2-tier approach actually gave the delivery teams more air-cover to find their own way whilst we worked on coaching the management.

The teams still had a long way to go but were heading in the right direction and getting progressively better. At the same time, the management team learned that Agile isn't simply a case of running 2 days of Scrum Master training, developing a set of procedural documentation and expecting that everything will show 1,000% improvement.

After the initial bedding-in period, I and my team were able to build up sufficient trust with our leaders that we could set future direction ourselves. The kick-start needed on change within the teams had already been made (far more effectively than we could have achieved alone).

With our leadership trust established, after being holed up in a tower for too long, our coaching team were able to reach the real world again. This time it was entirely within our own control, with the management support we needed and enough credibility remaining with the teams we had interacted with to move forward.

We were free, able to step in, learn more, tune, help out and spend months at a time properly embedded on teams taking them forward – reaching that point of empowerment for our team was a coaching journey in itself.

If you’re in the fortunate position to be an agile coach or in a similar role in a very large or more traditional organization, make sure you recognize that your coaching efforts will often be as much (if not more) necessary in coaching your leaders first.

Transactions for Managing Technical Debt

Reading time ~ 4 minutes

If you’re losing capacity maintaining brittle code or infrastructure, the “litter patrol” in a related neighborhood may be good practice but how do you visualize and manage the true source of your pain?

This is based on a mix of my own experiences and a great nugget of insight picked up from Jim Highsmith at Agile 2010, who in turn credited Israel Gatt.

First the paraphrased nugget from Jim…

“Teams need both debt reduction and debt prevention strategies.”

Here’s the quote from Israel back in December ’09.

“If your company relentlessly pursues growth, the quality/technical debt liability it is likely to incur could easily outweigh the benefits of growth. Consider the upside potential of growth vis-a-vis the downside of the resultant technical debt. When appropriate, monetize technical debt using the technique described in Technical Debt on Your Balance Sheet.”

Here’s my expansion to Israel’s article…

How do we know we have debt?

We “know” it’s there, we feel it. But do the right people know about it? You may be allocating permanent team capacity to keep bailing without seeing the hole.

  • Train all your teams in spotting technical debt
  • Develop and communicate your debt strategy to your teams and stakeholders.
  • Determine a common unit of currency (points, hours, money, NUTs) that your stakeholders can understand and engage with and use this to describe the transaction decisions you’re making.

Communicate current debt – “Balance”

  • This is obvious but hard to achieve. For the debt that hurts; quantify the cost and determine what you could do if it were resolved. Get that opportunity cost shared and get your solution sponsored. In most large companies if you can demonstrate a cost reduction or productivity improvement you can get support.

Communicate new debt – “Recent Transactions”

  • Not the decade-old pile in the corner! Focus on what you’re forced to add because of deadlines and market pressure. Every time you make a debt-related compromise, get that conversation exposed. Determine the maximum acceptable age for any debt being added, a cost (in your selected currency) and a priority. Whilst there are cases where a debt may never need to be repaid, chances are you’ll need to pay the next transaction off.

Limit debt – Set yourself a “Credit Limit”

  • What's the maximum debt your team/project/product is willing and/or able to tolerate? Set a credit limit and stick to it. If we blow our limit, we're in trouble. My personal rule-of-thumb is if it'll take more than one whole sprint for the entire team to clear all new debt then you're worryingly leveraged and heading away from releasable software. This approach means a simple default credit limit for a new project (using story points as currency) is the same as your velocity – simple to calculate and remember. You could convert this into the average hours (and therefore financial impact) a story of that size takes if that means more to your leaders.
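
With story points as the currency, the check really is that simple – a rough sketch with invented numbers:

```python
# Invented example: new debt items raised this sprint, in story points.
new_debt_points = [3, 5, 2, 8]
velocity = 15  # team's average story points per sprint = default credit limit

balance = sum(new_debt_points)
print(f"new debt this sprint: {balance} points (credit limit {velocity})")

if balance > velocity:
    # More than one whole sprint for the team to clear: worryingly leveraged.
    print("OVER LIMIT - stop adding debt and schedule repayment now")
```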

Whatever you do, don’t over-mortgage your product. You don’t want your product portfolio to collapse under unrecoverable bad debt.

Set a maximum age threshold – “Payment Terms”

  • Use topping & tailing approaches to control and ratchet your debt age to keep the span down.  If your debt exists for more than N sprints (say 3) promote it to the top of the priority stack. Fixing new debt when it arrives is hard work and items do slip through. For those that escape, setting a maximum age means you’ll cycle all your debt through and keep it fresh. Even tough problems get resolved rather than rotting in the debt heap.
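
A rough sketch of that ratchet (item names and ages invented):

```python
MAX_AGE_SPRINTS = 3  # "payment terms": pay off anything older than this

# Invented debt items: (description, age in sprints)
debt = [
    ("Flaky deployment script", 1),
    ("No tests around invoicing", 5),
    ("Hard-coded config in installer", 4),
]

# Anything over the age threshold gets promoted to the top of the stack,
# oldest first, so even tough problems cycle through rather than rotting.
overdue = [item for item in debt if item[1] > MAX_AGE_SPRINTS]
for description, age in sorted(overdue, key=lambda item: item[1], reverse=True):
    print(f"promote to top of backlog: {description} ({age} sprints old)")
```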

Visualize and report your debt – provide a “Debt Statement”

  • Put all debt visibly on your wall as cards or stickies, give it a different color, heading, whatever – make it visible. Now every sprint let’s highlight the numbers, the growth, reduction, reiterate your limit. If you’re using an electronic tool, try putting all the debt items under a single parent and tracking cumulative flow and burn-up of debt rolled up to that item.

Prioritize your debt – “Positions”

This is getting a little more advanced and difficult to manage – I’d reserve this for only projects that already have high debt where the simple strategies alone are too invasive.

  • Partition your debt into 3 positions. New:Short-term, New:Long-term, Old:All.

Set a different credit limit and payment terms for each of these, and as part of prioritization set a ratio of effort/pay-off against each that ensures debt is, at a minimum, sustained at a constant level – but focus on cycling through so that the contents remain fresh.

Taking Action

Here are a few other points to explore – mostly around debt reduction.

• What “small wins” exist for you? Are there some simple debts than can be cleared quickly?

• If you sacrificed a team member for 1 sprint to pay off some short-term debt, would you be able to increase the overall team performance in later sprints?

• If a particular single debt item is too big for a team to swallow in a single sprint, can you call in or implement a parallel “SWAT Team” to fix it?

• What capacity do you need to reserve to sustain your current debt level?

• If you were going to reduce your debt burden by 10-20% what effort/capacity would that require and how long would it take?

• Can you demonstrate a return on investment for a particular piece of debt-reduction?

• Worst-case scenario: Could some debt control run “under the radar”? (And if this were discovered, what problems would be caused?)

Wrapping up:

Develop both debt prevention and reduction strategies for your teams, don’t just focus on reduction.

Treat technical debt like real personal or business debt: use “credit limits”, “statements”, “balance”, “transactions” & “payment terms” to your advantage.

To achieve effective reduction, work through the Oubliette strategies and in worst cases, consider a “SWAT Team”.

SMART Goals and The Elephant Test

Reading time ~ 2 minutes

Just under 4 years ago I set myself a goal to "socialize the concept of technical debt" within my organization. I had a strategy but no visible means of measurement. When I'd achieved my goal it was obvious that I'd succeeded, but I had no direct evidence to prove it – and why bother? I succeeded.

Thanks to Luke Morgan (Agile Muze) for the inspiration of the “Elephant Test” – I’d never heard of it until last week.

For years everyone I know has been indoctrinated into using “SMART” goals (as defined in the early 1980s). As a line manager, employee of multiple large corporations and one-time domain expert in learning and performance management systems I too bought into and supported the “SMART” mnemonic.

Here’s a challenge – try running a 5 whys exercise on each of the attributes of SMART.

  • Specific,
  • Measurable
  • Attainable
  • Relevant
  • Timely

I can develop valuable meaningful responses (not excuses) to most of these except measurable.

I’ve spent enough years working for corporations that love to measure to have a very good handle on the values and dangers of measurement. But until my inspiration from Luke, I never had an alternative.

Today I do!

Why measure something when you implicitly know, trust and recognize what you’re looking at?

The Elephant Test “is hard to describe, but instantly recognizable when spotted”.

A big leap in agile management is trust. Trust your teams to do the right thing.

  • If you trust your team to set and accomplish their own reasonable goals, you must also trust their judgement.
  • If you trust their judgement, they must be able to recognize when they’ve achieved a goal.

Measurement is the most brute force way of recognizing something – but not the only way.

Software development and management is a knowledge activity. We tacitly know what’s right and wrong and we openly share that recognition. Occasionally we choose to measure but much of the time we trust our judgement and that of our teams.

So if the team says they saw an Elephant, chances are they saw an elephant.

Or don’t you trust them?

If that Elephant happens to be that your team believes they’ve met their goal and their stakeholders agree, why must that goal be explicitly measurable?

I know when I've made a difference and those around me know when I've done a good job. That team consensus is far more rewarding and more trustworthy than preparing measurable evidence – it's also a lot harder to game, and a lot harder to sub-optimize your behavior around group perception than around numbers.

Next time you're looking at goal setting, don't go overboard on making your goals all measurable. If they can pass the elephant test, that should be more than sufficient.

Try starting out with a clear simple vision, good direction, a suitable time window, a strategy and some commitment to do the right thing and work from there. You’ll know amongst yourselves when elephant-testable goals have been achieved (and delivered in good faith). These may also be some of the most valuable impacts your team can have.