Where’s My Tools?

Reading time ~ 3 minutes
The most effective tool in the box

Most of the really big successes and game-changers I’ve seen in my career have been initiated by individuals who saw a problem and fixed it themselves rather than waiting for others.

If you need something done that isn’t already happening, nobody else is going to do it for you.

Here are four lessons learned from adopting TDD…

It takes time & effort to get TDD into place

You will have to sacrifice “new code” time in the short term. In my experience, time is probably the biggest barrier to successfully introducing TDD on most teams (closely followed by motivation & accountability). Seeing the pay-off on the other side is notoriously hard when you’re not there yet – either starting out or sitting at the bottom of the change curve.

If you’re coaching teams, sometimes you might just need to say “Trust me, I know what I’m doing”

“It can’t be unit-tested”

When you or someone else makes this statement they probably mean one or more of the following:

  • “I don’t believe I have the time to make this testable”
  • “Nobody has provided the infrastructure/harness/test data for me”
  • “I don’t know how to unit test this” (and need some help to figure it out)
  • “We have some architectural impediment to resolve in order to achieve this”

From these statements, which do you think is least common?

Break this cycle by encouraging accountability for making code testable. 

This doesn’t just mean the product code; it includes taking ownership of your testing tools and abilities as well.

If you’re struggling…

There are two options for you: “abdicate” or “own”.

Option 1 – Abdicate responsibility.

  •  You’ll set the tone for your whole team. The next time someone hits the same problem, the cycle will repeat.
  •  At best, someone better than you may eventually address the issue. (And I hope they make you feel mediocre or inferior).
  •  At worst you may find them saying that you didn’t provide the solution for them.

The “best” case is highly unlikely to happen during your current project – maybe even the one after.  Just think how much testing debt can build up over the life of a single project. We should be avoiding that, right?

Someone has to break the cycle.  If you’re facing pain, why aren’t you responsible for fixing it?

Option 2 – Take ownership of the problem yourself

Great! You’ve recognized that your accountability is part of the problem.

Nobody else will do this stuff for you!

  • Establish a personal strategy for implementing TDD – how are YOU going to solve the problem?
  • The teams and individuals I’ve seen succeed with TDD have all got there by taking personal responsibility for doing so.

Just Do It

Here are some suggested steps to get going – you can do the first few alone or as part of a team.

(If there’s an accountability gap in your group you’ll probably need to start alone)

Round 1

1. Get a unit test harness and environment in place – there’s plenty online but write one yourself for starters if you have to! (There’s a minimal sketch after these steps.)

2. If you have a database, break that dependency and channel it through something replaceable.

3. Write your first round of tests and see what hurts – now resolve those pains.

Limit the time spent on this first cycle and determine whether you should ask for time in advance or for forgiveness later – it’ll depend on your local context.
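
To make steps 1 and 2 concrete, here’s a minimal sketch in Python of a hand-rolled harness and a replaceable database seam. Everything in it (OrderStore, get_order_total, the prices) is invented for illustration – swap in your own product code, and once you outgrow the harness, move to a real framework.

```python
# A minimal, hypothetical sketch: a hand-rolled test harness plus a
# replaceable database seam. All names here are invented for illustration.

class OrderStore:
    """The seam: production code talks to this interface, not the database."""
    def price_of(self, sku):
        raise NotImplementedError

class FakeOrderStore(OrderStore):
    """In-memory stand-in so tests never touch the real database."""
    def __init__(self, prices):
        self._prices = prices

    def price_of(self, sku):
        return self._prices[sku]

def get_order_total(store, skus):
    # Production logic depends only on the OrderStore seam.
    return sum(store.price_of(sku) for sku in skus)

def test_total_sums_item_prices():
    store = FakeOrderStore({"apple": 3, "pear": 2})
    assert get_order_total(store, ["apple", "pear", "apple"]) == 8

if __name__ == "__main__":
    # Poor man's harness: find and run every function named test_*.
    import sys, traceback
    failures = 0
    for name, fn in list(globals().items()):
        if name.startswith("test_") and callable(fn):
            try:
                fn()
                print("PASS", name)
            except Exception:
                failures += 1
                print("FAIL", name)
                traceback.print_exc()
    sys.exit(1 if failures else 0)
```

Writing the first tests and seeing what hurts (step 3) tells you where the next seam needs to go.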

Round 2

4. Educate & support your team on what you’ve provided so far.

5. Get other team members to write their own round of unit tests (treat it “as an experiment” if there are concerns over the effort required). See what hurts and resolve those pains.

6. Encourage and develop shared long-term ownership of the test harness, test data, stubs and mocks (if used).

Remember you’re setting the tone. If someone needs something that isn’t available, pair up, help them provide it and share the results with the team. Don’t leave them alone but don’t do it for them either.

Round 3

7. Repeat round 2 during development until writing tests is fast, easy and natural. (Don’t give up too soon – this is hard work and takes time!)

8. Capture any “wins” and “losses” during this cycle – what benefits are you seeing and what new pains are coming through?

9. Review the wins & losses. Decide what to do about the losses and how to promote the wins up the management chain and across the team. For example, either publicise them as you go or use them as fuel for your retrospectives.

Think about the positive impact that little bit of extra effort and ownership has on you and your team in leading by example – doing a great job that others will want to share, not just a good job.

Abdication or accountability is a personal choice but it affects everyone around you. What are you doing to help your team?

The Joy of Peer Reviews (Part 2 – Documentation)

Reading time ~ 4 minutes

Another agile management myth to dispel – agile projects have no documentation…

The level of documentation you provide depends on the needs of your stakeholders.

About a month ago I spent some time discussing code reviews and promised a follow-up on document reviews.  Since then, the number of documents I’ve needed to work on or review has been pretty slim so there’s not been much scope for inspiration.

One of my teams is coming to the end of a major release and our product managers are re-evaluating parts of the portfolio. Both of these events drop us into some more traditional documentation areas for a while.

Before I progress, there are three types of documentation we tend to deal with on a project.

  • Product documentation (user-facing)
  • Control documentation (mostly management-facing)
  • Technical documentation (mostly team-facing)

We produce a lot of information in all these areas but most of it isn’t in the form of formal documents.

Here are the bits we treat formally…

  • Vision, business needs & minimum marketable feature set
  • High level plan (priorities, scope, cost, schedule, quality)
  • Significant, complex or high level designs (functional or technical)
  • End-user and admin guides
  • Significant scope changes (change control)
  • Closure (agreement from stakeholders that we’ve met commitments)

Before going any further, the main rule of thumb I follow in documentation is:

Strive to keep your overheads to a minimum

Focus on the level of documentation needed by the relevant stakeholders and seek alternatives where feasible. For example user guides & help files may be better replaced with automated drive-throughs, screencams and audio descriptions. Project reviews may be best served with a conversation, a single slide with some visual anchors and a sign-off.

My teams have had cases where we felt we needed more documentation because we weren’t happy with the level of traceability from our less-formal approaches.  This tends to vary depending on what levels of organizational trust exist and how much protection you may need.

Where you do need formal documentation – regardless of which of the above types it falls into – here are my basic document delivery and review guidelines…

Delivery

It’s remarkable how much good documentation is very like code.  My mantra is “deliver early and often”.

  • Start by delivering the scaffold, walking skeleton or spine of your document and get that out for initial feedback on structure.
  • Consider swarming your team around parts of the document to accelerate delivery.
  • If you have interaction across teams, ideally have the teams write their components for you – do the editorial but they cover the “meat”.
  • Balance time invested with expected value – especially before initial reviews
  • Start adding flesh to the bones one section at a time.
  • As each section is “good enough”, send it out for a draft review.
  • Following review, rework the meat and polish
  • As major themes come together for completion, review again for cohesion.

I find there’s one large challenge in writing documents – Inertia – don’t underestimate the effort needed both to get moving and to be properly finished.

Most experienced coders understand getting into “the zone” or being in “flow”. The same holds true for writing documents; however, I personally find it’s much harder to start moving on a document, and it takes more sustained effort and more polish to complete than my code. Once you are in flow, stopping is equally hard. Much as delivering small, frequent, reviewable coding tasks takes practice, the same applies to documentation. The team approach to documentation is a great way around this once you have a scaffold in place.

Review

If producing documentation is like cutting high quality code, then it should be no surprise that reviewing documentation should be treated with similar respect.

Bear in mind that when you’re reviewing documentation you’re usually at a point in the cycle where asking the right questions can prevent major defects and costs later. Reviewing documents is not a hurdle to overcome; it’s an essential quality step in setting your projects up for success and in ensuring we have consensus and understanding on results.

I’ve divided the checklist into 3 parts: “Thinking”, “Approach” and “Review”.

Thinking

A few considerations to make before starting the review itself (and before submitting for review)

  • Discipline – treat this review just like you would a code peer review.
  • Should it be a document at all – could it be something else?
  • Who is the intended audience and is it pitched at the right level?
  • Is this technical, control or product documentation (and how does that impact our approach)?
  • Is this primarily for management, the team or our customers/users?
  • Is this document transient or permanent?
  • How critical is accuracy/precision? (compare for example to scientific or medical papers)
  • How will this document be maintained and who by?

Approach

What form is the review going to take?

  • Round table vs offline – will the review be done by individuals at their desks, as a round-table session, or a mix of the two?
  • Piecemeal or big bang – I mentioned my preference for part-deliveries. This only works if you’re willing to perform part-reviews.
  • Feedback cycle – how will you manage feedback? For large critical documents I usually forecast 2 rounds of review/rework and a total elapsed time of 2 weeks from the initial draft to completion. (that’s assuming a well-managed but not top priority review cycle). The author/coordinator should have this at the top of their stack until it’s “done done”.

Review

Points to look for – these start simple and get trickier as you go through the list.

It’s disturbing how many people only review using a subset of the top items here and get no further. Lazy reviewers may read a document without actually reading it. (Although perhaps this is a reflection that we’ve missed something in the perceived value of a document)

  • Document information and headers/footers are correct
  • Consistent formatting & numbering e.g. page numbers, typefaces, sizes
  • Spelling & grammar
  • Tables of contents & figures are up to date (Ctrl+A, F9 to update all fields – in Word!)
  • Cross-references work and point to the right locations
  • Related documents or prerequisites are both accessible and necessary
  • Use of correct templates where required
  • Proper heading formatting – helps with maintenance of TOC
  • Use of change tracking and versioning
  • Identify reviewers, owners, approvers
  • _________________________________________
  • Lots of words when a picture would be better
  • Emotive descriptions (where not valid)
  • Unfounded facts and numbers (particularly sales, estimates and time lines)
  • Impacts on other teams (especially without consultation)
  • Expectations from other teams (especially without consultation)
  • Over-optimistic statements (see unfounded facts and numbers)
  • Unbounded requirements
  • Missing assumptions
  • Alternatives and rejections
  • Over-technical or not detailed enough for audience
  • Length/brevity (much like some of my posts)
  • Emotional attachment to content or single approach/ideas
  • Any prior examples of similar documents to compare with, reference and/or improve on
  • Voice of the customer (and their future accessibility)
  • Who are the decision makers and arbitrators?
  • A clear problem statement
  • Well-defined vision
  • SMART or Elephant goals
  • What does “good”, “done” or “success” look like?
  • Quality, acceptance criteria and tolerances
  • Any disconnects between related documents or teams
  • Anything missing

Phew! – as before, I’m sure there’s more.

To summarize in one sentence…

If your documentation is important to your stakeholders, treat it with the same reverence as your production code.

Escaping the Oubliette (Part 4) – The Litter Patrol

Reading time ~ 2 minutes

As promised in my last installment on oubliettes…

Your team might not be fully ready for the merciless refactoring encouraged by some agile approaches, but this will help you keep heading in the right direction whilst balancing delivery against refactoring.

The cost of change has a (roughly) exponential relationship to debt. I’ve seen first-hand how high-debt systems become almost impossible to change and it’s not pretty.

In a debt-ridden system we are eventually faced with a choice: refactor or replace. Eventually even newly replaced systems build up debt and the refactor/replace choice returns. Craig Larman & Bas Vodde’s most recent book covers the debt/cost relationship brilliantly in the section on “legacy code”. They also describe the oubliette strategy of “Do No Harm” – or as I call it, “The Litter Patrol”.

This is a particularly powerful debt management approach as it’s both a prevention and reduction strategy.

Here’s the basic concept…

When working with an area of legacy code, you’re working in a particular “neighbourhood”. If that neighbourhood is untidy, your care and attention to it is diminished. Much like the “broken windows” principle: once the damage seal is broken, neglect and further damage follow and overall code quality deteriorates rapidly.

So (without going overboard), every time you’re working in a particular neighbourhood, what can you do to clean up a few pieces of litter?

Not the run-down shopping district between lines 904 and 2897 but more the abandoned classic car between lines 857 and 892 or the overflowing trashcan on the corner between 780 and 804.

If you introduce a litter patrol in your teams and encourage a hygiene cycle with every change to your code, your debt load and future cost of change will rapidly reduce for the areas you hit most frequently.

Unfortunately, although this is easier than a complete refactoring of a poorly designed class hierarchy or monolithic god-class, to perform the litter patrol safely you need good unit tests or small functional tests for that neighbourhood, and ideally some refactoring support (I like to call these your gloves, garbage bag and litter picker).

This challenge doesn’t mean don’t do it. In fact, if you don’t have tests already, maybe your next patrol isn’t changing the code at all but writing just a couple of small, independent tests to demonstrate how that area is expected to behave. (You might even need to make some tweaks to make it testable.)
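
For example, a first patrol might add nothing more than a characterization test: assert what the code does today, not what you wish it did. Here’s a hedged sketch assuming pytest; legacy_pricing, quote() and the captured values are all invented:

```python
# Hypothetical sketch - legacy_pricing and quote() are invented names.
import pytest
import legacy_pricing  # the untidy neighbourhood, assumed importable

def test_quote_matches_current_behaviour():
    # The expected value was captured by running the code once and recording
    # the output - we pin today's behaviour before changing anything.
    assert legacy_pricing.quote(units=12, region="EU") == 148.20

def test_quote_rejects_negative_units():
    # Pin the odd edge cases too; decide later whether they're bugs to fix.
    with pytest.raises(ValueError):
        legacy_pricing.quote(units=-1, region="EU")
```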

If it’s hard, don’t avoid the problem, focus on making life easier one bite or constraint at a time.

Every time we successfully deliver a small clean-up task, the future cost of change to that area is reduced and our incentive to keep it clean is improved.

Look out for part 5 – sponsorship – coming soon.

Escaping the Oubliette (Part 3) – Bug Blitz

Reading time ~ 3 minutes

Every product team I’ve ever worked in had a bug blitz at some point and often one every year or two.

There’s no arguing that a decent bug blitz is a powerful way of getting the numbers down, and clearing all the bugs in a good, solid drive feels good, but the need for one is caused by a build-up from somewhere.

If you find you’re needing a bug blitz on every release, take a look at your defect and debt prevention activities and make sure you’ve got some topping & tailing practices going on.

Usually bug blitzes are performed at the end of a project but if you’ve not done so before, try having a 4-8 week blitz at the beginning instead. You’ll run quicker afterward and (if you keep things under control), you won’t have to worry about having time to mop up at the end.

Once you’ve got the numbers down, set your maximum defect threshold (or ratchet) at this level for the remainder of the project and keep this new lower level sustained throughout development.

What’s good about a bug blitz?

A blitz is particularly useful if you can focus on areas of the product you’ll be working with soon. It’ll get your team working together (particularly if they’re newly formed) and familiar with these areas before all the major work starts.

Couple this up with developing some decent automated tests in those areas as part of the defect fixing and you’ll be developing a much safer scaffold for your new work and reduce regression risks during your next release.

You could take things further and perform some refactoring but I suggest keeping different types of activities separate at this point and just stick to straight bugs. If you have a specific functional area needing a real overhaul, it doesn’t fit the bug blitz mold. I’ll cover this aspect  in more depth when I talk about “sponsorship” for debt reduction.

I use bug blitz approaches when training up new staff. I review the defect backlog for a particularly grubby functional area and have a pair of staff take custody of it as caretakers. We pipeline the defects so that they can start with some simple introductory ones (usually low severity, noise or cosmetic stuff) and once we get confident in these we expand out into adjacent areas – it usually takes a month or two for them to really get warmed up.

The bug blitz is also a great opportunity to start new release development with a clean slate. It means no mixing types of work during feature development and gives you an opportunity to scrub the grime out and get a few new scaffolding tests in place.

What about the down-side?

First: they cost time and money. If you dedicate a team (or most of a team) to a blitz you’re not delivering anything new. This is obvious but important. What’s the impact of a month’s delay to your next release (assuming you can ship with those bugs in there)? And what will that delay do to your stakeholder relationships?

Be mindful of the impact a bug blitz can have on your customers. A high level of churn on existing functionality can be really dangerous. You might need to make a point of jumping through a few hoops to retain backwards compatibility.

Don’t be too hasty to refactor if customers are expecting certain things. If you don’t have decent automated regression tests, you really need to ensure you write at least a couple that hit the same area before you fix any defects. (I usually expect my developers to deliver at least 2 or 3 new automated tests with each bug fix).
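
As a hedged sketch of what “a couple of tests that hit the same area” might look like (the module, function and bug number are all invented): the first test reproduces the reported defect, the second pins the neighbouring behaviour customers already rely on.

```python
# Hypothetical example - invoicing, apply_discount and bug 4711 are invented.
import pytest
from invoicing import apply_discount  # assumed production entry point

def test_bug_4711_zero_quantity_order_does_not_crash():
    # Reproduces the reported defect; this test failed before the fix.
    assert apply_discount(total=0.0, quantity=0) == 0.0

def test_bug_4711_discount_unchanged_for_normal_orders():
    # Pins the existing behaviour adjacent to the fix (backwards compatibility).
    assert apply_discount(total=100.0, quantity=10) == pytest.approx(90.0)
```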

Worse still, I’ve seen a batch of fixes in a functional area radically change behavior “for the better” according to the developers who raised the original internal defects – who then discovered that customers were actually depending on the existing behavior or had built their business processes around the issues.

Sometimes, even if you think it’s ugly, it might be that way for a good reason.

With enterprise software, you’re often looking at heavily customized implementations, some of which have taken years and millions of dollars to evolve.  Smacking these with a mountain of churn on existing functionality can be painful and expensive for your customers. Whilst it makes your numbers look good, consider whether some things should really be touched.

In summary: bug blitzes are a great way of starting with a cleaner work area and ramping up teams, but beware of backwards compatibility, customer impact, time and cost pitfalls.

Read part 4 – “the litter patrol”.

Building a Case for Change

Reading time ~ 3 minutes

I recently led a short evening session discussing strategies for collective code ownership. We opened with a set of patterns and practices but the team highlighted something painfully obvious that I’d overlooked…

What’s the business value and driver behind collective code ownership? What’s in it for your product owners and customers?

So often we focus on the people barriers and technical challenges to collective code ownership without recognizing that the barriers may be organizational or financial.

This challenge happens when you attach yourself to “the right thing to do” and seek to progress without recognizing that the population you’re likely to be working with will fall across a spectrum from complete support, through “don’t care” to disagreement and dissent.

If you want to pursue a strategy for collective code ownership, step back and work through the following questions (some of these are hard – consider starting by brainstorming with a small supportive team):

  • If we just got on and did it, would we be successful?
    • Yes – Great, you’re working with a team and business that “gets” this or trusts you to do the right thing. You could stop reading here, but you might want to skim this to see what ideas & thoughts are triggered.
    • No – OK, you’re part of the more normal population (for now) read on for some ideas. You might not need to cover all of the following (and if you do, it might slow you down)

Still reading? – Good!

The following apply to pretty much any change or investment program but answering them is surprisingly hard. Your context will probably differ from mine so I’ve not given all the answers today.

  • What are the top 1-3 benefits of pursuing this?
  • What are the top 1-3 costs of pursuing this?
  • Are there people outside the team that I’m going to need to “sell” to?
  • Are there people inside the team that I’ll need to sell to?
  • What are the risks, pitfalls and political or personal agendas?
  • What do we sacrifice – what’s the opportunity cost?
  • Who are my supporters and how can they help me?
  • Who’s on the fence, do they need to be involved and how can they help/hinder?
  • Who will fight against this and how big a problem is that really?
  • Do I ask for permission and support or press ahead and ask for forgiveness later?
  • Who will this impact and in what ways (positive and negative)?  **detail below

**How about those stakeholders?

For the set of stakeholder questions I suggest you’ll achieve better flow and find natural breaks if you cover positive and negative aspects for each role by “switching hats” as you work through rather than batching up all the positives first.

  • What’s in it for you and what’s the down side?
  • What’s in it for your peers and what’s the down side?
  • What’s in it for your boss and what’s the down side?
  • What’s in it for your team and what’s the down side?
  • What’s in it for your business and what’s the down side?
  • What’s in it for your users and what’s the down side?
  • What’s in it for your customers and what’s the down side?
  • What’s in it for a product manager/owner and what’s the down side?
  • What’s in it for a project/program manager and what’s the down side?
  • What’s in it for an individual team member and what’s the down side?

Wow! That’s a lot of questions.

As I said – you might not need to answer all of these and you really don’t want to have to write them all down but it’s worth spending a few minutes or hours considering these if you’re planning to invest your time, effort and credibility in something you believe in.

Beyond everything here, I also recommend a read of Fearless Change to help you on your way.

Patterns For Collective Code Ownership

Reading time ~ 5 minutes

Following the somewhat schizophrenic challenges of dealing with the people issues of collective code ownership, here we focus on the practices and practical aspects. How do we technically achieve collective ownership within teams?

We’re using the “simple pattern” approach again. For each pattern we have a suitable “anchor name”, a brief description and nothing more. This should be plenty to get moving but feel free to expand on these and provide feedback.

Here’s the basic set of patterns to consider:

Code Caretaker

Let’s make it a bit less personal: encourage the team to understand what the caretaker role for code entails. It’s not a single person’s code – in fact the company paid us to write it for them. Code does still need occasional care and feeding – enter the “Caretaker”. Be wary though that caretakers may become owners.

Apprenticeship

Have your expert spend time teaching & explaining. An apprentice learns by doing – the old-fashioned way under the tutelage of a master craftsman guiding them full-time. This is time and effort-intensive for the pair involved but (unless you have a personality or performance problem) is a sure-fire way to get the knowledge shared.

If you need to expand to multiple team members in parallel,  skip forward to feature lead & tour of duty.

Ping-Pong Pairing

Try pair programming with an expert and a novice together. Develop tests, then code, then swap places. One person writes tests, the other codes. Depending on the confidence of the learner, they may focus more on writing the tests and understanding the code rather than coding solutions. Unequal pairing is quite tricky, so monitor this carefully; your expert will need support in how to share, teach & coach.

Bug Squishing

Best with large backlogs of low-severity defects to start with. Cluster them into functional areas. Set a time-box to “learn by fixing” – delivery is not measured by volume fixed but by level of understanding (demonstrated by a high first-time pass rate on peer reviews). If someone needs help they learn to ask (or try, then ask) rather than hand over something half-finished. This approach works with individuals having peer review support and successfully scales to multiple individuals operating on different areas in parallel.

A Third Pair of Eyes (Secondary Peer Review)

With either an “initiate” (new learner) or an experienced developer working on the coding, have both a learner and an expert participate in peer reviews on changes coming through – to learn and provide a safety net respectively. The newer developer should be encouraged to provide input and ask questions but has another expert as “backstop” for anything they might miss.

Sightseeing / Guided Tour

Run a guided walk-through for the team – typically a short round-table session. Tours are either led by the current SME (subject matter expert) or by a new learner (initiate). In the case of a new learner, the SME may wish to review their understanding first before encouraging them to lead a walk-through themselves.

Often an expert may fail to identify and share pieces of important but implicit information (tacit knowledge) whilst someone new to the code may spot these and emphasize different items or ask questions that an expert would not. Consider having your initiate lead a session with the expert as a backstop.

Feature Leads

The feature lead approach is one of my favourites. I’ve successfully acted as a feature lead on many areas with teams of all levels of experience where I needed to ramp up a large group on a functional area together.

By introducing a larger team, your expert will not have the capacity to remain hands-on and support the team at the same time. They must lead the team in a hands-off way, providing review and subject/domain expertise only. This also addresses the risk, seen with “apprenticeship”, of simply trading one single point of failure for a “new” one.

This is also a potential approach when an expert or prior owner steps in too frequently to undo or overwrite rather than coach other members’ activities. By ensuring they don’t have the bandwidth to get too involved you may be able to encourage some backing-off, but use this approach with care in these situations to avoid personal flare-ups.

Cold Turkey

For extreme cases where “you can’t possibly survive” without a single point of failure, try cold turkey. Force yourself to work without them.

I read an article some years ago, (unfortunately I can’t track it down now) where the author explained how when he joined a project as manager, the leaders told him he must have “Dave” on the project as nobody else knew what Dave did. He spent a week of sleepless nights trying to figure out how to get Dave off the project. After removing Dave from the project, the team were forced to learn the bottleneck area themselves and delivered successfully.

(Apologies to any “Dave”s I know – this isn’t about any of you)

Business Rule Extraction

Get your initiate to study the code and write a short document, wiki article or blog defining the business rules in the order or hierarchy they’re hit. This is usually reserved for absolute new starters to learn the basics of an area and show they’ve understood it without damaging anything. It is however also a good precursor to a test retrofit.

Test Retrofit

A step up from business rule extraction whilst delivering some real value to the team. Encourage your learner to write a series of small functional tests for a specific functional area by prodding it, working out how to talk to it, what it does and what we believe it should do. Save the passing tests and get the results checked, peer reviewed and approved.

Chances are you might find some bugs too – decide whether to fix them now or later depending on your safety margin with your initiate.
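
One low-risk way to run a retrofit is a “golden master” test: prod the area through its public entry point, record what it actually returns, and lock that in. A sketch under invented names (legacy_reports and report() are stand-ins, assumed to return JSON-serializable data):

```python
# Hypothetical golden-master retrofit; all names are invented stand-ins.
import json, pathlib

from legacy_reports import report  # assumed entry point to the legacy area

GOLDEN = pathlib.Path("tests/golden/report_q3.json")

def test_report_matches_golden_master():
    actual = report(quarter="Q3", region="EMEA")
    if not GOLDEN.exists():
        # First run: record what the code does today, then have the recording
        # peer reviewed and approved before trusting it.
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN.write_text(json.dumps(actual, indent=2, sort_keys=True))
    expected = json.loads(GOLDEN.read_text())
    assert actual == expected
```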

Debt Payment

After a successful test retrofit, you can start refactoring and unit testing. As refactoring is not without risk this is generally an approach for a more experienced developer, but it offers a sound way of prising out functionality and learning key areas in small parts.

This is also a natural point to start fixing any bugs found during a test retrofit.
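
As a small illustration of one such “payment” (all names invented): prise a buried rule out into a named, independently testable function, with the retrofit tests acting as the safety net.

```python
# Hypothetical before/after sketch of a single small debt payment.

def ship_cost_before(weight_kg):
    # Before: magic numbers buried in the routine, meaning unclear.
    if weight_kg > 20:
        return weight_kg * 1.99 + 4.50
    return weight_kg * 1.99

# After: the rule is prised out and named.
RATE_PER_KG = 1.99
HEAVY_THRESHOLD_KG = 20
HEAVY_SURCHARGE = 4.50

def heavy_surcharge(weight_kg):
    return HEAVY_SURCHARGE if weight_kg > HEAVY_THRESHOLD_KG else 0.0

def ship_cost(weight_kg):
    return weight_kg * RATE_PER_KG + heavy_surcharge(weight_kg)

def test_refactor_preserves_behaviour():
    # Old and new must agree, especially at and around the boundary.
    for w in (0, 1, 19.9, 20, 20.1, 50):
        assert ship_cost(w) == ship_cost_before(w)
```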

National Service (also known as Jury Duty)

Have a couple of staff on short rotation (2-4 weeks) covering support & maintenance even on areas they don’t know. This is generally reserved for experienced team members who can assimilate new areas quickly but a great “leveller” in cross-training your team in hot areas.

Risks are generally around decision-making, customer responsiveness, turnaround time and overall lowering of performance but these are all short-term. I’ve often used this approach on mature teams to cross-train into small knowledge gaps or newly acquired legacy functional areas.

My preferred approach is to stagger allocation so that you have one member on “primary” support whilst another provides “backup” each sprint. The member who covered primary in sprint one is on backup in sprint two. (Expand this to meet your required capacity.)

3-6 months of a national service approach should be enough to cover the most critical functional gaps on your team, based on customer/user demand.

Tour of Duty

Have your staff work on longer term rotations (1-6 months) working on a functional area or feature as part of a feature team. This couples up with the “feature lead” approach.

Again, this is an approach that is very useful for mature agile teams that understand and support working as feature teams but can be introduced on less-experienced teams without too much pain.

The tour of duty can also work beyond an individual role. For example a developer taking a 3-6 month rotation through support or testing will provide greater empathy and understanding of tester and user needs than simply rotating through feature delivery. (My time spent providing customer support permanently changed my attitude toward cosmetic defects as a developer)

Adjacency

This takes a lot more planning and management than the other approaches described but is the most comprehensive.

From the areas each team member knows, identify where adjacent functional or technical areas lie and then use a subset of the approaches described here to build up skills in that adjacency.

Develop a knowledge growth plan for every team member sharing and leveraging their growth across areas with each other. Although this is an increase in coordination and planning, the value here is that your teams get to see the big picture on their growth and have clear direction.

All these patterns have a selection of merits and pitfalls. Some won’t work in your situation and some may be more successful than others. There are undoubtedly more that could be applied.

Starting in your next sprint, try some of what’s provided here to develop shared ownership and knowledge transfer in your teams. Pick one or more patterns, figure out the impacts and give them a try.

(Coming soon “Building a Case for Collective Code Ownership” – when solving the technical side isn’t enough)

Escaping the Oubliette (Part 2) – Tailing

Reading time ~ 4 minutes

Following my previous articles on topping and debt prevention, we’ll now focus on the easier parts: how to clear old defects and debt. We’ll cover four simple strategies; two for defects and two for debt (there are paired similarities across these).

For defects, we’ll focus on:

  • Tailing & ratcheting (this article)
  • The bug blitz (part 3)

For debt we’ll focus on:

  • The litter patrol (part 4)
  • Sponsorship (part 5)

Unlike prevention and “stop the bleeding” activities, which often require significant effort and commitment, defect and debt removal have some simple and relatively low-effort options. There are some high-effort/high-impact alternatives that we’ll cover later.

Today we’ll look at…

Tailing & Ratcheting

When you have a build-up of defects over time you tend to have a “tail” of really old defects and issues. These are usually low or medium severity/priority items (with the occasional blip) and are often in functionally gray areas requiring difficult decisions or significant rework.

They age because we don’t want to touch them and would rather forget about them (the oubliette again). A typical defect age distribution looks something like this:

(Figure: a typical defect age distribution – a right-skewed beta distribution)

It’s one of the most common statistical patterns I see in software development (and I’ll return to it in future) but for the purposes of this article, I want to look at the “tail” of this curve – the last 5-10% of your defect population – all your oldest items.

Addressing the tail is pretty straightforward. You can either set yourself a target “maximum defect age by a given date” or simply focus on continuous improvement. Whilst the target approach gives you a clear goal, you risk setting yourself up for failure or under-commitment. Defects are considered notoriously hard to size (I’ll cover this myth in future) but chances are if they’re old they’re probably a bit tricky too so being predictable about dates is something you might prefer to avoid for starters.

Whether you aim for a specific target or improving every day/week/month you’ll need the same stepped approach.

Identify your oldest defect, pick it off the end of the queue and commit to closing it quickly and fairly.

Here’s the first thing to accept… closing doesn’t mean fixing in every case. Because you’ve not touched it for a while, chances are someone is expecting a solution, so you’ll be looking at a difficult conversation if you don’t fix it – but that might be far easier than “fixing” something that shouldn’t be changed or will derail your team and product. Make open, fair, honest decisions in each case.

  • If it is something you think you should be fixing, get on with it.
  • If it’s not – close it

Sound familiar?

Aim to close at least one of your oldest defects every week or every sprint.

If you have a lot of items that are considered old, consider increasing your capacity on these in the short term to get the ball rolling (at the cost of other delivery activities).

If we look back to your distribution of defects over time, when you do close out your oldest defects, put a ratchet mechanism in place that sets a continuously reducing maximum age.  In very bad weeks the ratchet may not improve but don’t let the numbers get worse again or the effort and good faith from your teams and management will have been wasted.

With a ratchet, remember each “notch” is a step change from the last point. For example, if your last point was 1,000 days(!), clearing everything older than that will still leave defects nearly 1,000 days old. Set the next ratchet point to 950 days (rather than 997), determine what falls in that next block (950-999 days) and fix those. Ensure each ratchet step covers a larger time window than your execution period, otherwise defects will age into the window as fast as you clear it and you’ll end up stationary. (I usually go for a ratchet of 50 days’ improvement per week or sprint.)

Here’s what I mean…

(Figure: the defects tail – ratcheting out the oldest defects each week)
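
The ratchet itself is simple enough to express in a few lines. A sketch in Python with invented data – ages are days since each defect was raised:

```python
# Sketch of the ratchet arithmetic described above; sample data is invented.
RATCHET_STEP = 50  # must exceed the days that pass per week/sprint

def next_ratchet(last_point, step=RATCHET_STEP):
    """Lower the maximum permitted defect age by one notch (1000 -> 950)."""
    return last_point - step

def defects_to_clear(ages, ratchet_point, step=RATCHET_STEP):
    """The block to close to honour the new notch, e.g. the 950-999 window."""
    return [age for age in ages if ratchet_point <= age < ratchet_point + step]

ages = [990, 975, 951, 900, 420, 30]   # everything >= 1000 already cleared
point = next_ratchet(1000)             # new maximum age: 950 days
print(defects_to_clear(ages, point))   # [990, 975, 951] - this week's targets
```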

3 months of this tailing practice every week will dramatically reduce the average and maximum age of issues in your queue. It’ll make you all feel better and it usually helps to draw the heat off on escalations for a while. (Bear in mind you should be doing this in addition to your “topping” approaches).

Occasionally you’ll reopen some old wounds this way but paying these some attention now stops them coming back up when the timing isn’t under your control.

Keep this up for six months or a year and you’ll reach a stable point where nothing gets too old and your team becomes adept at having difficult conversations with your customers early and backs them up with some successes as well.

Eventually in order to keep the ratcheting improving you’ll have to commit more people. This probably means you’re approaching your natural level of control or entitlement where you have aging defects levelling off and others being addressed in a reasonably consistent manner. It’s up to you whether you aggressively pursue the numbers down further or sustain them at this level.

In case this sounds too obvious or simple to work: I’m working with a team right now that introduced this ratcheting approach less than a month ago. We’ve reduced our oldest defect age by over 25% and customer escalations are consistently lower already. Perhaps most important to the team, we’re no longer at the top of the charts when our leaders review the defect stats. It sounds harsh, but sometimes having someone else draw the heat for a while lets us get back to focusing on value and priorities.

In part 3 we’ll look at an old favourite – the “bug blitz” – or feel free to skip to part 4 – “the litter patrol”.

Escaping the Oubliette (Part 1a) – Debt Prevention

Reading time ~ 2 minutes

This is a partial re-post of Escaping the Oubliette (Part 1). I’ve split the article into smaller readable components.

Great, I’ve got my incoming defect strategy nailed…

Now how do I prevent defects and debt in new code?

In 5 words…

Continuous attention to technical excellence.

Here’s my top 7 (there are plenty more)

  1. Acceptance Criteria – Be really disciplined on your acceptance criteria & acceptance tests, team up with Analysts, Testers, Product Owners if you have them and attack your stories from every angle. A good approach to this is a “story kick-off” where the whole team dismantles a story before starting.
  2. Thinking Time – don’t just start coding right away, task things out, try the 10 minute test plan, discuss your approach with your peers and for more complex or large items, try the “just enough design” approach.
  3. TDD – It’s hard to start but has an immense impact. I’ve just seen a team complete their first project using TDD. Three weeks into their final round of post-feature-complete testing, their defect run-rate hasn’t had the testing spike seen on prior projects. In fact they’re keeping on top of all new incoming defects and have time to start paying down the historic backlog.
  4. Pair Programming – Do it in half-day trial chunks if you don’t have the stomach for going full-tilt. I’ve performed remote pair-programming with colleagues across the Atlantic using decent phone headsets and online collaboration tools for hours at a time. The net result of 2 days of remote pairing was finding and fixing about 10 extra defects in a thousand lines of code that neither of us would have found coding alone.
  5. Peer reviews – there is still a huge space for these in agile teams. But here’s the thing. Be really tough. A peer review is not a hurdle. It’s a shared learning exercise. Functional correctness is actually the smallest component of a peer review. You should trust your developers that far. But there’s a whole series of other aspects to review. See the joy of peer reviews.
  6. Small tasks – I once worked with an outsourced team who, when taking work, would disappear into a hole for two weeks and return with a single task in our configuration management system containing edits to 200+ files, with multiple changes condensed into each. My rule of thumb is one reviewable task per activity. If you’re going to add new functionality and refactor, that’s two independent tasks that can be identified and reviewed separately. This means you should be able to easily deliver two reviewable, closable tasks per day.
  7. Fast Builds – make it insanely simple for a developer to perform an incremental build that validates new code against the latest main code line (small tasks are a big help here). This includes the right subset of unit and functional tests. Aim for a target of a 30-second response time or less between hitting the button and seeing the first results (a minimal sketch of one approach follows this list).
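
Here’s one hedged sketch of what “insanely simple” could look like, assuming a src/foo.py → tests/test_foo.py naming convention with git and pytest available (the layout is invented – adapt it to yours):

```python
# Run only the tests matching files changed since main. Hypothetical layout.
import pathlib, subprocess, sys

changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

tests = []
for path in changed:
    p = pathlib.Path(path)
    if p.parts and p.parts[0] == "src" and p.suffix == ".py":
        candidate = pathlib.Path("tests") / ("test_" + p.stem + ".py")
        if candidate.exists():
            tests.append(str(candidate))

if tests:
    # -x fails fast, -q keeps the feedback loop quiet and quick.
    sys.exit(subprocess.call(["pytest", "-x", "-q", *tests]))
print("no matching tests for the changed files")
```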

In the next article in this series I’ll focus on “Tailing” – How do you start reducing the old defects.

Your Baby Might Be Ugly

Reading time ~ 3 minutes

Everyone thinks their own babies are beautiful. Some probably are but there are plenty that really aren’t. In fact I remember one particularly interesting comment – “ugly people produce ugly babies”.

This happens in code too.

Collective code ownership has many challenges. Here I focus on the person pitfalls.

Imagine you’ve just spent the last 3, 5, or even 10 years nurturing and grooming an insanely complex (and dare I say “clever”) functional area of a product. This has been your heart and soul for years – apart from the occasional customer to distract you. It follows your own unique, “perfect” coding style and conventions. You’re so damn good you don’t need unit tests; you instinctively know and understand every intricate tracery and element of logic in there.

Why then if it’s so perfect are you not willing to share?

This is where “Bad Captain” starts to kick in with me…

  • Do you believe that every other developer in the building is less capable than you?

(OK, sometimes it might be true but that’s very rare.)

– That’s fine; you can peer review and provide feedback if they don’t meet your standards, right? It might be an emotional lurch to start with, but seeing someone else succeed and recognize your greatness must surely be rewarding, right? I’ll help you if you need.

  • It’ll always be quicker if you work on it. It’s just a waste of time and effort to train someone else up.

– Sure. Except if we had 3 or 4 people capable of working on it in parallel we might get everything that’s on the wish list for it done this year and you might be able to do that other interesting stuff that the rest of the team have been playing with. In fact, I’m willing to invest if it’s that important…

Bad Captain starts to twitch…

…and if it’s not, why do I have someone who believes they’re “one of the best” working on it – I want my “best” people on my priority tactical and strategic items.

  • You’ve been doing this as your own pet-project on the side and it’s not ready to share.

– If it’s important I want a team on it. I’ll happily lobby for some strategic investment. If it’s not as important as anything else we have committed right now I want you helping the rest of the team out…

Bad Captain…

– even if you’re good, you’re not “special”, don’t expect to be treated any differently than the rest of the team.

These first reasons are almost valid. But not good enough for me. The more of these I hear the more I start to twitch, the more terse I get and the more I start to frown and tense up.

Suddenly it’s too late! Captain Hyde takes the helm!

Here’s why Captain Hyde thinks you’re not sharing…

  1. You’re lazy, you’re only any good at this one area and don’t want to have the responsibility of teaching it or learning something new that you’re not so good at. – Time for you to learn a bit about intellectual humility.
  2. If you relinquish control of this code, you relinquish your seat of power, you’ll make yourself dispensable and that can’t possibly happen, you think you’re far too important for that. – I believe in good people, if you’re one of them I’ll fight to keep you. If there’s a squeeze, even the indispensable will get the push. In fact in my experience, the all-rounders are more likely to find new homes.
  3. Secretly in your heart you know it’s loaded with technical debt, years of warts strapped onto the sides, a catalogue of compromises and mud that doesn’t mirror the patterns of the outside world in the 21st century. – In fact I could probably find something open source that does the same thing. Quite frankly it’s an ugly baby that should have been put up for adoption or surgery years ago but your professional pride won’t admit it.

Captain Hyde can’t think of any good reason other than personal self-preservation or embarrassment that someone who is a single code owner would be unwilling for other people to work on and learn their code.

Maybe I’ve just been in management too long but I distinctly remember in my early coding years a call with a colleague in San Francisco where he said:

“Damn, you’re the only guy in the world that knows this stuff man”.

– at which point I felt that little clench inside that said: “I need to find and train my replacements up as soon as I can!”

A similar counterpart of mine in the US at the same time got thrown another pile of stock options every time he threatened to leave – that wasn’t my style as a developer and it’s not my style as a manager either.

Now despite all this ranting, there is another very real problem.

Developing collective code ownership from single points of failure on complex products is time-consuming, expensive and probably not what my business stakeholders want my team “wasting” their time on when we already have experts who are “perfectly capable of maintaining this themselves”.

It’s a fair point.

With a team who are willing to share but have no slack, if I want collective code ownership to succeed I’ve got to be able to market it properly.

Escaping the Oubliette (Part 1) – Topping

Reading time ~ 2 minutes

As promised, how to Escape The Oubliette

So here you are, part of a team at the bottom of the defect & debt pit. What now?

The simple fact is, there’s no one answer but there are a selection of tools and there is a key.

The key comes courtesy of a comment from Jim Highsmith during one of his sessions at Agile 2010 – my paraphrasing below…

“Teams facing technical debt need both debt reduction and debt prevention strategies.”

When teams face technical debt they pick away, removing it piecemeal. After all, you have to eat an elephant one bite at a time right?

Trouble is, that only works if the elephant isn’t growing as fast as you’re chewing.

You need a two-pronged attack – what I like to call “topping and tailing”. The “top” is all the incoming issues and build-up of new debt. This is where you need to stop the bleeding. The “tail” is the backlog of existing debt. It’s already there – gently rotting in the corner of your product.

Topping

If you’ve read my post on stopping the bleeding, this is the crux of “topping”. You may have a variety of strategies for this but here are the fundamentals.

Incoming Customer Defects

Ringfence enough capacity to address defects as soon as they come in. Make sure your burn rate matches your incoming rate. If you’re strapped, it’s your call whether you only cover high-severity issues, but for my money I’d hit them all. A multi-year build-up of low-severity issues makes products ugly and no pleasure for the users, and a series of minor problems rapidly becomes a major burden and a big debt build-up.
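
Sizing the ringfence is back-of-envelope arithmetic. A sketch with invented numbers:

```python
# Back-of-envelope ringfence sizing; every number here is invented.
incoming_per_week = 10   # new defects arriving each week
avg_fix_days = 0.5       # developer-days per defect, including test & review
days_per_dev_week = 5.0

devs_needed = incoming_per_week * avg_fix_days / days_per_dev_week
print(f"Ringfence ~{devs_needed:.1f} developers to match the incoming rate")
# -> 1.0; at that burn rate the pile stops growing while you tackle the tail
```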

Addressing every incoming defect doesn’t mean fixing every one. Whilst I generally recommend you do, when trying to dig yourself out of an oubliette the bigger issue is how to keep your goal achievable.

Be critical. The big, nasty bugs are usually pretty obvious but what about the rest?

    • Is it low severity and not important?
      • Is it quick & cheap to fix? – Just fix it.
      • If it’s not – Can you put a safety rail around it? Document it as a limitation or even ignore it?

Negotiate with your customers and make an explicit decision to fix now or never. Once decisions are made, communicated and agreed, close the defects in your backlog.

Getting these conversations happening immediately makes customers far happier than pretending they’ll just go away and never making a decision. (but not quite as happy as fixing their issues).

Incoming Internal Defects

The story here is the same. I’ve called it out independently as many companies and teams do.

Those incoming internal defects speak volumes about your approach to quality.

Get the discipline in place to fix them when you find them or be brutal and admit which ones will never be fixed but don’t pussyfoot around!

  • If you provide a service to other internal teams, treat them as customer defects.
  • If they’re raised by your own team, are they on new code or exposing existing issues?
    • On new code, fix them! – No excuses.
    • On existing code, the problems are already there. You need to decide if what you’ve done has made it more visible or harder to work with than before and make a judgement call. Fix now or never, resolve or close.

In the next article in this series I’ll cover debt prevention.