The Joy of Peer Reviews (Part 2 – Documentation)

Reading time ~ 4 minutes

Another agile management myth to dispel – agile projects have no documentation…

The level of documentation you provide depends on the needs of your stakeholders.

About a month ago I spent some time discussing code reviews and promised a follow-up on document reviews.  Since then, the number of documents I’ve needed to work on or review has been pretty small, so there’s not been much scope for inspiration.

One of my teams is coming to the end of a major release and our product managers are re-evaluating parts of the portfolio. Both of these events drop us into some more traditional documentation areas for a while.

Before I progress, there are three types of documentation we tend to deal with on a project.

  • Product documentation (user-facing)
  • Control documentation (mostly management-facing)
  • Technical documentation (mostly team-facing)

We produce a lot of information in all these areas, but most of it isn’t in the form of formal documents.

Here are the bits we treat formally…

  • Vision, business needs & minimum marketable feature set
  • High level plan (priorities, scope, cost, schedule, quality)
  • Significant, complex or high level designs (functional or technical)
  • End-user and admin guides
  • Significant scope changes (change control)
  • Closure (agreement from stakeholders that we’ve met commitments)

Before going any further, the main rule of thumb I follow in documentation is:

Strive to keep your overheads to a minimum

Focus on the level of documentation needed by the relevant stakeholders and seek alternatives where feasible. For example, user guides & help files may be better replaced with automated drive-throughs, screencasts and audio descriptions. Project reviews may be best served by a conversation, a single slide with some visual anchors and a sign-off.

My teams have had cases where we felt we needed more documentation because we weren’t happy with the level of traceability from our less-formal approaches.  This tends to vary depending on what levels of organizational trust exist and how much protection you may need.

Where you do need formal documentation – regardless of which of the above types it falls into – here are my basic document delivery and review guidelines…

Delivery

It’s remarkable how much good documentation is like good code.  My mantra is “deliver early and often”.

  • Start by delivering the scaffold, walking skeleton or spine of your document, and get that out for initial feedback on structure.
  • Consider swarming your team around parts of the document to accelerate delivery.
  • If you have interaction across teams, ideally have those teams write their components for you – you do the editorial while they cover the “meat”.
  • Balance time invested with expected value – especially before initial reviews.
  • Start adding flesh to the bones one section at a time.
  • As each section is “good enough”, send it out for a draft review.
  • Following review, rework the meat and polish.
  • As major themes come together for completion, review again for cohesion.

I find there’s one large challenge in writing documents – inertia. Don’t underestimate the effort needed both to get moving and to get properly finished.

Most experienced coders understand getting into “the zone” or being in “flow”. The same holds true for writing documents; however, I personally find it much harder to start moving on a document, and it takes more sustained effort to complete and more polish than my code. Once you are in flow, stopping is equally hard. Much like delivering small, frequent, reviewable coding tasks takes practice, the same applies to documentation. The team approach to documentation is a great way around this once you have a scaffold in place.

Review

If producing documentation is like cutting high quality code, then it should be no surprise that reviewing documentation should be treated with similar respect.

Bear in mind that when you’re reviewing documentation, you’re usually at a point in the cycle where you can prevent major defects and costs later by asking the right questions.  Reviewing documents is not a hurdle to overcome; it’s an essential quality step in setting your projects up for success and in ensuring we have consensus and understanding on results.

I’ve divided the checklist into three parts: “Thinking”, “Approach” and “Review”.

Thinking

A few considerations to make before starting the review itself (and before submitting for review):

  • Discipline – treat this review just like you would a code peer review.
  • Should it be a document, or could it be something else?
  • Who is the intended audience, and is it pitched at the right level?
  • Is this technical, control or product documentation (and how does that impact our approach)?
  • Is this primarily for management, the team or our customers/users?
  • Is this document transient or permanent?
  • How critical is accuracy/precision? (compare for example to scientific or medical papers)
  • How will this document be maintained, and by whom?

Approach

What form is the review going to take?

  • Round table vs offline – will the review be individuals at desktops, a round-table session or a mix of the two?
  • Piecemeal or big bang – I mentioned my preference for part-deliveries. This only works if you’re willing to perform part-reviews.
  • Feedback cycle – how will you manage feedback? For large critical documents I usually forecast 2 rounds of review/rework and a total elapsed time of 2 weeks from the initial draft to completion (assuming a well-managed but not top-priority review cycle). The author/coordinator should have this at the top of their stack until it’s “done done”.

Review

Points to look for – these start simple and get trickier as you go through the list.

It’s disturbing how many people review using only a subset of the top items here and get no further. Lazy reviewers may skim a document without actually reading it. (Although perhaps this is a reflection that we’ve missed something in the perceived value of a document.) The most mechanical of these checks can even be scripted – see the sketch after the list.

  • Document information and headers/footers are correct
  • Consistent formatting & numbering e.g page numbers, typefaces, sizes
  • Spelling & grammar
  • Tables of contents & figures are up to date (Ctrl+A, F9, update all – in Word!)
  • Cross-references work and point to the right locations
  • Related documents or prerequisites are both accessible and necessary
  • Use of correct templates where required
  • Proper heading formatting – helps with maintenance of TOC
  • Use of change tracking and versioning
  • Identify reviewers, owners, approvers
  • _________________________________________
  • Lots of words when a picture would be better
  • Emotive descriptions (where not valid)
  • Unfounded facts and numbers (particularly sales, estimates and timelines)
  • Impacts on other teams (especially without consultation)
  • Expectations from other teams (especially without consultation)
  • Over-optimistic statements (see unfounded facts and numbers)
  • Unbounded requirements
  • Missing assumptions
  • Alternatives and rejections
  • Over-technical or not detailed enough for audience
  • Length/brevity (much like some of my posts)
  • Emotional attachment to content or single approach/ideas
  • Any prior examples of similar documents to compare with, reference and/or improve on
  • Voice of the customer (and their future accessibility)
  • Who are the decision makers and arbitrators?
  • A clear problem statement
  • Well-defined vision
  • SMART or Elephant goals
  • What does “good”, “done” or “success” look like?
  • Quality, acceptance criteria and tolerances
  • Any disconnects between related documents or teams
  • Anything missing
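
For teams keeping formal documents in plain text or Markdown rather than Word, the most mechanical checks above can be scripted. Here’s a minimal sketch of a cross-reference check – the file name, function names and anchor rules are illustrative assumptions (GitHub-flavoured Markdown), not a real tool:

    # doc_checks.py - a hypothetical helper for the mechanical checks above.
    # Assumes documents are Markdown with GitHub-style heading anchors.
    import re
    import sys

    def heading_anchors(text):
        """Collect an anchor for every Markdown heading (approximate rules)."""
        anchors = set()
        for match in re.finditer(r"^#+\s+(.*)$", text, re.MULTILINE):
            title = match.group(1).strip().lower()
            anchors.add(re.sub(r"[^\w\- ]", "", title).replace(" ", "-"))
        return anchors

    def broken_cross_references(text):
        """Return internal link targets with no matching heading."""
        anchors = heading_anchors(text)
        links = re.findall(r"\[([^\]]+)\]\(#([^)]+)\)", text)
        return [target for _, target in links if target not in anchors]

    if __name__ == "__main__":
        doc = open(sys.argv[1], encoding="utf-8").read()
        for target in broken_cross_references(doc):
            print(f"Broken cross-reference: #{target}")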

Phew! – as before, I’m sure there’s more.

To summarize in one sentence…

If your documentation is important to your stakeholders, treat it with the same reverence as your production code.

Patterns For Collective Code Ownership

Reading time ~ 5 minutes

Following the somewhat schizophrenic challenges of dealing with the people issues of collective code ownership, here we focus on the practices and practical aspects. How do we technically achieve collective ownership within teams?

We’re using the “simple pattern” approach again. For each pattern we have a suitable “anchor name”, a brief description and nothing more. This should be plenty to get moving, but feel free to expand on these and provide feedback.

Here’s the basic set of patterns to consider:

Code Caretaker

Let’s make it a bit less personal and encourage the team to understand what the caretaker role for code entails. It’s not a single person’s code – in fact the company paid us to write it for them. Code does still need occasional care and feeding – enter the “Caretaker”.  Be wary though that caretakers may become owners.

Apprenticeship

Have your expert spend time teaching & explaining. An apprentice learns by doing – the old-fashioned way under the tutelage of a master craftsman guiding them full-time. This is time and effort-intensive for the pair involved but (unless you have a personality or performance problem) is a sure-fire way to get the knowledge shared.

If you need to expand to multiple team members in parallel, skip forward to feature lead & tour of duty.

Ping-Pong Pairing

Try pair programming with an expert and novice together. Develop tests, then code, then swap places. One person writes tests, the other codes. Depending on the confidence of the learner, they may focus more on writing the tests and understanding the code than on coding solutions. Unequal pairing is quite tricky, so monitor this carefully; your expert will need support in how to share, teach & coach.
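
Here’s a minimal sketch of a single ping-pong round, assuming pytest-style tests; the discount feature and its names are purely illustrative:

    # Ping: the novice writes a failing test...
    def test_no_discount_below_threshold():
        assert discount(order_total=50) == 0

    # Pong: the expert writes just enough code to make it pass...
    def discount(order_total, threshold=100, rate=0.1):
        return order_total * rate if order_total >= threshold else 0

    # ...then writes the next failing test, and the pair swap places:
    def test_discount_applies_at_threshold():
        assert discount(order_total=100) == 10

The value is in the rhythm: the learner is never more than one small step away from working, tested code.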

Bug Squishing

Best with large backlogs of low-severity defects to start with. Cluster them into functional areas. Set a time-box to “learn by fixing” – delivery is not measured by volume fixed but by level of understanding (demonstrated by a high first-time pass rate on peer reviews). If someone needs help they learn to ask (or try, then ask) rather than hand over half-finished work. This approach works with individuals having peer review support and scales successfully to multiple individuals operating on different areas in parallel.
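
If the defect backlog is machine-readable, the clustering step is straightforward to automate. A minimal sketch, with illustrative field names:

    # Cluster a defect backlog by functional area before assigning time-boxes.
    from collections import defaultdict

    defects = [
        {"id": 101, "component": "billing", "severity": "low"},
        {"id": 102, "component": "reports", "severity": "low"},
        {"id": 103, "component": "billing", "severity": "low"},
    ]

    clusters = defaultdict(list)
    for defect in defects:
        clusters[defect["component"]].append(defect["id"])

    # Largest clusters first - the richest learning areas for a time-box.
    for component, ids in sorted(clusters.items(), key=lambda kv: -len(kv[1])):
        print(f"{component}: {len(ids)} defects")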

A Third Pair of Eyes (Secondary Peer Review)

With either an “initiate” (new learner) or an experienced developer working on the coding, have both a learner and an expert participate in peer reviews on changes coming through – to learn and provide a safety net respectively. The newer developer should be encouraged to provide input and ask questions but has another expert as “backstop” for anything they might miss.

Sightseeing / Guided Tour

Run a guided walk-through for the team – typically a short round-table session. Tours are either led by the current SME (subject matter expert) or by a new learner (initiate). In the case of a new learner, the SME may wish to review their understanding first before encouraging them to lead a walk-through themselves.

Often an expert may fail to identify and share pieces of important but implicit information (tacit knowledge) whilst someone new to the code may spot these and emphasize different items or ask questions that an expert would not. Consider having your initiate lead a session with the expert as a backstop.

Feature Leads

The feature lead approach is one of my favourites. I’ve successfully acted as a feature lead on many areas with teams of all levels of experience where I needed to ramp up a large group on a functional area together.

By introducing a larger team, your expert will not have the capacity to remain hands-on and support the team at the same time. They must lead the team in a hands-off way by providing review and subject/domain expertise only. This also addresses the risk, seen with “apprenticeship”, of simply trading one single point of failure for a “new” one.

This is also a potential approach when an expert or prior owner steps in too frequently to undo or overwrite rather than coach other members’ activities. By ensuring they don’t have the bandwidth to get too involved you may be able to encourage some backing off, but use this approach with care in these situations to avoid personal flare-ups.

Cold Turkey

For extreme cases where “you can’t possibly survive” without a single point of failure, try cold turkey. Force yourself to work without them.

I read an article some years ago (unfortunately I can’t track it down now) where the author explained how, when he joined a project as manager, the leaders told him he must have “Dave” on the project as nobody else knew what Dave did. He spent a week of sleepless nights trying to figure out how to get Dave off the project. After removing Dave from the project, the team were forced to learn the bottleneck area themselves and delivered successfully.

(Apologies to any “Dave”s I know – this isn’t about any of you)

Business Rule Extraction

Get your initiate to study the code and write a short document, wiki article or blog post defining the business rules in the order or hierarchy in which they’re hit. This is usually reserved for absolute new starters to learn the basics of an area and show they’ve understood it without damaging anything. It is however also a good precursor to a test retrofit.

Test Retrofit

A step up from business rule extraction that also delivers some real value to the team. Encourage your learner to write a series of small functional tests for a specific functional area by prodding it, working out how to talk to it, what it does and what we believe it should do. Save the passing tests and get the results checked, peer reviewed and approved.

Chances are you might find some bugs too – decide whether to fix them now or later depending on your safety margin with your initiate.
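
A test retrofit usually starts as “characterisation” tests: pin down what the code does today, not what we wish it did. A minimal sketch – legacy_pricing and its quote() function are hypothetical stand-ins:

    import unittest

    from legacy_pricing import quote  # hypothetical legacy module

    class QuoteCharacterisationTest(unittest.TestCase):
        def test_standard_customer_gets_list_price(self):
            # Value observed by prodding the module, then peer reviewed.
            self.assertEqual(quote(sku="A100", customer="standard"), 250.00)

        def test_unknown_sku_raises(self):
            # Captures today's behaviour, even if we later call it a bug.
            with self.assertRaises(KeyError):
                quote(sku="NOPE", customer="standard")

    if __name__ == "__main__":
        unittest.main()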

Debt Payment

After a successful test retrofit, you can start refactoring and unit testing. As refactoring is not without risk, this is generally an approach for a more experienced developer, but it offers a sound way of prising out functionality and learning key areas in small parts.

This is also a natural point to start fixing any bugs found during a test retrofit.
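
With the retrofit tests as a safety net, the debt payment itself can proceed in small, reviewable steps. A sketch of the kind of move involved – prising one buried rule out into a named, unit-testable function (the domain is illustrative):

    # Before: the business rule is buried mid-function.
    def process_order_legacy(order):
        total = sum(line["qty"] * line["price"] for line in order["lines"])
        if order["customer_type"] == "trade" and total > 500:
            total *= 0.95  # undocumented trade discount
        return total

    # After: the rule has a name, a home and its own unit test.
    def trade_discount(customer_type, total):
        """5% discount for trade customers on orders over 500."""
        return total * 0.95 if customer_type == "trade" and total > 500 else total

    def process_order(order):
        total = sum(line["qty"] * line["price"] for line in order["lines"])
        return trade_discount(order["customer_type"], total)

    def test_trade_discount_applies_over_threshold():
        assert trade_discount("trade", 1000) == 950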

National Service (also known as Jury Duty)

Have a couple of staff on short rotation (2-4 weeks) covering support & maintenance, even in areas they don’t know. This is generally reserved for experienced team members who can assimilate new areas quickly, but it’s a great “leveller” for cross-training your team in hot areas.

Risks are generally around decision-making, customer responsiveness, turnaround time and an overall lowering of performance, but these are all short-term. I’ve often used this approach on mature teams to cross-train into small knowledge gaps or newly acquired legacy functional areas.

My preferred approach is to stagger allocation so that you have one member on “primary” support whilst another provides “backup” each sprint. The member who covered primary in sprint one moves to backup in sprint two (expand this to meet your required capacity).
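
A minimal sketch of that stagger, with an illustrative team:

    from itertools import cycle

    def support_rota(team, sprints):
        """Each sprint's backup is the previous sprint's primary."""
        rotation = cycle(team)
        primaries = [next(rotation) for _ in range(sprints)]
        for sprint, primary in enumerate(primaries, start=1):
            backup = primaries[sprint - 2] if sprint > 1 else None
            print(f"Sprint {sprint}: primary={primary}, backup={backup}")

    support_rota(["Asha", "Ben", "Carol", "Dev"], sprints=6)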

3-6 months of a national service approach should be enough to cover the most critical functional gaps on your team, based on customer/user demand.

Tour of Duty

Have your staff take longer-term rotations (1-6 months) on a functional area or feature as part of a feature team. This couples up with the “feature lead” approach.

Again, this is an approach that is very useful for mature agile teams that understand and support working as feature teams, but it can be introduced to less-experienced teams without too much pain.

The tour of duty can also work beyond an individual role. For example a developer taking a 3-6 month rotation through support or testing will provide greater empathy and understanding of tester and user needs than simply rotating through feature delivery. (My time spent providing customer support permanently changed my attitude toward cosmetic defects as a developer)

Adjacency

This takes a lot more planning and management than the other approaches described but is the most comprehensive.

From the areas each team member knows, identify where adjacent functional or technical areas lie, and then use a subset of the approaches described here to build up skills in that adjacency.

Develop a knowledge growth plan for every team member, sharing and leveraging their growth across areas with each other. Although this increases coordination and planning, the value is that your teams get to see the big picture of their growth and have clear direction.
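
A sketch of what the raw material for such a plan might look like – a skills matrix plus an adjacency map, both illustrative:

    # Who currently knows what:
    knows = {
        "Asha": {"billing"},
        "Ben": {"reports"},
    }

    # Which functional areas border each other in the architecture:
    adjacent = {
        "billing": {"invoicing", "payments"},
        "reports": {"billing", "exports"},
    }

    # Candidate growth areas: adjacencies not already known.
    for person, areas in knows.items():
        gaps = set().union(*(adjacent.get(a, set()) for a in areas)) - areas
        print(f"{person}: grow into {sorted(gaps)}")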

All these patterns have their own merits and pitfalls. Some won’t work in your situation and some may be more successful than others. There are undoubtedly more that could be applied.

Starting in your next sprint, try some of what’s provided here to develop shared ownership and knowledge transfer in your teams. Pick one or more patterns, figure out the impacts and give them a try.

(Coming soon “Building a Case for Collective Code Ownership” – when solving the technical side isn’t enough)

Escaping the Oubliette (Part 1a) – Debt Prevention

Reading time ~ 2 minutes

This is a partial re-post of Escaping the Oubliette (Part 1). I’ve split the article into smaller readable components.

Great, I’ve got my incoming defect strategy nailed…

Now how do I prevent defects and debt in new code?

In 5 words…

Continuous attention to technical excellence.

Here are my top 7 (there are plenty more):

  1. Acceptance Criteria – Be really disciplined about your acceptance criteria & acceptance tests; team up with Analysts, Testers and Product Owners if you have them and attack your stories from every angle. A good approach to this is a “story kick-off” where the whole team dismantles a story before starting.
  2. Thinking Time – don’t just start coding right away; task things out, try the 10 minute test plan, discuss your approach with your peers and, for more complex or large items, try the “just enough design” approach.
  3. TDD – It’s hard to start but has an immense impact.  I’ve just seen a team complete their first project using TDD. 3 weeks into their final round of post feature-complete testing, their defect run-rate hasn’t had the testing spike seen on prior projects. In fact they’re keeping on top of all new incoming defects and have time to start paying down the historic backlog.
  4. Pair Programming – Do it in half-day trial chunks if you don’t have the stomach for going full-tilt. I’ve performed remote pair-programming with colleagues across the Atlantic using decent phone headsets and online collaboration tools for hours at a time. The net result of 2 days of remote pairing was finding and fixing about 10 extra defects in a thousand lines of code that neither of us would have found coding alone.
  5. Peer reviews – there is still a huge space for these in agile teams. But here’s the thing. Be really tough. A peer review is not a hurdle. It’s a shared learning exercise. Functional correctness is actually the smallest component of a peer review. You should trust your developers that far. But there’s a whole series of other aspects to review. See the joy of peer reviews.
  6. Small tasks – I once worked with an outsourced team who, when taking work, would disappear into a hole for 2 weeks and return with a single task in our configuration management system containing edits to 200+ files, with multiple changes condensed into each file. My rule of thumb is one reviewable task per activity. If you’re going to add new functionality and refactor, that’s 2 independent tasks that can be identified and reviewed separately. This means you should be able to easily deliver 2 reviewable, closable tasks per day.
  7. Fast Builds – make it insanely simple for a developer to perform an incremental build that validates new code against the latest main code line (small tasks are a big help here). This includes the right subset of unit and functional tests. Aim for a target of 30 seconds or less between hitting the button and seeing the first results – see the sketch below for one way of picking that subset.
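
As one possible shape for that subset selection, here’s a minimal sketch that maps changed files to their tests by naming convention; the tests/test_<module>.py layout and the origin/main branch are assumptions about your project:

    import os
    import subprocess

    # Files changed since the main line (assumes an origin/main branch).
    changed = subprocess.run(
        ["git", "diff", "--name-only", "origin/main"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    # Map each changed source file to its test by naming convention.
    candidates = {f"tests/test_{os.path.basename(p)}" for p in changed
                  if p.endswith(".py") and not p.startswith("tests/")}
    tests = sorted(t for t in candidates if os.path.exists(t))

    if tests:
        subprocess.run(["pytest", "-q", *tests])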

In the next article in this series I’ll focus on “Tailing” – How do you start reducing the old defects.

The Joy of Peer Reviews (Part 1 – Code)

Reading time ~ 2 minutes

Pair programming replacing peer reviews is a myth in the same way that “agile projects have no documentation”.

From my experience peer reviews continue to hold a vital place in agile development and software craftsmanship. Unfortunately they are often misunderstood or misapplied.

“Humanizing Peer Reviews” by Karl Wiegers is the best primer I’ve read so far on peer reviews so I’m not going to duplicate Karl’s efforts – I strongly recommend a thorough read. In fact, print a copy and give one to every member of your delivery team to read and discuss.

Like everything else in modern software development, peer reviews are a collaborative team learning experience. Reviewing code properly means both the reviewer and reviewee walk away having learned something and improved their craft.

With code reviews, beyond reviewing for functional correctness (the simplest, most obvious and potentially quickest part of a review), there’s a selection of considerations I expect reviewers to look for – there are plenty more, and the sketch after the list illustrates a few:

  1. No code without tests
  2. Good variable naming
  3. Correct use of classes and interfaces
  4. Small methods
  5. Adherence to standards and conventions
  6. Style consistency across the team
  7. Readability
  8. Good test coverage from small tests
  9. Test code follows the same quality standards as production code
  10. Tests describe expected behaviours
  11. All tests pass
  12. Evidence of some test preparation to consider boundary cases, failure modes and exceptions
  13. Clear exception handling, failure modes and explicit boundaries
  14. Sensible error messages
  15. Code smells
  16. Performance risks or tuning opportunities
  17. Security or other “ility” issues.
  18. Opportunities to learn new tricks
  19. New good practices & patterns
  20. Functional correctness

Once you have cultural acceptance of the breadth of technical peer reviews, develop your own checklist that everyone supports, and remember: the goal of a review is to share improvement opportunities, not for lazy coders to have someone else find their bugs for them, or for staff to step on each other.

Now…

If you think peer review of code is problematic for your teams, I guarantee that document peer reviews will be a whole lot worse! – see part 2.