Legacy Code Vs Legacy Product

Reading time ~ 3 minutes

Some old gems I picked up at Agile 2010 from “Agile Test Automation Strategy for the non-Technical” by Gerard Meszaros (Author of xUnit Test Patterns).

These are still as relevant today as they were 5 years ago.

First, there are 2 critical definitions to provide context…

  • Legacy code: Code that does not have automated unit tests.
  • Legacy products: Products that contain legacy code.

However – a legacy product does not have to contain only legacy code.

Now, considering the differences between legacy product & legacy code, think about your work on “legacy products”.

  • If you are adding new code, where do you add it and what testing do you do?
  • If you are modifying legacy code, how do you treat it and what testing do you do?

Here’s a quick refresh of the testing pyramid concept…

testing pyramid

The (grossly simplified) ideal world view is that you have a large number of fast, automated unit tests as a safe platform, upon which you build a suite of automated and manual functional tests, with your broader system tests at the peak.

Most legacy products and systems have this pyramid inverted, with heavy investment in system test and UI automation, insufficient small functional tests and few, if any, unit tests.

Great. So What?

Stop trying to “Correct” the testing pyramid.

 Rather than wasting our constrained capacity trying to automate everything, we must understand that legacy products have a testing “sweet spot”.

Focus testing and test automation strategy and investment on the areas of highest risk and highest ROI.

Even at the uppermost level, automate the things that steal most of our time first.  This frees us up to do more value-added work.

Do not try to retrofit unit testing into legacy code.

To paraphrase Gerard: “Don’t waste your time unit testing legacy code”.

Instead, invest in the middle layer – the small automated functional tests.
This is because, for the volume and maturity of the code you are working with, small functional tests will provide a far higher return on investment than the same effort spent unit testing legacy code.
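To make “small functional tests” concrete, here’s a rough sketch (all names and the pricing rule are invented): one test drives a whole slice of behaviour through a public entry point, swapping out only the slowest dependency rather than isolating every class.

```python
import unittest

class InMemoryOrderStore:
    """Test double standing in for the real persistence layer."""
    def __init__(self):
        self.orders = {}
    def save(self, order_id, order):
        self.orders[order_id] = order
    def load(self, order_id):
        return self.orders[order_id]

class DiscountService:
    """Stands in for a slice of product behaviour reached via a public seam."""
    def __init__(self, store):
        self.store = store
    def place_order(self, order_id, quantity, unit_price):
        total = quantity * unit_price
        if quantity >= 100:                      # bulk orders get 10% off
            total *= 0.9
        self.store.save(order_id, {"total": total})

class BulkDiscountTest(unittest.TestCase):
    # One small functional test covers the whole pricing path,
    # not each class in isolation.
    def test_bulk_orders_get_ten_percent_discount(self):
        store = InMemoryOrderStore()
        DiscountService(store).place_order("o-1", quantity=100, unit_price=2.0)
        self.assertEqual(store.load("o-1")["total"], 180.0)

if __name__ == "__main__":
    unittest.main()
```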

Given this is from the guy who literally “wrote the book” on unit testing, this surprised me at first. It all becomes clearer in the next 2 points he makes!

(This is where the difference between legacy code and legacy product is key)

If you’re adding new code to a legacy product, unit test it.

Add a procedure or method call from the legacy code to the new code and then treat the new code as independent new code.
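This is essentially what Michael Feathers calls “sprouting” in Working Effectively with Legacy Code. A minimal sketch of the shape (the names and the validation rule are invented):

```python
def validate_email(address: str) -> bool:
    """New code: small, independent and trivially unit-testable."""
    return "@" in address and "." in address.split("@")[-1]

def process_signup(form_data):
    # ... untested legacy logic above, left untouched ...
    if not validate_email(form_data["email"]):   # the one new call added
        raise ValueError("invalid email address")
    # ... untested legacy logic below, left untouched ...
```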

This practice also encourages better coding & architecture habits, and greater readability.

Whilst the ROI on retrofitting unit tests to legacy code is low, the unit test effort for new code is recovered in prevented defects, fewer review issues, faster build & test feedback and simplified code maintenance.

Legacy code modifications should be refactored into new code.

 If you are modifying legacy code, use good refactoring patterns to turn it into new code.

Now it’s new code, it’s small, unit test it!

The “Extract Method” refactoring pattern is of particularly high value here. Essentially you have a fragment of code within a method that fulfils a particular purpose. Pull it out into its own short, well-named method and treat it as new.
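A before/after sketch of the move (invented names):

```python
# Before: a fragment with one clear purpose, buried in a long legacy method.
def generate_invoice(order):
    # ... legacy code ...
    total = 0
    for line in order.lines:
        total += line.quantity * line.unit_price
    # ... more legacy code using total ...

# After: the fragment becomes a short, well-named method that is now
# "new code" and can be unit tested in isolation.
def order_total(order):
    return sum(line.quantity * line.unit_price for line in order.lines)

def generate_invoice(order):
    # ... legacy code ...
    total = order_total(order)
    # ... more legacy code using total ...
```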

Avoid merciless or repeated refactoring.

This point came from Naresh Jain during his time working at Industrial Logic but fits here…

Refactoring is a discipline, don’t get carried away.

Understand what refactoring really means, study the patterns involved and use them with caution.

Wholesale rewrites are not refactoring.

I’d argue that all developers will work on legacy code in their careers so I strongly recommend a complete read of these…

There’s also a great cheat-sheet on code smells to refactorings here.

Negative User Stories and Email Subscriptions

Reading time ~ 3 minutes

When I first joined Red Gate nearly 4 years ago my first major project was working on a replacement system for managing marketing and transactional email for our customers.

Yes – email marketing – and boy did we suck!

Back in summer 2011 I kicked off some team research, analysis and story activities on one major enhancement we wanted to develop – “Subscription Management”.

This sounds pretty straightforward right?

Hell no!

This is a sensitive area involving a series of tensions between international compliance & standards, good end-user experience, the working lives of a marketing team and ultimately the reputation of the company.

Let’s put this into the context of a few major groups of users (this is just a subset)…

1: The “Customer”

I bought your software once, I pay my annual support & maintenance, I’m reasonably happy with what I have. Let me know when there’s a free upgrade or my renewal is due but don’t send me any more damn marketing or I’ll report you as a spammer.

2: The “Tourist”

I attended an event sponsored by you at some point in my life and since then I seem to sporadically receive mails from you or one of your affiliate companies. I’m interested in the community stuff and freebies so I might read some of the mails you send, but if they’re not directly relevant to my interests I want to get off whatever list/sub-list I’m on, fast.

3: The “Fan”

I bitched about your company when you acquired a free product and turned it into a paying one but I’ll concede you guys have done an awesome job and I already used a few of your other tools anyway.

I’m always on the lookout for things that make my life easier and I’m not averse to finding out about the new developments you have in the area I work in.

If you send out something interesting I’ll even share it.

These people are worth looking after & listening to but they’re a fickle bunch – use with caution.

4: The “WTF”

How the hell did I get on this list? I’ve never heard of you, stop sending me spam!

It turns out they attended an event or downloaded some freebie from an affiliate site, and their details found their way into the system or into the hands of a marketer who moved project, role or teams but kept their contact “because it was closely related”.

5: The “Prospect”

Yup, that’s right – you might actually be sending mails to real prospects!

What are they trying to achieve right now and what’s the most useful thing you could offer them?

If that’s not in the mails you’re sending, expect an unsubscribe. Staying in their inbox “to keep them aware” is probably not going to improve their opinion of your products.

6: The Marketer

I’ve been sending mails to this specific subset of the company-wide install base for years. I have my campaigns carefully planned with content, timings, massive target audience etc. I’d love to see the campaign convert into direct revenue but I’m more interested in getting it out of the door as one small part of my overall marketing strategy for the quarter.

This should be the easy bit, right?

What do you mean everyone unsubscribed because half a dozen other marketers hit them with irrelevant stuff over the last month as well?

 

Yup. That really used to happen here.

So the great thing is that with a little research, designing a great user experience around email subscriptions is surprisingly easy. There’s a basic set of users with competing and/or overlapping needs but, more importantly, when someone doesn’t want your marketing mails, that doesn’t mean they hate you. It’s not a break-up, it’s not the end of a beautiful relationship, it’s simply someone asserting their needs.

So many companies manage to screw this up because they focus only on the needs of the marketer.

Make opting out a positive experience and chances are if a contact has a reason to talk to you (or someone like you) again they’ll be in touch. If they don’t – well, there’s no point in them being a contact still, right?

Spend a bit of time thinking about the more negative interfaces your customers, users and prospects have with your company.

What small steps can you take to raise the bar on these?

2 Visual Management Tools to Help Evaluate Your Options

Reading time ~ 4 minutes

Back in December 2012 I moved to a new team – from what was DevOps to .NET (same 3 roles as before – Dev Manager, Project Manager, Head of Project Management).

As part of my ramp-up on the team, products, customers, market and current challenges, I was briefed by my new Product Manager and Division Head on the areas they’d like me to focus on for the next few months – and asked to deliver some draft plans for each of these.

I scratched around online to find a suitable A3 problem-solving template as a thinking tool and struggled to find anything that was quite right. The majority of A3 problem-solving tools are built around defects/quality and root-cause analysis.

There’s a lot of wisdom in performing an RCA for a problem but sometimes you just need to move forward (see my “stopping the line” article).  In my particular case the change of teams would not be confirmed until I could satisfy my new potential boss that I was capable of covering their needs. In order to prevent confusion and disruption, this meant “going it alone” on my first iteration of a plan without interviewing the team on their thoughts. That makes performing an RCA problematic.

Here’s the A3 template I used.

Click here for a link to the PPT version

This got me far enough to complete the review – a win!

After discussion, I moved on to deciding next steps – moving from vision and strategy to plans and actions – and found that whilst I had a set of options, my timing was bad. The following week was “Down Tools Week” for all our development teams. Nobody was freely available to hassle about current projects and products and I didn’t want to interrupt the amazing things everyone was doing.

Rather than waste a week, I moved on to priority number 2 – increasing the team’s customer understanding. This needed less input but was also less well-formed in terms of approach, options and actions. My original plan had suggested one possible option but during review it was felt I needed to provide and evaluate some alternatives.

I had the information to achieve this but hit a wall.

After half a day of thrashing with a blank slate I approached my new Division Head and asked for his help. I explained I was blocked. I had the right information but couldn’t move things forward. We spent 10 minutes discussing the problem with him essentially working in “Rubber Duck” mode.

After talking through the blockage we established a way forward. We discussed the (now much longer) list of options and an initial set of evaluation criteria for these. And I agreed to turn this into “some kind of evaluation matrix”.

As soon as I returned to my desk things fell into place…

We regularly borrow a few visual management techniques from Lean (we had a former Toyota UK guru spend a day a week here for a couple of years consulting with us on a wide variety of management topics).

One particular technique that has proven exceptionally effective is the visual representation of “Good/OK/Bad” using a colored circle, triangle and cross. When we were first introduced to this notation we were sceptical. Now, 3 years later we have a common visual form that all managers understand with patterns that can be seen at a glance and the approach is part of our routine reporting across all areas of the business. Even the monthly roll-up report for all projects I track across the entire development portfolio uses this same notation.

I took the list of options we’d been discussing and both positive & negative evaluation criteria, applied them to a matrix and used the (then new) notation to capture my thoughts.
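In miniature, the idea looks something like this (a made-up example, not the actual matrix – circle = good, triangle = OK, cross = bad):

    Option                        Cost   Risk   Insight gained
    Survey existing users          ●      ●         ▲
    Customer site visits           ✗      ▲         ●
    In-product usage reporting     ▲      ●         ●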

Here’s the outcome.

Without any reading at all, the 3 strongest options stood out. Better still, this technique didn’t need anything more than expertise and instinct. I’m free to follow up with further research and evidence if needed but at that point we were after a first-cut and a quick narrowing of options to focus our efforts.

There’s an important subtlety in this notation to be aware of. A yellow triangle means “OK”. It’s not a problem but we think we can do better. A green circle means “Good” – we mean really good – not just “fine”. This ties back to an article I wrote back in 2011.

When we report something as “good” on a project status, we expect supporting evidence. When something is “ok”, we ask “how can we help or improve this”.

What this evaluation is saying is that we have a couple of really strong options that would help us achieve our goal. They’re strong rather than just the best of what we have.

Next time you’re trying to figure out what to do, try these 2 visual management and thinking tools as a way of unpicking and reassembling your options into something simpler and more cohesive. If you’re lacking options that have enough “good” points, try some more alternatives – don’t just pick the best of a bad bunch.

(In the end I spent a year working with the .NET team and shared a sample of the results here.)

 

On Story Points and Distributions

Reading time ~ 5 minutes

If you’ve been reading my posts for a long while, you might remember this curve in relation to fixing bugs.

right-skewed beta distribution

Today I’m resurrecting it for other reasons. First of all I’ll admit I used to be a bit of an estimation geek. I loved the subject. (really!)

Last summer I attended my first ever #noEstimates session at Agile 2014. What I heard made complete sense to me, and there are a few teams here that have stopped estimating (they still measure and forecast throughput), but I still see value in trying to estimate. I’ve seen enough examples over the last decade where work is traded out because a team cannot estimate the effort involved.

I love Arlo Belshee’s concept of Naked Planning – in that if you remove all estimation information and focus purely on what the most valuable and important thing is, you’ll do the most valuable and important thing.

The trouble is, in many commercial product situations, it’s not actually that clear what the most valuable thing really is – and value generally has a trade-off of cost. (Or at least “how much am I willing to spend on this”).

I also like Chris Matts’ Options thinking alternative to estimates – “How much are you willing to spend to get a better understanding of this?”. This ties in with a fundamental part of estimation theory: there is a trade-off (and decreasing value) in spending additional time understanding a problem in order to better estimate the outcome.

Finally, whilst many teams I’ve worked with have used story point estimates for stories and hour estimates for tasks, we don’t go in blind. We have triangulation and reference points from prior work wherever possible.

I used to run half-day training courses on software estimation (and still see that course as valuable – primarily because the theory is transferrable). These days, if I have a team that has never had a real conversation about what estimates are, why we do them and how, I trim that half-day of content down to about 15-30 minutes on the most important bits for story and task estimation.

I’ve been using the same explanations on estimation for about 7 years now and although some of my assertions on why we estimate are finally wavering, the information on how is still useful – and – as far as I know, nobody else has explained it this way.

Back in 2011 I wrote an article on Swimlane Sizing that was subsequently referenced and made popular by Alexey Krivitsky in his Scrum Simulation With Lego Bricks paper.

What I want to share today is how all this hangs together based on a very simple concept from the 1950’s.

The PERT technique came from the Polaris missile project and, in my very simple terms, is essentially a collection of tools and techniques based around probability distributions.

When examining the probability of completing any task there’s a great rule of thumb to start with:

There’s a limit to how well things can go but no limit to how bad they could get!

With this thinking in mind, essentially the completion time and/or effort for a given task can be represented by a probability distribution. A right-skewed beta distribution.

Furthermore, when adding a series of these together, you have a collection of averages – individual over- and under-runs start to cancel out. (Cue a #noEstimates discussion.)

If you ask someone to estimate how long a bug will take to fix without historic data, you’ll get a “how long is a piece of string” type answer. Everyone remembers the big statistical outliers, but if you strip these away you can forecast quite accurately with about 95% confidence. (This is one of the foundations behind using data for service level agreements in Kanban.)

Some items will take longer than average, some will be faster but based on those averages, you can get a “good enough” idea on durations.

With PERT, this beta distribution is simplified further to 3 points – “optimistic”, “most likely” and “pessimistic”. The math is simple but at least for today’s point, not important.
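For the curious, the textbook PERT arithmetic (standard formulas, not specific to this post’s argument) is a one-liner:

```python
# Classic PERT three-point approximation: weight the most likely case,
# and take a sixth of the optimistic-to-pessimistic span as the spread.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# "Optimistically a week, most likely 2, pessimistically 4":
print(pert_estimate(1, 2, 4))   # ~2.17 weeks expected, spread of 0.5
```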

Here’s the bits I care about.

PERT summary

Now here’s a neat thing (and the point of this post).

With the introduction of story points, we’ve moved away from the amazing power of  3-point range estimation back to single points.

Once you’re in the realm of single-point estimates, people start seeing a falsely implied precision. Unless you’ve actually had training on the use of story points (many senior managers probably won’t have) you’ll start building all the same human inferences that used to occur with estimates like “It’ll take about 2 weeks”.

When we hear “2 weeks”, we leap to a precise assumption and start making commitments based on that. In range estimation we’d say “It’ll take 1 to 4 weeks”, and in PERT we’d say, “Optimistically it’ll take a week, most likely 2 and pessimistically 4. Therefore we’ll plan on 2 but offer a contingency of up to a further 2 weeks.”

(Of course in reality, wishful hearing means you might still end up with a 1 or 2 week commitment but hopefully the theory is making sense).

It’s also worth examining the size of the gaps between optimistic -> most likely and most-likely -> pessimistic (in particular, the latter of these 2). These offer a powerful window into the relative levels of risk and uncertainty.

Moving back to single-point estimates – at least at the individual story level – starts feeling a lot like precise and accurate commitments**. Our knee-jerk reaction may then be to avoid providing estimates again.

But here’s the missing link…

Every story point estimate is in fact a REPRESENTATION of a range estimate!

We can take this thinking even further…

The greater a story point estimate, the less precise it is.

Obvious right? Let’s take one more step…

 If a story point estimate is a representation of a range then a larger number implies a WIDER range.

Let’s take that back to the swimlane sizing diagram and (crudely) overlay some beta distributions…

swimlane-distributions

Look carefully at how those ranges fall.

  • Some 2 point stories take as long as a 3 point story.
  • Some 5 point stories may be the size of 3 point stories; in rare cases, others may end up being 13 or even 20 points.
  • And some 13 point stories may go off the scale.

And that’s all entirely acceptable!

On average a 5-point story will take X amount of effort.

That’s enough to forecast and that’s enough to start building commitments and service level agreements around (should you need to).
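If you want to see why averages are enough, here’s a toy simulation (the beta parameters and effort ranges are invented, not real team data):

```python
# Toy model: each story point size maps to a right-skewed range of actual
# effort. Individual stories vary wildly and the ranges overlap, but the
# per-size averages stay stable enough to forecast and build SLAs on.
import random

random.seed(1)

def actual_effort(points):
    # betavariate(2, 5) is right-skewed: a hard floor, a long tail.
    low, high = points * 0.5, points * 4     # wider range for bigger estimates
    return low + random.betavariate(2, 5) * (high - low)

for points in (2, 3, 5, 8, 13):
    samples = sorted(actual_effort(points) for _ in range(10_000))
    mean = sum(samples) / len(samples)
    p95 = samples[int(0.95 * len(samples))]  # 95% of stories finish within this
    print(f"{points:>2} points: average effort {mean:5.1f}, 95% within {p95:5.1f}")
```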

So…

Time to start thinking about what distributions your story point estimates are.

If you’re getting wild variation, how might you capture some useful (but lightweight) data to help you, your team and your management understand and improve?

You might be in a fortunate place where you simply don’t need to produce estimates at all. That’s great. I’d assert there’s a lot to learn by simply trying to estimate, even if you don’t use the results for anything. But for the majority of us who need to do at least some sane forecasting, this thinking might just make estimation a bit safer and a bit more scientific again.

If you’re interested in more on this, take a look at the human side of estimation in “Seeing the Value in Task Estimates”.

**As an aside, a while ago the Scrum Guide was updated to replace “commitment” with “forecast”. That’s a big change, and tricky to retrain into those that saw Scrum as the answer to their predictability problems. (Many managers needed guarantees!) For those of you facing continuing problems here, it’s worth gathering and reviewing story data so you can build service level agreements with known levels of confidence as an alternative.

We Create The Politics Ourselves

Reading time ~ 4 minutes

Back in June 2013 I was asked to facilitate a discussion exploring one of Red Gate’s company values – No Politics. This was the second in a series of open sessions that the company had initiated to preserve and highlight what they have valued for many years and to get to the bottom of what these values mean, their relevance and impact on individuals and teams around the business. The expanded wording for our “No Politics” value is:

No gossiping, no intrigue, no pussy-footing around problems and no telling people what you think they want to hear whilst privately disagreeing. We will be transparent in our dealings.

Many of our staff – myself included – were starting to feel that we were no longer being true to this value.

After an hour’s discussion between about 40 of us, including concrete examples of where many of us felt we’d seen or been involved in “politics”, we established:

Politics are created when “your needs” conflict in some way with “my needs” and where both “you” and “I” fail to openly communicate.

Taking this further – all the examples of politics we’d seen boiled down to a combination of failures in communication of motive and intent and failure to share conflicting needs. Ironically, we’re usually all trying to achieve the same ultimate goal.

Our encouraged altruistic culture tended to exacerbate the situation further and cause us to wade in when we felt a decision had insufficiently involved those impacted – even when the person taking exception felt the decision was right. (You should have seen the angst when a rapid decision was taken “on high” to move to GitHub for all product development.)

Our values and culture have encouraged this “challenge everything” behaviour for years!

Being on the receiving end of a change or decision we don’t fully understand, don’t agree with or had no involvement in makes us feel bad. We internalize, build that frustration up and start sharing it in passing conversations, at coffee and in corridors – because we’re entitled to share our opinions. But we’re all pretty gentle and fluffy here. We’re a software company; many of us are shy and conflict-averse. This means we share our worries in private and let them radiate out. (This also means any vocal or forceful minority have much stronger voices.)

We create the politics ourselves

Since discussing and recognising this fact I started making a concerted effort to tackle issues head-on again and had a deeply humbling moment when a colleague pulled the “grown up” card on me for my own bad behaviour.

It’s amazing how the weight comes off when you realise that everyone is trying to do the right thing within their own context and needs, and has often simply not recognised where this butts up against our own (and that we have failed to empathize with them!). Gently calling out those disconnects and addressing them has defused even some of the thorniest conflicts I’ve faced.

Oddly, I hadn’t recognised a link until a serendipitous moment yesterday but my friend Clarke talked me through some aspects of this same challenge a few years ago. He’s since written up his thoughts in detail here.

I also see this same expression and sharing of uncommunicated needs as a cornerstone of non-violent communication.

If you’re still reading, as some background here’s the wording of the full set of values as included in the “Book of Red Gate”  (an earlier 2010 Edition is also available) – we’re reviewing whether these are still the right set and the right words even now but they do capture a lot about working here.

  • You will be reasonable with us. We will be reasonable with you

We’re all trying to treat each other as we would like to be treated in the same circumstances. Sometimes the circumstances are difficult, but we will all still be reasonable.

  • Attempt to do the best work of your life

We’d like you to achieve your own greatness and to be all that you can be. We’ll try hard to allow that to happen and we’d like you to try hard too.

  • Motivation isn’t about carrots and sticks

Constant oversight and the threat of punishment are incompatible with great, fulfilling work. We believe in creating appropriate constraints and then giving people the freedom to excel.

  • Our best work is done in teams 

We work in groups and towards a common goal. The company is more important than the team, and the team is more important than the individual.

  • Don’t be an asshole

No matter how smart you are, or how good you are at narrowly defined tasks, there is no room for you here if you’re an asshole.

  • Get the right stuff done

We admire people who get stuff done. While there’s a place for planning, thinking and process, it is better to try – and try well – and fail than not to try at all.

  • Visible mistakes are a sign that we are a healthy organization

What we do is very difficult, the current situation is hard to understand and the future is uncertain. Mistakes are an inevitable consequence of attempting to get the right stuff done. Unless we can make mistakes visible both individually and collectively we will be doomed to mediocrity.

  • No politics

No gossiping, no intrigue, no pussy-footing around problems and no telling people what you think they want to hear whilst privately disagreeing. We will be transparent in our dealings.

  • Do the right things for our customers

We believe that if we do what is right for our customers then we will thrive.

  • Profits are only a way of keeping score, not the game itself

Focusing purely on the numbers is a sure way to kill Red Gate’s culture. We believe that if we focus on the game – building awesome products that people want to buy, and then persuading them to buy them – success will follow.

  • We will succeed if we build wonderful, useful products

Shipping something amazing is better than creating something average, to budget and on time. We cannot market, sell, manage or account our way to success.

  • We base our decisions on the available evidence

Not on people’s opinions, the volume of their voices or who they are. When the evidence changes, we are prepared to change our minds. We will thank, and never shoot, the messenger.

  • We count contribution, not hours

What you achieve is more important than how long it takes.

200 Questions to help bridge the Product Owner divide

Reading time ~ 5 minutes

A year or two before I first started writing publicly, I read an interesting, varied and quite long essay by Jeff Patton on the Product Owner role in Agile development (particularly Scrum):
http://www.agileproductdesign.com/blog/2009/product_owner_and_problem_shaped_hole.html

Whilst there are some areas I agreed with, the discussion on “heroics” was an area that left me cold. In my opinion, great software teams don’t have space for individual heroics and (in my opinion again) the sustainable pace ethos generally also plays down team heroics (but that’s a debate for another time).

There was a comment in Jeff’s article that resonated with me that I made a note of at the time. Years later it’s still highly relevant and I wanted to re-raise it this week in the context of recent challenges I’ve been facing.

“For the product you’re working on right now, how will your company benefit after the product has shipped? If you understand how, do others?”

Although this question was aimed at Product Owners it’s something I’d ask all development team members to really think hard about.

Now go further. Are you solving technical problems and a series of requirements to ship a product release or do you really understand the value of the release you’re working on, its reason for being developed and its impact on specific customers, the market or the company?

How many times have you or your team examined the reason for delivering a release with a specific set of features, understood their aim, questioned the backlog and vision and – better still – properly understood what you’re trying to deliver in terms of solving the needs of the user?

When I say “understood”, I mean really understood – not just prettied up to follow one or another user story format.

Consider this an opportunity to pause for thought.

Back in 2013 I attended a Retrospective Facilitators Gathering and was introduced to an amazing concept over lunch by Willem Larsen that he described as “200 Questions”. He explained an idea that comes from animal tracking.

When tracking an animal you explore the environment in a way that an everyday person doesn’t even consider. You look at every twig, leaf and blade of grass and ask how it came to be in its given state (and why) in order to determine if you’re on the right path.

My translation of this to my own work is to try and develop a genuine and deep curiosity for every item you look at. (Even if you don’t have time to answer all your questions)

At the same lunch, we tried it with a salt cellar…

How many people have handled it since it was made?

Who designed it?

What inspired the design?

What’s it made of?

Where did the salt come from?

What process did the salt go through?

(we lost count of how many questions we asked)

Applying this same healthy curiosity to incoming product features or requests is incredibly powerful.

You know things are wrong when every request coming in is essentially a specific solution being proposed.

And you know how translating feature requests back to user stories in anything except a shopping cart or finance trading application seems to be really poorly explained in the agile community?

Try this as an experiment for getting unstuck…

  • Set yourself a target of asking 200 questions about the next big item coming in to your team.
  • Try different “lenses” for asking the questions (much like Six Thinking Hats)
    • The “User” lens
    • The “Investment” lens
    • The “Functional” lens
    • The “Market” lens
  • What questions logically follow from those first “level 0” questions?
  • Decide which of those are important and valuable to actually answer in order to further your understanding towards your goal.
  • Work on answering them – collaboratively.

In no particular order (but with some logical grouping around lenses) and in the very rough form I’ve been actively using in my current team, here’s over 50 of the “level 0” questions I’ve started experimenting with.

Feature Info

  • Name – what do we call it?
  • What is it? (2-3 sentence user-facing description *and* 2-3 sentence product description)

User info

  • Describe how a user would do this now? (without new code)
  • What types of users does this help?
  • How does it help them? (what does it enable, unblock, simplify etc.)
  • What is a user trying to actually achieve when they do this?
  • Why?
  • What constraints are they facing at that moment?
  • Who can we talk to about their goals and needs for this? (a selection of people willing to talk to us more)
  • How complex/accessible (to the user) is it?
  • How many/what proportion of users will benefit from this?
  • How frequently might we expect different types of users to use this?
  • How might this detract from the product for other users?

Investment/Product/goal alignment

  • How does this improve the product?
  • How big/expensive is it?
  • How much time do we want to invest in initial investigation?
  • If an initial investigation is inconclusive, what next?
  • How much are we willing to invest in this? (and how flexible is that decision)
  • Is there anything obviously more important/valuable/beneficial to do before this?
  • Would we be fools not to do this in the next 90 days? – why/why not?
  • If this were the only thing we did in the next 30-90 days, is this the right thing to do?
  • Why is this valuable to the team/company?
  • How does it align to current product/company goals?
  • How does it further our journey toward those goals?
  • Would you describe this as a tactical or strategic change?
  • Why?
  • Does this really fit within the product we have already?
  • (how) will it help differentiate the product from the competition?
  • Could this be a new product?
  • Are there any other products it may integrate with or impact? (ours or elsewhere)
  • Does this align more closely to another product than this one?
  • Do any of our competitors already do this? (and how)

Functional info

  • What solution options exist already?
  • Can you give an example of how that would work?
  • Under what situations might that not be suitable/successful?
  • What additional solution options can we think of?
  • What’s the smallest simplest thing that could possibly work?
  • What’s “good enough” to meet customer needs?
  • Is there an 80/20 solution option for this?
  • How can we make this more discoverable, useful and valuable to users?
  • Is there an ingeniously simple solution for users?
  • What would it take to make a user say “wow!” (and is it worth the extra effort?)
  • What is the best solution in terms of cost/functionality/usability/quality balance?
  • What performance and quality criteria do we have for this?
  • What is explicitly out of scope?
  • Are there any cross-cutting or performance risks?
  • Are there any other significant risks to be investigated?
  • What dependencies exist?
  • If we ship an incomplete solution, what is the follow-on plan and forecast cost for completion?

Market

  • What is the market demand for this?
  • Is there any market or customer sensitivity around timing for this?
  • What marketing is planned or expected for this feature (if any)?
  • How do we expect our competitors to respond to this?
  • For what we’re planning to deliver, what do we expect from our users on social media?
  • How will we launch this feature (alpha/beta/freq update/big bang etc)?
  • How will this impact our position in the market relative to our competitors?
  • What impact are we expecting on sales/renewals by announcing / releasing this feature?
  • What impact are we expecting on our reputation by announcing / releasing this feature?
  • How will we sell / price this?

It’s a lot like writing 10 minute test plans – so quick and so powerful.

Give 200 questions a try on your next big feature request.

Take It Off-Site (Part 1) – The Bad and The Ugly

Reading time ~ 3 minutes

Offsite meetings have a checkered past. Back in 2006 a colleague shared this painful article with me.  To quote from the main article:

“…only about 10 percent of executives consider offsites truly valuable. Half said they aren’t worth the time or money.”

Over the last 7 years I’ve seen the exceptionally high value offsite meetings can bring but also the stress and pain that often leads up to them.

At my last employer, the entire global executive and senior management team would have a 6-monthly operations review offsite that ran for 3 days. We booked out significant portions of a large hotel in the US and flew the entire team in from over a dozen sites around the world. Over 3 days, the leaders from each site would present their statuses, finances, roadmaps, portfolios, visions, strategies and plans to a group of about 50 of us. We’d put them under the grill and review their work in depth, both as bosses and peers.

We’d take time to focus on specific strategic issues that benefitted from face-to-face focused collaboration. (When your management team is distributed around the world, a few intense days of co-location often delivers far more decisions and clarifications than 6 months of project work and conference calls.)

The general approach for these was: Prepare, Pitch, Review, Collaborate, Revise, Re-pitch.

Sounds pretty sensible? It is… if you know what you’re in for.

The offsite workouts were usually brutal but exceptionally valuable. By the end of the week everyone’s plans were in far better shape: we all understood what each other was doing, found areas for shared benefit and made a lot of difficult decisions.

Better still, we spent a few days working, eating and drinking together. 3 days of deliberate convergence, to stem the behavioural issues associated with extreme divergence, kept us functioning as a successful management team.

Whilst the outcomes were great, they took their toll. The pain behind the scenes generally involved a bunch of us working 2 weeks of long evenings and weekends to pull all the data, presentations and spreadsheets together ($100m of software projects contains a lot of moving parts & data). Often we’d still be revising our work as sessions started, and whoever went first bore the brunt of painful lessons and questions whilst everyone else frantically updated their slide decks to accommodate the new knowledge.

And here’s the thing – the lines of questioning…

Our Senior Exec at that employer was an exceptionally sharp, experienced, bright guy. After working through multiple iterations and having joined the operations team on the organizing side of the offsites, I saw what made him tick. He had a brain full of powerful questions that he’d learned from experience and was constantly adding to. He knew what to ask, and when, to lift the lid on just the right barrel to find a body (if there were bodies to be found).

As a spectator, sometimes it’s satisfying seeing the blood on the floor when someone you think is an ass-kisser takes a roasting. It’s not so fun when it’s your own team.

I even helped develop a set of “difficult questions” for our Head of Operations over the weekend prior to one of these offsites. They were great questions to ask. The thought and effort put into developing them was a lot like spending an hour writing test cases up-front. We learned quickly what we really needed and wanted to know (and why). We were confident we could get under the covers of even the most prettied-up project report.

But we could have made the whole thing so much less brutal.

Much like writing test plans, if we’d just put the questions out for participants to review in advance they’d have adjusted their research and reporting to accommodate them rather than playing project question battleship on the day.

So here’s an action for you.

Next time you’re reviewing a project or a piece of work, capture the questions you regularly ask.

Now…

Share them

Share them with the people you’ll be asking beforehand.

Now they can answer those questions sensibly first and you’ll all have time to dig into more valuable insights.

I have a few more posts planned around “critical questioning”, plenty around running offsites and I’ll be sharing where I’ve been for most of the last year so keep your eyes open for more soon.

 

Still Alive!

Reading time ~ 3 minutes

First of all, I’m flat-out at work right now on recruitment and running a challenging project so here’s a plug for some jobs…

We’re looking for 3 amazing Agile PMs here at Red Gate. If you think you’ve got what we need and fancy working with me, please dive in and apply.

[edit] – actually we’re hiring for more than a dozen different roles you might expect in a software company! – Take a look at the rest too.

Second. I’m currently supporting part of the SQL Source Control development team here on a rather challenging technology refresh project. We’re planning to ship an Alpha soon to validate our technology decisions and then press ahead with completion and new features. I can’t say much more about this at the moment but despite being relatively small by most company standards, it’s one of the trickiest projects I’ve ever inherited. Once we’re successful, I hope to write more.

Finally – back to the title…

Forgive the Portal reference but for the last few months I’ve been using my spare time coding a game.

If you’re of a certain age (like me), you may fondly remember hours of bashing away trying to solve puzzles and guess verbs in black and white (or green screen) dungeon/text adventure games.

Minimum viable text adventure game screen

I grew up coding on a BBC micro and developed an early love for interactive fiction. It’s now a pretty niche area with some amazing games.

Whenever I want to learn something new, I need a goal – something concrete. So I decided to write my own but using slightly more modern technology. If you’re interested in why, see my last post.

Progress has gone really well and I think the results so far have some promise. There are a few unique aspects of the game in comparison to the more old-fashioned equivalents. Essentially, the game’s characters and scoring are somewhat more dynamic and based much more on your actions as a player. I won’t spoil it, but if you have the patience, give it a shot at: http://mvta.herokuapp.com/

Whilst not “finished”, it is live and that’s an achievement in itself.

I reconnected my developer empathy circuit, thoroughly enjoyed going back to coding and managed to learn some new skills on the way.

A few nerdy highlights…

  • I’m really impressed with Heroku as a hosting platform. I managed to go from zero knowledge to live and running in under 2 hours – including restructuring the codebase and packaging in order to support Heroku deployment.
  • I had to get to grips with NoSQL much sooner than expected. The original saved game data was filesystem-based but Heroku offers no filesystem support, so games are now saved in a hosted Redis data store (see the sketch after this list). This also gave me the benefit of painfully relearning the use of binary data streams and debugging third-party libraries (developing on Windows vs production on Linux). It turned out that by default, the nodeJS Redis library is somewhat buggy on Windows as it drops back to a JavaScript parser. On Linux it uses a C library.
  • I’ve come to love GitHub for source control and got to grips with a modern IDE (Visual Studio), having last used Eclipse back in around 2007!
  • I’ve learned to value real user testing and usage reporting. A large part of the game usability came from observing the attempted and failed actions of users playing the game.
  • I’ve really boosted my nodeJS and javascript skills although the JS itself is still largely poor OO rather than dynamic.
  • I still prefer writing business logic over coding the “plumbing” but did occasionally have to drop back to some old-school computer-sciency work to achieve what I wanted.
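For the curious, here’s roughly the shape of that filesystem-to-Redis move. The game itself is nodeJS; this sketch uses Python and redis-py purely for brevity, and the key layout and game state are invented. Storing JSON strings sidesteps the binary-stream pitfalls mentioned above.

```python
import json
import redis

# decode_responses=True gives us plain strings back rather than bytes.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_game(player_id, game_state):
    # Serialize the whole game state as one JSON string per player.
    r.set(f"savedgame:{player_id}", json.dumps(game_state))

def load_game(player_id):
    raw = r.get(f"savedgame:{player_id}")
    return json.loads(raw) if raw else None

save_game("simon", {"location": "cellar", "score": 42, "inventory": ["lantern"]})
print(load_game("simon"))
```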

Most importantly, whilst it’s a well-tested and working game and engine, I built a nasty legacy codebase that forces me to resort to writing tests and reworking/refactoring whenever I want to add something new, and to face the pain of missing tests and regressions as I go.

I’ll continue updating the content over time but please dive in, have a play and waste a few hours.

Oh… and post comments here if you need “guess the verb” type hints or clues!

Cheers for reading!

Simon

 

Back to the Technical Stuff (or “Why so Quiet Cap’n?”)

Reading time ~ 5 minutes

5 months since my last post. What’s been happening?

Lots!

A recent entertaining high point was this photo of Alberto Brandolini‘s (@ziobrando on twitter) lightning talk at XP2014 going viral.

Alberto Brandolini at XP2014 (Rome)

It’s still doing the rounds on twitter 2 weeks later, so if you’re here because of “that photo”, thanks for stopping by and don’t forget to congratulate Alberto! Sadly most of the content on here – whilst hopefully interesting and insightful – probably won’t have the same mass-appeal.

Back in the day-job, I have 3 irons in the fire at the moment.

  • Build & cycle time
  • Agile staff development
  • Rebuilding my “developer empathy”

Build & Cycle Time

I’m working with two of the function heads at Red Gate on a project to improve our build and test cycle times. Chris and Jeff are doing all the hard work directly with the teams, I’m just offering support, guidance and occasional prods.

In the last 6 weeks they’ve managed to reduce the processing on our build servers for one of our major products from 56 total hours (requiring a total test run duration of nearly 15 hours) to 10 hours of processing and ~30 minutes total run time, with the build itself down from 40 minutes to 6.

At the same time they managed to get all the tests back to green and fix the hidden bugs the failures were masking (just 3, luckily). Somewhat embarrassingly, this was a first in over 5 years. This has had an amazing positive effect on team morale, and Chris’ visualization of status has really upped the game for all teams in the division.

Chris’ LED build lights

The team’s best run so far without a broken build or failing tests was a week, but this time when things failed they were fixed and re-tested in under an hour, not left for the next “overnight”.

That’s a major shift in behavior for us and means we can now be much more responsive in turning around customer issues too. The knock-on impact of this on all other projects is probably even greater.

Other teams are benefitting from a build queue that actually has free slots during the day (previously unheard of) and overnight runs that are significantly quicker due to the reduced load on infrastructure.

This is all part of a major push by our team to crank up the effort on our technical issues here. We’ve got agile project processes running to a level where we’re happy and continuous improvement is culturally part of every PM’s daily life, but we’d not focused as much on the technical side, XP practices and development culture. Until recently our teams had done “pretty well” with brute force and intellect, but many hadn’t seen what “really good” looks like for a while. We’re fixing that 🙂

Agile Staff Development

This is where most of my time’s been going. I’ve been working closely with Brian, our head of Tech Comms to roll out training on our new “personal development planning” workshop and tools to all our line managers in development. This is the culmination of the work I wrote about back in September last year following a successful pilot.

We’ve overhauled the skills part of this area as a team with some great design assistance from our head of UX (Marine) resulting in a skills map for each of our main development roles.

Project Manager Skills Map

We’ve developed a half-day training course where participants explore the materials and gain a deep understanding of the concepts, goals and challenges and then actually do the personal development workshop itself as deliberate practice – with one participant leading and one participating – before unleashing their new skills back with their teams. We’re documenting much of our work at http://dev.red-gate.com/skills/  and have made our internal skills maps available under an open source license for others to adapt the concepts for their own use.

We’re now at the point where all development line managers have been trained and about half of our development staff have development plans (and are actively tracking and maintaining them). We’re now looking at how to make this scalable and sustainable, and how to expand the next phase of the project to cover non-development roles.

There’s been a real buzz about the skills maps in particular outside Red Gate so we’re hoping to take these concepts out to a couple of conferences toward the end of the year.

Rebuilding my “Developer Empathy”

Related to our technical push around the build and cycle times, I found that after too many years away from the technical coalface I was becoming somewhat jaded towards what I perceived as a lack of technical discipline from some of our developers. I needed a mental reset. Starting out with the premise that we hire great, motivated people who care, I needed to understand what it was that would make someone with the best of intentions fail to write and maintain unit tests, develop poor quality architecture and allow a codebase to fall into disrepair.

You see when you look at it like that, something else must be amiss.

Rather than simply looking, I’ve decided to do it myself. So in the last few months, in early mornings, lunchtimes and evenings, I’ve taken up (simple) coding again. I’ve taken a domain I know well (text adventures), a project I knew I’d enjoy (writing a new game), and started deliberately building up a legacy codebase. Whilst trying to use at least some basic object concepts (in JavaScript), I’ve not used SOLID principles or composition (in my former Java life I generally avoided inheritance in favor of interfaces/composition), I haven’t written tests first and I’ve written some quite complex methods without tests.

3 months later I have a rather neat game and engine, and a nasty codebase: 150 tests (all passing) where I should realistically have more than 1,500, and, amongst the full set of classes, a small number of 1,500+ line “god classes”. I found myself repeatedly “play testing” new features rather than writing tests, blurring the boundaries of responsibility across objects and occasionally rushing work so that I could “get home” or switch back to my day-job.

I’ll write about this in much more detail in future (and will try to find somewhere to host the game) but in summary, I’ve reconnected my empathy circuit. There are many reasons why we don’t work at our best on legacy codebases. I don’t always know how we get there in the first place but I can see why we fail to dig ourselves out.

In short, there’s just “too much to do”. And so… Back to the big rocks and the oubliette.

Suboptimising Around Data

Reading time ~ 3 minutes

I originally wrote this article back in about 2009/2010 whilst working at a large corporation with a very strong measurement culture.

just some data

As more teams and companies adopt lean concepts, and with the strong influence of the Lean Startup movement (which, to reduce confusion, is not the same as lean), this post finally feels relevant to publish publicly…

“Data is of course important in manufacturing, but I place the greatest emphasis on facts.” – Taiichi Ohno

There’s a great lean principle known as “Genchi Genbutsu” – “actual place, actual thing”. Generally we interpret this as: “Go and see at the source”.

When this critical pillar of lean thinking is eroded by a proxy we open ourselves to some painful problems.

Where organizations become too measurement focused, we risk our proxy for Genchi Genbutsu becoming data.

Lean and agile processes both rely on data but these are indicators only. Particularly in agile, there is a strong emphasis on data being used by the team for internal diagnostics – in fact very little agile material talks about data support for external management and there are good reasons for this.

Even where managers are fully aware that data are not the whole story, external or imposed measurement drives strange behaviors.

Talk to any individual or team that is measured on something that managers at one time or another thought was “reasonable” and chances are there will be a range of emotions, from bemusement or cynicism to fear. All these responses will drive negative alterations in behavior. Occasionally there are good measures, but they’re pretty rare.

You’ll find individuals and teams that are working in the “expected” way will be absolutely fine. But their behavior will now be constrained by the metrics and their capacity to improve will be limited.

Those that are aware of their constraints (and are often limited in what they can actually influence to solve their problems) will at best sub-optimize around their own goals and at worst “game” the system in order to preserve themselves. This is a natural self-preservation response.

I’ve seen the most extreme example of this using game theory in simulated product management (in a session run by Rich Mironov back in 2009).

A team of 4 product managers were each given a personal $1M sales target, a set of delivery resources and a product. Performance against target was measured on a quarterly basis. In the example, the game was deliberately rigged: it was impossible for all product managers to meet their personal targets with the limited resources they were given. However, it was possible to achieve well above the total $4M target if product managers collaborated and, in some cases, were willing to sacrifice their own products, releases and resources in order to fund cycles on better-performing products.

Data may also only tell you how tools are being used. If a team is constantly inspecting and adapting, I would expect their tool usage to change. It may then not reflect expectations or worse, they may not be able to adapt for fear of damaging “the numbers”.

Here’s a great example of this from (of all places) a hair Salon… http://lssacademy.com/2007/08/05/bad-metrics-in-a-hair-salon/

For a closer to home example try this experiment…

If you’re not already doing so, start measuring story cycle time (from commencement of work to acceptance and ideally delivery).
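A sketch of the bookkeeping involved, with invented field names – one record per story, cycle time measured from commencement to acceptance:

```python
from datetime import date
from statistics import mean

# One record per story; the dates are illustrative.
stories = [
    {"id": "S-1", "started": date(2014, 3, 3),  "accepted": date(2014, 3, 7)},
    {"id": "S-2", "started": date(2014, 3, 4),  "accepted": date(2014, 3, 18)},
    {"id": "S-3", "started": date(2014, 3, 10), "accepted": date(2014, 3, 13)},
]

cycle_times = sorted((s["accepted"] - s["started"]).days for s in stories)
print("average cycle time:", mean(cycle_times), "days")
print("longest so far:", cycle_times[-1], "days")
```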

Now try the following:

  1. Measure cycle times without giving feedback to the team. What are you observing in the data? What can you infer from this? What can’t you see?
  2. Continue measuring but start reporting the numbers to the team and discussing observations. Ask the team what they’re observing, what they can infer and what can’t they see. What would they change?
  3. Ask your teams if they’re willing to report their data to “management”. What responses do you get?
  4. If the teams are willing. Start reporting the data to management. Ask them what they’re observing and what they can infer? What Can’t they see? What would they change?
  5. Consider the level of trust in your organization. How would the experiment above change behavior if trust was significantly higher or lower than its current level?

Food for thought.