justin.searls.co

Rushing to Forget Clean Code

Ron Jeffries just posted a terrific case for clean code, and decoupled a recently-emerged “code can be too clean” meme from a question that has actual merit, “can we spend too much time making code clean?”

While discussing the post with Kevin Baribeau this evening, we noticed an anecdotal correlation between folks who’ve said things akin to “code can be too clean” and folks who tend to succumb to the pressure to rush development of features.

I point this out not because this relationship would be surprising (it wouldn’t be), but because it might be causal. That is to say, antipathy toward time spent on clean code could be yet another dastardly consequence of rushing.

I seem to have observed the following downward spiral in the past. Given a developer who succumbs to the pressure to rush a feature, or to otherwise produce X widgets by a deadline that stretches beyond their likely capacity:

  1. At work, there seems to be “no time” to either pause and write clean code or go back and clean up existing code.
  2. Due to stress or overtime, there is no energy or drive left over to practice clean code independently (e.g. doing katas, side projects).
  3. As they fall out of practice, the individual’s ability to synthesize clean code weakens.
  4. Eventually, the individual will acclimate to the pains of dirty code, and the only positive reinforcement they ever experience will be the (mild) success of delivering working dirty code.
  5. Finally, the value of producing and maintaining clean code will be forgotten and a retort like “code can be too clean” might start to seem rational—particularly when viewed from the perspective of someone who began by overcommitting and for whom new features will be taking ever more time to write.

Kevin pointed out that for this reason you want your team to stabilize at delivering around 80% of their potential capacity, both to discourage the introduction of feedback loops like this one and to explicitly invest in the ability to rapidly respond to future changes. I tend to agree. One small step your team could take toward this end would be to commit to only picking up stabilization cards (i.e. bugs or technical debt) in the day or two leading up to the close of an iteration.

This all seems so obvious that I question whether it’s worth posting here. So in case you’re in the choir and I’m merely preaching, I’ll end by recommending Andy Hunt’s terrific Pragmatic Thinking and Learning in case you haven’t read it, as it contains an excellent treatment of a related observation: pressure kills cognition.