The Power of Prompts
This post is partly in response to this tweet, and partly a follow-up to a teaser I tweeted earlier this week.
We humans are suckers for suggestion. If you need evidence of this, consider something as seemingly innocuous as the order in which we ask people questions.
If I were to ask you:
1. Does the following web site load properly in your browser? Make sure all the pictures load: cuteoverload.com
2. Do you support the euthanasia of stray bunnies brought to animal shelters?
Mere intuition confirms that prompting respondents to peruse photos of adorable bunnies would influence the results of a question about the appropriate fate of stray bunnies. (I’m convinced that pollsters regularly do this intentionally, using seemingly fair questions to achieve the results their clients want.)
In the field of software development, we have ample opportunity to prompt others’ behavior as well as our own. In this post, I’d like to briefly discuss a few of those opportunities. I’ll start with some negative consequences of inadvertent prompting, then pivot to some positive consequences of intentional, thoughtful prompting.
Anchoring Estimates
One well-established type of prompting on agile development teams is that of estimate anchoring—that is, one’s knowledge of what others have estimated as the size of a particular story will almost certainly affect one’s own estimate.
When anchoring occurs, an estimate that equally blends the perspectives of each team member is no longer attainable. What you get instead is a much less valuable consensus reaction to the first (or loudest) team member’s perspective.
Pressuring Knowledge Workers
I once saw a team’s Product Owner use the morning standup as an opportunity to actively manage the team’s developers. Each developer reporting anything short of unfettered success was interrupted and subsequently peppered with questions like, “Your story is easy right?”, “It’ll be done by 3 PM for the demo to the executive council, right?”, and “What’s taking you so long?”
That is to say, each developer’s morning started with a prompt of, “You’re not going fast enough! What’s wrong with you?”
Needless to say, this prompt did not serve the product owner’s goal (which, and I’m being really generous here, was to accelerate the product’s development).
Instead, the team responded as anyone under pressure would: their brains turned off. From Andy Hunt’s absolutely incredible Pragmatic Thinking & Learning:
But when the mind is pressured, it actively begins shutting things down. Your vision narrows—literally as well as figuratively. You no longer consider options.
Corners got cut. Collaboration ceased. Creative and counter-intuitive solutions were no longer visible, so they couldn’t be considered. Instead, the team produced increasingly procedural, poorly-factored, and untested spaghetti code. And the deadlines were still missed.
Teams in low-pressure environments, on the other hand, may also not go as fast as their management would like them to; however, they’re far more likely to produce reusable, well-factored, and fully-tested clean code.
The difference, therefore, is that a team prompted with high-pressure demands will decelerate over time (as they cope with the mess they’ve made), while a low-pressure team will accelerate over time (as they expand on the clean code they’ve built).
Behavior-Driven Development
In my opinion, BDD was born out of a respect for the power of prompts. From Dan North’s blog post introducing the term:
Then I came across the convention of starting test method names with the word "should." This sentence template – The class should do something – means you can only define a test for the current class. This keeps you focused. If you find yourself writing a test whose name doesn't fit this template, it suggests the behaviour may belong elsewhere.
In that example, the developer follows a particular motion to prompt himself to focus on what he’s trying to specify.
The above approach addresses one of the shortcomings of “plain ol’ TDD,” which is that it raises the question, “how do I know that I’ve written enough tests?” without offering a clear answer. In contrast, by talking about tests as a specification of behavior, it becomes abundantly obvious when you’ve written enough: in each relevant context, the thing you’re building does everything it apparently should do, and nothing more. Put another way: you know you’re done when you write the word “should” and then, staring blankly at your monitor, finally realize that there’s nothing else the subject code should do.
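For instance, here’s a minimal sketch of that template in RSpec (the Bunny class and its lone behavior are invented for illustration):

```ruby
# spec/bunny_spec.rb (hypothetical); run with `rspec spec/bunny_spec.rb`
class Bunny
  def eat(food)
    food == :carrot
  end
end

describe Bunny do
  # Each example name completes the sentence "Bunny should..."
  it "should eat carrots" do
    expect(Bunny.new.eat(:carrot)).to be true
  end

  # A name like "should balance the shelter's budget" can't complete that
  # sentence, which hints the behavior belongs in some other class.
end
```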
This original observation has given birth to a veritable treasure trove of other practices, and as a result the BDD banner has expanded quite a lot since 2006. In fact, The RSpec Book makes the case for BDD as a full-blown “second-generation Agile methodology.” Ultimately, its authors summarize BDD as a rhythm:
The day-to-day rhythm of delivery involves decomposing requirements into features and then into stories and scenarios, which we automate to act as a guide to keep us focused on what we need to deliver. These automated scenarios become acceptance tests to ensure the application does everything we expect.
The above acknowledges something that traditional software project management does not: building software isn’t like working on a factory line. If I’m building a car and the chassis has everything but a door on it, I have a very clear prompt of what I need to do next. But the intangible nature of software is a double-edged sword: after finishing something, we can literally do anything we’d like next! What should come next, however, is a question that doesn’t answer itself for a software developer.
Dan’s “should” prompt works at the object level, but what about after we’ve finished our object? What then?
The RSpec Book addresses this by zooming out, turning the problem on its head, and prescribing “outside-in” software development, which defines multiple layers of activities. This enables each team member to answer questions like “why am I doing what I’m doing?” and “what should I do next?” with the following traceable rhythm:
1. Potential value that software could deliver is identified, which prompts:
2. Stakeholders to identify a minimum feature set that could realize that value (at Pillar, we call these “Value Stories”), which prompts:
3. Each stakeholder to articulate each feature as a set of scenarios that serve as a sort of script for the behavior of the software, which prompts:
4. A developer to automate each step in a scenario with a tool like Cucumber; when the scenario fails (remember, the code doesn’t exist yet!), it prompts:
5. A developer to specify the behavior of application code with a tool like RSpec or Jasmine; when the spec fails, it prompts:
6. The developer to implement the code to make the spec pass, prompting them to refactor the code and then either repeat Step #4 if the scenario now passes or Step #5 if it does not. (Steps 4 through 6 are sketched in code below.)
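Here’s a compressed, hypothetical sketch of those last three steps in Ruby (every file, class, and step name is invented for illustration):

```ruby
# Step 4 (features/step_definitions/feeding_steps.rb): automate a scenario
# step like "When the bunny eats a carrot" with Cucumber. It fails at
# first, because Bunny doesn't exist yet.
When("the bunny eats a carrot") do
  @bunny = Bunny.new
  @bunny.eat(:carrot)
end

# Step 5 (spec/bunny_spec.rb): drop down a level and specify the missing
# behavior with RSpec. This spec also fails at first.
describe Bunny do
  it "should gain energy from eating a carrot" do
    bunny = Bunny.new
    expect { bunny.eat(:carrot) }.to change { bunny.energy }.by(1)
  end
end

# Step 6 (lib/bunny.rb): implement just enough to make the spec pass,
# refactor, and then re-run the scenario to see whether it passes too.
class Bunny
  attr_reader :energy

  def initialize
    @energy = 0
  end

  def eat(food)
    @energy += 1 if food == :carrot
  end
end
```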
So to Chris Powers’ original statement, “I’ve chalked [BDD] up to semantics at this point,” my response is: “Indeed!” And what started with semantics for better defining code behavior has organically evolved into a complete approach to delivering value to stakeholders.
The fact that BDD is fundamentally all about semantics does not trivialize it, because our susceptibility to semantics influences everything from what we build to how efficiently we build it. And once we recognize the power of semantics when prompting our own behavior, we can use it to our advantage!
Verb-first Classes, or “Where does this code go?”
After a very successful morning building a little Java framework for my client, I tweeted the following, and I want to expand on it here as an example of using a rule of thumb to prompt myself to write cleaner code:
Wrote a whole app using verb-noun class names (e.g. PerformsQuery) and it magically enforced SRP & niladic/monadic methods!
A question programmers ask themselves dozens of times a day is, “okay, the app needs to do _____ next; where should I put the code?”
Perhaps an existing component should be changed to incorporate the new functionality. Perhaps something entirely new should be built. What I’ve observed is that the names of things that already exist greatly influence how we answer that question.
In object-oriented programming, the names of objects are usually nouns. And traditional examples of OOP often use completely unmodified nouns meant to model some real-world analogue (like “Dog”, “Car”, etc.).
To illustrate, let’s say we name an object “Bunny”. Naïvely, I spent about 45 seconds describing bunnies in a Ruby file and came up with something like this:
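```ruby
# bunny.rb: a representative sketch of that 45-second brain dump
class Bunny
  def eat(carrot); end
  def hop(distance); end
  def sleep; end
  def groom(other_bunny); end
  def dig_burrow; end
  def mate_with(other_bunny); end
  def flee(predator); end
end
```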
Forty-five seconds in, and already the Bunny class is a dumping ground for numerous responsibilities. It obviously violates the SOLID principles of object-oriented design outlined by Uncle Bob, especially the Single Responsibility Principle (SRP).
Most developers have a hard time consistently writing lean, mean code that adheres to the SRP. Why? Because they keep naming classes as nouns like Bunny!
How so? Because, like clockwork, this scenario unfolds:
- User story is written, “In order to survive, as a bunny, I want to eat carrots”
- Create an object named “Bunny”
- Implement the carrot-eating behavior in Bunny
- …Time elapses…
- User story is written, “In order for the species to survive, as a bunny, I want to make new bunnies”
- Ask “where should the code about bunny mating go?”
- Search existing components
- Find a Bunny class
- Think, “Aha, Bunny! It should go there!”
- Implement the mating behavior in Bunny
- The Universe, seeking equilibrium upon the introduction of this SRP violation, murders a bona fide real-world bunny.
Now, one popular attempt to combat this pattern is to name objects as “noun-responsibility” phrases, but in my experience it has fallen short of solving the problem. In theory, the above stories may have resulted in separate “BunnyFeeder” and “BunnyMater” classes. In practice, however, I’ve seen this naming convention regularly result instead in “BunnyFeeder” being refactored into something more generic (like “BunnyManager”) just so that it can incorporate both behaviors.
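That drift tends to look something like this (hypothetical classes, compressed for illustration):

```ruby
# Before: a noun-responsibility name with exactly one job
class BunnyFeeder
  def feed(bunny)
    bunny.eat(:carrot)
  end
end

# After the mating story arrives: the name gets genericized so the class
# can absorb the new behavior, and the SRP quietly dies
class BunnyManager
  def feed(bunny)
    bunny.eat(:carrot)
  end

  def mate(bunny, other_bunny)
    bunny.mate_with(other_bunny)
  end
end
```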
At SCNA 2010, Corey Haines turned me on to a verb-first naming pattern that I’ve since adopted and had a lot of success with—presumably because the responsibility of the class becomes its primary designation. Since the noun it acts on is only secondary, I’m less inclined to presume that new responsibilities belong in existing classes.
So, given a “FeedsBunnies” class, it wouldn’t even cross my mind to try sticking mating behavior in there. And upon not finding an existing component to incorporate mating behavior, I’d be more inclined to write a new “MatesWithBunnies” class.
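In code, the verb-first version of those two stories might look like this (again, the names are just for illustration):

```ruby
# The responsibility is the class's primary designation...
class FeedsBunnies
  def feed(bunny)
    bunny.eat(:carrot)
  end
end

# ...so new behavior naturally gets a home of its own
class MatesWithBunnies
  def mate(bunny, other_bunny)
    bunny.mate_with(other_bunny)
  end
end
```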
This, even though nothing changed except the word order! It may only be semantics, but it seems to work well enough for me.
[Another bonus is that I’m no longer hung up a dozen times a day trying to think of a way to end a noun-responsibility name in “-er” that doesn’t sound awkward. I mean, “BunnyMater?” Really?]
Thoughts? Please, Tweet’em! (Or e-mail’em; it’s searls AT gmail.)