What follows is an issue of my newsletter, Searls of Wisdom, recreated for you here in website form. For the full experience, subscribe and get it delivered to your inbox each month!

Searls of Wisdom for April 2023

Eileen Uchitelle, Aaron, and me at RailsConf 2023

Greetings! I hope you're doing well. Are you doing well? (Feel free to reply and tell me whether you're doing well!)

Personally, it's been an exciting year so far. I transitioned to a new role at Test Double that has me working on what I'm really good at: getting mad at computers when they don't immediately do what I want them to do. In case you missed it, back in February I published a video that summarizes why we demoted me.

April in particular was a big month at Searls Enterprises, as Becky put the finishing touches on starting her new business, which by all accounts is going great so far. We also had a bunch of friends come down to Orlando for Spring Break (it still feels weird living someplace where other people want to be), and it was fun to catch up with people as I injected myself into their Disney vacations. I also hit a significant milestone by graduating from "refusing to admit that I play golf" to merely "telling people I am embarrassingly bad at golf." That was a big day.

What was April all about?

Even though I'm no longer formally in a sales or leadership role, I'm fortunate to have a pretty great network of "industry insiders" (as in, people who work in the same industry I do), and April was the month when people's initial reactions to mainstream generative AI tools—which ranged from dismissal to exuberance to panic—finally started to settle down and firm up a bit. Since writing a post about AI and jobs in mid-March (which feels like it was two years ago), I've seen two threads emerge that are both probably true, but nevertheless hard to hold in one's head at the same time:

  1. Generative AI typified by LLMs like GPT-4 will reach a point of diminishing returns if it hasn't already, and its "last mile" problems—like making up facts and failing to render the correct number of fingers—could take years or even decades to iron out. The best analogy I've heard is to self-driving cars. Autonomous driving on familiar terrain, in perfect weather, and with everyone obeying the rules of the road is so (relatively) easy that it may as well be a different category of problem from making sure a Tesla doesn't occasionally kill someone merely because it was snowing, or a flock of birds flew past a sensor, or another driver suddenly changed lanes without signaling. The point is that despite the dizzying speed of improvement in AI tools over the last five months, they may look more alike than different five years from now.

  2. Despite the fact that generative AI will be biased and error-prone for years to come, it now seems inevitable that its introduction will radically change the white-collar employment landscape. The mere promise of AI tools eliminating menial knowledge work is all the encouragement many executives will need to do what they have been putting off for years: scrutinize the value of many positions (if not entire departments) more closely, scale more cautiously by waiting to hire until tangible needs arise, and finally internalize that communication costs increase geometrically as you add humans to an organization. Why technology innovations like smartphones, the Internet, and Keurig machines haven't moved the needle on worker productivity has become an increasingly remarked-upon mystery in recent years. Something tells me the dream of AI tools being "right around the corner" will prompt a generation of business leaders to chase higher productivity by attacking bloated org charts, reducing layers of middle management, eliminating administrative roles, and preferring small, highly skilled teams over larger heterogeneous ones.

What's the upshot here? It's going to be a rough decade for juniors—be they lawyers, accountants, or programmers. (Yes, even programmers!)

AI tools' unreliability gives a tremendous advantage to users who can competently judge the accuracy of their output. In the hands of highly skilled and experienced professionals, AI probably represents a moderate accelerant to delivering work of sufficient quality, because those workers will be able to sort the good parts from the bad. In the hands of a less experienced professional, however, AI may supercharge the quantity of one's work output, but often with easily overlooked mistakes and unnecessary incidental complexity baked in. One executive recently told me that he used to assume "zero net productivity" from software developers with less than three years' experience; with the rise of AI assistants, he now expects junior developers' median value to be net-negative.

Along these lines, I've also heard several anecdotes of junior developers contributing a suspiciously high volume of code review feedback since ChatGPT launched, with comments that often exhibit knowledge far in excess of their skill level. Which would be fine, but for the fact that these contributions are also full of subtle, hard-to-spot errors. This, in turn, has led to teams wasting time analyzing and debating half-baked feedback, as well as consternation about how to address the issue appropriately. One tech lead was particularly exasperated: he'd spent years trying to get more engagement in pull request reviews, and now he's faced with the prospect of calling out junior developers for effectively cheating on their homework. (My advice? Stop incentivizing pull request reviews and address quality concerns with shorter-lived branches, better test automation, and high-bandwidth collaboration like pairing.)

This dynamic is likely to run headlong into the second trend: businesses becoming more cautious about hiring. Leaders seeking to "do more with less" will naturally be less inclined to hire inexperienced staff, less patient about training and mentoring them, and less eager to incur the cost of middle managers to route communication in larger human organizations. If AI tools do in fact exacerbate the gap in productivity between less-experienced and highly-skilled workers, the market's appetite for entry-level knowledge workers will be even lower than it is now.

Highly-compensated knowledge work already selects for people with the privilege of time and money to train themselves, and all signs point to that inequity growing substantially in the near future. What can we do about it? We can't halt the advancement of AI tools or reverse these macroeconomic trends, but we can do our part to foster an environment more hospitable to people trying to break into the industry. Support reputable vocational educators like Turing. Teach groups and mentor individuals in your community. Produce content about what you're learning to give novices solid resources from which to learn independently.

Please stay tuned for the June edition of Searls of Wisdom, when the world will have turned upside down again and rendered the above either dreadfully obvious or woefully outmoded. We live in interesting times.

What's worth your time?

I emit a lot of content across a number of platforms—my blog, Test Double's blog, GitHub, YouTube, etc.—so I won't blame you for not following every breadcrumb of my all-too-public misadventures online.

To distill it down to "Searls: The Good Parts", here are the 3 things I worked on this month that I'll still be thinking about a year from now.

🍿 Something to Watch

I just capped off season one of Searls After Dark at ten roughly hour-long episodes. The project accomplished exactly what it set out to do: show others what it's like to build an application step-by-step, warts and all. Because I didn't write a single line of code without the camera rolling (nor did I cut a single minute of footage), it's naturally uneven in potency—whether as education or as entertainment. But that's also probably why I heard from a number of viewers that seeing someone with 20 years of experience trip over so many trivial frustrations helped lessen their imposter syndrome in a world where almost every book, blog post, code example, and video is edited down to show only the Perfect and Correct answers. If that sounds like it might be useful to you or someone you know, I hope you'll check it out.

🛠️ Something to Use

I put a lot of time into Standard Ruby this month: introducing a plugin specification called lint_roller, refactoring its rulesets in terms of that plugin API, and cutting the initial release of our new standard-rails plugin. The latter was the subject of my and Meagan Waller's talk at RailsConf, in which the audience voted on what linting rules we should enable for Ruby on Rails codebases. I'm excited for us to stitch together the video footage of the session, both because it was a lot of fun and because it could serve as a model for more interactive and engaging conference sessions in general. If you write Ruby and you haven't tried Standard yet, there's never been a better time to adopt it: the rules are mature and reliable, our language server and VS Code extension are top-notch, and more teams than ever are wasting less time bikeshedding RuboCop configuration files by adopting Standard on day one.
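If you're curious what adoption looks like in practice, here's a minimal sketch of opting into the Rails plugin via your project's `.standard.yml` (this assumes you've already added the standard and standard-rails gems to your Gemfile; the standard-rails README has the authoritative setup instructions):

```yaml
# .standard.yml: a minimal sketch of enabling the lint_roller-based
# Rails plugin (assumes standard and standard-rails are in your Gemfile)
plugins:
  - standard-rails
```

From there, `bundle exec standardrb` will lint your project and `standardrb --fix` will autocorrect whatever it safely can.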

📖 Something to Read

If today's treatise on AI wasn't enough for you, I wrote a blog post at the beginning of the month that—no exaggeration—is something I've wanted to write for over a decade, and one I'm finding engineering leaders are finally ready to hear. It's called Never Staff to the Peak, and it's a reflection on why tech companies have been under a perverse incentive to grow headcount too aggressively, what the recent waves of tech layoffs mean, and how engineering leaders should think about hiring going forward. tl;dr: hire only as many full-timers as you'll need for the long-term operation of your systems, add some more to capture leading-edge R&D work, and bring in temporary help as needed to address spikes in demand for engineering capacity. If you and I see eye-to-eye on this, it might be a good link to drop in your company's Slack—if only to gauge how your colleagues and managers weigh in.

What's next?

As I write this, I'm on a flight over the Pacific en route to Tokyo, Japan, where I'll be spending a few days touring about before attending RubyKaigi. This trip presents two really exciting opportunities for me. First, this is the only time I've traveled solo for more than a few days since I was in college, and—as someone who finds solitude to be energizing and inspiring—I expect it to give me a lens on the country that I haven't really had since a semester abroad back in 2005. Second, I'll be serving as a field reporter for Test Double, covering what it's like to attend one of the most interesting and distinctive technical conferences on the planet. If you sign up at testdouble.com/field, you'll be notified when my slow-motion live blog goes up. There, I'll be aggregating pictures, videos, and blog posts from my trip. You can expect a healthy blend of travel tips, cocktail pics, and recaps of RubyKaigi presentations that might shed some light on what we can look forward to in Ruby 3.3 and beyond. Get excited!

Was this any good?

This is my first time writing a single email for so many friends, colleagues, and acquaintances. I don't consume many newsletters, and as a result I'm anxious about whether I really even understand the form. If you have any feedback or encouragement for me, please reply and let me know what you think! If you found this to be a valuable read, I hope you'll share it with others!