
v30 - Mall of the Gulf of America

Breaking Change

If you're here for the spicy, good news: you're gettin' it spicy.

I am usually joking, but I mean it this time: I'm going to stop doing this podcast if you don't write in to podcast@searls.co. Give me proof of life, that's all I need. I won't even read it on air if you don't want!

If you're not bored yet, click the links:


Becky keeps encouraging me to try nutritional yeast as a healthy, protein-rich seasoning, but the wires in my brain got crossed and I keep calling it "informational yeast" and then wondering what the hell it has to tell me.

How to build a podcasting platform in under 8 hours

Last year at Rails World, I indulged in some horn tooting and victory-lap taking when I showed off the publishing platform and strength-training app I built to support Becky's business.

The paper-thin pretense of my talk was, "wow, look at how incredible Ruby on Rails is for empowering developers—even solo acts—to build ambitious products." And don't get me wrong, that was the main thesis. But the presentation was also an opportunity to show off my work and drop the mic.

As a consultant, I spent my entire career hearing how hard it is to build a real product. That as a Johnny-come-lately contractor, I could never know why things had to be slow. Or complicated. Or buggy. I lost track of how many times someone referenced Steve Jobs' epic judgment of consultants as the reason they wouldn't hire me on a contract basis and why I was only valuable if I joined their corporate family in W-2 matrimony.

Well, all that consultant FUD turned out to be bullshit. Simply by doing all the things I'd been telling others to do for two decades, I enjoyed the smoothest software development experience I've ever witnessed at any company of any scale. Literally everything went great. The resulting app looks, performs, and functions better than I ever imagined it would. My product owner / wife is thrilled with it. In its first 4 months on the market, only two bugs have been reported by customers and both were fixed in an hour or less.

Of course, one reason I held back on celebrating my own ability to somehow form all the right opinions about good software was that it would have been premature. The true test of any software system is how easy it is to change later. Well, as you might be able to tell from the braggadocious tone of this post, I finally have an answer after delivering the platform's first major post-launch feature.

Becky has been wanting to start a podcast for a while, and given that we already had a bespoke CMS with her name on it lying around, it only made sense to try my hand at slotting in a podcast hosting component. I was nervous it'd take a while to get up to speed, as I hadn't touched the codebase in months. Nope: everything went so smoothly I can't believe I'm done already. The whole podcast system was finished in under 8 hours, with plenty of time left over to dunk on corporate America and the dysfunctional way seemingly every company on earth is intent on writing software.

Anyway, I've maintained a pretty meek stance over the years when it comes to dishing out categorical advice on how to write "good" software. Lots of disingenuous caveats like, "I've seen my approach work well, but maybe your solution is best for your situation." Most of that pussy-footing was born out of my own misplaced desire to please everyone. But at least some of my self-consciousness was on account of the narrative that consultants just couldn't hack it when it came to building something.

Well, I'm ready to call bullshit. Turns out, I'm better at building software than most people and, hell, most teams. If you want to get better at programming, the most important thing you can do is practice. But it wouldn't hurt for you to read or watch my stuff. 🎤🚮

Someone should start a "Delete TikTok" challenge, wherein you record yourself accessing a friend's phone, deleting the TikTok app, and then capturing their reaction when they realize they can't reinstall it.

New paper answers whether ChatGPT makes you lazier

Apple Intelligence summary of the abstract, which I couldn't be bothered to read:

A study comparing learners' motivations, self-regulated learning processes, and performance with different support agents (AI, human expert, writing analytics, or none) found no difference in motivation but significant differences in learning processes and performance. While AI support improved essay scores, it may also promote dependence and "metacognitive laziness."

No offense to the startup bros out there but “Agentic” sounds more like an Alzheimer’s drug that’s mired in a class action lawsuit for giving people cancer.

I find myself frequently wondering how much better LLMs would be at coding if they were trained to be humble and full of self-doubt instead of overconfident know-it-alls.


v29 - Super Switch

Breaking Change

In this episode: Justin goes to a birthday party, drives a Tesla, and configures your BIOS.

The compliments department is, as always, available at podcast@searls.co.

Have some URLs:


What is up with Apple Music recommendations?

Just me, or has Apple Music started giving top billing to some really weird recommendations? Every day I log in, the top recommendation is an artist I’ve never heard of, with a track or album that sounds like AI-generated lofi or stock music. I admit I listen to a fair number of instrumental “Focus” playlists and channels, but I think they’re trying to do something clever with the backend algorithm and failing to grasp that people use “lofi music” and “music music” completely differently.

How to make a HomeKit scene dim lights without turning them on

Update: and 20 minutes after posting this, it stopped working. HomeKit giveth and HomeKit taketh away.

Out of the box, Apple’s Home app will turn on any lights you add to a scene, even if it’s only to decrease their brightness level. As a result, if your goal is to simply dim the house’s lighting at nighttime, then your scene may have the unintended effect of actually turning on a bunch of lights.

Third-party apps can separate a light's power state from its brightness level in a HomeKit scene, and Eve (while not the best-looking app in the world) is a free one that lets you configure this.

  1. First, make your HomeKit scene how you want it in the Home app, because that UI is nicer
  2. In Eve, open the "Automation" view from the sidebar
  3. Find the scene in the "Scenes" tab
  4. For each room with a light you want to dim without turning on, tap the > chevron to the right of the room name and then uncheck each light's "Power" setting while leaving the "Brightness" setting as-is

And there you go. Dimmer lights without inadvertently turning on all your lights. 🎉

A real-world example of a Mocktail test

A few years ago, I wrote this test double library for Ruby called Mocktail. Its README provides a choose-your-own-adventure interface as well as full API documentation, but it doesn't really offer a way to see a test at a glance—and certainly not a realistic one.

Since I just wrote my first test with Mocktail in a while, I figured I'd share it here for anyone who might have bounced off Mocktail's overly cute README or would otherwise be interested in seeing what an isolated unit test with Mocktail looks like.

The goal

Today I'm writing a class that fetches an Atom feed. It has three jobs:

  1. Respect caching and bail out if the feed hasn't been updated
  2. Parse the feed
  3. Persist the feed entries (and updated caching headers)

There is an unspoken fourth job here: coordinating those three tasks. The first class I write will be that orchestrator, which means its only job is to identify and invoke the right dependencies the right way under a given set of conditions.

The code

So we can focus on the tests, I'll spare you the test-driven development play-by-play and just show you the code that will pass the tests we're going to write:

class FetchesFeed
  def initialize
    @gets_http_url = GetsHttpUrl.new
    @parses_feed = ParsesFeed.new
    @persists_feed = PersistsFeed.new
  end

  def fetch(feed)
    response = @gets_http_url.get(feed.url, headers: {
      "If-None-Match" => feed.etag_header,
      "If-Modified-Since" => feed.last_modified_header
    }.compact)
    return if response.code == 304 # Unchanged

    parsed_feed = @parses_feed.parse(response.body)
    @persists_feed.persist(
      feed,
      parsed_feed,
      etag_header: response.headers["etag"],
      last_modified_header: response.headers["last-modified"]
    )
  end
end

As you can see, this fits a certain idiosyncratic style that I've been practicing in Ruby for a long-ass time at this point:

  • Lots of classes that hold their dependencies as instance variables, but zero "unit of work" state. Constructors only set up the object to do the work, and public methods perform that work on as many unrelated objects as needed
  • Names that put the verb before the noun. By giving the verb primacy, as the system grows and opportunities for reuse arise, this encourages me to generalize the direct object of the class via polymorphism instead of generalizing the core function of the class and violating the single-responsibility principle (i.e. PetsDog may evolve into PetsAnimal, whereas DogPetter is more likely to evolve into a catch-all DogManager)
  • Pretty much every class I write has a single public method whose name is the same as the class's verb
  • I write wrapper classes around third-party dependencies that establish a concrete contract so as to make them eminently replaceable. For starters, GetsHttpUrl and ParsesFeed will probably just delegate to httparty and Feedjira, but those wrappers will inevitably encapsulate customizations in the future (a sketch of one such wrapper follows this list)
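
If you're curious what one of those wrappers might look like, here's a rough sketch. To be clear, this is illustrative rather than lifted from the app: it assumes the httparty gem, and the Response struct simply mirrors the (code, headers, body) shape the tests below construct:

require "httparty"

class GetsHttpUrl
  # Value object matching the (code, headers, body) shape used in the tests
  Response = Struct.new(:code, :headers, :body)

  def get(url, headers: {})
    response = HTTParty.get(url, headers: headers)
    Response.new(response.code, response.headers, response.body)
  end
end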

If you write code like this, it's really easy to write tests of the interaction without worrying about actual HTTP requests, actual XML feeds, and actual database records by using Mocktail.

The first test

Here's my first test, which assumes no caching headers are known or returned by a feed. I happen to be extending Rails' ActiveSupport::TestCase here, but that could just as well be Minitest::Test or TLDR:

require "test_helper"

class FetchesFeedTest < ActiveSupport::TestCase
  setup do
    @gets_http_url = Mocktail.of_next(GetsHttpUrl)
    @parses_feed = Mocktail.of_next(ParsesFeed)
    @persists_feed = Mocktail.of_next(PersistsFeed)

    @subject = FetchesFeed.new

    @feed = Feed.new(
      url: "http://example.com/feed.xml"
    )
  end

  def test_fetch_no_caching
    stubs {
      @gets_http_url.get(@feed.url, headers: {})
    }.with { GetsHttpUrl::Response.new(200, {}, "an body") }
    stubs { @parses_feed.parse("an body") }.with { "an parsed feed" }

    @subject.fetch(@feed)

    verify {
      @persists_feed.persist(
        @feed, "an parsed feed",
        etag_header: nil, last_modified_header: nil
      )
    }
  end
end

Here's the Mocktail-specific API stuff going on above:

  • Mocktail.of_next(SomeClass) - pass a class to this and you'll get a fake instance of it AND the next time that class is instantiated with new, it will return that same fake instance. This way, the doubles we're configuring in our test are the same ones the subject's instance variables are set to in its constructor
  • stubs {…}.with {…} - call the dependency exactly as you expect the subject to in the first block and, if it's called in a way that satisfies that stubbing, return the result of the second block
  • verify {…} - call a dependency exactly as you expect it to be invoked by the subject, raising an assertion failure if it doesn't happen (a toy illustration of all three follows this list)
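
To see those three in isolation, here's a toy example with a made-up Adds class (nothing from the codebase above). Unmatched calls on a Mocktail double return nil, and a failed verify raises Mocktail::VerificationError:

class Adds
  def add(a, b) = a + b
end

adds = Mocktail.of(Adds)            # of builds a fake without hooking new
stubs { adds.add(1, 2) }.with { 3 }

adds.add(1, 2)  # => 3 (satisfies the stubbing)
adds.add(4, 5)  # => nil (unmatched calls return nil)

verify { adds.add(1, 2) }   # passes
# verify { adds.add(9, 9) } # would raise Mocktail::VerificationError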

You'll note that this test adheres to the arrange-act-assert pattern, in which first setup is performed, then the behavior being tested is invoked, and then the assertion is made. (Sounds obvious, but most mocking libraries violate this!)
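
For contrast, here's roughly how the same expectation reads with rspec-mocks (names illustrative): the message expectation has to be configured before the action, so the "assert" step comes first and is only checked implicitly at the end of the example:

# rspec-mocks inverts arrange-act-assert: the expectation is declared up front
expect(parses_feed).to receive(:parse).with("an body").and_return("an parsed feed")
subject.fetch(feed)  # verified implicitly when the example finishes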

The second test

Kicking the complexity up a notch, next I added a test wherein caching headers were known but they were out of date:

def test_fetch_cache_miss
  @feed.etag_header = "an etag"
  @feed.last_modified_header = "an last modified"
  stubs {
    @gets_http_url.get(@feed.url, headers: {
      "If-None-Match" => "an etag",
      "If-Modified-Since" => "an last modified"
    })
  }.with {
    GetsHttpUrl::Response.new(200, {
      "etag" => "newer etag",
      "last-modified" => "laster modified"
    }, "an body")
  }
  stubs { @parses_feed.parse("an body") }.with { "an parsed feed" }

  @subject.fetch(@feed)

  verify {
    @persists_feed.persist(
      @feed, "an parsed feed",
      etag_header: "newer etag",
      last_modified_header: "laster modified"
    )
  }
end

This is longer, but mostly just because there's more sludge to pass through all the tubes from the inbound argument to each dependency. You might also notice I'm just using nonsense strings here instead of something that looks like a real etag or modification date. This is intentional. Realistic test data looks meaningful, but these strings are not meaningful. Meaningless test data should look meaningless (hence the grammar mistakes). If I see an error, I'd like to know which string I'm looking at, but I want the test to make clear that I'm just using the value as a baton in a relay race: as long as it passes an equality test, "an etag" could be literally anything.

The third test

The last test is easiest, because when there's a cache hit, there won't be a feed to parse or persist, so we can just bail out. In fact, all we really assert here is that no persistence call happens:

def test_fetch_cache_hit
  @feed.etag_header = "an etag"
  @feed.last_modified_header = "an last modified"
  stubs {
    @gets_http_url.get(@feed.url, headers: {
      "If-None-Match" => "an etag",
      "If-Modified-Since" => "an last modified"
    })
  }.with { GetsHttpUrl::Response.new(304, {}, nil) }

  assert_nil @subject.fetch(@feed)

  verify_never_called { @persists_feed.persist }
end

Note that verify_never_called doesn't ship with Mocktail, but is rather something I threw in my test helper this morning for my own convenience. Regardless, it does what it says on the tin.

Why make sure PersistsFeed#persist is not called, but avoid making any such assertion of ParsesFeed? Because, in general, asserting that something didn't happen is a waste of time. If you genuinely wanted to assert every single thing a system didn't do, no test would ever be complete. The only time I bother to test an invocation didn't happen is when an errant call would have the potential to waste resources or corrupt data, both of which would be risks if we persisted an empty feed on a cache hit.

The setup

To set up Mocktail in the context of Rails the way I did, once you've tossed gem "mocktail" into your Gemfile, sprinkle this into your test/test_helper.rb file:

module ActiveSupport
  class TestCase
    include Mocktail::DSL

    teardown do
      Mocktail.reset
    end

    def verify(...)
      assert true
      Mocktail.verify(...)
    end

    def verify_never_called(&blk)
      verify(times: 0, ignore_extra_args: true, ignore_arity: true, &blk)
    end
  end
end

Here's what each of those does:

  • include Mocktail::DSL simply lets you call stubs and verify without prefixing each call with Mocktail.
  • Mocktail.reset will reset any state held by Mocktail between tests, which is relevant if you fake any singletons or class methods
  • verify is overridden because Rails 7.2 started warning when tests lack assertions, and it doesn't recognize Mocktail.verify as an assertion (even though it semantically is). Since Rails removed the configuration to disable the warning, we just wrap the method ourselves with a dummy assert true to satisfy the check
  • verify_never_called is just shorthand for this particular convoluted-looking configuration of the verify method (times asserts the exact number of times something is called, ignore_extra_args will match invocations with more args than specified in the verify block, and ignore_arity will suppress any argument errors raised for not matching the genuine method signature); toy examples of each option follow this list
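
Here are a few toy invocations of those options, reusing the doubles from the tests above (arguments illustrative):

# times: passes only if this exact call happened exactly twice
verify(times: 2) { @parses_feed.parse("an body") }

# ignore_extra_args: also matches calls that passed additional arguments
verify(ignore_extra_args: true) { @gets_http_url.get(@feed.url) }

# ignore_arity: the block needn't match the real method's signature
verify(ignore_arity: true) { @persists_feed.persist }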

The best mocking library

I am biased about a lot of things, but I'm especially biased when it comes to test doubles, so I'm unusually cocksure when I say that Mocktail is the best mocking library available for Ruby. There are lots of features not covered here that you might find useful. And there are a lot of ways to write code that aren't conducive to using Mocktail (but those code styles aren't conducive to writing isolation tests at all, and therefore shouldn't be using any mocking library IMNSHO).

Anyway, have fun playing with your phony code. 🥃

Just wasted 30 minutes trying to validate the following XML file:

justin.searls.co/atom.xml

AFAICT, 90% of XML validators don’t look up referenced XSD URLs and literally zero will validate against all of them if a default namespace is used at all. Prove me wrong!