If you're here for the spicy, good news: you're gettin' it spicy.
I am usually joking, but I mean it this time: I'm going to stop doing this podcast if you don't write in to podcast@searls.co. Give me proof of life, that's all I need. I won't even read it on air if you don't want!
If you're not bored yet, click the links:

Just a few days after launching my TOP SECRET syndication app with Bluesky support, I've written a new adapter for cross-posting to X. Updated the site’s POSSE roundup accordingly: justin.searls.co/posse/
How to build a podcasting platform in under 8 hours
Last year at Rails World, I indulged in some horn tooting and victory-lap taking when I showed off the publishing platform and strength-training app I built to support Becky's business.
The paper-thin pretense of my talk was, "wow, look at how incredible Ruby on Rails is for empowering developers—even solo acts—to build ambitious products." And don't get me wrong, that was the main thesis. But the presentation was also an opportunity to show off my work and drop the mic.
As a consultant, I spent my entire career hearing how hard it is to build a real product. That as a Johnny-come-lately contractor, I could never know why things had to be slow. Or complicated. Or buggy. I lost track of how many times someone referenced Steve Jobs' epic judgment of consultants as the reason they wouldn't hire me on a contract basis and why I was only valuable if I joined their corporate family in W-2 matrimony.
Well, all that consultant FUD turned out to be bullshit. Simply by doing all the things I'd been telling others to do for two decades, I enjoyed the smoothest software development experience I've ever witnessed at any company of any scale. Literally everything went great. The resulting app looks, performs, and functions better than I ever imagined it would. My product owner / wife is thrilled with it. In its first 4 months on the market, only two bugs have been reported by customers and both were fixed in an hour or less.
Of course, one reason I held back on celebrating the success of my own ability to somehow form all the right opinions about good software was because it would have been premature. The true test of any software system is how easy it is to change later. Well, as you might be able to tell from the braggadocious tone of this post, I finally have an answer after delivering the platform's first major post-launch feature.
Becky has been wanting to start a podcast for a while, and given that we already had a bespoke CMS with her name on it lying around, it only made sense to try my hand at slotting in a podcast hosting component. I was nervous it'd take a while to get up to speed, as I hadn't touched the codebase in months. Nope: everything went so smoothly I can't believe I'm done already. The whole podcast system was finished in under 8 hours, with plenty of time left over to dunk on corporate America and the dysfunctional way seemingly every company on earth is intent on writing software.
Anyway, I've maintained a pretty meek stance over the years when it comes to dishing out categorical advice on how to write "good" software. Lots of disingenuous caveats like, "I've seen my approach work well, but maybe your solution is best for your situation." Most of that pussy-footing was born out of my own misplaced desire to please everyone. But at least some of my self-consciousness was on account of the narrative that consultants just couldn't hack it when it came to building something.
Well, I'm ready to call bullshit. Turns out, I'm better at building software than most people and, hell, most teams. If you want to get better at programming, the most important thing you can do is practice. But it wouldn't hurt for you to read or watch my stuff. 🎤🚮

I found myself wanting a commercial license that gave me a way to share my work, but which conferred literally no other privileges.
So I wrote a new one: the Fuck You Pay Me License github.com/searls/fypm

"How hard could it possibly be to truncate a string while making sure it doesn't cut off any URLs or hashtags?" he asked, ignorantly. gist.github.com/searls/9d8ee42929da99ae268477eb20818da6
New paper answers whether ChatGPT makes you lazier
Apple Intelligence summary of the abstract, which I couldn't be bothered to read:
A study comparing learners' motivations, self-regulated learning processes, and performance with different support agents (AI, human expert, writing analytics, or none) found no difference in motivation but significant differences in learning processes and performance. While AI support improved essay scores, it may also promote dependence and "metacognitive laziness."
In this episode: Justin goes to a birthday party, drives a Tesla, and configures your BIOS.
The compliments department is, as always, available at podcast@searls.co.
Have some URLs:

Super Switch. There’s a lot to unpack here (I plan to record an episode of Breaking Change this weekend to discuss), but the biggest “surprise” is that this is the least surprising console Nintendo has ever designed. youtube.com/watch?v=itpcsQQvgAQ

Test Double is running a survey to better understand YOUR HOTTEST TAKES about software development. Please fill this out as accurately and spicily as possible 🌶️ forms.gle/UcnjShcTUxPVJmTm6
Happy Birthday I Got You an Irrelevant Blog Post
Think of how much they saved by sending me this nonsensical content marketing collateral instead of a coupon.
What is up with Apple Music recommendations?
Just me, or has Apple Music started giving top billing to some really weird recommendations? Every day I log in, the top recommendation is an artist I’ve never heard of, with a track or album that sounds like AI generated lofi or stock music. I admit I listen to a fair number of instrumental “Focus” playlists and channels, but I think they’re trying to do something clever with the backend algorithm and they’re failing to grasp that people use “lofi music” and “music music” completely differently.
How to make a HomeKit scene dim lights without turning them on
Update: and 20 minutes after posting this, it stopped working. HomeKit giveth and HomeKit taketh away.
Out of the box, Apple’s Home app will turn on any lights you add to a scene, even if it’s only to decrease their brightness level. As a result, if your goal is to simply dim the house’s lighting at nighttime, then your scene may have the unintended effect of actually turning on a bunch of lights.
Third-party apps can separate a light's power state from its brightness level in a HomeKit scene, and Eve (while not the best-looking app in the world) is a free one that lets you configure this.
- First, make your HomeKit scene how you want it in the Home app, because that UI is nicer
- In Eve, open the "Automation" view from the sidebar
- Find the scene in the "Scenes" tab
- For each room with a light you want to dim without turning on, tap the `>` chevron to the right of the room name and then uncheck each light's "Power" setting while leaving the "Brightness" setting as-is
And there you go. Dimmer lights without inadvertently turning on all your lights. 🎉
A real-world example of a Mocktail test
A few years ago, I wrote this test double library for Ruby called Mocktail. Its README provides a choose-your-own-adventure interface as well as full API documentation, but it doesn't really offer a way to see a test at a glance—and certainly not a realistic one.
Since I just wrote my first test with Mocktail in a while, I figured I'd share it here for anyone who might have bounced off Mocktail's overly cute README or would otherwise be interested in seeing what an isolated unit test with Mocktail looks like.
The goal
Today I'm writing a class that fetches an Atom feed. It has three jobs:
- Respect caching and bail out if the feed hasn't been updated
- Parse the feed
- Persist the feed entries (and updated caching headers)
There is an unspoken fourth job here: coordinate these three tasks. The first class I write will be the orchestrator of these other three, which means its only job is to identify and invoke the right dependencies the right way under a given set of conditions.
The code
So we can focus on the tests, I'll spare you the test-driven development play-by-play and just show you the code that will pass the tests we're going to write:
```ruby
class FetchesFeed
  def initialize
    @gets_http_url = GetsHttpUrl.new
    @parses_feed = ParsesFeed.new
    @persists_feed = PersistsFeed.new
  end

  def fetch(feed)
    response = @gets_http_url.get(feed.url, headers: {
      "If-None-Match" => feed.etag_header,
      "If-Modified-Since" => feed.last_modified_header
    }.compact)
    return if response.code == 304 # Unchanged

    parsed_feed = @parses_feed.parse(response.body)

    @persists_feed.persist(
      feed,
      parsed_feed,
      etag_header: response.headers["etag"],
      last_modified_header: response.headers["last-modified"]
    )
  end
end
```
As you can see, this fits a certain idiosyncratic style that I've been practicing in Ruby for a long-ass time at this point:
- Lots of classes who hold their dependencies as instance variables, but zero "unit of work" state. Constructors only set up the object to do the work, and public methods perform that work on as many unrelated objects as needed
- Names that put the verb before the noun. By giving the verb primacy, as the system grows and opportunities for reuse arise, this encourages me to generalize the direct object of the class via polymorphism as opposed to generalizing the core function of the class, violating the single-responsibility principle (i.e. `PetsDog` may evolve into `PetsAnimal`, whereas `DogPetter` is more likely to evolve into a catch-all `DogManager`)
- Pretty much every class I write has a single public method whose name is the same as the class's verb
- I write wrapper classes around third-party dependencies that establish a concrete contract so as to make them eminently replaceable. For starters, `GetsHttpUrl` and `ParsesFeed` will probably just delegate to httparty and Feedjira, but those wrappers will inevitably encapsulate customizations in the future (see the sketch after this list)
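To make that last point concrete, here's a guess at what `GetsHttpUrl` might look like on day one. This is a hypothetical sketch rather than the app's actual code; the `Response` struct's shape is just inferred from how the tests below construct `GetsHttpUrl::Response.new(200, {}, "an body")`:

```ruby
require "httparty"

# Hypothetical sketch of the httparty wrapper (not the app's real code).
# The concrete Response contract is what makes the dependency replaceable.
class GetsHttpUrl
  Response = Struct.new(:code, :headers, :body)

  def get(url, headers: {})
    response = HTTParty.get(url, headers: headers)
    Response.new(response.code, response.headers.to_h, response.body)
  end
end
```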
If you write code like this, it's really easy to write tests of the interaction without worrying about actual HTTP requests, actual XML feeds, and actual database records by using Mocktail.
The first test
Here's my first test, which assumes no caching headers are known or returned by a feed. I happen to be extending Rails' `ActiveSupport::TestCase` here, but that could just as well be `Minitest::Test` or TLDR:
require "test_helper"
class FetchesFeedTest < ActiveSupport::TestCase
setup do
@gets_http_url = Mocktail.of_next(GetsHttpUrl)
@parses_feed = Mocktail.of_next(ParsesFeed)
@persists_feed = Mocktail.of_next(PersistsFeed)
@subject = FetchesFeed.new
@feed = Feed.new(
url: "http://example.com/feed.xml"
)
end
def test_fetch_no_caching
stubs {
@gets_http_url.get(@feed.url, headers: {})
}.with { GetsHttpUrl::Response.new(200, {}, "an body") }
stubs { @parses_feed.parse("an body") }.with { "an parsed feed" }
@subject.fetch(@feed)
verify {
@persists_feed.persist(
@feed, "an parsed feed",
etag_header: nil, last_modified_header: nil
)
}
end
end
Here's the Mocktail-specific API stuff going on above:
- `Mocktail.of_next(SomeClass)` - pass a class to this and you'll get a fake instance of it AND the next time that class is instantiated with `new`, it will return the same fake instance. This way, the doubles we're configuring in our test are the same as the ones the subject's instance variables are set to in its constructor (tiny illustration after this list)
- `stubs {…}.with {…}` - call the dependency exactly as you expect the subject to in the first block and, if it's called in a way that satisfies that stubbing, return the result of the second block
- `verify {…}` - call a dependency exactly as you expect it to be invoked by the subject, raising an assertion failure if it doesn't happen
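If `of_next` sounds magical, here's the whole contract in miniature (variable names are mine):

```ruby
fake = Mocktail.of_next(GetsHttpUrl)

# The next instantiation is intercepted, so the subject's constructor
# unknowingly receives our fake:
intercepted = GetsHttpUrl.new
fake.equal?(intercepted) # => true

# ...which is how FetchesFeed.new ends up holding our fakes without any
# dependency injection ceremony.
```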
You'll note that this test adheres to the arrange-act-assert pattern, in which first setup is performed, then the behavior being tested is invoked, and then the assertion is made. (Sounds obvious, but most mocking libraries violate this!)
The second test
Kicking the complexity up a notch, next I added a test wherein caching headers were known but they were out of date:
```ruby
def test_fetch_cache_miss
  @feed.etag_header = "an etag"
  @feed.last_modified_header = "an last modified"
  stubs {
    @gets_http_url.get(@feed.url, headers: {
      "If-None-Match" => "an etag",
      "If-Modified-Since" => "an last modified"
    })
  }.with {
    GetsHttpUrl::Response.new(200, {
      "etag" => "newer etag",
      "last-modified" => "laster modified"
    }, "an body")
  }
  stubs { @parses_feed.parse("an body") }.with { "an parsed feed" }

  @subject.fetch(@feed)

  verify { @persists_feed.persist(@feed, "an parsed feed", etag_header: "newer etag", last_modified_header: "laster modified") }
end
```
This is longer, but mostly just because there's more sludge to pass through all the tubes from the inbound argument to each dependency. You might also notice I'm just using nonsense strings here instead of something that looks like a real etag or modification date. This is intentional. Realistic test data looks meaningful, but these strings are not meaningful. Meaningless test data should look meaningless (hence the grammar mistakes). If I see an error, I'd like to know which string I'm looking at, but I want the test to make clear that I'm just using the value as a baton in a relay race: as long as it passes an equality test, `"an etag"` could be literally anything.
The third test
The last test is easiest, because when there's a cache hit, there won't be a feed to parse or persist, so we can just bail out. In fact, all we really assert here is that no persistence call happens:
```ruby
def test_fetch_cache_hit
  @feed.etag_header = "an etag"
  @feed.last_modified_header = "an last modified"
  stubs {
    @gets_http_url.get(@feed.url, headers: {
      "If-None-Match" => "an etag",
      "If-Modified-Since" => "an last modified"
    })
  }.with { GetsHttpUrl::Response.new(304, {}, nil) }

  assert_nil @subject.fetch(@feed)

  verify_never_called { @persists_feed.persist }
end
```
Note that `verify_never_called` doesn't ship with Mocktail, but is rather something I threw in my test helper this morning for my own convenience. Regardless, it does what it says on the tin.
Why make sure `PersistsFeed#persist` is not called, but avoid making any such assertion of `ParsesFeed`? Because, in general, asserting that something didn't happen is a waste of time. If you genuinely wanted to assert every single thing a system didn't do, no test would ever be complete. The only time I bother to test an invocation didn't happen is when an errant call would have the potential to waste resources or corrupt data, both of which would be risks if we persisted an empty feed on a cache hit.
The setup
To set up Mocktail in the context of Rails the way I did, once you've tossed `gem "mocktail"` in your `Gemfile`, then sprinkle this into your `test/test_helper.rb` file:
```ruby
module ActiveSupport
  class TestCase
    include Mocktail::DSL

    teardown do
      Mocktail.reset
    end

    def verify(...)
      assert true
      Mocktail.verify(...)
    end

    def verify_never_called(&blk)
      verify(times: 0, ignore_extra_args: true, ignore_arity: true, &blk)
    end
  end
end
```
Here's what each of those does:
- `include Mocktail::DSL` simply lets you call `stubs` and `verify` without prepending `Mocktail.`
- `Mocktail.reset` will reset any state held by Mocktail between tests, which is relevant if you fake any singletons or class methods
- `verify` is overridden because Rails 7.2 started warning when tests lacked assertions, and it doesn't recognize `Mocktail.verify` as an assertion (even though it semantically is). Since Rails removed the configuration to disable it, we just wrap the method ourselves with a dummy `assert true` to clear the error
- `verify_never_called` is just shorthand for this particular convoluted-looking configuration of the `verify` method (`times` asserts the exact number of times something is called, `ignore_extra_args` will apply to any invocations with more args than specified in the `verify` block, and `ignore_arity` will suppress any argument errors raised for not matching the genuine method signature)
The best mocking library
I am biased about a lot of things, but I'm especially biased when it comes to test doubles, so I'm unusually cocksure when I say that Mocktail is the best mocking library available for Ruby. There are lots of features not covered here that you might find useful. And there are a lot of ways to write code that aren't conducive to using Mocktail (but those code styles aren't conducive to writing isolation tests at all, and therefore shouldn't be using any mocking library IMNSHO).
Anyway, have fun playing with your phony code. 🥃