justin.searls.co

Did you come to my blog looking for blog posts? Here they are, I guess. This is where I post traditional, long-form text that isn't primarily a link to someplace else, doesn't revolve around audiovisual media, and isn't published on any particular cadence. Just words about ideas and experiences.


This Vision Pro strap is totally globular!

Who the fuck knows what a "globular cluster" is, but the Globular Cluster CMA1 is my new recommendation for Best Way to Wear Vision Pro. It replaces a lightly-modified BOBOVR M2 as the reigning champ, primarily because it's a thing you can just buy on Amazon and slap on your face. It's slightly lighter, too. One downside: it places a wee bit more weight up front. I genuinely forget I'm wearing the BOBOVR M2, and I never quite forget I'm wearing this one.

Here's a picture. You can't tell, but I'm actually winking at you.

Also pictured, I've started wearing a cycling skull cap when I work with Vision Pro to prevent the spread of my ethnic greases onto the cushions themselves. By regularly washing the cap, I don't have to worry about having an acne breakout as a 40-year-old man. An ounce of prevention and all that.

You might be asking, "where's the Light Seal?" Well, it turns out if you're wearing this thing for more than a couple hours, having peripheral vision and feeling airflow over your skin is quite nice. Besides, all the cool kids are doing it. Going "open face" requires an alternative to Apple's official straps, of course, because Apple would prefer to give your cheek bones a workout as gravity leaves its mark on your upper-jowl region.

You might also be wondering, "does he realize he looks ridiculous?" All I can say is that being totally shameless and not caring what the fuck anyone thinks is always a great look.

The 12" MacBook was announced 10 years ago

On March 9, 2015, Apple announced a redesigned MacBook, notable for a few incredible things:

  • 2 pounds (the lightest computer Apple currently sells is 35% heavier at 2.7 pounds)
  • 13.1mm thin
  • A 12-inch retina screen (something the MacBook Air wouldn't receive until late 2018)
  • The Force Touch trackpad (which came to the MacBook Pro line the same day)

It also became infamous for a few less-than-incredible things:

  • A single port for charging and data transfer, heralding the dawn of the Dongle Era
  • That port was USB-C, which most people hadn't even heard of, and which approximately zero devices supported
  • A woefully-underpowered 5-watt TDP Intel chip
  • The inadvisably-thin butterfly keyboard, which would go on to hobble Apple's entire notebook line for 5 years (though my MacBooks never experienced any of the issues I had with later MacBooks Pro)

Still, the 2015 MacBook (and the 2016 and 2017 revisions Apple would go on to release) was, without a doubt, my favorite computer ever. When I needed power, I had uncompromised power on my desktop. When I needed portability, I had uncompromised portability in my bag.

It was maybe Phil Schiller's best pitch for a new Mac, too. Here's the keynote video, scrubbed to the MacBook part:

Literally the worst thing about traveling with the 12" MacBook was that I'd frequently panic—oh shit, did I forget my computer back there?—when in fact I had just failed to detect its svelte 2-pound presence in my bag. I lost track of how many times I stopped in traffic and rushed to search for it, only to calm down once I confirmed it was still in my possession.

I started carrying it in this ridiculous-looking 12-liter Osprey pack, because it was the only bag I owned that was itself light enough for me to feel the weight of the computer:

This strategy backfired when I carelessly left the bag (and computer) on the trunk of our car, only for Becky to drive away without noticing it (probably because it was barely taller than the car's spoiler), making the 12" MacBook the first computer I ever lost. Restoring my backup to its one-port replacement was a hilarious misadventure in retrying repeatedly until the process completed before the battery gave out.

I have many fond memories programming in the backyard using the MacBook as a remote client to my much more powerful desktop over SSH, even though bright sunlight on a cool day was all it took to discover Apple had invented a new modal overheating screen just for the device.

Anyway, ever since the line was discontinued in 2019, I've been waiting for Apple to release another ultraportable, and… six years later, I'm still waiting. The 11-inch MacBook Air was discontinued in late 2016, meaning that if your priority is portability, the 13" MacBook Air is the best they can offer you. Apple doesn't even sell an iPad and keyboard accessory that, in combination, weigh less than 2.3 pounds. Their current lineup of portable computers is just nowhere near light enough.

More than the raw numbers, none of Apple's recent Macs have sparked the same joy in me that the 11" Air and 12" MacBook did. Throwing either of those in a bag had functionally zero cost. No thicker than a magazine. Lighter than a liter of water. Today, when I put a MacBook Air in my bag, it's because I am affirmatively choosing to take a computer with me. In 2015, I would regularly leave my MacBook in my bag even when I didn't expect to need it, often just because I was too lazy to take it out between trips. That is the benchmark for portable computing, and Apple simply doesn't deliver it anymore. Hopefully that will change someday.

How to run Claude Code against a free local model

Last night, Aaron shared the week-old Claude Code demo, and I was pretty blown away by it:

I've tried the "agentic" features of some editors (like Cursor's "YOLO" mode) and have been woefully disappointed by how shitty the UX always is. They often break on basic focus changes, hang at random, and frequently require fussy user intervention with a GUI. Claude Code, however, is a simple REPL, which is all I've ever really wanted from a coding assistant. Specifically, I want to be able to write a test in my editor and then tell a CLI to go implement code to pass the test and let it churn as long as it needs.

Of course, I didn't want to actually try Claude Code, because it would have required a massive amount of expensive API tokens to accomplish anything, and I'm a cheapskate who doesn't want to have to pay someone to perform mundane coding tasks. Fortunately, it took five minutes to find an LLM-agnostic fork of Claude Code called Anon Kode and another five minutes to contribute a patch to make it work with a locally-hosted LLM server.

Thirty minutes later, I have a totally-free, locally-hosted version of the Claude Code experience demonstrated in the video above working on my machine (a MacBook Pro with an M4 Pro and 48GB of RAM). I figured other people would like to try this too, so here are step-by-step instructions. All you need is an app called LM Studio and Anon Kode's kode CLI.

Running a locally-hosted server with LM Studio

Because Anon Kode needs to make API calls to a server that conforms to the OpenAI API, I'm using LM Studio to install models and run that server for me.

  1. Download LM Studio
  2. When the onboarding UI appears, I recommend unchecking the option to automatically start the server at login
  3. After onboarding, click the search icon (or hit Command-Shift-M) and install an appropriate model (I started with "Qwen2.5 Coder 14B", as it can fit comfortably in 48GB)
  4. Once downloaded, click the "My Models" icon in the sidebar (Command-3), then click the settings gear button and set the context length to 8192 (this is Anon Kode's default token limit and it currently doesn't seem to respect other values, so increasing the token limit in LM Studio to match is the easiest workaround)
  5. Click the "Developer" icon in the sidebar (Command-2), then in the top center of the window, click "Select a model to load" (Command-L) and choose whatever model you just installed
  6. Run the server (Command-R) by toggling the control in the upper left of the Developer view
  7. In the right sidebar, you should see an "API Usage" pane with a local server URL. Mine is (and I presume yours will be) http://127.0.0.1:1234
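
Before moving on, it's worth sanity-checking that the server is actually reachable. Here's a minimal sketch in Ruby (only because this blog is Ruby-flavored; curl works just as well) that asks LM Studio's OpenAI-compatible endpoint which models it's serving, assuming the default http://127.0.0.1:1234 address from step 7:

require "json"
require "net/http"

# Ask the local LM Studio server which models it's serving. If this prints
# the model you loaded in step 5, Anon Kode should be able to reach it too.
response = Net::HTTP.get_response(URI("http://127.0.0.1:1234/v1/models"))

if response.is_a?(Net::HTTPSuccess)
  JSON.parse(response.body).fetch("data", []).each { |model| puts model["id"] }
else
  puts "Got HTTP #{response.code}; is the server running?"
end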

Configuring Anon Kode

Since Claude Code is a command-line tool, getting this running will require basic competency with your terminal:

  1. First up, you'll need Node.js (or an equivalent runtime) installed. I use homebrew and nodenv to manage my Node installation(s)
  2. Install Anon Kode (npm i -g anon-kode)
  3. In your terminal, change into your project directory (e.g. cd ~/code/searls/posse_party/)
  4. Run kode
  5. Use your keyboard to go through its initial setup. Once prompted to choose between "Large Model" and "Small Model" selections, hit escape to exit the wizard, since it doesn't support specifying custom server URLs
  6. When asked if you trust the files in this folder (assuming you're in the right project directory), select "Yes, proceed"
  7. You should see a prompt. Type /config and hit enter to open the configuration panel, using the arrow keys to navigate and enter to confirm
    1. AI Provider: toggle to "custom" by hitting enter
    2. Small model name: set to "LM Studio" or similar
    3. Small model base URL: http://127.0.0.1:1234/v1 (or whatever URL LM Studio reported when you started your server)
    4. API key for small model: provide any string you like, it just needs to be set (e.g. "NA")
    5. Large model name: set to "LM Studio" or similar
    6. API key for large model: again, enter whatever you want
    7. Large model base URL: http://127.0.0.1:1234/v1
    8. Press escape to exit
  8. In my case, setting a custom base URL resulted in Anon Kode failing to append v1 to the path of its requests to LM Studio until I restarted it (if this happens to you, press Ctrl-C twice and run kode again)
  9. Try asking it to do stuff and see what happens!

That's it! Now what?

Is running a bootleg version of Claude Code useful? Is Claude Code itself useful? I don't know!

I am hardly a master of running LLMs locally, but the steps above at least got things working end-to-end so I can start trying different models and tweaking their configuration. If you try this out and land on a configuration that works really well for you, let me know!

Calling private methods without losing sleep at night

Today, I'm going to show you a simple way to commit crimes with a clean conscience.

First, two things I strive for when writing software:

  1. Making my code do the right thing, even when it requires doing the wrong thing
  2. Finding out that my shit is broken before all my sins are exposed in production

Today, I was working on a custom wrapper of Rails' built-in static file server, and deemed that it'd be wiser to rely on its internal logic for mapping URL paths (e.g. /index.html) to file paths (e.g. ~/app/public/index.html) than to reinvent that wheel myself.

The only problem? The method I need, ActionDispatch::FileHandler#find_file, is private, meaning that I really "shouldn't" be calling it. But also, it's a free country, so whatever. I wrote this and it worked:

filepath, _ = @file_handler.send(:find_file,
  request.path_info, accept_encoding: request.accept_encoding)

If you don't know Ruby, send is a sneaky backdoor way of calling private methods. Encountering send is almost always a red flag that the code is violating the intent of whatever is being invoked. It also means the code carries the risk that it will quietly break someday. Because I'm calling a private API, no one on the Rails team will cry for me when this stops working.
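
If you've never seen send used this way, here's the whole trick in miniature (a toy class, not the actual Rails code):

class Safe
  private

  def combination
    "12-34-56"
  end
end

begin
  Safe.new.combination
rescue NoMethodError => e
  puts e.message # => private method 'combination' called for an instance of Safe
end

puts Safe.new.send(:combination) # => "12-34-56"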

So, anyway, I got my thing working and I felt supremely victorious… for 10 whole seconds. Then, the doubts crept in. "Hmm, I'm gonna really be fucked if 3 years from now Rails changes this method signature." After 30 seconds hemming and hawing over whether I should inline the functionality and preemptively take ownership of it—which would separately run the risk of missing out on any improvements or security fixes Rails makes down the road—I remembered the answer:

I can solve this by codifying my assumptions at boot-time.

A little thing I tend to do whenever I make a dangerous assumption is to find a way to pull forward the risk of that assumption being violated as early as possible. It's one reason I first made a name for myself in automated testing—if the tests fail, the code doesn't deploy, and nothing breaks. Of course, I could write a test to ensure this method still works, but I didn't want to give this method even more of my time. So instead, I codified this assumption in an initializer:

# config/initializers/invariant_assumptions.rb
Rails.application.config.after_initialize do
  next if Rails.env.production?

  # Used by lib/middleware/conditional_get_file_handler.rb
  unless ActionDispatch::FileHandler.instance_method(:find_file).parameters == [[:req, :path_info], [:keyreq, :accept_encoding]]
    raise "Our assumptions about a private method call we're making to ActionDispatch::FileHandler have been violated! Bailing."
  end
end

Now, if I update Rails and try to launch my dev server or run my tests, everything will fail immediately if my assumptions are violated. If a future version of Rails changes this method's signature, this blows up. And every time I engage in risky business in the future, I can just add a stanza to this initializer. My own bespoke early warning system.
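
If you want to write a check like this against some other method, the array in that unless comes from calling parameters on the method object, which returns a [type, name] pair for each parameter (:req for required positional, :keyreq for required keyword, and so on). A quick illustration with a made-up class:

class Example
  def find_file(path_info, accept_encoding:)
  end
end

p Example.instance_method(:find_file).parameters
# => [[:req, :path_info], [:keyreq, :accept_encoding]]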

Writing this note took 20 times longer than the fix itself, by the way. The things I do for you people.

Turning your audio podcast into a video-first production

I was chatting with Adam Stacoviak over at Changelog a couple weeks back, and he mentioned that this year they've taken their podcast "video-first" via their YouTube channel.

I hadn't heard the phrase "video-first" before, but I could imagine he meant, "you record the show for video—which is more complex than recording audio alone—and then the audio is a downstream artifact from that video production." Of course, key to my personal brand is failing to demonstrate curiosity in the moment by simply asking Adam what he does, and instead going on an individual two-week-long spirit quest to invent all the wheels myself based on the possibly incorrect assumption of what he meant in the first place.

Anyway, as of v31 of Breaking Change, my podcast is now, apparently, a video-first production. I figured I'd share my notes on the initial changes to the workflow, along with links to the products I'm using.

Here's the video:

And here's the extremely simple and easy 10 step process that got me there (with affiliate links throughout):

  1. Record audio and video in OBS
    • Video is recorded in 4k@60fps in 8-bit HEVC as an MKV file (because MKV files, unlike MOV, can be interrupted by a crash without losing the entire recording)
    • I use a Sony a7 mark IV over HDMI with Elgato Camlink 4K mounted via the Elgato Master Mount system and flanked on either side by Elgato Key Lights Air. I also have an Elgato Prompter in front of the lens that displays two windows, side-by-side: OBS on the left and my Things project with my show topics on the right
    • I record audio tracks from an SSL 2+ USB interface
      • Track 1 is reserved for the first XLR input, which has a bog-standard SM7b microphone plugged into it
      • Track 2 is L&R of the loopback interface for music and stingers (here's my guide on setting up SSL2+ for loopback that allowed me to avoid software like Audio Hijack)
    • Music and stingers are played manually from shortcuts on my Stream Deck XL
    • While recording, if I need a break, I only hit PAUSE/UNPAUSE instead of STOP/START to ensure only one file is created
    • When finished, leave it recording and then LEAVE THE ROOM for a minute to create some dead air I can later use to sample the room noise with iZotope's RX Spectral Denoise plugin

What happens next will shock you…

A script to validate videos for the Instagram API

If you are publishing videos via the Instagram API (as I do for my feed2gram gem and for Beckygram), one of the first things you notice is that it is a lot less forgiving than their app is.

From their docs, which spell this out:

The following are the specifications for Reels:

  • Container: MOV or MP4 (MPEG-4 Part 14), no edit lists, moov atom at the front of the file.
  • Audio codec: AAC, 48khz sample rate maximum, 1 or 2 channels (mono or stereo).
  • Video codec: HEVC or H264, progressive scan, closed GOP, 4:2:0 chroma subsampling.
  • Frame rate: 23-60 FPS.
  • Picture size:
    • Maximum columns (horizontal pixels): 1920
    • Required aspect ratio is between 0.01:1 and 10:1 but we recommend 9:16 to avoid cropping or blank space.
  • Video bitrate: VBR, 25Mbps maximum
  • Audio bitrate: 128kbps
  • Duration: 15 mins maximum, 3 seconds minimum
  • File size: 300MB maximum

If you get this wrong, you'll receive a mostly-unhelpful-but-better-than-nothing error message that looks like this:

{
  "message": "The video file you selected is in a format that we don't support.",
  "type": "OAuthException",
  "code": 352,
  "error_subcode": 2207026,
  "is_transient": false,
  "error_user_title": "Unsupported format",
  "error_user_msg": "The video format is not supported. Please check spec for supported CodedException format",
  "fbtrace_id": "AvU9fEFKlA8Z7RLRlZ1j9w_"
}

I was sick of cobbling together the same half dozen ffprobe commands and then eyeballing the results (which are typically inscrutable if you don't know what you're looking for), so I wrote a script to test this for me.

For example, a recent clip failed to syndicate to Instagram and I wondered why that was, so I ran this little script, which I put on my PATH and named validate_video_for_instagram. It output:

Validating video: /Users/justin/Documents/podcast/clips/v30-the-startup-shell-game.mp4

✅ container
✅ audio_codec
✅ max_audio_sample_rate
✅ video_codecs
✅ color_space
✅ min_frame_rate
✅ max_frame_rate
❌ max_horizontal_pixels - Maximum columns (horizontal pixels): 1920 required; got: 2160
✅ min_aspect_ratio
✅ max_aspect_ratio
✅ max_video_bitrate_mbps
❌ max_audio_bitrate_kbps - Audio bitrate: 128kbps maximum required; got: 256.073
✅ min_duration_seconds
✅ max_duration_seconds
✅ max_size_megabytes

❌ Video had 2 error(s) preventing API upload to Instagram.
Docs: https://developers.facebook.com/docs/instagram-platform/instagram-graph-api/reference/ig-user/media

Is it surprising that the Instagram API won't accept 4K video? Yes. Especially since the video weighs in at less than 100MB.

Want this for yourself?

To run this, you'll need a modern Ruby and ffprobe installed (on a Mac with Homebrew, brew install ffmpeg should do, since ffprobe ships as part of ffmpeg).
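
To give a flavor of the kinds of checks it performs, here's a minimal sketch (not the actual validate_video_for_instagram script) of two of them, built on ffprobe's JSON output:

require "json"
require "open3"

# Ask ffprobe to describe the file's container and streams as JSON
stdout, status = Open3.capture2(
  "ffprobe", "-v", "quiet", "-print_format", "json",
  "-show_format", "-show_streams", ARGV.fetch(0)
)
abort "ffprobe failed" unless status.success?

data = JSON.parse(stdout)
video = data["streams"].find { |s| s["codec_type"] == "video" }
audio = data["streams"].find { |s| s["codec_type"] == "audio" }

# Maximum columns (horizontal pixels): 1920
if video["width"] <= 1920
  puts "✅ max_horizontal_pixels"
else
  puts "❌ max_horizontal_pixels - Maximum columns (horizontal pixels): 1920 required; got: #{video["width"]}"
end

# Audio bitrate: 128kbps maximum
audio_kbps = audio["bit_rate"].to_f / 1000
if audio_kbps <= 128
  puts "✅ max_audio_bitrate_kbps"
else
  puts "❌ max_audio_bitrate_kbps - Audio bitrate: 128kbps maximum required; got: #{audio_kbps}"
end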

But wait, there's more…

A real-world example of a Mocktail test

A few years ago, I wrote this test double library for Ruby called Mocktail. Its README provides a choose-your-own-adventure interface as well as full API documentation, but it doesn't really offer a way to see a test at a glance—and certainly not a realistic one.

Since I just wrote my first test with Mocktail in a while, I figured I'd share it here for anyone who might have bounced off Mocktail's overly cute README or would otherwise be interested in seeing what an isolated unit test with Mocktail looks like.

The goal

Today I'm writing a class that fetches an Atom feed. It has three jobs:

  1. Respect caching and bail out if the feed hasn't been updated
  2. Parse the feed
  3. Persist the feed entries (and updated caching headers)

There is an unspoken fourth job here: coordinate these three tasks. The first class I write will be the orchestrator of these other three, which means its only job is to identify and invoke the right dependencies the right way under a given set of conditions.

The code

So we can focus on the tests, I'll spare you the test-driven development play-by-play and just show you the code that will pass the tests we're going to write:

class FetchesFeed
  def initialize
    @gets_http_url = GetsHttpUrl.new
    @parses_feed = ParsesFeed.new
    @persists_feed = PersistsFeed.new
  end

  def fetch(feed)
    response = @gets_http_url.get(feed.url, headers: {
      "If-None-Match" => feed.etag_header,
      "If-Modified-Since" => feed.last_modified_header
    }.compact)
    return if response.code == 304 # Unchanged

    parsed_feed = @parses_feed.parse(response.body)
    @persists_feed.persist(
      feed,
      parsed_feed,
      etag_header: response.headers["etag"],
      last_modified_header: response.headers["last-modified"]
    )
  end
end

As you can see, this fits a certain idiosyncratic style that I've been practicing in Ruby for a long-ass time at this point:

  • Lots of classes that hold their dependencies as instance variables, but zero "unit of work" state. Constructors only set up the object to do the work, and public methods perform that work on as many unrelated objects as needed
  • Names that put the verb before the noun. By giving the verb primacy, as the system grows and opportunities for reuse arise, this encourages me to generalize the direct object of the class via polymorphism as opposed to generalizing the core function of the class, violating the single-responsibility principle (i.e. PetsDog may evolve into PetsAnimal, whereas DogPetter is more likely to evolve into a catch-all DogManager)
  • Pretty much every class I write has a single public method whose name is the same as the class's verb
  • I write wrapper classes around third-party dependencies that establish a concrete contract so as to make them eminently replaceable. For starters, GetsHttpUrl and ParsesFeed will probably just delegate to httparty and Feedjira, but those wrappers will inevitably encapsulate customizations in the future
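
For the curious, that last bullet works out to something like this in practice. This is a minimal sketch of GetsHttpUrl, assuming httparty underneath (the real thing would grow timeouts, retries, and error handling):

require "httparty"

class GetsHttpUrl
  # A concrete contract the rest of the app can depend on, so swapping out
  # httparty later only means changing this one class
  Response = Struct.new(:code, :headers, :body)

  def get(url, headers: {})
    response = HTTParty.get(url, headers: headers)
    Response.new(response.code, response.headers, response.body)
  end
end

That sketch is also why you'll see the tests below construct GetsHttpUrl::Response values directly.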

If you write code like this, it's really easy to write tests of the interaction without worrying about actual HTTP requests, actual XML feeds, and actual database records by using Mocktail.

The first test

Here's my first test, which assumes no caching headers are known or returned by a feed. I happen to be extending Rails' ActiveSupport::TestCase here, but that could just as well be Minitest::Test or TLDR:

require "test_helper"

class FetchesFeedTest < ActiveSupport::TestCase
  setup do
    @gets_http_url = Mocktail.of_next(GetsHttpUrl)
    @parses_feed = Mocktail.of_next(ParsesFeed)
    @persists_feed = Mocktail.of_next(PersistsFeed)

    @subject = FetchesFeed.new

    @feed = Feed.new(
      url: "http://example.com/feed.xml"
    )
  end

  def test_fetch_no_caching
    stubs {
      @gets_http_url.get(@feed.url, headers: {})
    }.with { GetsHttpUrl::Response.new(200, {}, "an body") }
    stubs { @parses_feed.parse("an body") }.with { "an parsed feed" }

    @subject.fetch(@feed)

    verify {
      @persists_feed.persist(
        @feed, "an parsed feed",
        etag_header: nil, last_modified_header: nil
      )
    }
  end
end

Here's the Mocktail-specific API stuff going on above:

  • Mocktail.of_next(SomeClass) - pass a class to this and you'll get a fake instance of it AND the next time that class is instantiated with new, it will return the same fake instance. This way, the doubles we're configuring in our test are the same ones the subject's instance variables are set to in its constructor
  • stubs {…}.with {…} - call the dependency exactly as you expect the subject to in the first block and, if it's called in a way that satisfies that stubbing, return the result of the second block
  • verify {…} - call a dependency exactly as you expect it to be invoked by the subject, raising an assertion failure if it doesn't happen

You'll note that this test adheres to the arrange-act-assert pattern, in which first setup is performed, then the behavior being tested is invoked, and then the assertion is made. (Sounds obvious, but most mocking libraries violate this!)

The second test

Kicking the complexity up a notch, next I added a test wherein caching headers were known but they were out of date:

def test_fetch_cache_miss
  @feed.etag_header = "an etag"
  @feed.last_modified_header = "an last modified"
  stubs {
    @gets_http_url.get(@feed.url, headers: {
      "If-None-Match" => "an etag",
      "If-Modified-Since" => "an last modified"
    })
  }.with {
    GetsHttpUrl::Response.new(200, {
      "etag" => "newer etag",
      "last-modified" => "laster modified"
    }, "an body")
  }
  stubs { @parses_feed.parse("an body") }.with { "an parsed feed" }

  @subject.fetch(@feed)

  verify { @persists_feed.persist(@feed, "an parsed feed", etag_header: "newer etag", last_modified_header: "laster modified") }
end

This is longer, but mostly just because there's more sludge to pass through all the tubes from the inbound argument to each dependency. You might also notice I'm just using nonsense strings here instead of something that looks like a real etag or modification date. This is intentional. Realistic test data looks meaningful, but these strings are not meaningful. Meaningless test data should look meaningless (hence the grammar mistakes). If I see an error, I'd like to know which string I'm looking at, but I want the test to make clear that I'm just using the value as a baton in a relay race: as long as it passes an equality test, "an etag" could be literally anything.

The third test

The last test is easiest, because when there's a cache hit, there won't be a feed to parse or persist, so we can just bail out. In fact, all we really assert here is that no persistence call happens:

def test_fetch_cache_hit
  @feed.etag_header = "an etag"
  @feed.last_modified_header = "an last modified"
  stubs {
    @gets_http_url.get(@feed.url, headers: {
      "If-None-Match" => "an etag",
      "If-Modified-Since" => "an last modified"
    })
  }.with { GetsHttpUrl::Response.new(304, {}, nil) }

  assert_nil @subject.fetch(@feed)

  verify_never_called { @persists_feed.persist }
end

Note that verify_never_called doesn't ship with Mocktail, but is rather something I threw in my test helper this morning for my own convenience. Regardless, it does what it says on the tin.

Why make sure PersistsFeed#persist is not called, but avoid making any such assertion of ParsesFeed? Because, in general, asserting that something didn't happen is a waste of time. If you genuinely wanted to assert every single thing a system didn't do, no test would ever be complete. The only time I bother to test an invocation didn't happen is when an errant call would have the potential to waste resources or corrupt data, both of which would be risks if we persisted an empty feed on a cache hit.

The setup

To set up Mocktail in the context of Rails the way I did, toss gem "mocktail" into your Gemfile, then sprinkle this into your test/test_helper.rb file:

module ActiveSupport
  class TestCase
    include Mocktail::DSL

    teardown do
      Mocktail.reset
    end

    def verify(...)
      assert true
      Mocktail.verify(...)
    end

    def verify_never_called(&blk)
      verify(times: 0, ignore_extra_args: true, ignore_arity: true, &blk)
    end
  end
end

Here's what each of those does:

  • include Mocktail::DSL simply lets you call stubs and verify without prepending Mocktail.
  • Mocktail.reset will reset any state held by Mocktail between tests, which is relevant if you fake any singletons or class methods
  • verify is overridden because Rails 7.2 started warning when tests lacked assertions, and it doesn't recognize Mocktail.verify as an assertion (even though it semantically is). Since Rails removed the configuration to disable that warning, we just wrap the method ourselves with a dummy assert true to silence it
  • verify_never_called is just shorthand for this particular convoluted-looking configuration of the verify method (times asserts the exact number of times something is called, ignore_extra_args will apply to any invocations with more args than specified in the verify block, and ignore_arity will suppress any argument errors raised for not matching the genuine method signature)

The best mocking library

I am biased about a lot of things, but I'm especially biased when it comes to test doubles, so I'm unusually cocksure when I say that Mocktail is the best mocking library available for Ruby. There are lots of features not covered here that you might find useful. And there are a lot of ways to write code that aren't conducive to using Mocktail (but those code styles aren't conducive to writing isolation tests at all, and therefore shouldn't be using any mocking library IMNSHO).

Anyway, have fun playing with your phony code. 🥃

Review of the Bullstrap Leather NavSafe Wallet

A few years ago, I bought a leather Apple MagSafe Wallet with its hobbled Find My integration (wherein your phone merely tracks the location at which it was last disconnected from the wallet, as opposed to tracking the wallet itself). And that was a couple years before they made the product even worse by switching to the vegan FineWoven MagSafe Wallet.

Well, this wallet I never really liked is falling apart, and so I went searching for something better. All I want is a leather wallet that has a strong magnet and can reliably fit 3 or 4 cards without eventually stretching to the point that a mild shake will cause your cards to slide out.

After hearing the hosts of ATP talk up the company Bullstrap as a great iPhone case maker that continues to have the #courage to use real cow-murdererin' leather, I figured I'd try out their Leather NavSafe Wallet in the burnt sienna color. I was extremely excited to switch to this wallet, because it promised genuine leather, real Find My integration, a really strong magnet, and a rechargeable battery. Finally, a MagSafe wallet with no compromises!

It sat in my mailbox for about a week because my dad died, and between receiving the delivery notification and returning home, I thought about how excited I was for this wallet every single time I had to pay for something. That's why I tore open the bag and set it up right in the little community mailbox parking lot, instead of waiting the 30 seconds it would take for me to drive home first.

First impressions? Well, dad always read everything I posted to this website and he hated it when I swore gratuitously, so I guess the gloves are finally off.

This motherfucking Bullstrap Leather NavSafe Wallet is a goddamn piece of shit.

To be clear, I am recommending you not purchase the Bullstrap Leather NavSafe Wallet. You probably don't want to buy their Leather Magnetic Wallet either, and—given that they're charging $79.99 for this trash—I plan on avoiding all their bullshit until one of them contacts me to explain why their wallet isn't as bad as my hyperbolic ass is making it out to be. Despite wanting a leather wallet, I believe the life of every cow has value, so it's a goddamn shame to see this one's wasted on piss-poor products like this.

Key points follow, in descending order of positivity:

  1. The Find My setup works, that much I know. I decided to return it within 30 seconds, so I can't attest to its actually-finding-it functionality—all I can say is that I would feel profound disappointment upon successfully locating a Bullstrap Leather NavSafe Wallet
  2. The Qi charging of that wallet might work too, beats me. I won't be holding onto this thing long enough to drain the battery
  3. The wallet purports to fit 3 cards, but it's obscenely, ridiculously tight. It's so tight that I barely got the second card in. It was only due to my journalistic commitment to fully evaluating the product that I attempted to wedge in a third—a decision I immediately regretted. One presumes this rich Corinthian leather will stretch with time, but it won't be on my watch. Perhaps this wallet was designed for a simpler time, back when nice credit cards were made out of plastic and not bulletproof steel
  4. Speaking of the leather, it is extremely genuine, because it already had a scratch along the entire bottom edge before I'd even removed it from the insufficiently-protective plastic bag it was shipped in. Since their return policy requires products to be returned in "unused, re-sellable condition", perhaps this one had been sold, unused, and returned a few times already
  5. Instead of a thumbhole on the back side through which to slide your cards upwards and eject them from the wallet, you're left with only this weird little cloaca at the bottom that no earthly finger could ever squeeze into. Maybe they imagine customers taking a tiny flathead screwdriver and shoving it up the glory hole in order to get their cards out? Because that's what I had to do
  6. The magnet is so fucking weak I thought that I might have forgotten a step in the setup instructions. I literally double-checked the box to make sure there wasn't some kind of adhesive magnet I was supposed to affix on my own. Whatever this magnet is, I would not call it load bearing—two metallic cards and a driver's license left it so precariously attached to the back of my buck-naked iPhone 16 Pro that a heavy breath and a generous jiggle was all it took to dislodge it. To make sure the weight of my cards wasn't to blame, I tested the wallet empty and the magnet is quite a lot weaker than the already-way-too-weak Apple MagSafe wallet and absolutely no match for any of the sub-$20 junk you can find on Amazon in this category

I ordered it directly from their store, which means I also apparently have to pay $7.99 to return it, which feels like bullshit. Come to think of it, the fact I have to wait to hear back from their customer support to get a shipping label is actually why I'm writing this review. I just needed someone to talk to, apparently.

Anyway, this is your regular reminder of why we all ought to just keep ordering garbage products on Amazon and making liberal use of their generous free return policy while we let independent resellers and the resiliency of the US economy rot on the vine. Fuck's sake.

How to transcribe a Podcast with Whisper on an ARM Mac using Homebrew

Goofing around with podcast transcripts today. Here's what I did to transcribe version 26 of the Breaking Change podcast, after a couple hours of being mad at how hard the Internet was making it:

  1. Run brew install whisper-cpp, because I'm fucking sick of cloning one-off Python repos.
  2. Download a model and put it somewhere (I chose ggml-large-v3-turbo-q8_0.bin because it's apparently slower but more accurate than "q5", whatever the hell any of this means)
  3. Since your podcast is probably an MP3, you'll have to convert it to a WAV file for Whisper. Rather than create an interstitial file we'd have to clean up later, we'll just pipe the conversion from ffmpeg. That bit of the command looks like: ffmpeg -i "v26.mp3" -ar 16000 -ac 1 -f wav -
  4. Next is the actual Whisper command, which requires us to reference both the Metal stuff (which ships with whisper-cpp) and our model (which I just put in iCloud Drive so I could safely forget about it). I also set it to output SRT (because I wrote a Ruby gem that converts SRT files to human-readable transcripts) and hint that I'm speaking in English. That bit of the command looks like this: GGML_METAL_PATH_RESOURCES="$(brew --prefix whisper-cpp)/share/whisper-cpp" whisper-cpp --model ~/icloud-drive/dotfiles/models/whisper/ggml-large-v3-turbo-q8_0.bin --output-srt --language en --output-file "v26.srt"

Here's the above put together into a brief script I named transcribe-podcast that will just transcribe whatever file you pass to it:

#!/usr/bin/env bash

# Check if an input file is provided
if [ -z "$1" ]; then
  echo "Usage: $0 input_audio_file"
  exit 1
fi

input_file="$1"
base_name=$(basename "$input_file" | sed 's/\.[^.]*$//')

# Convert input audio to 16kHz mono WAV and pipe to whisper-cpp
ffmpeg -i "$input_file" -ar 16000 -ac 1 -f wav - | \
  GGML_METAL_PATH_RESOURCES="$(brew --prefix whisper-cpp)/share/whisper-cpp" \
  whisper-cpp --model ~/icloud-drive/dotfiles/models/whisper/ggml-large-v3-turbo-q8_0.bin \
  --output-srt --language en --output-file "$base_name" -

If you're writing a script like this for yourself, just replace the path to the --model flag and you too will be able to do cool stuff like this:

$ transcribe-podcast your-podcast.mp3

As for performance, on an M4 Pro with 14 CPU cores, the above three-and-a-half hour podcast took a bit over 11 minutes. On an M2 Ultra with 24 cores, the same file was finished in about 8 minutes. Cool.

How to add a headrest to a Steelcase Leap chair

The Steelcase Leap (v2) is a good office chair in a world of mostly bad office chairs. I've been using it since 2020 and I don't love it, but I definitely hate it less than every other office chair I've ever owned. That's one reason I find myself vexed that Steelcase does not offer an after-market headrest for the chair (and no longer seems to let you configure one with a built-in headrest). In fact, so few office chairs offer headrests that I was briefly tempted to buy a "gaming chair" (do not buy a gaming chair).

And if you're reading this and identify as an Online Ergonomics Expert, I know you're champing at the bit to tell me, "headrests are bad, actually."

But if you're like me and have an incredibly large and heavy head, and/or you spend most of your time at the computer leaning back and pondering what to do next between furious-but-sporadic bouts of typing, then I'm happy to report I have a solution for what ails you.

I tried four different DIY solutions for slapping a third-party headrest onto the Steelcase Leap that were dreamed up by randos on Reddit, but only one of them worked. And the best part is that the winning approach only requires the headrest and a couple of zip ties, meaning it shouldn't void your warranty by requiring you to drill into the back of the chair.

All you need:

  1. This exact headrest made by Engineered Now
  2. These heavy-duty zip ties
  3. These images and maybe also these images that more-or-less tell you how to secure the headrest with the ties to the chair itself

If you're visiting here from a search engine or an AI assistant's generous citation, I hope you find this helpful! I can only speak for myself, but I am quite glad that I didn't have to buy a new chair just to keep my 15-pound head upright at the end of a long day.

Apple's own documentation doesn’t know about watchOS 11’s biggest feature

From Apple's support page for connecting an Apple Watch to Wi-Fi:

Note: Apple Watch won’t connect to public networks that require logins, subscriptions, or profiles. These networks, called captive networks, can include free and pay networks in places like businesses, schools, dorms, apartments, hotels, and stores.

This has indeed been my experience ever since buying the Series 0 in 2015. But because the Apple Watch can piggyback off its parent iPhone for data over Bluetooth—and because most people are never more than a few feet from their phone—odds are you've never even noticed that attempting to join a Wi-Fi network with a captive portal would silently fail instead of bringing up a WebKit view.

You'll never guess what happens next…

The Empowered Programmer citations

Update: As promised, the talk is now up! Go check it out if you want.

I meant to be more on top of it than this, but thanks to some day-of turbulence, I failed to do two things before my Rails World talk on Thursday:

  1. Post this promised post of links to my blog so people could see all the various tools and advice I'd referenced
  2. Redirect Becky's old site (buildwithbecky.com) to the new one (betterwithbecky.com)

Whoops!

Anyway, better late than never. Here are the things I mentioned in the talk:

Of course, most of you reading this weren't in the audience in Toronto and haven't seen the talk. Sit tight, I'm told that Rails World's turnaround time for getting the video online won't be too long. 🤞

There are a bunch of other things about the app's design and architecture that I had to cut for time and which I hope to share in the future, as well as a behind-the-scenes look at how I put together the presentation. Stay tuned!

Drive-by Active Storage advice

This post is also available in Japanese, care of Shozo Hatta

I'm working on a conference talk and there won't be time for me to detail each and every piece of advice I've accrued for each technical topic, so I'm going to dump some of them here and link back to them from the slides.

Today's topic is Active Storage, the Ruby on Rails feature that makes it easy to store user-generated assets like photos and videos in the cloud without clogging up your application or database servers.

Before you do anything, read this absolutely stellar post describing how to get the most out of the feature and avoid its most dangerous foot-guns.

Here goes.

Turns out, there's more to it…

A decoupled approach to relaying events between Stimulus controllers

Part of the allure of Stimulus is that you can attach rich, dynamic behavior to the DOM without building out a long-lived stateful application in the browser.

The pitch is that each controller is an island unto itself, with each adding a particular kind of behavior (e.g. a controller for copying to clipboard, another for displaying upload status, another for drag-and-drop reordering), configured entirely via data attributes. This works really well when user behavior directly initiates all of the behaviors a Stimulus controller needs to implement.

This works markedly less well when a controller's behavior needs to be triggered by another controller.

But wait, there's more…

What one must pass to includes() to include Active Storage attachments

If you're using Active Storage, eager-loading nested associations that contain attachments in order to avoid the "N + 1" query problem can quickly reach the point of absurdity.

Working on the app for Becky's strength-training business, I got curious about how large the array of hashes being sent to the call to includes() is whenever the overall strength-training program is loaded by the server. (This only happens on a few pages, like the program overview page, which genuinely does contain a boatload of information and images).

Each symbol below refers to a reference from one table to another. Every one that descends from :file_attachment is a reference to one of the tables managed by Active Storage for keeping track of cloud-hosted images and videos. Those hashes were extracted from the with_all_variant_records scope that Rails provides.

I mean, look at this:

[{:overview_video=>
   {:file_attachment=>
     {:blob=>
       {:variant_records=>{:image_attachment=>:blob}, :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
 {:overview_thumbnail=>
   {:file_attachment=>
     {:blob=>
       {:variant_records=>{:image_attachment=>:blob}, :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
 {:warmup_movement=>
   {:movement_video=>
     {:file_attachment=>
       {:blob=>
         {:variant_records=>{:image_attachment=>:blob}, :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}},
    :movement_preview=>
     {:file_attachment=>
       {:blob=>
         {:variant_records=>{:image_attachment=>:blob}, :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}}},
 {:workouts=>
   {:blocks=>
     {:mobility_movement=>
       [{:primary_equipment=>
          {:equipment_image=>
            {:file_attachment=>
              {:blob=>
                {:variant_records=>{:image_attachment=>:blob},
                 :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
         :secondary_equipment=>
          {:equipment_image=>
            {:file_attachment=>
              {:blob=>
                {:variant_records=>{:image_attachment=>:blob},
                 :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
         :tertiary_equipment=>
          {:equipment_image=>
            {:file_attachment=>
              {:blob=>
                {:variant_records=>{:image_attachment=>:blob},
                 :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
         :movement_video=>
          {:file_attachment=>
            {:blob=>
              {:variant_records=>{:image_attachment=>:blob},
               :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}},
         :movement_preview=>
          {:file_attachment=>
            {:blob=>
              {:variant_records=>{:image_attachment=>:blob},
               :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}}],
      :exercises=>
       {:exercise_options=>
         {:movement=>
           [{:primary_equipment=>
              {:equipment_image=>
                {:file_attachment=>
                  {:blob=>
                    {:variant_records=>{:image_attachment=>:blob},
                     :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
             :secondary_equipment=>
              {:equipment_image=>
                {:file_attachment=>
                  {:blob=>
                    {:variant_records=>{:image_attachment=>:blob},
                     :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
             :tertiary_equipment=>
              {:equipment_image=>
                {:file_attachment=>
                  {:blob=>
                    {:variant_records=>{:image_attachment=>:blob},
                     :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}},
             :movement_video=>
              {:file_attachment=>
                {:blob=>
                  {:variant_records=>{:image_attachment=>:blob},
                   :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}},
             :movement_preview=>
              {:file_attachment=>
                {:blob=>
                  {:variant_records=>{:image_attachment=>:blob},
                   :preview_image_attachment=>{:blob=>{:variant_records=>{:image_attachment=>:blob}}}}}}}]}}}}}]

By my count, that's 167 relationships! Of course, in practice it's not quite this bad since the vast majority are repeated, and as a result this winds up executing "only" 50 queries or so. But that's… a lot!
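
To be clear about where a structure like that comes from: every attachment needs its attachment → blob → variant-records chain eager-loaded, so the hash is mostly the same sub-tree repeated for each association that eventually hangs onto a file. A rough sketch of how it composes (Program is hypothetical; the association names are lifted from the hash above):

# The chain Active Storage needs for one attachment (variants and previews included)
ATTACHMENT_TREE = {
  blob: {
    variant_records: {image_attachment: :blob},
    preview_image_attachment: {blob: {variant_records: {image_attachment: :blob}}}
  }
}

Program.includes(
  {overview_video: {file_attachment: ATTACHMENT_TREE}},
  {overview_thumbnail: {file_attachment: ATTACHMENT_TREE}}
  # …and so on, for every nested association that has an attachment
)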

Okay, I'm interested…

Broadcasting real-time database changes on a budget

While building Becky's (yet unreleased) app for her strength training business, I've been taking liberal advantage of the Hotwire combo of Turbo and Stimulus to facilitate dynamic frontend behavior without resorting to writing separate server-side and client-side apps. You can technically use these without Rails, but let's be honest: few people do.

Here are a few capabilities this broader suite of libraries give you, in case you're not familiar:

  • Rails Request.js offers a minimal API for sending conventional HTTP requests from JavaScript with the headers Rails expects like X-CSRF-Token handled for you
  • Turbo streams can send just a snippet of HTML over the wire (a fetch/XHR or an Action Cable socket) to re-render part of a page, and support was recently added for Custom Actions that sorta let you send anything you want over the wire
  • The turbo-rails gem adds some very handy glue code to broadcast model updates in your database and render Turbo streams to subscribers via an Action Cable socket connection
  • Stimulus values are synced with the DOM as data attributes on the owning controller's element, and Object serialization is supported (as one might guess) via JSON serialization. Stimulus controllers, by design, don't do much but they do watch the DOM for changes to their values' data attributes

Is your head spinning yet? Good.
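
If the turbo-rails bullet is the one doing the spinning, the model-side glue looks roughly like this. It's a minimal sketch with a hypothetical Workout model; the page that wants live updates subscribes with turbo_stream_from in its ERB and renders the same partial:

# app/models/workout.rb (hypothetical)
class Workout < ApplicationRecord
  # After a change is committed, push a <turbo-stream> over Action Cable to
  # every subscriber, replacing the matching element with a freshly-rendered partial
  after_update_commit do
    broadcast_replace_to self, partial: "workouts/workout", locals: {workout: self}
  end
end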

But wait, there's more…

Photo Shuffle is still broken in the iOS 18 Lock Screen

For iOS 16, Apple overhauled the iPhone lock screen and the one feature they shipped that I really, really wanted was the ability to shuffle depth-effect photos of my spouse. It's called "Photo Shuffle", and you get there by adding a new lock screen, tapping "Photo Shuffle", and selecting "People". The Big Idea is that your phone would use machine learning to select great photos and then apply a depth effect (i.e. clipping the subject in front of the time). However, instead of having users select "People & Pets" from a standard iCloud Photos picker, you get an arbitrary smattering of a couple dozen randos in a bare bones custom UI.

So what's my beef with this feature? Over the course of 2 years and 7 devices, my wife has never been among the options presented to me. Can't select her. Doesn't matter that I've named her in the Photos app. Or favorited her. My library has over 25,000 photos of her, for crissakes. Who can I pick from instead? Well, there are at least 3 kids whose names I never knew and whom Becky appears to have had as Spanish students for a single semester in 2009. Great job, everyone.

As it turns out, I am not alone.

I first encountered this bug in iOS 16 developer beta 1 on June 6, 2022. It has persisted across four iPhones and three iPads, even when set up fresh, not-from-backup. Not only that, I always see the exact same list of people I don't care about. Most of whom I never even bothered to name in Photos, which suggests the bug lives in the cloud, which is just great.

Jason Snell reported on this feature's problematic design last year (during iOS 17 beta season) for Macworld:

Photo Shuffle's method of offering people to display appears utterly broken. It offered my wife a small number of faces, most of whom were completely random and fairly uncommon. She's got hundreds, if not thousands, of pictures of me and our kids on her phone, and yet we weren't among the faces offered. And if the faces you're looking for aren't in Photo Shuffle's very small list of options, there's no recourse. You're stuck.

Well, here we are, one year later, and I'm unhappy to report: Photo Shuffle is still broken in iOS 18. It doesn't seem to have been touched at all.

When people talk about the inscrutability of machine-learning and AI as being problematic, this is as practical an example as I can think of. All I want to do is shuffle photos of my wife on my lock screen, but there's no action I can take as a user—no amount of hardware purchases, software updates, or device factory resets—to make that happen. Apple Support can't do anything either. I doubt the engineers who worked on it could. Whenever anyone says "AI", everyone involved quickly absolves themselves of responsibility—it's a black box.

Recipe: Swapping out a model div with Turbo Streams and Stimulus

Rails + Hotwire is very capable of dynamic behavior like replacing a component in the DOM by sending HTML over-the-wire in response to a user action, but the fact it requires you to touch half a dozen files in the process can make the whole thing feel daunting. Rails itself has always been this way (with each incremental feature requiring a route, model, controller, view, etc.), but I've been using it long enough that I sometimes forget that—similar to learning a recipe—I originally needed months of intentional practice to internalize and gain comfort with the framework's most routine of workflows.

So, like a recipe card, here is a reusable approach to swapping out a <div> rendered by a Rails partial with a turbo stream whenever a user selects an alternate model from an input (specifically, a select) using Turbo 8, Stimulus 1.3, and Rails 7.1.

The ingredients

  1. Partial: Extract a partial to be rendered inside the element you wish to replace, so that both your view and your turbo stream can render the same markup for a given model
  2. Routing: Add a route specifying a one-off controller action that will respond with a turbo stream
  3. Controller Action: Define an action that takes your model ID and the DOM element's ID and responds with a turbo stream to update the element's contents
  4. Turbo Stream View: Create a turbo stream view for the action that invokes the partial
  5. Stimulus Controller: Create a generic Stimulus controller that can swap any model type when given a path, ID, and container
  6. View: Wire up the Stimulus controller to the view's select box and the to-be-replaced element

That's it, 6 key ingredients. If you're curious, step 5 contains the most magic flavoring. 🪄

The actions

Ingredients in hand, let's walk through each of the steps needed to go end-to-end with this feature.

1. Set Up the Rails Partial

First, create a partial that you want to render inside the <div>. Let's assume we want users to be able to change out a generic model named Item, which has a conventional ItemsController.

In that case, let's place a partial that renders the details about an item alongside the controller's views, in _detail.html.erb:

<!-- app/views/items/_detail.html.erb -->
<div>
  <%= item.title %>
  <!-- Other item stuff… -->
</div>

2. Add Routes

Next, we'll add the necessary route for the detail action:

# config/routes.rb
resources :items do
  get :detail, on: :collection
end

This will define a path helper detail_items_path, which works out to "/items/detail".

Note that I threw this on the :collection so that our stimulus controller can more easily specify the URL via query parameters instead of interpolating a fancier member route (e.g. "items/42/detail").

3. Define the Controller Action

With the route defined, we'll add a simple controller action that only responds to turbo stream requests.

Here's what that might look like:

# app/controllers/items_controller.rb
class ItemsController < ApplicationController
  def detail
    @dom_id = params[:dom_id]
    @item = Item.find(params[:id])
  end
end

This dom_id param might throw you off at first, but it's important to keep in mind that unique HTML IDs are the coin of the realm in Turbo-land. You'll see how it gets set later, in step 5.

4. Create the Turbo Stream View

To finish the route-controller-view errand, we'll create a view for the detail action, with the turbo_stream.erb extension instead of html.erb:

<!-- app/views/items/detail.turbo_stream.erb -->
<%= turbo_stream.update @dom_id do %>
  <%= render partial: "detail", locals: { item: @item } %>
<% end %>

Because both the turbo stream and the original view need to render items in exactly the same way, the detail.turbo_stream.erb view responds by rendering the _detail.html.erb partial. If you inspect the HTML that comes over the wire, you'll see that only the turbo stream tag containing this partial is transferred, which often means barely more data crosses the wire than if we had implemented this as a single-page JavaScript app making a similar HTTP request for JSON.

5. Define the Stimulus Controller

In order for users' selections to have any effect, we need JavaScript. We could write a Stimulus controller that's coupled specifically to this Item model, but it's no more work to make it generic, which would allow us to reuse this functionality elsewhere in our app. So let's do that.

You can do this the hard way by using the browser's built-in fetch API to construct the URL, set the Accept header to text/vnd.turbo-stream.html, and replace the element's innerHTML in the DOM, but that's easy to screw up (in fact, I screwed it up twice while writing this). So instead, I'd recommend pulling in the requestjs-rails gem, by first chucking it in your Gemfile alongside any other front-end related gems:

gem "requestjs-rails"

Here's the final Stimulus controller. Deep breath, as I haven't explained all this yet:

// app/javascript/controllers/model_swap_controller.js
import { get } from '@rails/request.js'
import { Controller } from '@hotwired/stimulus'

export default class ModelSwapController extends Controller {
  static targets = ['container']
  static values = {
    path: String,
    originalId: String
  }

  swap (event) {
    const modelId = event.currentTarget.value || // Value from input action
      event.detail?.value || // Value from custom event (e.g. hotwire_combobox)
      this.originalIdValue // Fallback to original value if input value is cleared

    get(this.pathValue, {
      query: {
        id: modelId,
        dom_id: this.containerTarget.id
      },
      responseKind: 'turbo-stream'
    })
  }
}

That get function from @rails/request.js handles all the housekeeping you might hope it would. When I switched to it, the fact it worked the instant I plopped it in gave me Dem Magic Vibes that keep me coming back to Rails 18 years in.

This controller also contains two values and a target:

  • path value: this is just a URL, which we'll set to our intentionally-parameter-free detail_items_path
  • originalId value: this is the Item ID that was first rendered when the page loaded. By having this available as a fallback, we'll be able to gracefully handle the user choosing a blank option from the select by restoring the original item
  • container target: this is the DOM element containing the partial we're going to swap out. Note that it must have a unique id attribute, which we're including in our request to the server as dom_id
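
Concretely, Stimulus reads all three of these from data attributes on the controller's element, so once step 6 is wired up, the rendered wrapper will carry attributes along these lines (values illustrative; the real ones come from the ERB in step 6):

<!-- Illustrative rendered output of the step 6 template -->
<div data-controller="model-swap"
     data-model-swap-path-value="/items/detail"
     data-model-swap-original-id-value="42">
  <!-- ...the select... -->
  <div id="detail_item_42" data-model-swap-target="container">
    <!-- ...the _detail partial... -->
  </div>
</div>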

If this doesn't make perfect sense, I recommend wiring it up anyway and getting it working first, then debugging to inspect the values in motion.

6. Wiring up the Stimulus Controller in the View

Finally, we'll visit the original view from which the _detail.html.erb partial was initially extracted.

Right off the bat, you might notice that I like to use content_tag whenever I need to specify numerous attributes with Ruby expressions, as it requires far fewer <%=%> interpolations than specifying a literal <div>:

<!-- app/views/items/show.html.erb -->
<%= content_tag :div, data: {
    controller: "model-swap",
    model_swap_path_value: detail_items_path,
    model_swap_original_id_value: @item.id,
  } do %>
  <%= collection_select :item, :id, Item.all, :id, :title,
    {include_blank: true},
    {data: {action: "model-swap#swap"}} %>

  <div id="<%= dom_id(@item, "detail") %>" data-model-swap-target="container">
    <%= render partial: "detail", locals: { item: @item } %>
  </div>
<% end %>

The above will probably look immediately familiar to anyone who's done a lot of work with Stimulus before and utterly arcane otherwise. Helping you sort out the latter is beyond the scope of this article, though. Ask ChatGPT or something.

The only thing in the above template that wasn't completely preordained by the first 5 steps was the id attribute of the wrapping div element, so I'll explain that here. For illustration purposes, I set the container div to dom_id(@item, "detail") (which would work out to something like "detail_item_42") to give an example of something that's likely to be unique, but in truth, the most appropriate ID will depend on what's going on in the broader page. For example, in the UI that inspired this blog post, I am allowing users to replace any of a variable array of items across 3 options, so my IDs are based on those indices, like option_2_item_4, as opposed to the database ID of any models. All that really matters is that the ID be unique.
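
If dom_id is unfamiliar, here's a quick sketch of what it returns (the record ID is illustrative):

dom_id(@item)            # => "item_42"
dom_id(@item, "detail")  # => "detail_item_42"
dom_id(Item.new)         # => "new_item"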

That's it!

Pulling off functionality like this with Turbo and Stimulus feels extra delightful, I think, if you (like so many of us) spent the last decade assuming that this kind of snappy, dynamic behavior would require a front-end JavaScript framework that would live forever and keep track of a duplicated copy of the app's state. Instead, because the server-side rendered view can draw the entire page without any JavaScript involved, any client-side changes we introduce to the state of the DOM can operate on attributes first defined by the view, keeping the entire source of truth of the current application state in one place (the DOM) instead of two (a server database and in-memory JavaScript objects).

Anyway, when it works, it's great. And when the lego bricks aren't snapping together for whatever reason, it's infuriating. Which maybe makes Hotwire the most Rails-assed extension to the framework since Rails itself. If you find yourself losing a bunch of time to what seem like trivial naming issues, just know that you're not alone. This stuff takes practice to get used to.

If you've worked through this guide, hopefully you have a functioning feature that you can continue iterating on. If you stumbled over any errata above, please let me know.

Make Command-Return submit your web form

Hello, I just wanted to say that if you want your web app to feel Cool and Modern, one of the easiest things you can do is make it so that Mac users can hit command-return anywhere in the form to submit it.

Some websites map this to control-enter on other platforms, and that's fine, but I don't bother. Truth be told, I used to bother, but after adding it to a few web apps years ago, I actually had multiple Windows and Linux users complain to me about unintended form submissions.

I am not making a comment on the sophistication of non-Apple users, but I am saying that if you just stick this code at the top of your app, it will make it more Exclusive and feel Snappier and I will thank you for it.

Here, just copy and paste this. Don't even bother reading it first:

document.addEventListener('keydown', (event) => {
  // Only handle Command-Return (metaKey is ⌘ on macOS)
  if (event.key === 'Enter' && event.metaKey) {
    if (!document.activeElement) return
    // Find the form (if any) containing the currently-focused element
    const closestForm = document.activeElement.closest('form')
    if (closestForm) {
      event.preventDefault()
      // Unlike submit(), requestSubmit() runs validations and fires submit events
      closestForm.requestSubmit()
    }
  }
})

Why I just uninstalled my own VS Code extension

After a little over a year of prodding by Vini Stock to ship a Standard Ruby add-on for Ruby LSP, and thanks to a lot of help from Shopify's Ruby DX team, I've finally done it! In fact, so long as your Gemfile's version of standard is at least 1.39.1, you already have the new Ruby LSP add-on. It's built-in!
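
If you're not sure whether you're on a new enough version, the relevant Gemfile line would look something like this (then update your bundle as needed):

# Gemfile
gem "standard", ">= 1.39.1"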

Ruby LSP supports any editor with language server support, but configuration varies from editor to editor. Since VS Code is relatively dominant, I added some docs on how to set it up, but most Ruby LSP users will just need these settings to select Standard as their linter and formatter:

"[ruby]": {
  "editor.defaultFormatter": "Shopify.ruby-lsp"
},
"rubyLsp.formatter": "standard",
"rubyLsp.linters": [
  "standard"
]

I've been using this configuration for a bit over a week and I've decided: it's time to uninstall my own bespoke extension that we launched early last year.

I've also updated Standard's README to explain why the new Ruby LSP add-on is superior to our own built-in language server. In short, the Ruby LSP add-on supports pull diagnostics and code actions, and the built-in server does not.

Standard Ruby's built-in language server and existing VS Code extension will continue to work and be supported for the foreseeable future, but it doesn't make much sense to invest heavily in new features when the Ruby LSP add-on will get them "for free".

Why make the switch?

Three reasons:

  1. Capability. Ruby LSP centralizes the pain of figuring out how to build a full-featured, performant language server. The issue isn't that implementing a basic STDIO server is All That Hard, it's that rolling your own utilities like logging, debugging, and test harnesses is a huge pain in the ass. By plugging into Ruby LSP as an add-on, library authors can integrate with simpler high-level APIs, exploit whatever LSP capabilities it implements and whatever utilities it exposes, and spare themselves from re-inventing Actually Hard things like project-scoped code indexing (instead, leveraging Ruby LSP's robust, well-tested index)
  2. Duplication. RuboCop maintainer Koichi Ito gave the closest thing to a barn-burner presentation about language servers at RubyKaigi that I could imagine, where he discussed the paradoxical wastefulness of every library author hand-rolling the same basic implementation while simultaneously needing their own tightly-integrated language server to push their tools' capabilities forward. In the case of Standard Ruby, we're squeezed on both sides: at one end, a Ruby LSP add-on would be a more convenient, batteries-included solution than publishing our own extension; at the other, nuking our own custom LSP code and delegating to RuboCop's built-in language server would unlock capabilities we couldn't hope to provide ourselves
  3. Maintainability. You think I enjoy maintaining any of this shit?

Embracing defeat

So yeah, in the medium-term future, I see Ruby LSP and RuboCop as being better-positioned to offer a language server than Standard itself. Thanks to Will Leinweber's implementation, we may have been there first, but I have nothing to gain by spending my free time ensuring our server is somehow better than everyone else's. In the long-term, even more consolidation seems likely—which probably means Ruby LSP will become dominant. But ultimately, they're called language servers for a reason, and if Ruby shipped with a built-in language server (and an API that any code could easily plug into), it could prove a competitive advantage over other languages while simultaneously enabling a new class of tools that could each pitch in to enhance the developer experience in distinct, incremental ways.

On a human level, I think it's important not to associate the prospect of retiring one's own work with feelings of failure. Code is a liability, not an asset. Whenever I can get by with less of it, I feel relief after discarding it. If relief isn't your default reaction to a competing approach winning out on the merits (and it's understandable if it isn't; pride of authorship is a thing), I encourage you to figure out how to adopt this mindset. There are far too many problems out there worth solving to waste a single minute defending the wrong solution.

Anyway, go try out Standard with Ruby LSP and tell me how it goes! I'll be bummed if I didn't manage to break at least something.