
Abusing Rails' content_for to push data attributes up the DOM

(I just thought of this today, and it's probably a terrible idea, but it seems to work. If you have reason to believe this is really stupid, please let me know!)

If you use Hotwire with Rails, you probably find yourself writing a lot of data attributes. This is great, because (similar to HTMX) it makes the DOM the primary source of truth and all that, but it sometimes imposes one of several vexing constraints on Stimulus components:

  • Stimulus values must be set on the same element that specifies the controller (i.e. whatever tag has data-controller="cheese" must contain all its value attributes, like data-cheese-smell-value="stinky"). This can be a problem when you'll only have easy access to the value at some later point in your ERB template. You can't just set it on a descendant.
  • A Stimulus controller's targets (as in, data-cheese-target="swiss") must be descendants of the controller, which can present design challenges when those targets appear in wildly different areas of the DOM, rendered by unrelated templates
  • Actions will only reach a controller if that controller is an ancestor of whatever node triggered the event (i.e. data-action="click->cheese#sniff" only works if it's placed on a descendant of the element with data-controller="cheese")

I often find myself writing Stimulus components that would be easier to implement if any of the above three things weren't true, and it occasionally leads me to wish I could just chuck a data attribute near the top of the DOM in my layout from an individual view or partial, to ensure every element involved shares a certain controller as a common ancestor. The alternatives are all worse: storing one-off data attributes as values (which don't benefit from Stimulus's nifty Values API), binding actions to global events (@window), or indirect inter-controller communication.

An example problem

In my particular case, I have a bit of UI in my layout that resembles an iOS navigation bar. For certain views, that bar renders a search field, and those views contain a number of elements that should be filterable by it. The DOM tree looks like this (see the sketch after this list):

  • A top-level layout template:
    • A navigation bar partial (that allows customization via yield :navbar)
    • A view container containing the layout's primary yield to each view. Each view, in turn:
      • Renders whatever content they need
      • [Optional] Configures whether the navigation bar renders a search field for that page (via content_for :navbar)
      • [Optional] Renders a list of elements that should be filterable
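
For concreteness, here's roughly what that layout structure looks like as a template (a simplified, hypothetical sketch; the partial and file names aren't from my actual app):

<%# app/views/layouts/application.html.erb (hypothetical sketch) %>
<body>
  <%= render "navbar" %> <%# internally calls yield :navbar %>
  <main>
    <%= yield %> <%# each view's content (and optional filterable list) renders here %>
  </main>
</body>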

Say this filtering behavior is governed by a Stimulus controller called Filterable. This setup raises the question: where should the data-controller="filterable" attribute go? It can't go in the navigation bar, because then the target elements to be filtered would not descend from the controller. It can't go in the view, because then the search bar's events wouldn't trigger actions on the controller. Of course, it could go on the layout's <body> tag, but what if only a handful of pages offer filter functionality? Binding every possible Stimulus controller to the body of every single page is obviously the wrong answer.

My solution? Abuse the hell out of Action View's content_for and yield methods. (Here are the relevant Rails guides if you're not familiar).

My hacked-up solution

In short, I just encode my desired data attributes from views and partials as JSON in a call to content_for and then render them upstream to the layout with yield as attributes on the <body> (or some other common ancestor element).

In this very simple case, the only thing I needed to push up the DOM to a shared ancestor was a data-controller="filterable", which just required this bit of magic at the top of the view containing my filterable items:

<% json_content_for :global_data_attrs, {controller: "filterable"} %>

And this update to my layout:

<%= content_tag :body, data: json_content_from(yield(:global_data_attrs)) do %>
  <!-- Everything goes here -->
<% end %>

And… boom! The page's body tag now contains data-controller="filterable", which means:

  • The items in the view (each with data-filterable-target="item" set) are now valid targets
  • The search bar's actions (with data-action="input->filterable#update") now reach the body's Filterable controller

How it works

Here's how I implemented the helper methods json_content_for and json_content_from to facilitate this:

# app/helpers/content_for_abuse_helper.rb
module ContentForAbuseHelper
  # A delimiter that's vanishingly unlikely to appear in any JSON payload
  STUPID_SEPARATOR = "|::|::|"

  # Appends a hash of data attributes (serialized as JSON) to the named
  # content_for buffer, delimited so it can be split apart again later
  def json_content_for(name, json)
    content_for name, json.to_json.html_safe + STUPID_SEPARATOR
  end

  # Splits the yielded buffer back into its JSON payloads and merges them
  # into a single hash, concatenating values whenever two payloads set the
  # same attribute
  def json_content_from(yielded_content)
    yielded_content.split(STUPID_SEPARATOR).reduce({}) { |memo, json|
      memo.merge(JSON.parse(json)) { |key, val_1, val_2|
        token_list(val_1, val_2)
      }
    }
  end
end

Take particular note of the call to token_list there. Because it's called in a block passed to Hash#merge, any duplicate data attribute names will have their contents concatenated with whitespace between them.
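
If it's easier to see outside of ERB, here's roughly what that reduce-and-merge does when two payloads set the same attribute (a plain-Ruby illustration; the string interpolation below stands in for Rails' token_list, which also de-duplicates tokens):

require "json"

payloads = ['{"controller":"cheese"}', '{"controller":"meats veggies"}']
payloads.map { |json| JSON.parse(json) }
  .reduce({}) { |memo, attrs|
    memo.merge(attrs) { |_key, a, b| "#{a} #{b}" } # stand-in for token_list
  }
# => {"controller"=>"cheese meats veggies"}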

This way you could have one view or partial contain:

<% json_content_for :global_data_attrs, {controller: "cheese"} %>

And another with:

<% json_content_for :global_data_attrs, {controller: "meats veggies"} %>

And a layout like the one above will ensure all of your controllers come along for the ride:

<body data-controller="cheese meats veggies">

Cool.

This is a hack

The content_for and yield methods were invented in a simpler time when developers had the basic decency to keep HTML in their HTML files, CSS in their CSS files, and JavaScript in their JavaScript files. But thanks to Tailwind and Stimulus, I find I'm writing more and more of my styling and behavior in HTML attributes, which is why I contorted these two methods to do what I wanted.

I'd advise anyone to tread lightly with approaches like this. Any time you muck with a higher-order thing from a lower-order thing, you're creating an opportunity for confusion, making caching more difficult, and opening the door to conflicts between unrelated pieces of code.

But, I dunno. Seems to work. 🤷‍♂️


v15 - An E Ink iPod Touch

Breaking Change

I bought a new gadget! And it runs Android! And I don't hate it!

Tell me about things you hate and don't hate and I might just read your feelings on air, for others to have opinions about! The e-mail, as always, is podcast@searls.co.


When people ask the "secret to my success," I like to respond with any of the many attributes that set me apart from my peers.

Here's one: whenever someone says anything remotely hurtful, think about it several times every week for the next twenty years and get very, very sad.

Hey, check out this infuriating Safari bug

It appears that Safari 17.5 (as well as the current Safari Technology Preview, "Safari 18.0, WebKit 19619.1.18") has a particularly pernicious bug in which lazy-loaded img tags whose src must be resolved after an HTTP redirect will stop rendering if you load a lot of them. But only sometimes. And then continuously for, like, 5 minutes.

Suppose I have a bunch of images like this in a list:

<img loading="lazy" src="/a/redirect/to/some.webp">

Seems reasonable, right? Weirdly, when the bug is encountered:

  • Safari won't "wake up" to load the image in response to scrolling or resizing the window
  • Nothing is printed to the development console and no errors appear in the Network tab
  • The bug will persist after countless page refreshes for at least several minutes (almost as if a time-based cache expiry is at play)
  • It also persists after fully quitting and relaunching Safari (suggesting a system-wide cache or process is responsible)

I got tripped up on this initially, because I thought the bug was caused by the fact I was loading WebP files, but the issue went away as soon as I started loading the static file directly, without any redirect. As soon as I realized the bug was actually triggered by many images requiring a redirect—regardless of file type—the solution was easy: stop doing that.

(Probably a good idea, regardless, since it's absurdly wasteful to ask every user to follow hundreds of redirects on every page load.)

So why was I redirecting so many thumbnail images in the first place? Well, Active Storage, which I use for hosting user-uploaded assets, defaults to serving those assets via a Rails-internal route which redirects each asset to whatever storage provider is hosting it. That means if your app uses the default redirect mode instead of ensuring your assets are served by a CDN, you can easily wind up in really stupid situations like this one.
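
If you'd rather not serve those redirects at all, Rails also supports proxying Active Storage files through the app so a CDN sitting in front of it can cache them. Here's a minimal sketch of that configuration (this is Rails' documented proxy-mode setting; whether it's appropriate depends on your storage and CDN setup):

# config/environments/production.rb
Rails.application.configure do
  # Serve Active Storage files through the app ("proxy mode") instead of
  # redirecting each request to the storage service, so a CDN can cache them
  config.active_storage.resolve_model_to_route = :rails_storage_proxy
end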

Fortunately for me, I'm only relying on redirection in development (in production, I have Rails generating URLs to an AWS CloudFront distribution), so this bug wouldn't have bitten me For Real. Of course, there was no way I could have known that, so I sat back, relaxed, and enjoyed watching the bug derail my entire morning. Not like I was doing anything.

Being a programmer is fun! At least it's Friday. 🫠

How to loopback computer audio with SSL2/+ and Logic Pro

I use an SSL2+ interface to connect my XLR microphone to my Mac via USB-C when I record Breaking Change. At launch, the SSL2 and SSL2+ interfaces could monitor your computer audio (as in, pipe it into the headphones you had plugged into them, which you'd need if you were recording a discussion with other people), but there was no way to capture that audio. Capturing computer audio is something you might want to do if you have a Stream Deck or mix board configured to play certain sounds when you hit certain buttons during a stream or other audio production. And up until the day you stumbled upon this blog post, you'd have needed a software solution like Audio Hijack to accomplish that.

Today, that all changes!

Last year, Solid State Logic released the v1.2 firmware update for the SSL2 and SSL2+, adding official loopback support. (At the time of this writing, the current version is 1.3.)

Impressively, the firmware update process couldn't have been easier. Even though the custom UI looks a bit janky, it only required one button press to run and it was all over in about 30 seconds.

Once installed, I had no idea what to do. It appears nobody updated the device's documentation to explain how to actually record looped-back audio. I'm brand new to audio engineering (and reluctant to get too deep into it), so it's a small miracle I figured the following out on my own without too much hassle:

  • The SSL2/+ ships with two physical channels (one for each XLR input), but what's special about the v1.2 firmware update is that it added two virtual channels (3 and 4, representing the left and right channels of whatever audio is sent from the computer to the interface)
  • In your DAW (which audio people like me know stands for "Digital Audio Workstation", which in my case is Logic Pro), you can set up two tracks for recording:
    • Input 1 to a mono channel to capture your voice (as you probably have been doing all along)
    • Input 3 & 4 to a new stereo channel to capture your computer audio via loopback
  • Both tracks need to be enabled for recording (blinking red in Logic) when you hit the record button.

Here's what that all looks like for me.

Until now, I've been recording the podcast with QuickTime Player, but I'll have to give that up and start recording in Logic (or another DAW capable of separating multiple channels streaming in from a single input and recording them to separate tracks). Even if you could get all the channels recorded through a single track in another app, you probably wouldn't like the result: voices recorded by microphones require significant processing—processing you'd never want to run computer audio through.

Anyway, hopefully this post helps somebody else figure this little secret out. Nothing but love for Rogue Amoeba, but I sure am glad I don't need to add Audio Hijack to my ever-increasing stack of audio tools to get my podcast out the door!

Yesterday, Gruber broke what, in my opinion, is the most important news story regarding Apple Vision Pro since its launch in February. Emphasis mine:

VisionOS 2 is not getting any Apple Intelligence features, despite the fact that the Vision Pro has an M2 chip. One reason is that VisionOS remains a dripping-wet new platform — Apple is still busy building the fundamentals, like rearranging and organizing apps in the Home view. VisionOS 2 isn't even getting features like Math Notes, which, as I mentioned above, isn't even under the Apple Intelligence umbrella. But another reason is that, according to well-informed little birdies, Vision Pro is already making significant use of the M2's Neural Engine to supplement the R1 chip for real-time processing purposes — occlusion and object detection, things like that. With M-series-equipped Macs and iPads, the Neural Engine is basically sitting there, fully available for Apple Intelligence features. With the Vision Pro, it's already being used.

Not being able to run Apple Intelligence would be a devastating blow to any role Vision Pro might serve as Apple's halo car—an expensive gadget most people won't (and shouldn't) buy, but which plays an aspirational role in the lineup and demonstrates their technology and design prowess.

Now, couple this with rumors that work on Vision Pro 2 has been suspended, and it starts to look like we won't see any Apple Intelligence features on the visionOS platform until late 2026 at the earliest. How dated and limited will Apple Vision Pro seem in late 2026 if most new features coming to Apple's other platforms—including, one imagines, updated Watch, Apple TV, and HomePod hardware—don't find their way to Vision Pro, putting its user experience further and further behind?

At launch, I heard a lot of people jokingly refer to Vision Pro as, "an iPad strapped to your face." Recently, as it's become clear most people are using it to watch TV and for Mac screen sharing, Marco Arment said it was more like a mere Apple TV strapped to your face. But if the hardware really can't support Apple Intelligence and isn't going to be updated for several years, how long before Vision Pro feels like an original HomePod strapped to your face?

Outed

I hate when the algorithm nails me.

Since starting Breaking Change earlier this year, I've wanted to start publishing stock-video-laden clips of my favorite little rants and flourishes. The trouble is, video is a huge time sink. At least for me, relative to the other things I do. So I pulled out a timer and ran an experiment to see if I could turn around a ~3 minute clip in under an hour.

And I succeeded! I think with some template setup work, I could get it down to even less time. Hopefully this means more video shenanigans in the future.

Adding vertical screen size media queries to Tailwind

Learning Tailwind was the first time I've felt like I had a chance in hell of expressing myself visually with precision and maintainability on the web. I'm a fan. If your life currently involves CSS and you haven't tried Tailwind, you should spend a few days with it and get over the initial learning curve.

One thing I like about Tailwind is that it's so extensible. It ships with utility classes for dealing with screen sizes to support responsive designs, but out of the box it doesn't include any vertical screen sizes. This isn't surprising, as they're usually not necessary.

But, have you ever tried to make your webapp work when a phone is held sideways? When you might literally only have 330 pixels of height after accounting for the browser's toolbar? If you have, you'll appreciate why you might want to make your design respond differently to extremely short screen heights.

Figuring out how to add this in Tailwind took less time than writing the above paragraph. Here's all I added to my tailwind.config.js:

module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' }
      }
    }
  }
}

And now I can accomplish my goal of hiding elements on very short screens or otherwise compressing the UI. Here's me hiding a logo:

<img class="w-6 short:hidden" src="/logo.png">

Well, almost…

Unfortunately, because of this open issue, defining any screens with raw will inadvertently break variants like max-sm:, which is bad. So in the meantime, a workaround would be to define those yourself. Here's what that would look like:

const defaultTheme = require('tailwindcss/defaultTheme')

module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' },

        // Manually generate max-<size> classes due to this bug https://github.com/tailwindlabs/tailwindcss/issues/13022
        'max-sm': { raw: `not all and (min-width: ${defaultTheme.screens.sm})` },
        'max-md': { raw: `not all and (min-width: ${defaultTheme.screens.md})` },
        'max-lg': { raw: `not all and (min-width: ${defaultTheme.screens.lg})` },
        'max-xl': { raw: `not all and (min-width: ${defaultTheme.screens.xl})` },
        'max-2xl': { raw: `not all and (min-width: ${defaultTheme.screens['2xl']})` },
      }
    }
  }
}

Okay, yeah, so this was less of a slam dunk than I initially suggested, but I'm still pretty happy with it!


v14 - Actual Intelligence

Breaking Change

WWDC came and went and now we're all just left to ponder what life will be like under the thumb of our new AI overlords (and underlords). But at least for now, humans are still allowed to produce podcasts on their own, which means we can safely opine about the fresh hell we're all hard at work creating.

I finally got a chance to get to some mailbag questions in this episode! If you want to be a part of the ✨magic🪄, shoot me an e-mail at podcast@searls.co. I won't read your last name on air, unless I accidentally do. Promise!


Running Rails System Tests with Playwright instead of Selenium

Last week, when David declared that system tests have failed, my main reaction was: "well, yeah." UI tests are brittle, and if you write more than a handful, the cost to maintain them can quickly eclipse any value they bring in terms of confidence your app is working.

But then I had a second reaction, "come to think of it, I wrote a smoke test of a complex UI that relies heavily on Turbo and it seems to fail all the damn time." Turbo's whole-cloth replacement of large sections of the DOM seemed to be causing numerous timing issues in my system tests, wherein elements would frequently become stale as soon as Capybara (under Selenium) could find them.

Finally, I had a third reaction, "I've been sick of Selenium's bullshit for over 14 years. I wonder if I can dump it without rewriting my tests?" So I went looking for a Capybara adapter for the seemingly much-more-solid Playwright.

And—as you might be able to guess by the fact I bothered to write a blog post—I found one such adapter! And it works! And things are better now!

So I wrote up a full guide on how to swap Selenium for Playwright in your Rails system tests.
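
At its core, the swap looks something like this (a minimal sketch based on the capybara-playwright-driver gem's documented setup; the gem name, driver class, and options come from its README, so your configuration may differ):

# Gemfile (test group): replace selenium-webdriver with the Playwright adapter
# gem "capybara-playwright-driver"

# test/application_system_test_case.rb
require "test_helper"

Capybara.register_driver(:playwright) do |app|
  Capybara::Playwright::Driver.new(app,
    browser_type: :chromium, # :firefox and :webkit also work
    headless: true)
end

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :playwright
end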


Pro-tip to the 8 people out there using Vision Pro and Mac Virtual Display: don't let your Mac enter low power mode.

I set Low Power Mode to turn on when my MacBook Air is on battery and then spent three days being confused about why the virtual display latency skyrocketed from "instant" to "slideshow".