I bought a new gadget! And it runs Android! And I don't hate it!
Tell me about things you hate and don't hate and I might just read your feelings
on air, for others to have opinions about! The e-mail, as always, is
podcast@searls.co.
It appears that Safari 17.5 (as well as the current Safari Technology Preview,
"Safari 18.0, WebKit 19619.1.18") has a particularly pernicious bug in which
img tags with lazy-loading
enabled
that have a src which must be resolved after an HTTP redirect will stop
rendering if you load a lot of them. But only sometimes. And then continuously
for, like, 5 minutes.
Suppose I have a bunch of images like this in a list:
<img loading="lazy" src="/a/redirect/to/some.webp">
Seems reasonable, right? Well, here's what I'm seeing:
Weirdly, when the bug is encountered:
Safari won't "wake up" to load the image in response to scrolling or resizing the window
Nothing is printed to the development console and no errors appear in the
Network tab
The bug will persist after countless page refreshes for at least several
minutes (almost as if a time-based cache expiry is at play)
It also persists after fully quitting and relaunching Safari (suggesting a
system-wide cache or process is responsible)
I got tripped up on this initially because I thought the bug was caused by the
fact that I was loading WebP files, but
the issue went away as soon as I started loading the static file directly,
without any redirect. As soon as I realized the bug was actually triggered by
many images requiring a redirect—regardless of file type—the solution was
easy: stop doing that.
(Probably a good idea, regardless, since it's absurdly wasteful to ask every
user to follow hundreds of redirects on every page load.)
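If your Rails views are doing something similar, the shape of the fix is just to render the final URL instead of a path that 302s to it. Here's a rough sketch (both helpers are hypothetical, not code from my app):

<%# Before (hypothetical helper): each image hits an app route that redirects to the real file %>
<%= image_tag redirected_image_path(photo), loading: "lazy" %>

<%# After (hypothetical helper): emit the final URL directly (a static path in development, the CDN URL in production) so Safari never follows a redirect %>
<%= image_tag final_image_url(photo), loading: "lazy" %>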
Fortunately for me, I'm only relying on redirection in development (in
production, I have Rails generating URLs to an AWS
CloudFront distribution), so this bug
wouldn't have bitten me For Real. Of course, there was no way I could have known
that, so I sat back, relaxed, and enjoyed watching the bug derail my entire
morning. Not like I was doing anything.
Being a programmer is fun! At least it's Friday. 🫠
TIL you can force a browser to print the backgrounds of certain elements with the CSS declaration print-color-adjust: exact;
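For example, a minimal sketch (the selector is made up; the prefixed property is a fallback for older WebKit):

/* Keep this element's background when printing, instead of letting the browser strip it to save ink */
.invoice-header {
  background-color: #0f172a;
  -webkit-print-color-adjust: exact; /* prefixed fallback for older WebKit */
  print-color-adjust: exact;
}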
So long as you promise to vote for "Standard (gem)" as your preferred quality tool, I encourage you to take this year's Ruby on Rails community survey! railsdeveloper.com/survey/
I use an SSL2+ interface to
connect my XLR microphone to my Mac via USB-C when I record Breaking
Change. When they launched, the SSL2 and SSL2+ interfaces could monitor
your computer audio (as in, pipe it into the headphones you had plugged into it,
which you would need if you were recording during a discussion with other
people), but there was no way to capture that audio. Capturing computer audio
is something you might want to do if you had a Stream
Deck or mix board configured to
play certain sounds when you hit certain buttons during a stream or other audio
production. And up until the day you stumbled upon this blog post, you'd need a
software solution like Audio Hijack to
accomplish that.
Today, that all changes!
Last year, Solid State Logic released a v1.2 firmware update
for the SSL2 and SSL2+ to add official loopback support. (At the time of this
writing, the current version is 1.3.)
Impressively, the firmware update process couldn't have been easier. Even though
the custom UI looks a bit janky, it only required one button press to run and it
was all over in about 30 seconds:
Once it was installed, I had no idea what to do. It appears nobody updated the device's
documentation to explain how to actually record looped back audio. I'm brand new
to audio engineering (and reluctant to get too deep into it), so it's a small
miracle I figured the following out on my own without too much hassle:
The SSL2/+ ships with two physical channels (one for each XLR input), but what's
special about the v1.2 firmware update is that it added two virtual channels
(3 and 4, representing the left and right channels of whatever audio is sent
from the computer to the interface)
In your DAW (which audio people like me know stands for "Digital Audio
Workstation", which in my case is Logic Pro), you can set up two tracks for
recording:
Input 1 to a mono channel to capture your voice (as you probably have been doing all along)
Input 3 & 4 to a new stereo channel to capture your computer audio via loopback
Both tracks need to be enabled for
recording
(blinking red in Logic) when you hit the record button.
Here's what that all looks like for me:
Previously, I'd been recording the podcast with QuickTime Player, but I'll have
to give that up and start recording in Logic (or another DAW capable of
separating multiple channels streaming in from a single input and recording them
to separate tracks). Even if you could get all the channels recorded into a
single track in another app, you probably wouldn't like the result: voices
recorded by microphones require significant processing—processing you'd never
want to run computer audio through.
Anyway, hopefully this post helps somebody else figure this little secret out.
Nothing but love for Rogue Amoeba, but I sure am glad I don't need to add Audio
Hijack to my ever-increasing stack of audio tools to get my podcast out the
door!
VisionOS 2 is not getting any Apple Intelligence features, despite the fact
that the Vision Pro has an M2 chip. One reason is that VisionOS remains a
dripping-wet new platform — Apple is still busy building the fundamentals,
like rearranging and organizing apps in the Home view. VisionOS 2 isn't even
getting features like Math Notes, which, as I mentioned above, isn't even
under the Apple Intelligence umbrella. But another reason is that, according
to well-informed little birdies, Vision Pro is already making significant use
of the M2's Neural Engine to supplement the R1 chip for real-time processing
purposes — occlusion and object detection, things like that. With
M-series-equipped Macs and iPads, the Neural Engine is basically sitting
there, fully available for Apple Intelligence features. With the Vision Pro,
it's already being used.
Not being able to run Apple
Intelligence would be a devastating
blow to any role Vision Pro might serve as Apple's halo
car—an expensive gadget
most people won't (and shouldn't) buy, but which plays an aspirational role in the
lineup and demonstrates their technology and design prowess.
Now, couple this with rumors that work on Vision Pro 2 has been
suspended,
and it starts to look like we won't see any Apple Intelligence features on the
visionOS platform until late 2026 at the earliest. How dated and limited will
Apple Vision Pro seem in late 2026 if most new features coming to Apple's other
platforms—including, one imagines, updated Watch, Apple TV, and HomePod
hardware—don't find their way to Vision Pro, putting its user experience further
and further behind?
At launch, I heard a lot of people jokingly refer to Vision Pro as, "an iPad
strapped to your face." Recently, as it's become clear most people are using it
to watch TV and for Mac screen sharing, Marco Arment
said it was more like a mere Apple TV strapped to your face. But if the hardware
really can't support Apple Intelligence and isn't going to be updated for
several years, how long before Vision Pro feels like an original HomePod
strapped to your face?
Outed
I hate when the algorithm nails me.
I like to show off my progressive bona fides by exclusively drinking Bud Light.
Since starting Breaking Change earlier this year, I've wanted to start
publishing stock-video-laden clips of my favorite little rants and flourishes.
The trouble is, video is a huge time sink. At least for me, relative to the
other things I do. So I pulled out a timer and ran an experiment to see if I
could turn around a ~3 minute clip in under an hour.
And I succeeded! I think with some template setup work, I could get it down to
even less time. Hopefully this means more video shenanigans in the future.
My friend Eric is a painter. Here's a quick video showing his process for producing a commissioned painting of Mr. Toad – impressive! youtube.com/watch?v=-jBN5ykqutY
Learning Tailwind was the first time I've felt like I
had a chance in hell of expressing myself visually with precision and
maintainability on the web. I'm a fan. If your life currently involves CSS and
you haven't tried Tailwind, you should spend a few days with it and get over the
initial learning curve.
One thing I like about Tailwind is that it's so extensible. It ships with utility
classes for dealing with screen widths to
support responsive designs, but out of the box it doesn't include any breakpoints
for screen height. This isn't surprising, as they're usually not necessary.
But, have you ever tried to make your webapp work when a phone is held
sideways? When you might literally only have 330 pixels of height after
accounting for the browser's toolbar? If you have, you'll appreciate why you might
want to make your design respond differently to extremely short screen heights.
Figuring out how to add this in Tailwind took less time than writing the above
paragraph. Here's all I added to my tailwind.config.js:
module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' }
      }
    }
  }
}
And now I can accomplish my goal of hiding elements on very short screens or
otherwise compressing the UI. Here's me hiding a logo:
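It looks something like this (illustrative markup, not my actual logo partial):

<!-- The logo renders normally, but the custom short: variant (max-height: 400px) hides it on very short screens -->
<img src="/logo.svg" alt="Logo" class="h-10 short:hidden">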
Unfortunately, because of this open
issue, defining any
screens with raw will inadvertently break variants like max-sm:, which is
bad. So in the meantime, a workaround would be to define those yourself. Here's
what that would look like:
const defaultTheme = require('tailwindcss/defaultTheme')

module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' },
        // Manually generate max-<size> classes due to this bug https://github.com/tailwindlabs/tailwindcss/issues/13022
        'max-sm': { raw: `not all and (min-width: ${defaultTheme.screens.sm})` },
        'max-md': { raw: `not all and (min-width: ${defaultTheme.screens.md})` },
        'max-lg': { raw: `not all and (min-width: ${defaultTheme.screens.lg})` },
        'max-xl': { raw: `not all and (min-width: ${defaultTheme.screens.xl})` },
        'max-2xl': { raw: `not all and (min-width: ${defaultTheme.screens['2xl']})` }
      }
    }
  }
}
Okay, yeah, so this was less of a slam dunk than I initially suggested, but I'm
still pretty happy with it!
Breaking Change v14 - Actual Intelligence
WWDC came and went and now we're all just
left to ponder what life will be like under the thumb of our new AI overlords
(and underlords). But at least for now,
humans are still allowed to produce podcasts on their own, which means we can
safely opine about the fresh hell we're all hard at work creating.
I finally got a chance to get to some mailbag questions in this episode! If you
want to be a part of the ✨magic🪄, shoot me an e-mail at
podcast@searls.co. I won't read your last name on
air, unless I accidentally do. Promise!
Now that I'm syndicating to Threads via my new feed2thread gem, I've updated my POSSE Pulse to explain all the site's automations. Seven so far!
justin.searls.co/posse
Just published feed2thread, a Ruby gem that reads your site's feed and posts each entry to Threads, using the newly-released Threads API. Can run in Docker, similar to feed2toot for Mastodon github.com/searls/feed2thread
Last week, when David declared that system tests have
failed, my main
reaction was: "well, yeah." UI tests are brittle, and if you write more than a
handful, the cost to maintain them can quickly eclipse any value they bring in
terms of confidence your app is working.
But then I had a second reaction, "come to think of it, I wrote a smoke test of
a complex UI that relies heavily on Turbo and it
seems to fail all the damn time." Turbo's whole cloth replacement of large
sections of the DOM seemed to be causing numerous timing issues in my system
tests, wherein elements would frequently become stale as soon as
Capybara (under
Selenium) could find them.
Finally, I had a third reaction, "I've been sick of Selenium's bullshit for
over 14 years. I wonder if I can dump it without rewriting my tests?" So I went
looking for a Capybara adapter for the seemingly much-more-solid
Playwright.
And—as you might be able to guess by the fact I bothered to write a blog post—I
found one such adapter! And it works! And things are better now!
So here's my full guide on how to swap Selenium for Playwright in your Rails
system tests:
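At its core, the swap looks something like this (a sketch assuming the capybara-playwright-driver gem, one such adapter, and a stock Minitest system test setup):

# Gemfile (test group): swap selenium-webdriver for the Playwright adapter
# gem "capybara-playwright-driver"
# (Playwright's browsers also need installing once, e.g. npx playwright install)

# test/application_system_test_case.rb
require "test_helper"

# Register a Playwright-backed Capybara driver where Selenium used to be
Capybara.register_driver(:playwright) do |app|
  Capybara::Playwright::Driver.new(app,
    browser_type: :chromium, # also :firefox or :webkit
    headless: true)
end

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :playwright
end

Because it's still Capybara underneath, your existing finders and assertions shouldn't need to change; only the driver beneath them does.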
Pro-tip to the 8 people out there using Vision Pro and Mac Virtual Display: don't let your Mac enter low power mode.
I set Low Power Mode to turn on when my MacBook Air is on battery and then spent 3 days being confused why the virtual display latency skyrocketed from "instant" to "slideshow".