justin.searls.co

Did you come to my blog looking for blog posts? Here they are, I guess. This is where I post traditional, long-form text that isn't primarily a link to someplace else, doesn't revolve around audiovisual media, and isn't published on any particular cadence. Just words about ideas and experiences.


Hey, check out this infuriating Safari bug

It appears that Safari 17.5 (as well as the current Safari Technology Preview, "Safari 18.0, WebKit 19619.1.18") has a particularly pernicious bug in which lazily-loaded img tags whose src must be resolved via an HTTP redirect will stop rendering if you load a lot of them. But only sometimes. And then continuously for, like, 5 minutes.

Suppose I have a bunch of images like this in a list:

<img loading="lazy" src="/a/redirect/to/some.webp">

Seems reasonable, right? Well, here's what I'm seeing:

Weirdly, when the bug is encountered:

  • Safari won't "wake up" to load the image in response to scrolling or resizing the window
  • Nothing is printed to the development console and no errors appear in the Network tab
  • The bug will persist after countless page refreshes for at least several minutes (almost as if a time-based cache expiry is at play)
  • It also persists after fully quitting and relaunching Safari (suggesting a system-wide cache or process is responsible)

I got tripped up on this initially, because I thought the bug was caused by the fact I was loading WebP files, but the issue went away as soon as I started loading the static file directly, without any redirect. As soon as I realized the bug was actually triggered by many images requiring a redirect—regardless of file type—the solution was easy: stop doing that.

(Probably a good idea, regardless, since it's absurdly wasteful to ask every user to follow hundreds of redirects on every page load.)

So why was I redirecting so many thumbnail images in the first place? Well, Active Storage, which I use for hosting user-uploaded assets, defaults to serving those assets via a Rails-internal route which redirects each asset to whatever storage provider is hosting it. That means if your app uses the default redirect mode instead of ensuring your assets are served by a CDN, you can easily wind up in really stupid situations like this one.
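
If you want to stop the redirect flood at the source, Rails 6.1+ lets you flip Active Storage from redirect mode to proxy mode, which serves files through your app (and makes them cacheable by a CDN). A minimal sketch of that setting, in case it helps:

# config/environments/production.rb
Rails.application.configure do
  # Serve Active Storage files by proxying them through the app instead
  # of issuing a redirect to the storage service for every request
  config.active_storage.resolve_model_to_route = :rails_storage_proxy
end

There's also a rails_storage_proxy_path helper if you'd rather opt in one URL at a time.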

Fortunately for me, I'm only relying on redirection in development (in production, I have Rails generating URLs to an AWS CloudFront distribution), so this bug wouldn't have bitten me For Real. Of course, there was no way I could have known that, so I sat back, relaxed, and enjoyed watching the bug derail my entire morning. Not like I was doing anything.

Being a programmer is fun! At least it's Friday. 🫠

How to loopback computer audio with SSL2/+ and Logic Pro

I use an SSL2+ interface to connect my XLR microphone to my Mac via USB-C when I record Breaking Change. When they launched, the SSL2 and SSL2+ could monitor your computer audio (as in, pipe it into the headphones you'd plugged into the interface, which you'd need if you were recording during a discussion with other people), but there was no way to capture that audio. Capturing computer audio is something you might want to do if you have a Stream Deck or mix board configured to play certain sounds when you hit certain buttons during a stream or other audio production. And up until the day you stumbled upon this blog post, you'd have needed a software solution like Audio Hijack to accomplish that.

Today, that all changes!

Last year, Solid State Logic released a v1.2 firmware update for the SSL2 and SSL2+ that added official loopback support. (At the time of this writing, the current version is 1.3.)

Impressively, the firmware update process couldn't have been easier. Even though the custom UI looks a bit janky, it only required one button press to run and it was all over in about 30 seconds:

Once installed, I had no idea what to do. It appears nobody updated the device's documentation to explain how to actually record looped back audio. I'm brand new to audio engineering (and reluctant to get too deep into it), so it's a small miracle I figured the following out on my own without too much hassle:

  • The SSL2/+ ships with 2 physical channels (one for each XLR input), but what's special about the v1.2 firmware update is that it added two virtual channels (3 and 4, representing the left and right channels of whatever audio is sent from the computer to the interface)
  • In your DAW (which audio people like me know stands for "Digital Audio Workstation"; in my case, Logic Pro), you can set up two tracks for recording:
    • Input 1 to a mono channel to capture your voice (as you probably have been doing all along)
    • Input 3 & 4 to a new stereo channel to capture your computer audio via loopback
  • Both tracks need to be enabled for recording (blinking red in Logic) when you hit the record button.

Here's what that all looks like for me:

Previously, I recorded the podcast with QuickTime Player, but I'll have to give that up and start recording in Logic (or another DAW capable of separating multiple channels streaming in from a single input and recording them to separate tracks). Even if you could get all the channels recording through a single track in another app, you probably wouldn't like the result: voices recorded by microphones require significant processing—processing you'd never want to run computer audio through.

Anyway, hopefully this post helps somebody else figure this little secret out. Nothing but love for Rogue Amoeba, but I sure am glad I don't need to add Audio Hijack to my ever-increasing stack of audio tools to get my podcast out the door!

Adding vertical screen size media queries to Tailwind

Learning Tailwind was the first time I've felt like I had a chance in hell of expressing myself visually with precision and maintainability on the web. I'm a fan. If your life currently involves CSS and you haven't tried Tailwind, you should spend a few days with it and get over the initial learning curve.

One thing I like about Tailwind is that it's so extensible. It ships with utility classes for dealing with screen sizes to support responsive designs, but out-of-the-box it doesn't include any vertical screen sizes. This isn't surprising as they're usually not necessary.

But, have you ever tried to make your webapp work when a phone is held sideways? When you might literally only have 330 pixels of height after accounting for the browser's toolbar? If you have, you'll appreciate why you might want to make your design respond differently to extremely short screen heights.

Figuring out how to add this in Tailwind took less time than writing the above paragraph. Here's all I added to my tailwind.config.js:

module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' }
      }
    }
  }
}

And now I can accomplish my goal of hiding elements on very short screens or otherwise compressing the UI. Here's me hiding a logo:

<img class="w-6 short:hidden" src="/logo.png">

Well, almost…

Unfortunately, because of this open issue, defining any screens with raw will inadvertently break variants like max-sm:, which is bad. So in the meantime, a workaround would be to define those yourself. Here's what that would look like:

const defaultTheme = require('tailwindcss/defaultTheme')

module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' },

        // Manually generate max-<size> classes due to this bug https://github.com/tailwindlabs/tailwindcss/issues/13022
        'max-sm': { raw: `not all and (min-width: ${defaultTheme.screens.sm})` },
        'max-md': { raw: `not all and (min-width: ${defaultTheme.screens.md})` },
        'max-lg': { raw: `not all and (min-width: ${defaultTheme.screens.lg})` },
        'max-xl': { raw: `not all and (min-width: ${defaultTheme.screens.xl})` },
        'max-2xl': { raw: `not all and (min-width: ${defaultTheme.screens['2xl']})` },
      }
    }
  }
}

Okay, yeah, so this was less of a slam dunk than I initially suggested, but I'm still pretty happy with it!

Running Rails System Tests with Playwright instead of Selenium

Last week, when David declared that system tests have failed, my main reaction was: "well, yeah." UI tests are brittle, and if you write more than a handful, the cost to maintain them can quickly eclipse any value they bring in terms of confidence your app is working.

But then I had a second reaction, "come to think of it, I wrote a smoke test of a complex UI that relies heavily on Turbo and it seems to fail all the damn time." Turbo's whole-cloth replacement of large sections of the DOM seemed to be causing numerous timing issues in my system tests, wherein elements would frequently become stale as soon as Capybara (under Selenium) could find them.

Finally, I had a third reaction, "I've been sick of Selenium's bullshit for over 14 years. I wonder if I can dump it without rewriting my tests?" So I went looking for a Capybara adapter for the seemingly much-more-solid Playwright.

And—as you might be able to guess by the fact I bothered to write a blog post—I found one such adapter! And it works! And things are better now!

So here's my full guide on how to swap Selenium for Playwright in your Rails system tests:
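
The gist, as a sketch (assuming the capybara-playwright-driver gem, which also needs Playwright's Node package installed; check the gem's README for the exact, current API):

# Gemfile
group :test do
  gem "capybara-playwright-driver"
end

# test/application_system_test_case.rb
require "test_helper"

Capybara.register_driver(:playwright) do |app|
  Capybara::Playwright::Driver.new(app, browser_type: :chromium, headless: true)
end

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :playwright
end

Because Capybara sits in between, the tests themselves shouldn't need to change at all.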

What happens next will shock you…

Dear AI companies, please scrape this website

Last night, I read a flurry of angry feedback following WWDC. It appears some people are mad about Apple's AI announcements. Just like they were mad about Apple's hydraulic press ad last month.

I woke up this morning with a single question:

"Am I the only person on earth who actually wants AI companies to scrape my website?"

Publications that depend on ad revenue don't. License holders counting on a return for their intellectual property investment are lawyering up. Quite a few Mastodon users appear not to be on board, either.

I, meanwhile, would absolutely positively 💗LOVE💗 it if the AIs scraped the shit out of this website, as well as all the other things I post publicly online.

Really, take my work! Go nuts! Make your AI think more like me. Make your AI sound more like me. Make your AI agree with my view of the world more often.

The entire reason I create shit is so that others will take it! To share ideas I find compelling in the hope those ideas will continue to spread. Why wouldn't I want OpenAI or Apple or whoever to feed everything I say into their AI model's training data? Hell, scrape me twice if it'll double the potency. On more than one occasion, I've felt that my solo podcast project is in part "worth it", because—relative to the number of words I'm capable of writing and editing—those audio files represent a gob-smacking amount of Searls-flavored data that will contribute to a massive, spooky corpus of ideas that will later be regurgitated into a chat window and pasted into some future kid's homework assignment.

I'm not going to have children. I don't believe in God. I know that as soon as I'm dead, it's game over. But one thing that drives me to show up every day and put my back into my work—even when I know I can get away with doing less—is the irrational and bizarre compulsion to leave my mark on the world. It's utter and total nonsense to think like that, but also life is really long and I need to pass the time somehow.

So I make stuff! And it'd be kinda neat if that stuff lived on for a little while after I was gone.

And I know I'm not alone. Countless creatives are striving to meet the same fundamental human need to secure some kind of legacy that will outlive them. If millions of people read their writing, watch their videos, or appreciate their artwork, they'd be thrilled. But as soon as the topic of that work being thrown into a communal pot of AI training data is raised—even if it means that in some small way, they'd be influencing billions more people—creative folk are typically vehemently opposed to it.

Is it that AI will mangle and degrade the purity of their work? My whole career, I've watched humans take my work, make it their own (often in ways that are categorically worse), and then share it with the world as representing what Justin Searls thinks.

Is it the lack of attribution? Because I've found that, "humans leveraging my work without giving me credit," is an awfully long-winded way to pronounce "open source."

Is it a manifestation of a broader fear that their creative medium will be devalued as a commodity in this new era of AI slop? Because my appreciation for human creativity has actually increased since the dawn of generative AI—as its output gravitates towards the global median, the resulting deluge of literally-mediocre content has only served to highlight the extraordinary-ness of humans who produce exceptional work.

For once, I'm not trying to be needlessly provocative. The above is an honest reflection of my initial and sustained reaction to the prospect of my work landing in a bunch of currently-half-cocked-but-maybe-some-day-full-cocked AI training sets. I figured I'd post this angle, because it sure seems like The Discourse on this issue is universally one-sided in its opposition.

Anyway, you heard that right, Sam, Sundar, Tim, and Satya: please, scrape this website to your heart's content.

Backing up a step

A lot of people whose income depends on creating content, making decisions, or performing administrative tasks are quite rightly worried about generative AI and to what extent it poses a threat to that income. Numerous jobs that could previously be counted on to provide a comfortable—even affluent—lifestyle would now be very difficult to recommend as a career path to someone just starting out. Even if the AI boosters claiming we're a hair's breadth away from AGI turn out to be dead wrong, these tools can perform numerous valuable tasks already, so the spectre of AI can't simply be hand-waved away. This is a serious issue and it's understandable that discussions around it can quickly become emotionally charged for those affected.

Content warning: more content…

3 Simple Rules for Using my Large Language Model

When it comes to AI, it seems like the vast majority of people I talk to believe large language models (LLMs) are either going to surpass human intelligence any day now or are a crypto-scale boondoggle with zero real-world utility. Few people seem to land in-between.

Not a ton of nuance out there.

The truth is, there are tasks for which LLMs are already phenomenally helpful, and tasks for which today's LLMs will invariably waste your time and energy. I've been using ChatGPT, GitHub Copilot, and a dozen other generative AI tools since they launched and I've had to learn the hard way—unlike with web search engines, perhaps—that falling into the habit of immediately reaching for an LLM every single time I'm stuck is a recipe for frustratingly inconsistent results.

As B.F. Skinner taught us, if a tool is tremendously valuable 30% of the time and utterly useless the other 70%, we'll nevertheless keep coming back to it even if we know we're probably going to get nothing out of it. Fortunately, I've been able to drastically increase my success rate by developing a set of heuristics to determine whether an LLM is the right tool for the job before I start typing into a chat window. They're based on the grand unifying theory that language models produce fluent bullshit, which makes them the right tool for the job when you desire fluent output and don't mind inaccurate bullshit.

Generative AI is perhaps the fastest-moving innovation in the history of computing, so it goes without saying that everything I suggest here may be very useful on June 9th, 2024, but will read as a total farce in the distant future of November 30th, 2024. That said, if you've been sleeping on using LLMs in your daily life up to this point and are looking to improve your mental model of how to best relate to them (as opposed to one-off pro-tips on how to accomplish specific tasks), I hope you'll find this post useful.

So here they are, three simple rules to live by.

Spoiler alert: there's more to this…

Welcome to the 2024 Conbini Awards!

Every year, Japan's convenience stores and packaged food companies attempt to sate the nation's voracious appetite for novelty goods by releasing a slew of products that you or I might consider weird as fuck.

I just spent a month there, and snapped a photo of my favorite head scratchers:

If it's not clear what you're looking at, here's a brief rundown of what each of these is (or purports to be):

  1. Candy that's shaped, sized, and flavored like peanuts and which features peanut-like crunchiness. (Why not just eat peanuts, though?)
  2. "Chiizu-tara", a common Japanese snack that sandwiches cheese with fish paste. This one is co-branded with a popular delivery pizza chain and flavored like spicy sausage
  3. Gummies colored and shaped as sliced bell peppers (always referred to as paprika in Japanese), flavored as savory consommé broth, with vegetable content equivalent to one serving of lettuce
  4. "Delicious tomato" flavored alcoholic chuuhai cocktail in a can, exclusive to Japan's northern Touhoku region
  5. Gummies shaped like salmon nigiri, where the salmon pieces are orange-flavored and the rice is yogurt-flavored
  6. Another alcoholic chuuhai, this one flavored after Suntory's Dekavita C beverage, a long-running vitamin C, B, niacin, and amino acids supplement. Get healthy and drunk in one step!
  7. Toma'nade, which is like a cursed Arnold Palmer and contains a full tomato but an unspecified amount of lemons
  8. Gummies shaped like bisected soft-boiled eggs. They're flavored like white grapes, and the package reassures the consumer that they are not egg flavored
  9. Jelly sparkling grape drink. You are meant to shake it ten times and then let the juicy-jelly find its way down your gullet, I guess
  10. French fry and hamburger flavored potato chips are all the rage this year (I found at least three kinds of each), but far-and-away, the winner was this one modeled after Wendy's limited edition "Wild Rock" burger from 2017, which featured two burger patties in lieu of buns (like a beef equivalent to KFC's Double Down). Again, this isn't that: these are potato chips modeled after an off-brand hypothetical burger that looks just like the Wendy's Wild Rock burger. Also, to be clear, the potato chips are neither a low-carb nor a high-protein snack; they are simply meant to have a taste that's evocative of burger meat. I can't stress that enough

Dormy Inn puts Western hotel chains to shame

One of the mysteries of traveling Japan is that their domestic business hotels often deliver a higher level of service and amenities than comparable Western chains, even so-called "luxury" brands—all while charging a fraction of the price.

To illustrate, I've been staying at Dormy Inn and their higher-end Nono brand for most of the last two weeks.

When you stay at a Dormy Inn, these services are more-or-less always included with your stay:

  • All the typical hotel amenities you'd expect (wifi, etc.)
  • Access to a large public bath, typically featuring a sauna, an outdoor bath (露天風呂), and a cold plunge—moreover, the baths are typically genuine certified onsens when the hotel resides in an area with hot springs nearby
  • Free use of their laundry machines and dirt-cheap (¥100 per 20 minutes) electric dryers
  • "Roomwear" – in lieu of proper yukata, shirts and pants suitable for traipsing back and forth to the baths; especially handy when you're doing laundry
  • Free ice pops at night and Yakult-style probiotic yogurt drinks every morning
  • Free coffee machines, and often soft drinks as well
  • Free "yonaka" late-night ramen (9:30pm - 11pm)
  • Mini libraries with comics and novels
  • Some properties feature complimentary massage chairs
  • Each room's fridge comes pre-loaded with bottled water and a seasonal sweet
  • Local flair. For example, Aomori's Dormy Inn features free local apple juice (probably the best apple juice I've ever had, and I'm from Michigan), as well as beautiful Nebuta-style mini-floats lining its bathing floor

The Nono chain goes a step further by being completely floored with tatami mats, requiring guests to check their shoes in lockers at the hotel entrance. It's actually really nice in practice, and creates a very relaxed atmosphere throughout the hotel.

The price for all these amenities? Usually about $70 USD. Here's the total damage for all my Dormy stays this month:

That's $597.20 for 8 nights at a fantastic hotel loaded chock full of amenities and which probably saved me $50 in coin laundry and coffee alone. For comparison, the cheapest room in a Red Roof Inn in Orlando, Florida tonight is $112.36, just 30¢ cheaper than the downright luxurious Nono property in Matsue.

Several Japanese hotel chains offer (to an American) an unheard-of level of value, and I'm mad nobody told me that Dormy Inn kicked so much ass until I stumbled upon the Kobe property last spring. So here you go, someone is telling you.

Anyway, hopefully this is some news you can use.

What it's like traveling with Aaron Patterson

During our visit to Zamami Island with Aaron earlier this week, a young woman approached us and asked if we spoke English. (This is exceedingly rare. In 20 years of traveling to Japan, I don't think anyone has ever assumed I speak anything but English.) She proceeded to ask Aaron if he'd take her group's picture, so I snapped this photo of him obliging:

Later in the day I mentioned having taken a picture of him taking the picture and he responded, "oh yeah, I took a selfie!"

So I zoomed:

And then I enhanced:

Yep. Sure as shit, there's Aaron taking a selfie with this girl's camera.

Apple MacBook Air 13-Inch M3 Review

I've been using the new MacBook Air since it launched last month and I'd been thinking about writing a full review of what it's like to live with it, but I'm lazy so I'll just piggy-back on Paul Thurrott's glowing review of the 15" model with the following modifications that only apply to the 13" version.

Additional review notes:

  • Its screen is 2" less than 15"
  • Its speakers are somewhat worse than the 15"
  • Contrary to Paul's review, the microphone array held up surprisingly well in my testing—especially with Voice Isolation activated—and was far superior to the mics on the AirPods Pro 2
  • At 2.7 pounds, the 13" M3 MacBook Air is 35% heavier than the discontinued 12" MacBook, a model that was originally released in 2015

Despite 9 years of technological advancement, Apple has regressed significantly on the only metric I care about in a portable computer: weight. Considering that the ARM transition was meant to provide significantly more thermal headroom and enable the design of new form factors, the fact that Apple was able to ship a 2-pound MacBook with a retina screen and Intel chip in 2015 but has thus far failed to ship an M-series Mac that weighs less than 2.7 pounds is simply bewildering.

Everything else about the computer is great.

Searls Score: 2.7 / 10

How to add a full screen button to MapKit JS

This week, I've been working on a new multimedia mode for the blog that I call Spots, which are essentially map pins of places I visit.

Rather than merely list all of my Spot posts one-at-a-time as reverse-chronological posts, I also wanted to make a single interactive map that aggregated all my annotations in one place so that readers could explore them spatially. Because I'm a tool for all things Apple, I decided to use the MapKit JS framework they introduced in 2018 for the job.

All told, it was actually pretty easy to get up-and-running and the documentation (while sparse) told me what I needed to do to get a map rendered, centered, and loaded with annotations.

One thing the docs didn't tell me, however, was how to let users make a map full screen, and the reason the docs don't mention it is that the framework doesn't support it. And that's quite a bummer when your layout's maximum width is as narrow as it is on this humble blog.

So this morning, I figured out a way to hack together a full-screen button that aped the Maps UI's look-and-feel for MapKit JS. Since it's an odd omission, I figured it might be useful to somebody out there if I did a quick show-and-tell on how I did it.

Here's what the button does, before and after clicking it:

Interested? Well, here's what the recipe calls for:

  1. An absolute-positioned div inside the parent element of the map, styled to look like a button and with a couple full-screen zoom icons from Tailwind's excellent Heroicons project
  2. A click handler that toggles a bunch of Tailwind classes to switch the map's parent div between its regular and fixed full-screen display (also swapping the button icon)
  3. A keyup handler that listens for the escape key and—if any maps are full-screened—will exit full screen. This felt necessary while I was testing, because the shrink button can be an awfully far distance for one's mouse to travel

Keep reading…

Exporting your Tabelog 行ったお店 history

This is going to be a niche one, but maybe somebody will Google for this someday.

This morning, I figured out a relatively low-effort way to export my visited restaurants (行ったお店) in Tabelog and then decorate them with latitude and longitude as well as translations of each restaurant's name and summary.

There are basically three steps:

  1. Gather each page of your visited restaurants using JavaScript in the console
  2. Export them to a JSON file
  3. In a Ruby script (sketched after this list), update the JSON for each restaurant, adding:
    1. Latitude and longitude
    2. English translations of its name and summary
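
To sketch that third step (the geocode and translate methods below are hypothetical stand-ins for whatever geocoding and translation services you wire up, and the field names are illustrative):

# decorate_restaurants.rb
require "json"

restaurants = JSON.parse(File.read("restaurants.json"))

restaurants.each do |restaurant|
  # geocode() and translate() are placeholders you'd implement yourself
  restaurant["latitude"], restaurant["longitude"] = geocode(restaurant["address"])
  restaurant["name_en"] = translate(restaurant["name"])
  restaurant["summary_en"] = translate(restaurant["summary"])
end

File.write("restaurants_decorated.json", JSON.pretty_generate(restaurants))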

What happens next will shock you…

Meta's new AI chat sucks at coding

Yesterday, Zuck got on stage to announce Meta's ChatGPT killer, Llama 3, apparently making this bold claim:

Meta says that Llama 3 outperforms competing models of its class on key benchmarks and that it's better across the board at tasks like coding

Coding? You sure about that?

I've been pairing with ChatGPT (using GPT-4) every day for the last few months and it is demonstrably terrible 80% of the time, but 20% of the time it saves me an hour of headaches, so I put up with it anyway. Nevertheless, my experience with Llama 2 was so miserable, I figured Zuck's claim about Llama 3 outperforming GPT-4 was bullshit, so I put it to the test this morning.

TLDR: I asked three questions and Llama 3 whiffed. Badly.

Question 1

Here's the first question I asked, pondering a less messy way to generate URL paths (secretly knowing how hard this is, given that Rails models and controllers are intentionally decoupled):

Content warning: more content…

Fix your Rails Fixtures with this one neat trick

If you have any Rails models that define a custom table_name AND you load fixtures in your test database, then you're probably going to have a bad time. Maybe you're here from Google. If so, hi, hello! You're in the right place.

Here's the model I just ran into this issue with:

# app/models/build/program.rb
module Build
  class Program < ApplicationRecord
    self.table_name = "build_programs"
  end
end
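
A common way out of this class of problem (not necessarily the exact trick the full post lands on) is set_fixture_class, which tells the fixture loader which model, and therefore which table, a fixture set belongs to. A sketch, assuming your fixtures live at test/fixtures/build/programs.yml:

# test/test_helper.rb
class ActiveSupport::TestCase
  # Map the build/programs fixture set to the model that owns
  # the custom build_programs table
  set_fixture_class "build/programs" => Build::Program
end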

Okay, I'm interested…

Vision Pro was a better deal than my Mac Studio

As the post-launch hype has cooled, the Apple-watcher zeitgeist has started to turn against the platform—some are even bold enough to invoke the word "failure".

(Aside: if Apple considers Vision Pro a failure, it's not because of sluggish sales figures or a weak App Store lineup. It was clear from the jump that Apple is committed to a ten-year roadmap for this thing, regardless of what you or your favorite Youtuber thinks. Burback's video was hilarious, though.)

I've been using Apple Vision Pro for no purpose other than Mac Virtual Display for 4-8 hours a day, 7 days a week, since it launched on February 2nd. Meanwhile, my brand-spankin'-new M2 Ultra-equipped Mac Studio and 32" 6K monitor are collecting dust. More than that, I'm getting more done than at any point in my career. So I figured I'd share the Good News with y'all, in case it might sway anyone sitting on the fence into giving Vision Pro a shot.

First, I'll explain why my productivity shot through the roof once I strapped a computer to my face. Then, I'll show why such an expensive device is no more an irresponsible use of funds than other "Pro"-tier equipment in the Apple ecosystem.

Let's dive in and find out…

HotwireCombobox is pretty damn slick

In a stroke of good fortune, this week's big, overriding to-do item was to figure out how to write a hotwire-friendly "combo box" (one of those drop-down / select boxes for the web that you can type into and filter the options). Then I happened to scan this week's Ruby Weekly and found somebody beat me to the punch!

It's by Jose Farias and he calls it HotwireCombobox. The documentation page contains plenty of demos, so go play with it!

The best part (and my favorite thing about moving to import maps for JavaScript in Rails 7) is that the front-end assets live with the gem, which means there's no risk of version drift causing the backend and front-end to fall out of sync with each other.

In fact, setup was so minimal, I'm going to share the entire changeset of what it took to convert my app's f.collection_select boxes over to f.combobox.
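
To give a flavor of the change before the full changeset (a sketch based on the gem's documented helper; assume f is the form builder and User.all supplies the options):

<%# Before: a plain Rails select %>
<%= f.collection_select :user_id, User.all, :id, :name %>

<%# After: a filterable HotwireCombobox field %>
<%= f.combobox :user_id, User.all %>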

Spoiler alert: there's more to this…

How to control Time in Ruby on Rails

Faking time is a frequent topic of conversation in software testing, both because so many programs' behavior depends on the current time and date, and because reading the real system clock can expose edge cases that make tests less reliable (e.g. a build that starts just before midnight on New Year's Eve may see assertions about the current year fail).

I've approached this issue a dozen different ways over the years, and there are a number of tools and practices promoted in every tech stack. Rubyists often lean on the timecop gem and Active Support's TimeHelpers module to manipulate Ruby's time during testing. Regardless, no tool-based solution is robust enough to cover every case: unless the operating system, the language runtime, the database, and every third-party service agree on what time it is, your app is likely to behave unexpectedly.
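
For the test-side slice of that problem, TimeHelpers looks something like this (a minimal sketch; the test's name and file are made up):

# test/models/clock_test.rb
require "test_helper"

class ClockTest < ActiveSupport::TestCase
  # TimeHelpers is already included in ActiveSupport::TestCase in
  # modern Rails; shown explicitly here for clarity
  include ActiveSupport::Testing::TimeHelpers

  test "year-end edge case stays deterministic" do
    # Freeze the clock just before midnight on New Year's Eve
    travel_to Time.zone.local(2024, 12, 31, 23, 59, 59) do
      assert_equal 2024, Time.current.year
      assert_equal 2025, 1.second.from_now.year
    end
  end
end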

Spoiler alert: there's more to this…

Simultaneously save+copy screenshots on the Mac

[UPDATE: Since publishing, I've simplified these instructions and reduced the latency in bringing up the screenshot tool by about half.]

[UPDATE 2: If you're on macOS 14.4 Sonoma and you want to avoid "Operation Not Permitted" errors, there's no sure-fire way to avoid them whether you set this up via Shortcuts or Automator, so I'd recommend using Keyboard Maestro instead.]

macOS ships with a pretty rad Screenshot app, except that one thing about it totally sucks: it can be configured to either copy screenshots to the clipboard or save them to files, but not both.

Well, I finally got off my ass and cooked up a way to have my save and copy it, too. Read on if you're interested.

Keep reading…

Making a nice 2FA / OTP / SMS field with Tailwind & Stimulus

So, I built this little bit of UI today as part of an email-based authentication flow for Becky's new app:

(I haven't shipped this yet and I'm too lazy to record a screencast, so just imagine that this field behaves perfectly, please and thank you.)

If you, like me, have ever found yourself in the thrall of a beautiful-looking 6-digit form when logging into some site, whether when filling a TOTP from your authenticator app or copy-pasting a code that's been texted or e-mailed to you, you've probably wondered "how'd they make that field look nice like that?"

Well, today I actually started digging into it, and I didn't like what I found. At all.

To be continued…

One-shotting git pull-commit-push in VS Code

A frustration I've had since switching to VS Code last year from terminal vim is that the built-in source control extension isn't very keyboard-friendly. As a result, I've been tabbing back and forth between VS Code and Fork and kicking myself every single time, especially when I'm just editing a single file and I really don't need to review my changes before I push.

Well, I finally took the five minutes to write a VS Code macro to do this for me. First, run Open Keyboard Shortcuts (JSON) and add this to the array of keyboard shortcuts:

{
  "command": "runCommands",
  "key": "cmd+alt+ctrl+p",
  "args": {
    "commands": [
      "workbench.action.files.save",
      "git.sync",
      "git.stageAll",
      "git.commitAll",
      "git.push"
    ]
  }
}

Now when I smoosh command, option, and control, then hit P, it'll pull from the tracked remote branch, stage & commit everything, open a window for me to enter a quick message (usually "lol"), and then when I hit command-w, the result will be pushed. Saves me about 10 seconds per commit.