justin.searls.co

Adding vertical screen size media queries to Tailwind

Learning Tailwind was the first time I've felt like I had a chance in hell of expressing myself visually with precision and maintainability on the web. I'm a fan. If your life currently involves CSS and you haven't tried Tailwind, you should spend a few days with it and get over the initial learning curve.

One thing I like about Tailwind is how extensible it is. It ships with utility classes for dealing with screen widths to support responsive designs, but out of the box it doesn't include any vertical screen sizes. This isn't surprising, as they're usually not necessary.

But, have you ever tried to make your webapp work when a phone is held sideways? When you might literally only have 330 pixels of height after accounting for the browser's toolbar? If you have, you'll appreciate why you might want to make your design respond differently to extremely short screen heights.

Figuring out how to add this in Tailwind took less time than writing the above paragraph. Here's all I added to my tailwind.config.js:

module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' }
      }
    }
  }
}

And now I can accomplish my goal of hiding elements on very short screens or otherwise compressing the UI. Here's me hiding a logo:

<img class="w-6 short:hidden" src="/logo.png">
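
Under the hood, Tailwind drops whatever string you pass to raw directly into an @media rule. As a rough sketch (not verbatim compiler output), the CSS generated for that short:hidden usage should look something like this:

@media (max-height: 400px) {
  .short\:hidden {
    display: none;
  }
}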

Well, almost…

Unfortunately, because of this open issue, defining any screens with raw will inadvertently break the built-in max-* variants like max-sm:, which is bad. So in the meantime, a workaround is to define those max-* screens yourself. Here's what that looks like:

const defaultTheme = require('tailwindcss/defaultTheme')

module.exports = {
  theme: {
    extend: {
      screens: {
        short: { raw: '(max-height: 400px)' },
        tall: { raw: '(min-height: 401px) and (max-height: 600px)' },
        grande: { raw: '(min-height: 601px) and (max-height: 800px)' },
        venti: { raw: '(min-height: 801px)' },

        // Manually define max-<size> screens due to this bug: https://github.com/tailwindlabs/tailwindcss/issues/13022
        'max-sm': { raw: `not all and (min-width: ${defaultTheme.screens.sm})` },
        'max-md': { raw: `not all and (min-width: ${defaultTheme.screens.md})` },
        'max-lg': { raw: `not all and (min-width: ${defaultTheme.screens.lg})` },
        'max-xl': { raw: `not all and (min-width: ${defaultTheme.screens.xl})` },
        'max-2xl': { raw: `not all and (min-width: ${defaultTheme.screens['2xl']})` },
      }
    }
  }
}
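
With those manual definitions in place, the max-* variants keep working alongside the new vertical ones, so a single element can respond on both axes. A contrived example (these particular utilities are just for illustration):

<div class="p-8 short:p-2 max-sm:text-sm">…</div>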

Okay, yeah, so this was less of a slam dunk than I initially suggested, but I'm still pretty happy with it!

v14 - Actual Intelligence

Breaking Change

WWDC came and went and now we're all just left to ponder what life will be like under the thumb of our new AI overlords (and underlords). But at least for now, humans are still allowed to produce podcasts on their own, which means we can safely opine about the fresh hell we're all hard at work creating.

I finally got a chance to get to some mailbag questions in this episode! If you want to be a part of the ✨magic🪄, shoot me an e-mail at podcast@searls.co. I won't read your last name on air, unless I accidentally do. Promise!

Here are some links to stuff from this episode:

Show those show notes…

Running Rails System Tests with Playwright instead of Selenium

Last week, when David declared that system tests have failed, my main reaction was: "well, yeah." UI tests are brittle, and if you write more than a handful, the cost to maintain them can quickly eclipse any value they bring in terms of confidence that your app is working.

But then I had a second reaction, "come to think of it, I wrote a smoke test of a complex UI that relies heavily on Turbo and it seems to fail all the damn time." Turbo's whole-cloth replacement of large sections of the DOM seemed to be causing numerous timing issues in my system tests, wherein elements would frequently become stale as soon as Capybara (under Selenium) could find them.

Finally, I had a third reaction, "I've been sick of Selenium's bullshit for over 14 years. I wonder if I can dump it without rewriting my tests?" So I went looking for a Capybara adapter for the seemingly much-more-solid Playwright.

And—as you might be able to guess by the fact I bothered to write a blog post—I found one such adapter! And it works! And things are better now!

So here's my full guide on how to swap Selenium for Playwright in your Rails system tests:

Turns out, there's more to it…

Pro-tip to the 8 people out there using Vision Pro and Mac Virtual Display: don’t let your Mac enter low power mode.

I set Low Power Mode to turn on when my MacBook Air is on battery and then spent 3 days confused about why the virtual display latency skyrocketed from “instant” to “slideshow”.

I was excited to be hosted by the Changelog folks for a recap discussion following Apple's WWDC keynote on Monday. If you listened to my WWDC spoiler cast, you might like this after-action report.

Changelog & Friends 48: Putting the Apple in AI – Listen on Changelog.com

A few errata and missed pitches:

  • I didn't mention this on the podcast, but I was deeply disappointed to see that Apple didn't expose any system-level model that can be invoked as a general-purpose LLM API. This would have been a game-changer for small developers, who are currently hobbled by figuring out how to roll out meaningful LLM features without risking that the cost of calling through to the OpenAI API will eclipse the revenue generated by app sales and subscriptions.
  • When discussing why Apple Intelligence requires an iPhone 15 Pro, I whiffed on the reason (which became clear later that day): the root cause is memory. Devices with less than 8GB of RAM probably can't run Apple Intelligence without the base operating system falling over.
  • I referenced Mac Virtual Display working with the "Wireless NIC turned off." That's not quite right: the network interface needs to be on, but if neither device is connected to a wifi network, screen sharing will work over a peer-to-peer wifi connection.

I hope you'll listen! I like Jerod and Adam a lot. The whole Changelog family of podcasts is fantastic. You can tell they're smart because they have yet to invite Breaking Change to join the network.

Was thinking last night about how much I loved Fallout 2 when it first released and how much of the magic was lost when Bethesda moved the series to a high-budget FPS format.

Given all the Fallout buzz right now and the nature of Game Pass, Microsoft should find a studio to make a lower-budget Fallout 2.5 in the style of the original games but for modern platforms. Could be great.

Dear AI companies, please scrape this website

Last night, I read a flurry of angry feedback following WWDC. It appears some people are mad about Apple's AI announcements. Just like they were mad about Apple's hydraulic press ad last month.

I woke up this morning with a single question:

"Am I the only person on earth who actually wants AI companies to scrape my website?"

Publications that depend on ad revenue don't. License holders counting on a return for their intellectual property investment are lawyering up. Quite a few Mastodon users appear not to be on board, either.

I, meanwhile, would absolutely positively 💗LOVE💗 it if the AIs scraped the shit out of this website, as well as all the other things I post publicly online.

Really, take my work! Go nuts! Make your AI think more like me. Make your AI sound more like me. Make your AI agree with my view of the world more often.

The entire reason I create shit is so that others will take it! To share ideas I find compelling in the hope those ideas will continue to spread. Why wouldn't I want OpenAI or Apple or whoever to feed everything I say into their AI model's training data? Hell, scrape me twice if it'll double the potency. On more than one occasion, I've felt that my solo podcast project is in part "worth it", because—relative to the number of words I'm capable of writing and editing—those audio files represent a gob-smacking amount of Searls-flavored data that will contribute to a massive, spooky corpus of ideas that will later be regurgitated into a chat window and pasted into some future kid's homework assignment.

I'm not going to have children. I don't believe in God. I know that as soon as I'm dead, it's game over. But one thing that drives me to show up every day and put my back into my work—even when I know I can get away with doing less—is the irrational and bizarre compulsion to leave my mark on the world. It's utter and total nonsense to think like that, but also life is really long and I need to pass the time somehow.

So I make stuff! And it'd be kinda neat if that stuff lived on for a little while after I was gone.

And I know I'm not alone. Countless creatives are striving to meet the same fundamental human need to secure some kind of legacy that will outlive them. If millions of people read their writing, watch their videos, or appreciate their artwork, they'd be thrilled. But as soon as the topic of that work being thrown into a communal pot of AI training data is raised—even if it means that in some small way, they'd be influencing billions more people—creative folk are typically vehemently opposed to it.

Is it that AI will mangle and degrade the purity of their work? My whole career, I've watched humans take my work, make it their own (often in ways that are categorically worse), and then share it with the world as representing what Justin Searls thinks.

Is it the lack of attribution? Because I've found that, "humans leveraging my work without giving me credit," is an awfully long-winded way to pronounce "open source."

Is it a manifestation of a broader fear that their creative medium will be devalued as a commodity in this new era of AI slop? Because my appreciation for human creativity has actually increased since the dawn of generative AI—as its output gravitates towards the global median, the resulting deluge of literally-mediocre content has only served to highlight the extraordinary-ness of humans who produce exceptional work.

For once, I'm not trying to be needlessly provocative. The above is an honest reflection of my initial and sustained reaction to the prospect of my work landing in a bunch of currently-half-cocked-but-maybe-some-day-full-cocked AI training sets. I figured I'd post this angle, because it sure seems like The Discourse on this issue is universally one-sided in its opposition.

Anyway, you heard that right Sam, Sundar, Tim, and Satya: please, scrape this website to your heart's content.

Backing up a step

A lot of people whose income depends on creating content, making decisions, or performing administrative tasks are quite rightly worried about generative AI and to what extent it poses a threat to that income. Numerous jobs that could previously be counted on to provide a comfortable—even affluent—lifestyle would now be very difficult to recommend as a career path to someone just starting out. Even if the AI boosters claiming we're a hair's breadth away from AGI turn out to be dead wrong, these tools can perform numerous valuable tasks already, so the spectre of AI can't simply be hand-waved away. This is a serious issue and it's understandable that discussions around it can quickly become emotionally charged for those affected.

You'll never guess what happens next…

The moment people most often update a dependency is when it isn’t acting as they expect, and an update might fix it. The likeliest outcome is that the updated dependency will continue to not work as expected while also breaking in new and unrelated ways.

Yesterday in the Platforms State of the Union, Apple celebrated 10 years of Swift and then, on the very next slide, announced they finally built a testing framework for it.

Digging in more, I found this forum post announcing that a "vision document" (I'm not familiar with Swift people's vernacular for this stuff) on the direction of testing had been accepted.

Anyway, this is all interesting in its own right and something I'll be following generally, but I have to say that for such a broad and important topic, that forum thread had an absolutely incredible time-to-mocking-derail metric. The very first reply is about mocking and then seemingly half of the subsequent posts in the thread are people spouting off their personal opinions on mocking instead of any of the more important stuff.

I don't claim to be an expert in very much, but having built several mocking frameworks, spoken about mocking practices (including a stealth mocking keynote at RailsConf and a not-so-stealth mocking closing talk at JSConf), and even named the company I co-founded after a mocking term, I feel like I have some authority to say the following: boy howdy is it a bummer that most developers only understand mocking as a utility and lack any comprehension of how to deploy mocks in a well-defined, consistently-executed software development workflow.

I'm very happy to have nailed an approach that works really well for me, but I'll probably always view it as a major failure of my career that I was never able to market that approach effectively. Even now, I don't have a single authoritative URL to point you to for my "Discovery Testing" approach that would provide a clear explanation of what it is, why it's good, and how to use it. And for me personally, the moment has probably passed and I just need to live with that failure, I imagine.

One of the things I think about a lot with respect to testing practices (including test-driven development) and their failure to really "stick" or spread more broadly is that they transcend any one testing framework or programming language. As a result, consultants like me were so absorbed in porting slightly-different versions of the tools we needed to every new language that no single README or book ever emerged that could explain to a normal developer how to be successful. I tried to write a book once that would serve as a tabula rasa across languages, and I almost immediately became trapped in a web of complexity. Other programming idioms and methods that apply across languages are similarly fraught, but mocking's relative unimportance to the primary task of shipping working software probably doomed it from the start.

Anyway, it's cool that mocking will be possible in Swift Testing.

Putting your phone away to focus? Fixed that for you

If you've ever felt distracted and thrown your phone in your bag so you can focus on your Mac, you're going to need a new strategy for self-control, thanks to macOS Sequoia's new iPhone Mirroring feature.

Worried that I've heard Craig Federighi say, "it’s a great way to ___," more times than I've heard most relatives say, "I love you."

The #1 app on iPad is a calculator

Saving this for posterity as it seems likely Apple is 90 minutes away from announcing a first-party Calculator app for iPad. Only took 14 years.

If you’re the DOJ, this is definitely a sign that they’re abusing their market position and stifling competition!

If you’re anyone else, you’re amazed that Apple let them use an icon so evocative of their own Calculator app on iPhone.

3 Simple Rules for Using my Large Language Model

When it comes to AI, it seems like the vast majority of people I talk to believe large language models (LLMs) are either going to surpass human intelligence any day now or are a crypto-scale boondoggle with zero real-world utility. Few people seem to land in-between.

Not a ton of nuance out there.

The truth is, there are tasks for which LLMs are already phenomenally helpful, and tasks for which today's LLMs will invariably waste your time and energy. I've been using ChatGPT, GitHub Copilot, and a dozen other generative AI tools since they launched and I've had to learn the hard way—unlike with web search engines, perhaps—that falling into the habit of immediately reaching for an LLM every single time I'm stuck is a recipe for frustratingly inconsistent results.

As B.F. Skinner taught us, if a tool is tremendously valuable 30% of the time and utterly useless the other 70%, we'll nevertheless keep coming back to it even if we know we're probably going to get nothing out of it. Fortunately, I've been able to drastically increase my success rate by developing a set of heuristics to determine whether an LLM is the right tool for the job before I start typing into a chat window. They're based on the grand unifying theory that language models produce fluent bullshit, which makes them the right tool for the job when you desire fluent output and don't mind inaccurate bullshit.

Generative AI is perhaps the fastest-moving innovation in the history of computing, so it goes without saying that everything I suggest here may be very useful on June 9th, 2024, but will read as a total farce in the distant future of November 30th, 2024. That said, if you've been sleeping on using LLMs in your daily life up to this point and are looking to improve your mental model of how to best relate to them (as opposed to one-off pro-tips on how to accomplish specific tasks), I hope you'll find this post useful.

So here they are, three simple rules to live by.

And then what happened?…