justin․searls․co

A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)


Doc Searls (no relation) writes over at searls.com (which is why this site's domain is searls.co) about how the concept of human agency is being lost in the "agentic" hype:

My concern with both agentic and agentic AI is that concentrating development on AI agents (and digital “twins”) alone may neglect, override, or obstruct the agency of human beings, rather than extending or enlarging it. (For more on this, read Agentic AI Is the Next Big Thing but I’m Not Sure It’s What, by Adam Davidson in How to Geek. Also check out my Personal AI series, which addresses this issue most directly in Personal vs. Personal AI.)

Particularly interesting is that he's doing something about it by chairing an IEEE spec dubbed "MyTerms":

Meet IEEE P7012, which “identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines.” It has been in the works since 2017, and should be ready later this year. (I say this as chair of the standard’s working group.) The nickname for P7012 is MyTerms (much as the nickname for the IEEE’s 802.11 standard is Wi-Fi). The idea behind MyTerms is that the sites and services of the world should agree to your terms, rather than the other way around.

MyTerms creates a new regime for privacy: one based on contract. With each MyTerm you are the first party. Not the website, the service, or the app maker. They are the second party. And terms can be friendly. For example, a prototype term called NoStalking says “Just show me ads not based on tracking me.” This is good for you, because you don’t get tracked, and good for the site because it leaves open the advertising option. NoStalking lives at Customer Commons, much as personal copyrights live at Creative Commons. (Yes, the former is modeled on the latter.)

How are the terms communicated? MyTerms is expressed as some kind of structured-data codification (JSON? I haven't read the spec) presented by the user's client (HTTP headers or some kind of handshake?), to which the server either agrees or does something-something (blocks access?). Then both parties record the agreement:

On your side—the first-party side—browser makers can build something into their product, or any developer can make a browser add-on (Firefox) or extension (the rest of them). On the site’s side—the second-party side—CMS makers can build something in, or any developer can make a plug-in (WordPress) or a module (Drupal).
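Since I haven't read the spec either, here's a purely hypothetical Ruby sketch of what that handshake might look like: the client proffers its terms (via an imaginary `My-Terms` header) and the server agrees to any it recognizes. None of these header names or term identifiers come from the actual P7012 standard.

```ruby
require "json"

# Terms this hypothetical site is willing to agree to (e.g. something like
# Customer Commons' NoStalking term; the identifier is made up)
ACCEPTABLE_TERMS = ["CC-NoStalking-1.0"].freeze

# Given the request headers, return the agreement (or refusal) the server
# would record. Both parties would keep their own copy for an audit trail.
def negotiate(request_headers)
  proffered = JSON.parse(request_headers.fetch("My-Terms", "[]"))
  agreed = proffered & ACCEPTABLE_TERMS
  if agreed.any?
    { status: 200, agreed_terms: agreed }
  else
    { status: 403, agreed_terms: [] } # or fall back to the site's own terms
  end
end
```

So `negotiate({"My-Terms" => '["CC-NoStalking-1.0"]'})` would record an agreement, while a client proffering nothing the site accepts would get turned away (or, more realistically, bounced to the site's own terms).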

Not answered in Doc's post (and I suspect, the rub) is how any of this will be enforced. In the late 90s, browser makers added a bold, green lock symbol to the location bar to convey a sense of safety to users that they were communicating over HTTPS. Then, there was a lucrative incentive at play: secure communications were necessary to get people to type their credit cards into a website. Today, the largest browser makers don't have any incentive to promote this. Could you imagine Microsoft, Google, or Apple making any of their EULA terms negotiable?

Maybe the idea is to put forward this spec and hope future regulations akin to the Digital Services Act will force sites to adopt it. I wish them luck with that.

Tuesday, while recording an episode of The Changelog, Adam reminded me that my redirects from possyparty.com to posseparty.com didn't support HTTPS. Naturally, because this was caught live and on air and was my own damn fault, I immediately rushed to cover for the shame I felt by squirreling away and writing custom software. As we do.

See, if you're a cheapskate like me, you might have noticed that forwarding requests from one domain or subdomain to another while supporting HTTPS isn't particularly cheap with many DNS hosts. But the thing is, I am particularly cheap. So I built a cheap solution. It's called redirect-dingus:

What is it? It's a tiny Heroku nginx app that simply reads a couple environment variables and uses them to map request hostnames to your intended redirect targets for cases when you have some number of domains and subdomains that should redirect to some other canonical domain.
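To illustrate the core idea in Ruby (this is not dingus's actual nginx config, and the `REDIRECT_MAP` variable name is invented for this sketch; see the project's README for the real setup), the whole trick is a hostname-to-canonical-domain lookup driven by an environment variable:

```ruby
# Sketch of redirect-dingus's core idea: map incoming hostnames to their
# canonical domains via an env var, and 301 anything that matches.
# e.g. REDIRECT_MAP="possyparty.com=posseparty.com,www.possyparty.com=posseparty.com"
def redirect_for(host, env = ENV)
  mapping = env.fetch("REDIRECT_MAP", "")
               .split(",")
               .to_h { |pair| pair.split("=", 2) }
  target = mapping[host]
  target && { status: 301, location: "https://#{target}/" }
end
```

Handing the redirect target out over HTTPS is the whole point: the Heroku app terminates TLS for the vanity domain, so the redirect itself is secure.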

Check out the README for instructions on setting up your own Heroku app with it for your own domain redirect needs. I recommend forking it (just in case I decide to change the nginx config to redirect to an offshore casino or crypto scam someday), but you do you.

RevenueCat seems like a savvy, well-run business for mobile app developers trying to sell subscriptions in the land of native in-app purchase APIs. Every year they take the data on their platform and publish a survey of the results. Granted, there's definitely a selection bias at play—certain kinds of developers are surely more inclined to run their payments through a third party as opposed to Apple's own APIs.

That said, it's a large enough sample size that the headline results are, as Scharon Harding at Ars Technica put it, "sobering". From the report itself:

Across all categories, nearly 20 percent reach $1,000 in revenue, while 5 percent reach the $10,000 mark. Revenue drop-off is steep, with many categories losing ~50 percent of apps at each milestone, emphasizing the challenge of sustained growth beyond early revenue benchmarks.

Accepted without argument is that subscription-based apps are the gold standard for making money on mobile, so one is left to surmise that these developers are way better off than the ones trying to charge a one-time, up front price for their apps. And only 5% of all subscription apps earn enough revenue to replace a single developer salary for any given year.

Well, if you've ever wondered why some startup didn't have budget to hire you or your agency to build a native mobile app for them, here you go. Outside free-to-play games, the real money is going to companies that merely use mobile apps as a means of distribution and who generally butter their bread some other way (think movie tickets, car insurance, sports betting).

Anyway, super encouraging thing to read first thing while sitting down to map out this subscription-based iOS app I'm planning to create. Always good to set expectations low, I guess.

Benj Edwards for Ars Technica:

According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The user wasn't having it:

"Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."

If some little shit talked to me that way and expected me to code for free, I'd tell him to go fuck himself, too.

Existence of the imminent Oblivion remake was leaked months ago, and maybe I just missed this tidbit, but today Andy Robinson reported for Video Games Chronicle:

The Oblivion remake is reportedly “fully remade” with Unreal Engine 5, with six reworked gameplay systems: stamina, sneaking, blocking, archery, hit reaction and HUD.

If this is the case and because Elder Scrolls VI is still being developed on the Gamebryo/Creation Engine, I can't wait to see a side-by-side analysis of image quality, performance, and overall "Bethesda jank" between the two. I've been saying Bethesda needs to ditch its in-house engine since two-thousand-fucking-eight when Fallout 3 shipped broken and required years of patches to even feel playable. If this Oblivion remake is a lights-out technical success and I'm Phil Spencer, I'd be kicking shins left and right to convince Bethesda to take the time to replatform Elder Scrolls VI now before it ends up becoming another dud like Starfield.

Joe Rossignol at MacRumors:

Apple says turning on Low Power Mode reduces the Mac Studio's fan noise, which is useful for tasks that require a quieter environment, and it also allows for reduced power consumption if the computer is left running continuously.

The reduced fan noise aspect of Low Power Mode requires macOS Sequoia 15.1 or later. The new Mac Studio ships with macOS Sequoia 15.3.

A few Reddit users said macOS Sequoia 15.3 enabled Low Power Mode on the previous-generation Mac Studio with the M2 Max chip, and presumably on M2 Ultra configurations too. This is not reflected in Apple's support document.

I can confirm, a "Low Power Mode" toggle appears in the Energy settings of my M2 Ultra Mac Studio.

I really put this thing through the wringer with video and AI workloads, and I have never been able to hear the fan (even with my ear right to the back of the thing), so I guess I was lucky to get one whose fan holes don't whistle. I'm always glad to receive new features through software, but am comfortable promising you that I will never turn this on.

Gurman with the scoop, summarized by MacRumors:

An updated version of the Mac Studio could launch as soon as this week, reports Bloomberg's Mark Gurman. The new machine is expected to be equipped with the M4 Max chip that we first saw in the 2024 MacBook Pro models, but Apple apparently does not have an M4 Ultra chip ready to go. Instead, there could be a version of the ‌Mac Studio‌ that uses an M3 Ultra chip. Apple didn't release an M3 Ultra chip alongside the M3 chip lineup, so it would be a new chip even though it's not part of the current M4 family. The current ‌Mac Studio‌ has an M2 Ultra chip, as does the Mac Pro.

Releasing a previous-generation, higher-end chip is utterly routine from every other manufacturer, but Apple doesn't sell chips, it sells computers.

Offering a Mac Studio in M4 Max and M3 Ultra configurations would give Apple’s marketing team a really fucking narrow needle to thread. One imagines the Ultra will be better for massive video exports and the Max will be better for literally every other workflow. Woof.

Welp, I just did the thing I promised I'd never do and read the Hacker News comments for this otherwise lovely post pointing out the durable relevance of Ruby on Rails twenty years later. One comment stood out as so wrong, however, I couldn't help but clap back.

It started when someone wrote, presumably in praise of Rails, "I really like web apps that are just CRUD forms." CRUD is shorthand for the basic operations of "create, read, update, and delete," and those four verbs can express the vast majority of what anyone has ever wanted to do with a computer. It's why the spreadsheet was the killer app of the 80s and early 90s. It's why Rails 2's embrace of REST as a way to encapsulate CRUD over HTTP redefined how pretty much everyone has made web applications ever since.
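For anyone who hasn't internalized that mapping, it can be expressed as data (the routes below follow Rails' conventional `resources` pattern, using a hypothetical `posts` resource for illustration):

```ruby
# The REST-to-CRUD correspondence that Rails 2 popularized: each CRUD verb
# pairs with an HTTP method and a conventional resource route.
CRUD_TO_REST = {
  create: { method: "POST",   path: "/posts" },
  read:   { method: "GET",    path: "/posts/:id" },
  update: { method: "PATCH",  path: "/posts/:id" },
  delete: { method: "DELETE", path: "/posts/:id" },
}.freeze
```

In a Rails app, a single `resources :posts` declaration generates all of these (plus index, new, and edit), which is a big part of why "just a CRUD app" takes so little code to build.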

Anyway, in response to the fellow above who said he enjoys simple CRUD apps, somebody else wrote:

I really like easy problems too. Unfortunately, creating database records is hardly a business. With a pure CRUD system you're only one step away from Excel really. The business will be done somewhere else and won't be software driven at all but rather in people's heads and if you're lucky written in "SOP" type documents.

This struck me as pants-on-head levels of upside down. Father forgive me, but I hit reply with this:

As someone who co-founded one of the most successful Ruby on Rails consultancies in the world: building CRUD apps is a fantastic business.

There are two types of complexity: essential and incidental. Sometimes, a straightforward CRUD app won't work because the product's essential complexity demands it. But at least as often, apps (and architectures, and engineering orgs, and businesses) are really just CRUD apps with a bunch of incidental complexity cluttering up the joint and making everything confusing, painful, and expensive.

I've served dozens of clients over my career, and I can count on one hand the number of times I've found a company whose problem couldn't more or less be solved with "CRUD app plus zero-to-one interesting features." No technologist wants to think they're just building a series of straightforward CRUD apps, so they find ways to complicate it. No businessperson wants to believe their company isn't a unique snowflake, so they find ways to complicate it. No investor wants to pour their money into yet another CRUD app, so they invent a story to complicate it.

IME, >=90% of application developers working today are either building CRUD apps or would be better off if they realized they were building CRUD apps. To a certain extent, we're all just putting spreadsheets on the Internet. I think this—more than anything else—explains Rails' staying power. I remember giving this interview on Changelog and the host Adam asking about the threat Next.js posed to Rails, and—maybe I'd just seen this movie too many times since 2005—it didn't even register as a possible contender.

Any framework that doesn't absolutely nail a batteries-included CRUD feature-set as THE primary concern will inevitably see each app hobbled with so much baggage trying to roundaboutly back into CRUD that it'll fall over on itself.

Anyway, I believe in this point enough that I figured I should write it down someplace that isn't a deeply-nested comment at a place where I'd recommend you not read the comments, so that's how you and I wound up here just now.

I hate to spoil the end of this wonderful post for you, but:

In short, if your priority is developer experience, you won’t find a better option than Heroku.

Heroku's greatest sin wasn't in selling out (in fact, its Salesforce acquisition was one of the least disruptive in the history of SaaS), it was in well and truly solving the problem it set out to address. Then, after that, …they didn't have much new and shiny to talk about. Everything Just Worked™.

There are a handful of rough edges that they're still sanding down—HTTP/2 support was the big one, but they've recently covered that. I take the fact that I reacted to their banner about a next generation platform with worry it might make something worse as a GOOD sign. Those among us who actually give a shit about working software should do a better job of rewarding the handful of companies that pull it off.

(And if you do use Heroku, Judoscale's founder Adam is an old friend and a Searls-certified badass and you should check them out.)

Namanyay Goel with a post that appears to be lighting the Internet on fire and is making me feel pretty smart about emphasizing the generational transition at which the software world finds itself:

We're at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They're shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That's where things get concerning.

Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.

Imagine if your doctor had skipped med school, but—don't worry!—the AI tool built into his Epic tablet gave him the right answers to every question that happened to come up during your intake consult and your first physical.

Hopefully you never need anything serious.

We shipped a fun feature at Better with Becky industries last week that offers a new way to follow Becky's work: getting each Beckygram delivered via e-mail!

From Becky's announcement:

That's why we built Beckygram—a space outside the noise of social media, where I can share real fitness insights, mindset shifts, and everyday wins without the distractions of ads, comparison traps, or influencer gimmicks. Frankly, I know that if I mentally benefit from being off the app, others will too!

You can sign up by clicking the Follow button on her Beckygram bio and entering your e-mail address at the bottom of any page. I hope you'll consider it, because Instagram does indeed suck.

So that brings us to 4 ways people can enjoy the world of Beckygram multimedia:

Because I am a nerd, I just follow via RSS. But apparently, some of Becky's fans "don't know or care what a feed reader even is," so I made this for them. As we grow our POSSE, hopefully we'll eventually offer whatever distribution channels we need for all 7 billion people on earth to subscribe, but for now maybe there's one that works for you (or someone else in your life who might be into working out, feeling motivated, or eating food).

The year is 2025 and, astoundingly, a blog post is advocating for the lost art of Extreme Programming ("XP"). From Benji Weber:

In my experience teams following Extreme Programming (XP) values and practices have had some of the most joy in their work: fulfilment from meaningful results, continually discovering better ways of working, and having fun while doing so. As a manager, I wish everyone could experience joy in their work.

I've had the privilege to work in, build, and support many teams; some have used XP from the get go, some have re-discovered XP from first principles, and some have been wholly opposed to XP practices.

XP Teams are not only fun, they take control of how they work, and discover better ways of working.

Who wouldn't want that? Where does resistance come from? Don't get me wrong; XP practices are not for everyone, and aren't relevant in all contexts. However, it saddens me when people who would otherwise benefit from XP are implicitly or accidentally deterred.

For what it's worth, I wrote about my favorite experience on a team striving to practice (and even iterate on) Extreme Programming in the August edition of Searls of Wisdom, for anyone wanting my take on it.

Benji's comment about GitHub—whose rise coincided with the corporatization of "Agile" and the petering out of methodologies like XP—jumped out at me as something I hadn't thought about in a while:

Similarly Github's UX nudges towards work in isolation, and asynchronous blocking review flows. Building porcelain scripts around the tooling that represent your team's workflow rather than falling into line with the assumed workflows of the tools designers can change the direction of travel.

One of the things that really turned me off about the idea of working at GitHub when visiting their HQ 2.0 in early 2012 was how individualistic everything was. Not the glorification of certain programmers (I'm all about the opportunity to be glorified!), but rather the total lack of any sense of team. Team was a Campfire chatroom everyone hung out in.

This is gonna sound weird to many of you, but I remember walking into their office and being taken aback by how quiet the GitHub office was. It was also mostly empty, sure, but everyone had headphones on and was sitting far apart from one another. My current client at the time (and the one before that, and the one before that) were raucous by comparison: cross-functional teams of developers, designers, product managers, and subject matter experts, all forced to work together as if they were manning the bridge of a too-nerdy-for-TV ship in Star Trek.

My first reaction to seeing the "GitHub Way" made manifest in their real life office was shock, followed by the sudden realization that this—being able to work together without working together—was the product. And it was taking the world by storm. I'd only been there a few hours when I realized, "this product is going to teach the next generation of programmers how to code, and everything about it is going to be hyper-individualized." Looking back, I allowed GitHub to unteach me the best parts of Extreme Programming—before long I was on stage telling people, "screw teams! Just do everything yourself!"

Anyway, here we are. Maybe it's time for a hipster revival of organizing backlogs on physical index cards and pair programming with in-person mouth words. Not going to hold my breath on that one.

Fred Searls

My dad, Fred Searls, passed away suddenly on Sunday night. Fortunately, my wife Becky, my brother Jeremy, and I were able to get on a flight to Detroit Monday to be with our mother Deanna and start making arrangements.

We worked together to draft this obituary, and it just went live on the funeral home's website (which is how people do it these days, apparently).

Here's the middle part that's actually about the person:

After earning his D.D.S. from Northwestern University Dental School, Fred practiced dentistry in Trenton, where he served patients with care and compassion for over forty-five years. In 1997, he and his family moved to Saline, where Fred’s warmth and generosity quickly made him a vital part of the community he came to love.

Fred enjoyed a wide array of hobbies and interests. In his prime, he was an avid golfer and a long-distance runner, and always pushed himself to excel. In retirement, he took up rucking, a pursuit that blends walking and strength training by hiking with a weighted backpack—combining his love of staying active outdoors with his enthusiasm for connecting with neighbors and offering a helping hand. This balance of active engagement, relentless kindness, and community spirit defined Fred’s life and the legacy he leaves behind.

And here's the service information (with added hyperlinks):

Funeral services for Fred will take place at the Tecumseh Chapel of Handler Funeral Homes at 1:00 pm on Saturday, December 21, 2024. Following the service, guests are invited to a luncheon at Johnnie’s Bar, located at 130 N Main St in Onsted. Visitation will be held at the funeral home from 5:00 pm – 8:00 pm on Friday, December 20, 2024 and for one hour, beginning at noon, before the service on Saturday. Memorial contributions in honor of Fred may be made to the Huron Valley Humane Society.

If you knew dad and are able to come, we'd love to see you there. Whether or not you knew him, I’d be grateful to hear your memories, thoughts, and feelings—feel free to drop me a line at justin@searls.co 💜

The people have been clamoring (clamoring!) for a demo of the hot new strength-training system everyone's been talking about, and today Becky has answered their call:

My program, Build with Becky, is designed to make progressive strength training approachable, enjoyable, and sustainable. It’s about helping people get comfortable with lifting, stay consistent, and build confidence using a structured yet mindful, grace-filled approach. 💪

If you've got 3 minutes and a functioning set of eyeballs, I hope you'll give this demo a watch. This video is the cherry on top of several years of work, and I'm incredibly impressed by how well this web app realizes Becky's vision for a more graceful approach to strength training.

Get 100GB of data (and 25GB tethering) by adding this as your iPhone's second eSIM:

New customers can follow these easy steps to dive in:

  • Mobile: Download the myAT&T app to get started
  • Desktop: Visit att.com/freetrial, use the QR code or click the link to get started
  • Get set up in minutes: Click “start your trial” in the app, confirm your current phone compatibility, and sign up. No credit card no commitments to get started.
  • See the difference: Experience the AT&T network free for up to 30 days with no strings attached — switch or stop your trial anytime.

When iPhone went dual-eSIM in the US, I expected a lot more of this from carriers, so it's good to see it now. I'm not clamoring to change from one mediocre carrier to another right now, but the novelty of being able to try without (1) talking to a human or (2) risking a phone number port snafu is almost enough to make me give this a try.

Every carrier willing to stand behind the quality of their network should do this.

Yes, this is a link post to my own post on switching Rails system tests from Selenium to Playwright, which is newer, faster, and by all accounts I've ever heard from anyone who's used both of them: better.

Since posting this, I have heard several complaints from skeptics, all along the same lines of: how could Playwright possibly be less flaky than Selenium? After all, the tests are written with the same Capybara API. And, being the default, Capybara's Selenium adapter has had many more years of bug fixes and hardening. To these people it simply does not make intuitive sense why Selenium tests would fail erratically more often than Playwright.

Here's my best answer: Playwright is so fast that it forces you to write UI tests correctly on day one. Selenium isn't.

Because UI tests that automate a browser do so asynchronously through inter-process communication, the most common way for tests to be susceptible to non-deterministic failures is when that communication is meaningfully slower than the browser itself under certain circumstances and faster in others.

Two of the most common examples follow. (I use the word "page" below very loosely, as it could apply to any visible content that is shown or hidden in response to user action.)

A script that finds a selector that exists both before and after navigation:

  1. Be on Page A with an element matching some selector .foo
  2. Click a button to go to Page B, which also contains a .foo element
  3. Find .foo
  4. Your test is now in a race condition. Either:
    • Your test will find Page A's .foo before Page B loads, then fail when that element goes stale or a follow-up assertion runs against the wrong page
    • Page B will load before your test searches for .foo, and the test continues successfully

A script that fails to properly wait:

  1. Be on Page A
  2. Click a button to go to Page B
  3. Find something on the page without appropriately waiting for it to appear (the bulk of Capybara's API, as with many UI testing frameworks, is delineated between "waiting" vs. "non-waiting" search methods)
  4. Your test is now in the same sort of race condition. Either:
    • The non-waiting search will run before Page B loads, causing it to fail
    • Page B will load before your non-waiting search, and continue successfully
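The dynamic above can be modeled in plain Ruby, with no real browser: a fake page whose content only "appears" after a few polls, checked by a non-waiting finder (one DOM check, like Capybara's `all` or `first`) versus a waiting finder (polls with retries, like Capybara's `find` or `have_css`). The class and method names here are invented for the sketch.

```ruby
# Toy model of a slow navigation: Page B's content only becomes findable
# after the browser has been polled a few times.
class FakeBrowser
  def initialize(delay_checks:)
    @checks_until_loaded = delay_checks # navigation "finishes" after N polls
  end

  def has_selector?(selector)
    @checks_until_loaded -= 1
    @checks_until_loaded <= 0 && selector == ".page-b-only"
  end
end

# Non-waiting finder: checks the DOM exactly once, so it loses the race
# whenever the browser is slower than the test
def non_waiting_find(browser, selector)
  browser.has_selector?(selector)
end

# Waiting finder: polls until found or a retry budget runs out, so it
# tolerates the same slowness deterministically
def waiting_find(browser, selector, retries: 10)
  retries.times { return true if browser.has_selector?(selector) }
  false
end
```

Against a browser that needs three polls to "load," the non-waiting finder reports a failure on its single check while the waiting finder keeps polling and succeeds—which is exactly why the non-waiting variant is only safe after you've already anchored on content unique to the new page.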

Counter-intuitively, the faster your browser automation tool is, the more often the test will fail during race conditions like those above, and those failures are a good thing.

If you select something that exists on both pages or without properly waiting, Playwright will almost always be faster than your app and you'll simply never see an improperly-written test pass, either in development or in CI. Instead, you'll be forced to write the test correctly the first time.

Selenium, meanwhile, is so slow under so many conditions and in so many environments that it's not uncommon at all to write a passing test full of race conditions like those above, have the test pass every single time you run it locally, but then see it fail some of the time in CI. Worse, as time goes on your code will become more complex and both your app and your tests will become slower in their own ways. This can cause tests that had previously been fast enough to always pass in spite of any race conditions to begin failing with alarming frequency years later.

And of course, when that happens, you're in a real pickle. Erratic failures are inherently hard to reproduce. And if a test has been passing for years and is suddenly failing, you aren't likely to remember what you were thinking when you wrote it—meaning that if you can't reliably reproduce the failure, you're unlikely to be able to fix any such race conditions by just looking at the code.

Anyway, that's why.

Switching an existing test suite from Selenium to Playwright won't magically fix all the flaky tests that Selenium let you write by virtue of its being slower than your app. In fact, the first time you run Playwright, you're likely to see dozens of "new" errors that have in fact been hiding in your tests like land mines all along. And that's a good thing! Fix them! 🥒

The new ‌Mac mini‌ will be the first major design change to the machine since 2010, making it Apple's smallest ever desktop computer. The new ‌Mac mini‌ will apparently approach the size of an Apple TV, but it may be slightly taller than the current model, which is 1.4 inches high. It will continue to feature an aluminum shell. Individuals working on the new device apparently say that it is "essentially an iPad Pro in a small box."

I can't be the only person thinking "I wonder if I could plug this into a portable USB power bank, throw it in my bag, and then run Mac Virtual Display on my Vision Pro without needing to carry a laptop… can I?"

I really enjoyed this discussion with host Tim Chaten about the state of Apple Vision Pro. It was recorded a couple weeks after WWDC, which meant the memory was fresh enough to keep all of Apple's announcements top of mind but distant enough to imagine various directions things could go from here.

I gotta say, it was nice talking to someone who knows and cares more about the platform than I do. Some real "there are dozens of us!" energy around the Vision Pro right now.