justin․searls․co

One year into living with Tesla Full-Self Driving and: it's good.

The improvement from v13 to v14 is remarkable. Tap Start and it pulls out of the garage, drives you to your destination, and parks itself. Another tap and it drives home, opens the garage, and parks itself. It's more advanced than people assume.


How many billions of dollars does Anthropic need to update their apps to preserve newlines on paste? Absolutely bizarre how bad their apps are.


I'm developing apps for Apple platforms for the first time in 16 years, so stop me if this is nuts, but the best feedback loops I've managed have come from making the Mac build the primary one. No simulator jank. No waiting on devices.

Claude Code doing a better job self-verifying


PSA: iPhone Air's microphone is located on the left side of the bottom edge of the device, AKA where a right-handed person's pinky would naturally rest when gripping the phone with one hand.

Anyway, that's why all your videos sound like shit. You're holding it wrong.


One happy accident of Claude Code Opus 4.6 turns reliably taking 3-5 minutes in my experience: it's proving to be the perfect companion to a strength training workout.

I just write prompts between sets.


Free idea: hyperbolic_links.

Let users create symbolic links mapped to HTTP resources. Since you can't literally link a file to a URL without changing the file system itself, hyperbolic would centrally handle the journaling, caching/ETags, and updates.
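To make the idea concrete, here's a sketch of what the client side might look like, assuming a "hyperbolic link" is just a tiny file whose contents are a URL. Everything here is invented for illustration: the file format, the cache location, and the function names.

```python
import json
import urllib.error
import urllib.request
from pathlib import Path

# Hypothetical cache location; a real tool would make this configurable.
CACHE_DIR = Path.home() / ".hyperbolic_cache"


def parse_link(link_path):
    """A hyperbolic link file contains nothing but the target URL."""
    return Path(link_path).read_text().strip()


def resolve(link_path):
    """Fetch the linked resource, revalidating with an ETag when cached.

    On a 304 Not Modified, serve the cached body instead of re-downloading.
    """
    url = parse_link(link_path)
    CACHE_DIR.mkdir(exist_ok=True)
    body_file = CACHE_DIR / f"{abs(hash(url))}.body"
    etag_file = CACHE_DIR / f"{abs(hash(url))}.etag"

    headers = {}
    if body_file.exists() and etag_file.exists():
        headers["If-None-Match"] = etag_file.read_text()

    req = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            body_file.write_bytes(body)
            etag_file.write_text(resp.headers.get("ETag", ""))
            return body
    except urllib.error.HTTPError as err:
        if err.code == 304:  # Not Modified: cached copy is still fresh
            return body_file.read_bytes()
        raise
```

The interesting part a real implementation would add is the "journaling": recording every resolution so stale links can be updated centrally rather than by touching each file.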


I just haggled with a chatbot

We ordered a wood chest that arrived with cosmetic damage. After logging the damage in their customer support interface, it prompted me to start a chat with their AI virtual assistant.

What happened next:

  1. It immediately offered me a 15% refund to keep the product
  2. I asked for 20% and it immediately agreed
  3. I asked for 25% and it immediately agreed
  4. I asked for 30% and it turned me down
  5. I took the 25%, which was, indeed, immediately refunded

Turns out that negotiating with a rules engine is way easier than negotiating with a human tasked with operating a rules engine.

So basically, all Wayfair did was add a chatbot to the end of their existing "Report a Problem" interface that will give customers more money if they ask for more money. What a world. 🌍
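For fun, here's a guess at what that rules engine reduces to. The thresholds are made up, but the behavior matches the transcript: accept anything up to a hard cap, refuse anything beyond it, no judgment calls.

```python
def negotiate_refund(requested_pct, opening_offer_pct=15, max_pct=25):
    """Hypothetical 'keep the item' refund rules engine.

    Opens at 15%, agrees to any counter up to a hard cap, and flatly
    declines beyond it -- unlike a human, it can't be worn down.
    """
    if requested_pct <= max_pct:
        return ("accepted", max(requested_pct, opening_offer_pct))
    return ("declined", opening_offer_pct)
```

Asking for 20% or 25% gets an immediate yes; asking for 30% bounces you back to the standing offer. The whole "negotiation" is one comparison.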

Mitchell Hashimoto, founder of HashiCorp and, more recently, Ghostty, in a post on his relationship with AI coding:

Instead of giving up, I forced myself to reproduce all my manual commits with agentic ones. I literally did the work twice. I'd do the work manually, and then I'd fight an agent to produce identical results in terms of quality and function (without it being able to see my manual solution, of course).

This was excruciating, because it got in the way of simply getting things done. But I've been around the block with non-AI tools enough to know that friction is natural, and I can't come to a firm, defensible conclusion without exhausting my efforts.

But, expertise formed. I quickly discovered for myself from first principles what others were already saying, but discovering it myself resulted in a stronger fundamental understanding.

  1. Break down sessions into separate clear, actionable tasks. Don't try to "draw the owl" in one mega session.
  2. For vague requests, split the work into separate planning vs. execution sessions.
  3. If you give an agent a way to verify its work, it more often than not fixes its own mistakes and prevents regressions.

More generally, I also found the edges of what agents -- at the time -- were good at, what they weren't good at, and for the tasks they were good at how to achieve the results I wanted.

I recorded an interview on the freeCodeCamp podcast a few days ago saying the same thing: this reminds me of every other time programmers have needed to learn a new way to do something they already know how to do another way. When I was teaching teams test-driven development, I always had to encourage them to test-drive 100% of their code, without exception, so they'd be forced to learn the difference between "this is hard because it's a bad tool for the job" and "this is hard because I haven't mastered this tool yet."

Over-application of a tool is an important part of learning it. And it applies just about every time we're forced to change our ways, whether switching from Windows to Linux, a graphical IDE to terminal Vim, or from Google Drive to a real filesystem.

Either you're the type who can stomach the discomfort of slowing down to adopt change, or you're not. And many (most?) programmers are not. They're the ones who should be worried right now.

I bought a Doggett

My friend Eric Doggett became a Disney Fine Artist a couple of years back, and he's currently being featured at EPCOT's 2026 Festival of the Arts. Each day this week, he's holding court at a pop-up gallery just outside the Mexico Pavilion, talking to people about his work. A few other friends and I ganged up on him this afternoon to lend our moral and financial support by showing up and buying a few pieces.

I really like the painting I picked up. It's a semi-subtle ode to Big Thunder Mountain, a celebration of Walt's love of trains, a not-so-hidden Mickey-shaped rockface, and a tiny nod to the goat.

If you're a local, swing by and say hi to Eric—he's great! If you're not, check him out as @EricDoggett on YouTube—the videos of how he works are pretty cool. I immediately hung the painting in my office/studio when I got home; fitting, since Eric's audio engineering talents are a big reason why Breaking Change sounds as good as it does!


v50 - SpaceXXX


Video of this episode is up on YouTube:

Elon has combined 3 of his 4 businesses and everything makes sense. Also, I went to Japan and all I came back with was another weird story about animal sperm. Other stuff happened too, but let's be honest, it's the typical AI schlock you've come to expect from this decade.

Write in with your own takes to podcast@searls.co. Please. Really! Do it.


A recurring theme in coverage of AI's effect on wealth and income inequality: there's a strong case to be made that AI is going to be bad for poor people.

I have nothing useful to add to that discussion, so here's a word I just invented: Agentrification
