
Brace for the Fuckening

It was only once I read Andrew Yang's "The End of the Office" post the other day that I realized how few political leaders are seriously grappling with this question: what will happen to civilization if all this AI investment actually pays off? Sitting with this thought led me to a dark place, if I'm being honest—not because society might be doomed, but because I'm left quoting Andrew Fucking Yang of all people:

Expect the Starbucks in your local suburb to become occupied with middle-aged former office workers who want to get out of the house. That's a benign portrait, but a lot of these families still owe mortgages on their houses that they won't be able to maintain. If I were a homeowner in Silicon Valley or Westchester County, I might consider putting my house up for sale to see what I could get because there's going to be downward pressure on these communities. It might not feel great being first, but you don't want to be last.

Then again, why should I be surprised that politicians aren't thinking about this? Almost nobody is! AI boosters are having too much fun playing with Claude Code to connect the dots between its current capabilities and the post-office-work utopia/dystopia that awaits us. Meanwhile, AI skeptics seem worryingly self-assured in their wishcasting that today's agents:

  1. Will never be able to do the things they can already do
  2. Will forever remain as incompetent as they are now
  3. Will disappear the moment the AI economic bubble bursts

Other than the first item on that list, I don't even know that the skeptics are wrong! Maybe they're right! In fact, it'd be more convenient for all of us if the skeptics end up being right. But are we really certain that the probability Sam Altman is even kinda-sorta right is literally zero percent? If not, then we should probably prepare for that potentiality.

I imagine that we all have a pretty clear image of what will happen if the AI bubble pops: stocks will crash, Steam Decks will be back in stock, and Sam will end up in jail. (Maybe every disastrous hype cycle in tech simply demands that at least one awkward white man named Sam goes to jail.) But I've got to admit, many people I talk to and follow online don't seem to have given a second thought to the sort of economic hellscape we're in for if the astronomical valuations of companies like OpenAI, Anthropic, and SpaceX (lol) turn out to have been appropriately priced.

Well, give it thirty seconds' thought, and—to borrow Yang's term of choice—we'll be in for The Fuckening: a macroeconomically significant decrease in the number of high-paying white collar jobs the market will support.

Tech CEOs are bullshitting us

Listen to Satya or Sam or Jensen talk about this "What if?" scenario, and you'll hear them compare AI to technological revolutions of the past. They each breathlessly declare that "there will be new jobs," that AI will "create a bunch of new ones," and even that "there will be more jobs." But, as someone who's been using coding agents to build good-enough software with the velocity of a traditional engineering team for nearly a year now, none of these reassurances pass the smell test. The idea that there will be at least one net-new job created for every job eliminated by AI simply isn't credible.

Remember, for the purposes of this post, we're playing out the scenario labeled "what if these AI investments actually pan out?" That scenario isn't compatible with a future where companies find themselves needing just as many highly-compensated humans as they do today—the ROI needed to justify this level of investment simply wouldn't be there.

So why are the AI executives talking out of both sides of their mouths? Because, what else the fuck are they supposed to do? Go on the record and say, "yeah man, I dunno, now's probably a good time to learn how to can your own food or sell your house if you happen to live in a mid-market metro with a strong services sector"?

Whenever I hear one of these guys' bullshit quotes, I have to mentally sprinkle in the unspoken context to protect my own sanity:

  • "There will be new jobs for people with deep domain expertise and unusually strong critical thinking skills."
  • "Looking at jobs as a debt counselor or estate auctioneer? AI will create a bunch of new ones."
  • "If you can swallow a massive pay cut and don't mind shutting doors for a living, there will be more jobs."

Programmers are first in, last out

Software developers may be among the first to experience what it feels like for AI to do our jobs for us, but I still believe we'll be the ones shutting the lights off on the middle class.

In fact, programmers are about to become as busy as we've ever been. Why? Because custom software has historically been supply-constrained. How many businesses would have been created over the last twenty years if even extremely basic apps hadn't required millions of dollars of venture funding? Already, agents can help capture those opportunities on a relative shoestring budget. A tremendous number of niche, overlooked cottage industries are finally ripe for their ✨disruption✨ moment.

And all those projects still need human programmers! Why? Because today's agents are nowhere close to being able to write software that won't fall over without supervision. The last 10% needed to reach full autonomy will take 90% of the effort. My Tesla has driven me everywhere for over a year, but it'll be decades before the world stops manufacturing steering wheels.

Programmers may be spread across more companies and working in smaller teams, but there likely won't be a net reduction in the amount of total work to be done anytime soon. But can we honestly say the same about the accountant who's been rendered redundant? Or the junior lawyer position the firm opted not to fill? Or the management consultant who realizes their client can buy the same peace of mind with a $20 ChatGPT Plus subscription? Where are they all going to go? The answer is obvious: if this whole "AI thing" ends up becoming what it's cracked up to be, those people are well and truly fucked. And sooner than they probably realize.

Make your work worthwhile

Ultimately, unless we see AI crash and burn catastrophically, I feel pretty comfortable predicting that if the only sounds your job makes are *clickety-clack* and *yackety-yack*, the bulk of your colleagues—and very possibly you—are going to be caught up in the gravitational vortex of what Andrew Yang is calling The Fuckening.

So, what can you do to mitigate these risks and protect yourself? All you need to do is follow my fool-proof two-step plan:

  1. Acknowledge that the odds we're living in The Fuckening timeline are greater than zero
  2. Plan accordingly

If you're a programmer and your current role doesn't resemble that of a full-breadth developer, do something about that ASAP. If you're anyone else, then follow this bit of advice buried inside that larger post:

Figure out how your employer makes money and position your ass directly in-between the corporate bank account and your customers' credit card information.

Do the work: trace your individual contributions to the total revenue and cost savings they represent for your employer. Is that number demonstrably higher than your fully-loaded compensation? If so, you should be okay. And whenever the AI tools in your space improve appreciably, re-examine your contributions and run the numbers again. But as soon as you cost more to your employer than you bring in, don't assume they'll keep you around—figure out how to increase your output! And if the numbers are totally upside down, find someplace new where your cost will be commensurate with your value.
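
If it helps to make that gut check concrete, here's a minimal back-of-the-envelope sketch in Python. Every number in it is made up for illustration, and the 1.3x overhead multiplier is just a common rule-of-thumb assumption for benefits, payroll taxes, and equipment, not a figure from this post:

    # Hypothetical gut check: does the value you can trace to yourself exceed
    # what you actually cost your employer? All numbers are illustrative.

    def fully_loaded_cost(salary: float, overhead_multiplier: float = 1.3) -> float:
        """Salary plus an assumed multiplier for benefits, taxes, equipment, etc."""
        return salary * overhead_multiplier

    def am_i_safe(traceable_revenue: float, traceable_savings: float,
                  salary: float) -> bool:
        """True if the value you can demonstrably claim exceeds your cost."""
        return (traceable_revenue + traceable_savings) > fully_loaded_cost(salary)

    # Example: $150k salary, $120k attributable revenue, $90k cost savings
    print(am_i_safe(traceable_revenue=120_000, traceable_savings=90_000,
                    salary=150_000))  # => True (210k > 195k at a 1.3x multiplier)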

All I have is individualized advice, because our individual situations are all we have control over. Yes, there are countless collective actions we could take as a society to mitigate or even eliminate these downside employment risks. There are probably even public policy prescriptions that could position civilization to absolutely thrive in the age of AI. But the odds of either of those happening in the current political climate are even lower than (ugh) Sam Altman ending up being proven right about all this shit.

I'm expressly not suggesting we treat each other as if we're living out a workplace adaptation of The Hunger Games here. I'm just reminding you to secure your own oxygen mask first before assisting others.

One thing I haven't heard many people talking about is that coding agents can more or less cure RSI after decades of mashing out programs by hand.

Not saying programmers will be remembered as coal miners or 9/11 heroes or anything, but I'm not not saying that.


Pointed Claude Cowork at two Japanese lease applications in Excel. It asked me a few questions, then filled both perfectly. (Better than I could—when I did the same via Excel for macOS, I broke the lookup formulas.)

Welcome to the identity crisis, fellow office workers!


Bought a bag of pears 3 weeks ago and they're still hard as rocks. Really looking forward to that 8-hour window next week when they all simultaneously ripen before rotting the following day.


Breaking Change v51 - Praise-bomb

Video of this episode is up on YouTube.

One year into living with Tesla Full Self-Driving and: it's good.

The improvement from v13 to v14 is remarkable. Tap Start and it pulls out of the garage, drives you, and parks itself. Another tap and it drives home, opens the garage, and parks itself. More advanced than people assume.


How many billions of dollars does Anthropic need to update their apps to preserve newlines on paste? Absolutely bizarre how bad their apps are.


I'm developing apps for Apple platforms for the first time in 16 years, so stop me if this is nuts, but the best feedback loops I've managed are when I've made the Mac build the primary one. No simulator jank. No waiting on devices.

Claude Code doing a better job self-verifying.


PSA: iPhone Air's microphone is located on the left side of the bottom edge of the device, AKA where a right-handed person's pinky would naturally rest when gripping the phone with one hand.

Anyway, that's why all your videos sound like shit. You're holding it wrong.


One happy accident of the fact that Claude Code Opus 4.6 turns seem to reliably take 3-5 minutes in my experience is that it's proving to be the perfect companion to a strength training workout.

I just write prompts between sets.


Free idea: hyperbolic_links.

Let users create symbolic links mapped to HTTP resources. Since you can't literally link a file to a URL without changing the actual file system, hyperbolic would centrally handle the journaling, caching/ETags, and updates.
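
For what it's worth, here's a minimal sketch of one way that could look, in Python with only the standard library. Everything here is an assumption on my part: the hypothetical link and sync functions, the ~/.hyperbolic/journal.json location, all of it. It's just one way to make the journaling and ETag-caching idea concrete:

    # Hypothetical sketch of "hyperbolic links": a journal maps local file
    # paths to URLs, and sync() refreshes each file, using ETags to skip
    # resources that haven't changed. Names and paths are invented.
    import json
    import pathlib
    import urllib.error
    import urllib.request

    JOURNAL = pathlib.Path.home() / ".hyperbolic" / "journal.json"

    def _load_journal() -> dict:
        return json.loads(JOURNAL.read_text()) if JOURNAL.exists() else {}

    def _save_journal(journal: dict) -> None:
        JOURNAL.parent.mkdir(parents=True, exist_ok=True)
        JOURNAL.write_text(json.dumps(journal, indent=2))

    def link(path: str, url: str) -> None:
        """Register a local file as a 'hyperbolic link' to an HTTP resource."""
        journal = _load_journal()
        journal[str(pathlib.Path(path).expanduser())] = {"url": url, "etag": None}
        _save_journal(journal)

    def sync() -> None:
        """Refresh every linked file; a 304 means the local copy is current."""
        journal = _load_journal()
        for path, entry in journal.items():
            request = urllib.request.Request(entry["url"])
            if entry.get("etag"):
                request.add_header("If-None-Match", entry["etag"])
            try:
                with urllib.request.urlopen(request) as response:
                    pathlib.Path(path).write_bytes(response.read())
                    entry["etag"] = response.headers.get("ETag")
            except urllib.error.HTTPError as error:
                if error.code != 304:
                    raise
        _save_journal(journal)

Usage would be something like link("~/notes/weather.json", "https://example.com/weather.json") followed by a sync() on a timer or a filesystem event.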
