justin.searls.co

Which of your colleagues are screwed?

I've been writing about how AI is likely to affect white-collar (or no-collar or hoodie-wearing) computer programmers for a while now, and one thing is clear: whether someone feels wildly optimistic or utterly hopeless about AI says more about their priors than their prospects. In particular, many of the people I already consider borderline unemployable managed to read Full-breadth Developers and take away that they actually have nothing to worry about.

So instead of directing the following statements at you, let's target our judgment toward your colleagues. Think about a random colleague you don't feel particularly strongly about as you read the following pithy and reductive bullet points. Critically appraise how they show up to work through the entire software delivery process. These represent just a sample of observations I've made about developers who are truly thriving so far in the burgeoning age of AI code generation tools.

That colleague you're thinking about? They're going to be screwed if they exhibit:

  • Curiosity without skepticism
  • Strategy without experiments
  • Ability without understanding
  • Productivity without urgency
  • Creativity without taste
  • Certainty without evidence

But that's not all! You might be screwed too. Maybe ask one of your less-screwed colleagues to rate you.

Keep hearing about Finntech and how much money people are making, but never hear anything about tech startups in the other Nordic countries. Does Norway not have as many programmers?


Star Wars: The Gilroy Order

UPDATE: To my surprise and delight, Rod saw this post and endorsed this watch order.

I remember back when Rod Hilton suggested The Machete Order for introducing others to the Star Wars films, and how I struggled to find fault with it. Well, since then there have been five theatrical releases and a glut of streaming series. And tonight, as the credits rolled on Return of the Jedi, I had the thought that an even better watch order has emerged for those just now being exposed to the franchise.

Becky and I first started dating somewhere between the release of Attack of the Clones and Revenge of the Sith and—no small measure of her devotion—she's humored me by seeing each subsequent Star Wars movie in theaters, despite having no interest in the films and little idea what was going on. Get yourself a girl who'll watch half a dozen movies that mildly repulse her, fellas.

Hell, when we were living in Japan, I missed that 吹替 ("dubbed") was printed on our tickets and she wound up sitting through the entirety of The Rise of Skywalker with Japanese voiceovers and no subtitles to speak of. When we walked out, she told me that she (1) was all set with Star Wars movies for a while, and (2) suspected the incomprehensibility of the Japanese dub had probably improved the experience, on balance.

That all changed when she decided to give Andor a chance. See, if you're not a Star Wars fan, Tony Gilroy's Andor series is unique in the franchise for being actually good. Like, it's seriously one of the best TV shows to see release in years. After its initial three-episode arc, Becky was fully on board for watching both of its 12-episode seasons. And the minute we finished Season 2, she was ready to watch Rogue One with fresh eyes. ("I actually have a clue what's going on now.") And, of course, with the way Rogue One leads directly into the opening scene of A New Hope, we just kept rolling from there.

Following this experience, I'd suggest sharing Star Wars with your unsuspecting loved ones in what I guess I'd call The Gilroy Order:

  1. Andor (seasons 1 and 2)
  2. Rogue One
  3. A New Hope
  4. The Empire Strikes Back
  5. Return of the Jedi

If, at this point, you're still on speaking terms with said loved ones, go ahead and watch the remaining Star Wars schlock in whatever order you want. Maybe you go straight to The Mandalorian. Maybe you watch The Force Awakens just so you can watch the second and final film of the third trilogy, The Last Jedi. Maybe you quit while you're ahead and wait for Disney to release anything half as good as Andor ever again. (Don't hold your breath.)

Anyway, the reason I'm taking the time to propose an alternative watch order at all is an expression of the degree to which I am utterly shocked that my wife just watched and enjoyed so many Star Wars movies after struggling to tolerate them for the first two decades of our relationship. I'm literally worried I might have broken her.

But really, it turned out that all she needed was for a genuinely well-crafted narrative to hook her, and Andor is undeniably the best ambassador the franchise currently has.

Interesting analysis of the distinctiveness of the Japanese Web. The biggest cause in my mind has always been the bottleneck effect: Japan's web developed, and remains, more isolated than that of any other "free" nation.

If every non-Japanese website disappeared tomorrow, many Japanese would go literal months without noticing. THAT's why its web is different. sabrinas.space


How to generate dynamic data structures with Apple Foundation Models

Over the past few days, I got really hung up in my attempts to generate data structures using Apple Foundation Models for which the exact shape of that data wasn't known until runtime. The new APIs actually provide for this capability via DynamicGenerationSchema, but the WWDC sessions and sample code were too simple to follow this thread end-to-end:

  1. Start with a struct representing a PromptSet: a variable set of prompts that will either map onto or be used to define the ultimate response data structure 🔽
  2. Instantiate a PromptSet with—what else?—a set of prompts to get the model to generate the sort of data we want 🔽
  3. Build out a DynamicGenerationSchema based on the contents of a given PromptSet instance 🔽
  4. Create a struct that can accommodate the variably-shaped data with as much type safety as possible and which conforms to ConvertibleFromGeneratedContent, so it can be instantiated by passing a LanguageModelSession response's GeneratedContent 🔽
  5. Pull it all together and generate some data with the on-device foundation models! 🔽

Well, it took me all morning to get this to work, but I did it. Since I couldn't find a single code example that did anything like this, I figured I'd share this write-up. You can read the code as a standalone Swift file or otherwise follow along below.

1. Define a PromptSet

Start with whatever code you need to represent the set(s) of prompts you'll be dealing with at runtime. (Maybe they're defined by you and ship with your app, maybe you let users define them through your app's UI.) To keep things minimal, I defined this one with a couple of mandatory fields and a variable number of custom ones:

struct EducationalPromptSet {
  let type: String
  let instructions: String

  let name: String
  let description: String

  let summaryGuideDescription: String
  let confidenceGuideDescription: String
  let subComponents: [SubComponentPromptSet]
}

struct SubComponentPromptSet {
  let title: String
  let bodyGuideDescription: String
}

Note that rather than modeling the data itself, the purpose of these structs is to model the set of prompts that will ultimately drive the creation of the schema which will, in turn, determine the shape and contents of the data we get back from the Foundation Models API. To drive this home, whatever goes in summaryGuideDescription, confidenceGuideDescription, and bodyGuideDescription should themselves be prompts to guide the generation of like-named type-safe values.

Yes, it is very meta.

2. Instantiate our PromptSet

Presumably, we could decode some JSON, whether loaded from a file or received over the network, to populate this EducationalPromptSet. Here's an example set of prompts for generating cocktail recipes, expressed as sample code:

let cocktailPromptSet = EducationalPromptSet(
  type: "bartender_basic",
  instructions: """
    You are an expert bartender. Take the provided cocktail name or list of ingredients and explain how to make a delicious cocktail. Be creative!
    """,

  name: "Cocktail Recipe",
  description: "A custom cocktail recipe, tailored to the user's input and communicated in an educational tone and spirit",
  summaryGuideDescription: "The summary should describe the history (if applicable) and taste profile of the cocktail",
  confidenceGuideDescription: "Range between 0-100 for your confidence in the feasibility of this cocktail based on the prompt",
  subComponents: [
    SubComponentPromptSet(title: "Ingredients", bodyGuideDescription: "A list of all ingredients in the cocktail"),
    SubComponentPromptSet(title: "Steps", bodyGuideDescription: "A list of the steps to make the cocktail"),
    SubComponentPromptSet(title: "Prep", bodyGuideDescription: "The bar prep you should have completed in advance of service")
  ]
)

You can see that the provided instruction, description, and each guide description really go a long way to specify what kind of data we are ultimately looking for here. This same format could just as well be used to specify an EducationalPromptSet for calculus formulas, Japanese idioms, or bomb-making instructions.

3. Build a DynamicGenerationSchema

Now, we must translate our prompt set into a DynamicGenerationSchema.

Why DynamicGenerationSchema and not the much simpler, defined-at-compile-time GenerationSchema that's expanded by the @Generable macro? Because reasons:

  1. We only know the prompts (in API parlance, "Generation Guide descriptions") at runtime, and the @Guide macro must be specified statically
  2. We don't know how many subComponents a prompt set instance will specify in advance
  3. While subComponents may ultimately redound to an array of strings, that doesn't mean they represent like concepts that could be generated by a single prompt (as an array of ingredient names might). Rather, each subComponent is effectively the answer to a different, unknowable-at-compile-time prompt of its own
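The ruled-out static approach, for contrast, would look something like this sketch (the type name and guide strings here are illustrative, not from any actual app):

```swift
import FoundationModels

// The compile-time alternative: @Guide descriptions must be string
// literals baked into the source, so runtime-defined prompt sets
// simply can't be expressed this way.
@Generable
struct StaticCocktailResult {
  @Guide(description: "The history and taste profile of the cocktail")
  let summary: String

  @Guide(description: "Confidence, 0-100, in the feasibility of this cocktail")
  let confidence: Int
}
```

Since those @Guide strings have to be known when the macro expands, there's no way to feed them a prompt set that only materializes at runtime.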

As for building the DynamicGenerationSchema, you can break this up into two roots and have the parent reference the child, but after experimenting, I preferred just specifying it all in one go. (One reason not to get too clever about extracting these is that DynamicGenerationSchema.Property is not Sendable, which can easily lead to concurrency-safety violations).

This looks like a lot because this API is verbose as fuck, forcing you to oscillate between nested schemas and properties and schemas:

let cocktailSchema = DynamicGenerationSchema(
  name: cocktailPromptSet.name,
  description: cocktailPromptSet.description,
  properties: [
    DynamicGenerationSchema.Property(
      name: "summary",
      description: cocktailPromptSet.summaryGuideDescription,
      schema: DynamicGenerationSchema(type: String.self)
    ),
    DynamicGenerationSchema.Property(
      name: "confidence",
      description: cocktailPromptSet.confidenceGuideDescription,
      schema: DynamicGenerationSchema(type: Int.self)
    ),
    DynamicGenerationSchema.Property(
      name: "subComponents",
      schema: DynamicGenerationSchema(
        name: "subComponents",
        properties: cocktailPromptSet.subComponents.map { subComponentPromptSet in
          DynamicGenerationSchema.Property(
            name: subComponentPromptSet.title,
            description: subComponentPromptSet.bodyGuideDescription,
            schema: DynamicGenerationSchema(type: String.self)
          )
        }
      )
    )
  ]
)
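For completeness, the two-root alternative mentioned above might look roughly like this (a sketch, assuming the referenceTo: initializer resolves the child schema by name via GenerationSchema's dependencies parameter):

```swift
// Sketch of the two-root approach: define subComponents as its own
// schema, then reference it by name from the parent.
let subComponentsSchema = DynamicGenerationSchema(
  name: "SubComponents",
  properties: cocktailPromptSet.subComponents.map { subComponentPromptSet in
    DynamicGenerationSchema.Property(
      name: subComponentPromptSet.title,
      description: subComponentPromptSet.bodyGuideDescription,
      schema: DynamicGenerationSchema(type: String.self)
    )
  }
)

let rootSchema = DynamicGenerationSchema(
  name: cocktailPromptSet.name,
  description: cocktailPromptSet.description,
  properties: [
    // ... the "summary" and "confidence" properties, exactly as above ...
    DynamicGenerationSchema.Property(
      name: "subComponents",
      schema: DynamicGenerationSchema(referenceTo: "SubComponents")
    )
  ]
)

// The child must then ride along as a dependency when finalizing:
// GenerationSchema(root: rootSchema, dependencies: [subComponentsSchema])
```

Whether that's clearer is a matter of taste; it does mean remembering to pass the child schema in the dependencies array, or resolution will fail at runtime.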

4. Define a result struct that conforms to ConvertibleFromGeneratedContent

When conforming to ConvertibleFromGeneratedContent, a type can be instantiated with nothing more than the GeneratedContent returned from a language model response.

There is a lot going on here. Code now, questions later:

struct EducationalResult : ConvertibleFromGeneratedContent {
  let summary: String
  let confidence: Int
  let subComponents: [SubComponentResult]

  init(_ content: GeneratedContent) throws {
    summary = try content.value(String.self, forProperty: "summary")
    confidence = try content.value(Int.self, forProperty: "confidence")
    let subComponentsContent = try content.value(GeneratedContent.self, forProperty: "subComponents")
    let properties: [String: GeneratedContent] = {
      if case let .structure(properties, _) = subComponentsContent.kind {
        return properties
      }
      return [:]
    }()
    subComponents = try properties.map { (title, bodyContent) in
      try SubComponentResult(title: title, body: bodyContent.value(String.self))
    }
  }
}

struct SubComponentResult {
  let title: String
  let body: String
}

That init is doing the Lord's work here, because Apple's documentation really fell down on the job this time. See, through OS 26 beta 4, if you had a GeneratedContent, you could simply iterate over a dictionary of its properties or an array of its elements. These APIs, however, appear to have been removed in OS 26 beta 5. I say "appear to have been removed" because Apple shipped Xcode 26 beta 5 with outdated developer documentation that continues to suggest they should exist and fails to include beta 5's newly-added GeneratedContent.Kind enum. Between this and the lack of any example code or blog posts, I spent most of today wondering whether I'd lost my goddamn mind.

Anyway, good news: you can iterate over a dynamic schema's collection of properties of unknown name and size by unwrapping the response.content.kind enumerator. In my case, I know my subComponents will always be a structure, because I'm the guy who defined my schema and the nice thing about the Foundation Models API is that its responses always, yes, always adhere to the types specified by the requested schema, whether static or dynamic.

So let's break down what went into deriving the value's subComponents property.

We start by fetching a nested GeneratedContent from the top-level property named subComponents with content.value(GeneratedContent.self, forProperty: "subComponents").

Next, this little nugget assigns to properties a dictionary mapping String keys to GeneratedContent values by unwrapping the properties from the kind enumerator's structure case, and defaulting to an empty dictionary in the event we get anything unexpected:

let properties: [String: GeneratedContent] = {
  if case let .structure(properties, _) = subComponentsContent.kind {
    return properties
  }
  return [:]
}()

Finally, we build out our result struct's subComponents field by mapping over those properties.

subComponents = try properties.map { (title, bodyContent) in
  try SubComponentResult(title: title, body: bodyContent.value(String.self))
}

Two things are admittedly weird about that last bit:

  1. I got a little lazy here by using each sub-component's title as the name of the corresponding generated property. Since the property name gets fed into the LLM, one can imagine that a meaningful title only improves the results. Based on my experience so far, the name of a field greatly influences what kind of data you get back from the on-device foundation models.
  2. The bodyContent itself is a GeneratedContent that we know to be a string (again, because that's what our dynamic schema specifies), so we can safely demand one back using its value(Type) method.
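Incidentally, if a property really were a homogeneous list (say, an array of ingredient names), you could declare it with an array schema and unwrap the matching case of the kind enumerator the same way. A sketch, assuming DynamicGenerationSchema's arrayOf: initializer and the array case of GeneratedContent.Kind (the "ingredientNames" property is hypothetical):

```swift
// Declaring an array-of-strings property in the dynamic schema:
let ingredientNamesProperty = DynamicGenerationSchema.Property(
  name: "ingredientNames",
  description: "A list of every ingredient name in the cocktail",
  schema: DynamicGenerationSchema(
    arrayOf: DynamicGenerationSchema(type: String.self)
  )
)

// ...and unwrapping it from the response, mirroring the .structure dance:
let ingredientsContent = try content.value(GeneratedContent.self, forProperty: "ingredientNames")
let ingredientNames: [String] = try {
  if case let .array(elements) = ingredientsContent.kind {
    return try elements.map { try $0.value(String.self) }
  }
  return []
}()
```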

5. Pull it all together

Okay, the moment of truth. This shit compiles, but will it work? At least as of OS 26 betas 5 & 6: yes!

My aforementioned Swift file ends with a #Playground you can futz with in Xcode 26 and navigate the results interactively. Just three more calls to get your cocktail:

import Playgrounds
#Playground {
  let session = LanguageModelSession {
    cocktailPromptSet.instructions
  }

  let response = try await session.respond(
    to: "Shirley Temple",
    schema: GenerationSchema(root: cocktailSchema, dependencies: [])
  )

  let cocktailResult = try EducationalResult(response.content)
}

The above yielded this response:

EducationalResult(
  summary: "The Shirley Temple is a classic and refreshing cocktail that has been delighting children and adults alike for generations. It's known for its simplicity, sweet taste, and vibrant orange hue. Made primarily with ginger ale, it's a perfect example of a kid-friendly drink that doesn't compromise on flavor. The combination of ginger ale and grenadine creates a visually appealing and sweet-tart beverage, making it a staple at parties, brunches, and any occasion where a fun and easy drink is needed.",
  confidence: 100,
  subComponents: [
    SubComponentResult(title: "Steps", body: "1. In a tall glass filled with ice, pour 2 oz of ginger ale. 2. Add 1 oz of grenadine carefully, swirling gently to combine. 3. Garnish with an orange slice and a cherry on top."),
    SubComponentResult(title: "Prep", body: "Ensure you have fresh ginger ale and grenadine ready to go."),
    SubComponentResult(title: "Ingredients", body: "2 oz ginger ale, 1 oz grenadine, Orange slice, Cherry")
])

The best part? I can only generate "Shirley Temple" drinks because whenever I ask for an alcoholic cocktail, it trips the on-device models' safety guardrails and refuses to generate anything.

Cool!

This was too hard

I've heard stories about Apple's documentation being bad, but never about it being straight-up wrong. Live by the beta, die by the beta, I guess.

In any case, between the documentation snafu and Claude Code repeatedly shitting the bed trying to guess its way through this API, I'm actually really grateful I was forced to buckle down and learn me some Swift.

Let me know if this guide helped you out! 💜

I don't wish them ill, but the stock price of Duolingo (and that entire class of language-learning apps) hasn't made a lick of sense since ChatGPT's release. It's just going to take a single LLM-based product to obviate the entire business model yro.slashdot.org/story/25/08/17/194212/duolingos-stock-down-38-plummets-after-openais-gpt-5-language-app-building-demo

Duolingo's Stock Down 38%, Plummets After OpenAI's GPT-5 Language App-Building Demo - Slashdot

v42 - Free as in Remodel

Breaking Change

Video of this episode is up on YouTube:

A group of Italian-American feminists should buy an island off the Amalfi coast to establish a women-only community and call it Old Country for No Men.


You know that meme where the best developers actually wind up deleting more lines of code than they add?

The more time I spend wrangling agentic codegen tools, the more the task feels like chiseling rather than sculpting. I suspect the deleters are better poised for this moment.


Shout for DANGER

Free idea for anyone who wants it.

I've been juggling so many LLM-based editors and CLI tools that I've started collecting them into meta scripts like this shell-completion-aware edit dingus that I use for launching into my projects each day.

Because many of these CLIs have separate "safe" and "for real though" modes, I've picked up the convention of giving the editor name in ALL CAPS to mean "give me dangerous mode, please."

So:

$ edit -e claude posse_party

Will open Claude Code in ~/code/searls/posse_party in normal mode.

And:

$ edit -e CLAUDE posse_party

Will do the same, while also passing the --dangerously-skip-permissions flag, which I refuse to type.
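In case it's useful, here's a minimal sketch of how that ALL-CAPS check might work in a shell function (a hypothetical helper, not the actual edit script; the flag shown is Claude Code's, and other CLIs would need their own):

```shell
# Hypothetical sketch of the ALL-CAPS convention: emit the "dangerous
# mode" flag only when the editor name was given entirely in uppercase.
dangerous_flags() {
  editor="$1"
  lowered="$(printf '%s' "$editor" | tr '[:upper:]' '[:lower:]')"
  uppered="$(printf '%s' "$editor" | tr '[:lower:]' '[:upper:]')"
  # All caps (and not a caseless string) means: live dangerously
  if [ "$editor" = "$uppered" ] && [ "$editor" != "$lowered" ]; then
    echo "--dangerously-skip-permissions"
  fi
}
```

The wrapper would then launch the lowercased command name with whatever that function emits, so `edit -e CLAUDE` gets the flag and `edit -e claude` doesn't.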

A few days back, I linked to Scott Werner's clever insight that—rather than fear the mess created by AI codegen—we should think through the flip side: an army of robots working tirelessly to clean up our code has the potential to bring the carrying cost of technical debt way down, akin to the previous decade's zero-interest rate phenomenon (ZIRP). Scott was inspired by Orta Therox's retrospective on six weeks of Claude Code at Puzzmo, which Orta himself wrote after reading my own Full-breadth Developers post.

Blogging is so back!

If you aren't familiar with Brian Marick, he's a whip-smart thinker with a frustrating knack for making contrarian points that are hard to disagree with. He saw my link and left this comment on Scott's blog post about technical debt and ZIRP. The whole comment is worth reading and should have top-billing as a post in its own right, so I figured I'd highlight it here:

The problem with a ZIRP is that those questions are b-o-r-i-n-g and you can't compete with those who skip them. You're out of business before they crash. ("The market can remain irrational longer than you can remain solvent.")

Similarly, there's a collective action problem. Our society is structured such that when the optimists' predictions go wrong, they don't pay for their mistakes – rather society as a whole does. See housing derivatives in 2008, the Asian financial crisis of the late '90s, etc. ZIRP makes it cheaper to be an optimist, but someone else pays the bill for failure (Silicon Valley Bank, Savings and Loan crisis)

It's weird to see ZIRP touted as a model, given the incredible overspending that took place, which had to be clawed back once ZIRP went away. (Most notably in tech layoffs, but I'm more concerned about all the small companies that were crushed because of financials, not because of the merit of their products.)

Brian made me ashamed to admit that I had read Scott's post as an exclusively good thing, despite the fact that on a macro level, he's absolutely right: the excesses of irrational exuberance and their unfair consequences are definitely net-harmful to society. No argument there. Someone should absolutely get on that and, of course, literally no one will.

Why am I unbothered? Because as a customer, I am happy to ride a ZIRP wave for my own personal benefit. That way, even if the world burns in the end, at least I got something out of it. Last time around, I benefited from a shitload of free cloud compute, cheap taxi rides, subsidized meal services, and credit card reward arbitrage in the 2010s—even as I made sure to direct my investment portfolio towards businesses that actually, you know, made money. So it is today: the tech industry has made a nigh-infinite number of GPUs available at remarkably low prices, and I'm just some dipshit customer who is more than happy to allow investors to subsidize my usage. At the moment, I'm paying $200/month for Claude Max which admittedly feels like a bit of a stretch, until I check ccusage and realize I've burned over $4500 worth of API tokens in the last 30 days.

And, unreliable and frustrating as they may be, I'm still seeing a ton of personal value from the current crop of LLM-based tools overall. As long as that's the case, I suppose I'll keep doing whatever best assists me in achieving my goals.

Is any of this sustainable? Unlikely. Are we all cooked? Probably! But as Brian says, this is a collective action problem. I'm not going to be the one to fix it. And while I greatly admire the spirit of those who would gladly spend years of their lives as activists to also not fix it, I've got other shit I'd rather do.

My only real medium-to-long-term hope is that the local LLM scene continues to mature and evolve so as to hedge the possibility that the AI cloud subsidy disappears and all these servers get turned off. So long as this class of tools continues to be available to those who buy fancy Apple products, how I personally approach software development will be forever changed.

(h/t to Tim Dussinger for reminding me to link to Brian's commentary.)

Personally, I was inclined to doubt the GPT-5 haters, but I've gotta say: this thing reminds me more of 3.5-turbo. Asking about Xcode 26 just gets me a full page of explanation that this hypothetical IDE that's been out for 2 months doesn't exist. (That's WITH search enabled!)


Letting go of autonomy

I recently wrote that I'm inspecting everything I thought I knew about software. In this new era of coding agents, what have I held firm that's no longer relevant? Here's one area where I've completely changed my mind.

I've long been an advocate for promoting individual autonomy on software teams. At Test Double, we founded the company on the belief that greatness depended on trusting the people closest to the work to decide how best to do the work. We'd seen what happens when the managerial class has the hubris to assume they know better than someone who has all the facts on the ground.

This led to me very often showing up at clients and pushing back on practices like:

  • Top-down mandates governing process, documentation, and metrics
  • Onerous git hooks that prevented people from committing code until they'd jumped through a preordained set of hoops (e.g. blocking commits if code coverage dropped, if the build slowed down, etc.)
  • Mandatory code review and approval as a substitute for genuine collaboration and collective ownership

More broadly, if technical leaders created rules without consideration for reasonable exceptions and without regard for whether it demoralized their best staff… they were going to hear from me about it.

I've lost track of how many times I've said something like, "if you design your organization to minimize the damage caused by your least competent people, don't be surprised if you minimize the output of your most competent people."

Well, never mind all that

Lately, I find myself mandating a lot of quality metrics, encoding them into git hooks, and insisting on reviewing and approving every line of code in my system.

What changed? AI coding agents are the ones writing the code now, and the long-term viability of a codebase absolutely depends on establishing and enforcing the right guardrails within which those agents should operate.

As a result, my latest project is full of:

  • Authoritarian documentation dictating what I want from each coder with granular precision (in CLAUDE.md)
  • Patronizing step-by-step instructions telling coders how to accomplish basic tasks, repeated each and every time I ask them to carry out the task (as custom slash commands)
  • Ruthlessly rigid scripts that can block the coder's progress and commits (whether as git hooks or Claude hooks)
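As a sketch of that last bullet, a guardrail script doesn't need to be fancy, just unforgiving (which checks you'd actually run are placeholders for your project's own commands):

```shell
# Hypothetical guardrail runner: execute every check in order and
# refuse to proceed the moment any one of them fails. Wire it into a
# pre-commit hook (or a Claude hook) with your project's real commands.
run_checks() {
  for check in "$@"; do
    if ! $check; then
      echo "BLOCKED: '$check' failed" >&2
      return 1
    fi
  done
  echo "All checks passed"
}

# e.g. run_checks "npm test" "npx eslint ." from .git/hooks/pre-commit
```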

Everything I believe about autonomy still holds for human people, mind you. Undermining people's agency is indeed counterproductive if your goal is to encourage a sense of ownership, leverage self-reliance to foster critical thinking, and grow through failure. But coding agents are (currently) inherently ephemeral, trained generically, and impervious to learning from their mistakes. They need all these guardrails.

All I would ask is this: if you, like me, are constructing a bureaucratic hellscape around your workspace so as to wrangle Claude Code or some other agent, don't forget that your human colleagues require autonomy and self-determination to thrive and succeed. Lay down whatever gauntlet you need to for your agent, but give the humans a hall pass.