A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)
A few days back, I linked to Scott Werner's clever insight that—rather than fear the mess created by AI codegen—we should think through the flip side: an army of robots working tirelessly to clean up our code has the potential to bring the carrying cost of technical debt way down, akin to the previous decade's zero interest-rate policy (ZIRP) era. Scott was inspired by Orta Therox's retrospective on six weeks of Claude Code at Puzzmo, which Orta himself wrote after reading my own Full-breadth Developers post.
Blogging is so back!
If you aren't familiar with Brian Marick, he's a whip-smart thinker with a frustrating knack for making contrarian points that are hard to disagree with. He saw my link and left this comment on Scott's blog post about technical debt and ZIRP. The whole comment is worth reading and deserves top billing as a post in its own right, so I figured I'd highlight it here:
The problem with a ZIRP is that those questions are b-o-r-i-n-g and you can't compete with those who skip them. You're out of business before they crash. ("The market can remain irrational longer than you can remain solvent.")
Similarly, there's a collective action problem. Our society is structured such that when the optimists' predictions go wrong, they don't pay for their mistakes – rather society as a whole does. See housing derivatives in 2008, the Asian financial crisis of the late '90s, etc. ZIRP makes it cheaper to be an optimist, but someone else pays the bill for failure (Silicon Valley Bank, Savings and Loan crisis)
It's weird to see ZIRP touted as a model, given the incredible overspending that took place, which had to be clawed back once ZIRP went away. (Most notably in tech layoffs, but I'm more concerned about all the small companies that were crushed because of financials, not because of the merit of their products.)
Brian's comment made me ashamed to admit that I had read Scott's post as an exclusively good thing, because on a macro level he's absolutely right: the excesses of irrational exuberance and their unfair consequences are definitely net-harmful to society. No argument there. Someone should absolutely get on that and, of course, literally no one will.
Why am I unbothered? Because as a customer, I am happy to ride a ZIRP wave for my own personal benefit. That way, even if the world burns in the end, at least I got something out of it. Last time around, I benefited from a shitload of free cloud compute, cheap taxi rides, subsidized meal services, and credit card reward arbitrage in the 2010s—even as I made sure to direct my investment portfolio towards businesses that actually, you know, made money. So it is today: the tech industry has made a nigh-infinite number of GPUs available at remarkably low prices, and I'm just some dipshit customer who is more than happy to allow investors to subsidize my usage. At the moment, I'm paying $200/month for Claude Max, which admittedly feels like a bit of a stretch, until I check ccusage and realize I've burned over $4,500 worth of API tokens in the last 30 days.
And, unreliable and frustrating as they may be, I'm still seeing a ton of personal value from the current crop of LLM-based tools overall. As long as that's the case, I suppose I'll keep doing whatever best assists me in achieving my goals.
Is any of this sustainable? Unlikely. Are we all cooked? Probably! But as Brian says, this is a collective action problem. I'm not going to be the one to fix it. And while I greatly admire the spirit of those who would gladly spend years of their lives as activists to also not fix it, I've got other shit I'd rather do.
My only real medium-to-long-term hope is that the local LLM scene continues to mature and evolve so as to hedge against the possibility that the AI cloud subsidy disappears and all these servers get turned off. So long as this class of tools remains available to those who buy fancy Apple products, how I personally approach software development will be forever changed.
(h/t to Tim Dussinger for reminding me to link to Brian's commentary.)
Scott Werner, who is frustratingly good at writing what I'm thinking about LLMs, has a new post out where he compares being an "agentic" coder to being an octopus, each arm a separate instance of Claude Code thinking and acting on its own. It's a good post and you should read it.
In the middle, he says the thing that first came to mind for me when I saw the image of the octopus in this context:
Here's the thing about teams now:
Two developers on one codebase is like two octopuses sharing one coral reef. Technically possible. Practically ridiculous. Everybody's arms getting tangled. Ink everywhere. The coral is screaming (coral doesn't scream, but work with me here).
But one octopus with eight projects? That's just nature.
The more time I spend with coding agents, the more I become convinced that they are damn-near incompatible with working in teams. I've suggested this before, but I really think more people should be chewing on this. The bottleneck for software teams—the thing that's always made them less than the sum of their parts—is the handshake problem. It's the one thing from The Mythical Man-Month everyone remembers: "Adding manpower to a late software project makes it later."
For the last 50 years, this has been (quite reasonably) understood in terms of the number of humans on a team: the count of relationships between those humans in an organization's design could be used to compute an approximate productivity tax on the collective's broader effort to encode some kind of intention into software.
If we have 8 humans working on a software project, we have (8 × 7) ÷ 2 = 28 relationships to manage, with each individual capable of (and burdened by) the need to bidirectionally seek shared understanding and consensus with one another for the purpose of coordinating their efforts.
Now imagine a team of 8 humans each juggling 8 sub-agents on a project. The figure balloons to (64 × 63) ÷ 2 = 2,016 relationships. In the good ol' days of 2022, this quadratic increase in communication cost for squaring the size of the team would have been enough to give people pause on its own. But when ⅞ of the team are AI agents, it adds an all-new wrinkle to the math: 1,764 of those connections are unidirectional. The agents can receive information and instruction, but they lack durable institutional memory, they can't pipe up in meetings, and each has its view of the world bottlenecked behind a single operator. Every one of those complications has the opportunity to dramatically compound the already-really-quite-bad errors we typically associate with large software teams. This is made even worse by the fact that a manager has no observable signal that their team's composition has changed so radically—they'll walk into the room and see the same eight nerds staring at their computers as ever before.
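If you want to check my napkin math, it's just the handshake formula: n actors can form n × (n - 1) ÷ 2 unique pairs. A quick Ruby sketch (the ⅞ split is my rough framing above, which loosely counts each human-plus-agents cluster as 8 actors):

```ruby
# The handshake problem: n actors can form n * (n - 1) / 2 unique pairs.
def relationships(n)
  n * (n - 1) / 2
end

relationships(8)  # => 28    (8 humans)
relationships(64) # => 2016  (the same team "squared" to 64 actors)

# Treating 7 of every 8 actors as an agent, my rough cut of the
# one-way connections is seven-eighths of the total:
relationships(64) * 7 / 8 # => 1764
```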
My theory on why this issue hasn't already triggered productivity meltdowns is a happy accident of circumstance: the people currently trailblazing multi-agent workflows in earnest are highly-engaged, driven programmers—the go-getting early adopters who were using Rails in 2005 and Node.js in 2009. The median team of eight engineers may not have even one such developer—which means we haven't seen what deploying coding agents at scale will really look like yet. My prediction is that as these tools continue to go mainstream, things will go about as well as if you threw a dozen octopuses into an aquarium together.
If none of this is quite clicking with you, think of it this way. Team A has 8 programmers in a room working on a project. Team B has 8 technical analysts, each managing a separate sub-team of 8 offshore developers somewhere in South Asia, replete with all the time-zone and communication constraints that arrangement imposes. We have a lot of data indicating that Team B is in for a bad fucking time, but that scenario is effectively what mainstream adoption of coding agents as they exist today would represent.
Anyway, if your entire team is working to keep their coding agents' hoppers full (subagents and all, juggling multiple tasks at a time), what is your effective team size by this measure? Am I wrong here? Is everything actually going great? Is coordination not suddenly much harder than it was before? Let me know.
Consider this one of a thousand signposts I'll erect for the sake of anyone on the journey to becoming a full-breadth developer. What's discussed below is exactly the sort of thing that will separate the people who successfully wrangle coding agents from the people doomed to be replaced by them.
This post by Jared Norman about the order in which we design computer programs got stuck in my craw as I was reading it:
When you build anything with code, you start somewhere. Knowing where you should start can be hard. This problem contributes to the difficulty of teaching programming. Many programmers can't even tell you how they decide where they start. It turns out that thinking of somewhere you could start and starting there is good enough for a lot of people.
Relatable.
Back when people talked about test-driven development, I spilled a lot of ink discussing "bottom-up" (Detroit-school) versus "outside-in" (London-school) design and the benefits and drawbacks of each. Both because outside-in TDD was more obscure and because it's a better fit for most application development, I exerted far too much effort building mocking libraries and teaching people how to use them well.
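If you missed that era, here's the gist in miniature. This is an RSpec-flavored sketch with invented names (CheckoutPresenter and PriceFetcher aren't from any real project), not code from my libraries:

```ruby
require "rspec/autorun"

# Outside-in ("London-school") TDD in miniature: the test fakes a
# collaborator (PriceFetcher) that doesn't exist yet, so its interface
# gets designed from the caller's point of view. Running this fails
# until you write CheckoutPresenter, and that's the point.
RSpec.describe "CheckoutPresenter" do
  it "formats the price its fetcher returns" do
    fetcher = double("PriceFetcher", price_in_cents: 4999)
    presenter = CheckoutPresenter.new(fetcher)

    expect(presenter.display_price("sku-123")).to eq("$49.99")
  end
end

# A bottom-up ("Detroit-school") practitioner would instead start by
# test-driving PriceFetcher itself with real objects and build upward.
```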
Whether that work had any lasting impact, who's to say. It kept me busy.
In the broader scope of software development, the discussion of where the fuck to even start when programming a computer can take many forms. Most often, the debate comes down to type systems and the degree to which somebody is obsessed with or repelled by them.
At the end of the day, every program is just a recipe. Some number of ingredients being mixed together over some number of steps. The particular order in which you write the recipe doesn't really matter. Instead, what matters is that you think deeply and carefully consider your approach. The ideal order is whatever will prompt the right thought at the right moment to do the right thing to produce the right solution. The best approach is ever-changing. It will vary from person to person and situation to situation and will change from day to day.
But it's easier to tell people to follow a prescriptive, one-size-fits-most solution, like adopting London-school TDD or leaning on your type system.
Jared wraps with:
You can build any kind of structure with any kind of technique. Hell, you can write pretty good object-oriented code in C. You'll find no hard laws of computer science here. Your entrypoint into the problem you're solving doesn't decide how your system will be structured.
The order you tackle components merely encourages certain kinds of designs.
In programming, it's seen as a cop-out to inject oneself into a spirited debate between two sides just to say that the real answer is "it depends." But one reason the question of, "where do we start?" has been so fundamental to my programming career—especially as someone who can suffer crippling Blank Page Syndrome—is that there actually is a right answer to the question. The hard part? The only way to arrive at that answer is to think it through yourself. No shortcuts. No getting around it. And if you want to get better over time? That requires deliberate and continuous metacognition—thinking about one's own thinking—to an extent that vanishingly few programmers will ever realize, much less attempt.
Loved this post from Joan Westenberg, about the limitations of goals:
The cult of goal-setting thrives in this illusion. It converts uncertainty into an illusion of progress. It demands specificity in exchange for comfort. And it replaces self-trust with the performance of future-planning. That makes it wildly appealing to organizations, executives, and knowledge workers who want to feel like they're doing something without doing anything unpredictable.
And the liberation of constraints:
Constraints make solutions non-obvious. They force the kind of second-order thinking that goals actively discourage. Instead of aiming for a finish line, the constrained mind seeks viability. It doesn't ask, "How do I get there?" It asks, "What's possible from here?"
My first corporate job required me to set annual goals for myself. I still remember them, both because I spent all summer trying to think of easy ones and because I nevertheless accomplished zero of them:
- Speak at a Java user group
- Teach basic finances to kids via the Junior Achievement organization
- Host 5 lunch-and-learns for my department
One of the best managers I ever had didn't give a fuck about my goals (in fact, he never even bothered to inform me he was my manager). On day one of my first project he threw out some random ideas:
- Why not require all production code be pair-programmed instead of reviewed after-the-fact?
- What if no method in the codebase was allowed to be longer than 3 lines?
- You're hitting 100% code coverage in Java, but why do you have zero JavaScript tests?
The last thing he said killed me: why do people sweat hours over every last test of their server-side code but give zero shits slinging thousands of lines of untested jQuery spaghetti? That constraint—of forcing myself to apply my testing zealotry to the front-end—is what led to my initial open source contributions and my speaking career. It's why I wrote my first widely-adopted open source project. It was also the topic of my first talk at a Ruby conference. I proceeded to spend the next few years becoming "the JavaScript testing guy" in half a dozen different software communities.
I'm honestly not sure where I'd be if it hadn't been for that passing comment, but I am absolutely certain that applying a constraint was more effective than setting a goal would have been.
The vast majority of the discourse around the software industry and AI-based coding tools has fallen into one of these buckets:
- Executives want to lay everyone off!
- Nobody wants to hire juniors anymore!
- Product people are building (shitty) prototype apps without developers at all!
What isn't being covered is how many skilled developers are getting way more shit done, as Tom from GameTorch writes:
If you have software engineering skills right now, you can take any really annoying problem that you know could be automated but is too painful to even start, you can type up a few paragraphs in your favorite human text editor to describe your problem in a well-defined way, and then paste that shit into Cursor with o3 MAX pulled up and it will one shot the automation script in about 3 minutes. This gives you superpowers.
I've written a lot of commentary on posts covering the angles enumerated above, and much less about just how many fucking to-dos and rainy day list items I've managed to clear this year with the help of coding tools like ChatGPT, GitHub Copilot, and Cursor. Thanks to AI, stuff that's been clogging up my backlog for years was done quickly and correctly.
When I write `code org-name/repo`, I now have a script that finds the correct project directory, selects its preferred editor, and launches it with the appropriate environment loaded. When I write `git pump`, I finally have a script that'll pull, commit, and push in one shot (stopping for a Y/n confirmation only if the changes appear to be nontrivial). I've also finally implemented a comprehensive 3-2-1 backup strategy and scheduled it to run on our Macs each night, thanks to a script that rsyncs my and Becky's most important directories to a massive SSD RAID array, then to a local NAS, and finally to cloud storage.
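For flavor, `git pump` is roughly this shape. (A from-memory sketch, not the actual script; the line-count threshold and commit message are arbitrary placeholders.)

```ruby
#!/usr/bin/env ruby
# Roughly the shape of `git pump`: pull, commit, and push in one shot,
# pausing for a Y/n only when the diff looks nontrivial.

def run!(cmd)
  system(cmd) || abort("`#{cmd}` failed")
end

stat = `git diff HEAD --shortstat` # " 3 files changed, 120 insertions(+), 4 deletions(-)"
changed_lines = stat.scan(/(\d+) (?:insertion|deletion)/).flatten.sum(&:to_i)

if changed_lines > 50 # crude stand-in for "appears nontrivial"
  print "Changes look nontrivial (#{changed_lines} lines). Push anyway? [Y/n] "
  abort("Fine, be that way.") if gets.to_s.strip.downcase.start_with?("n")
end

run!("git pull --rebase --autostash") # --autostash tolerates a dirty tree
run!("git add -A")
unless `git status --porcelain`.empty?
  run!(%(git commit -m "git pump @ #{Time.now.strftime("%F %T")}"))
end
run!("git push")
```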
Each of these was a thing I'd meant to get around to for years but never did, because they were never the most important thing to do. But I could set an agent to work on each of them while I checked my mail or whatever and only spend 20 or 30 minutes of my own time to put the finishing touches on them. All without having to remember arcane shell scripting incantations (since that's outside my wheelhouse).
For now, I only really feel so supercharged when it comes to one-off scripts and the leaf nodes of my applications, but even if that's all AI tools are ever any good for, that's still a fucking lot of stuff. Especially as a guy who used his one chance to give a keynote at RailsConf to exhort developers to maximize the number of leaf nodes in their applications.
This week's Vergecast did a great job summarizing the current state of affairs for web publishers grappling with the faster-than-they'd-hoped arrival of "Google Zero." Don't know what Google Zero is? Basically, it describes a seemingly-inevitable future in which the number of times Google Search links out to domains not owned by Google asymptotically approaches zero. This is bad news for publishers, who depend on Google for a huge proportion of their traffic (and on that traffic to make money off display ads).
The whole segment is a good primer on where things stand:
My recollection is that everyone could see the writing on the wall as early as the mid-2010s, when Google introduced "Featured Snippets" and other iterations of instant answers that obviated the need for users to click links. Publishers have had a decade since then to think up some other way to make money, but appear to have done approximately nothing to prepare for a world where their traffic doesn't come from Google.
To the SEO industry, such a world doesn't make sense—you can increase your PageRank one-hundredfold and one hundred times zero is still zero.
To younger workers in publishing, a world without Google is almost impossible to imagine, as it has come to dominate almost every stage of advertising and distribution.
To old-school publishers who can remember what paper feels like, this is the bitter end of a 20-year journey migrating from a paid subscription relationship with readers to a free, ad-supported situationship with tech platforms that consider their precious content an undifferentiated commodity. Publishers would love to go back, but the world has changed—nobody wants to pay for articles written by people they don't know.
The only people who are thriving are those who built a patronage following based on affinity for their individual identities. They've got a Patreon or a Substack, and some of the best-known journalists are making a 5-20x multiple of whatever their salary at Vox or GameSpot was. But if your income depends on the web-publishing dynamic as it (precariously) exists today and you didn't spend the last decade making a name for yourself, you are well and truly fucked. Alas.
None of this is new if you read the news about the news.
What is new is that Google is answering more and more queries with AI summaries (and soon, one-shot web apps generated on the fly). As a result, the transition to Google Zero appears to be happening much more quickly than people expected/feared. Despite reporting on this eventuality for a decade, web publishers appear to have been caught flat-footed and have tended to respond with some combination of interminable layoffs and hopeless doom-saying.
This quote from Hemingway's The Sun Also Rises never gets old and applies well here:
"How did you go bankrupt?" Bill asked.
"Two ways," Mike said. "Gradually and then suddenly."
Fortunately, the monetization strategy for justin.searls.co is immune to these pressures, as I'm happy to do all the shit I do for free for some reason.
Well, I suppose this is one way to fix America's dwindling college enrollment problem:
Across America's community colleges and universities, sophisticated criminal networks are using AI to deploy thousands of "synthetic" or "ghost" students—sometimes in the dead of night—to attack colleges. The hordes are cramming themselves into registration portals to enroll and illegally apply for financial aid. The ghost students then occupy seats meant for real students—and have even resorted to handing in homework just to hold out long enough to siphon millions in financial aid before disappearing.
Bonus points if the chatbots are men, at least.
In 2011, the same month Todd and I decided to start Test Double, Steve Jobs had just died, and we both happened to watch his incredible 2005 Stanford commencement speech. Among the flurry of remembrances and articles being posted at the time, the video of this speech in particular broke through and became the lodestone for those moved by his passing.
The humble "just three stories" structure, the ephemera described in Isaacson's book, and the folklore about Steve's brooding in the run-up to the speech became almost as powerful as his actual words. The fact that Jobs, the ruthlessly focused product visionary and unflinching pitchman, was himself incredibly nervous about this speech might be the most humanizing thing any of us have ever heard about him.
Well, it's been twenty years, and the Steve Jobs Archive has written something of a coda on it. They've also released the e-mails Steve wrote to himself in lieu of proper notes (perhaps the second-most humanizing thing), and they've spruced up and remastered the video of the speech itself on YouTube.
Looking through his e-mails, I found I actually prefer this draft phrasing on the relieving clarity of our impending demise:
The most important thing I've ever encountered to help me make big choices is to remember that I'll be dead soon.
In 2011, Todd and I ran out of good reasons not to take the leap and do what we could to make some small difference in how people wrote software. In 2025, I believe we're now at an inflection point that we haven't seen since then. If you can see a path forward to meet this moment and make a meaningful impact, do it. Don't worry, you'll be dead soon.
I've never regretted failing to succeed; I've only regretted failing to try.
If you just read this month's newsletter, you might have gotten the impression that, whoa, it's really hard as a foreigner to buy property in Japan. The fact that it took me over a month, mostly on-site, to enter into a contract to buy a condo in cash should serve as ample evidence of that.
However, multiple seemingly conflicting things can be true at once, and Bloomberg's Gearoid Reidy calls out several great points in a saucy column (archive link) which he wrote after I got myself into this mess:
But increasingly, the spotlight is falling on foreign buyers, particularly wealthy Chinese, seeking a safe place for their capital and drawn by Japan's political stability and social safety net. Lawmakers and commentators have been raising the lack of restrictions on property in parliament in recent weeks, as well as in the media. Former international soccer-star-turned-investor Keisuke Honda summed up what many think when he recently tweeted that he thought foreigners should not be allowed to buy land here.
Japan wouldn't be alone in seeing foreign non-residents snap up a bunch of attractive real estate—whether to park capital in a stable economy or to exploit increased tourism by flooding the zone with cheap Airbnb listings. What's different is that Japan's government does almost nothing to document, constrain, or tax foreign buyers.
Amazingly, it was only this decade that Japan first began making it harder for foreigners to buy properties even in sensitive areas next to military bases or nuclear plants. Beyond that, it's open season: Buyers don't even have to be resident in the country, there are no additional taxes or stamp duties for foreign purchasers, nor are there extra levies for second or holiday homes.
Japan is an outlier in the region. Singapore doubled its stamp duty on foreign buyers to 60% in 2023 as part of a series of disincentives, while Hong Kong only recently removed a similar curb in an effort to breathe life into the property market. Elsewhere, Australia announced a two-year outright ban on foreigners buying some homes, a step Canada last year extended.
All of this is genuinely surprising when you consider Japan's general hesitation around immigration. The suppressed value of the yen over the last four years has only exacerbated the issue and led to a run on housing inventory since 2021. Nevertheless, over-tourism has gotten far more attention from the media—pointing a camera at throngs of poorly-behaved white people outside Sensoji Temple makes for better TV than footage of largely-empty luxury condominiums popping up on every corner in Nakameguro.
Ultimately, the barriers to buying real estate in Japan have less to do with legal restrictions or taxes and more to do with language, culture, and the lack of comprehensive regulation against discrimination. What this adds up to is that real estate agencies specializing in serving foreign buyers (there are dozens in Tokyo alone, many marketing to a single locale like Singapore or Hong Kong) can do deals all day long while asking almost nothing of the buyer beyond the funds for the purchase. However, there are very few such agents outside Tokyo and a handful of foreign-friendly mid-market metros like Fukuoka.
Once you venture outside Tokyo, if you intend to buy desirable homes or new construction (i.e. not an abandoned house in the middle of nowhere), few realtors will have experience dealing with foreign non-residents and, regardless, many developers will insist on working with buyers directly—which means foreigners are often boxed out entirely. (About 40% of foreign buyers report having been turned away as a result of not being Japanese.)
Anyway, Gearoid describes a very real affordability crisis. Many Japanese workers with well-paid jobs have lost all hope of ever becoming homeowners despite a rapidly-declining population. Personally, I wouldn't be thrilled to have to pay more in tax when our purchase closes, but I'd completely understand and support the policy outcome such a tax would serve.
If you've been following me for the last few years, you've heard all about my move to Orlando* and how great I think it is. If, for whatever reason, this has engendered a desire in you to join me in paradise, now's your chance. An actually great house has hit the market in an area where high-quality inventory is exceptionally limited.
Beautiful house. Gated community. Golf course. Lake access. Disney World.
How else do I know this house is good? Because my brother lives there! He's poured his heart and soul into modernizing and updating it since he moved to Orlando in 2022, and it really shows. I suspect it won't sit on the market very long, so if you're interested you should go do the thing and click through to Zillow and then contact the realtor.
As our good friend/realtor Ken Pozek puts it in his incredible overview video, this house comes with everything except compromises†:
* Specifically, the Disney World-adjacent part of Orlando
† Okay, one compromise: you'll have to put up with living near me.
As some of you know, I moved to Orlando in 2020. But it wasn't so much Orlando as Disney World itself, given our home's relative proximity to the parks and the degree to which we're isolated from most of the "Florida stuff" that comes to mind when I tell people I live in Florida.
One of the great joys of where we live is that I've made a variety of fascinating friends who similarly relocated to central Florida with a degree of intentionality, and one of them is Eric Doggett. Eric is a phenomenally talented photographer, artist, and all-around creative. In fact, if you listen to Breaking Change, a big reason it sounds as good as it does is thanks to Eric!
A couple years ago, Eric was admitted into the Disney Fine Art program, and now he officially has some prints for sale. Check out his announcement video:
If you're a Disney fan and you're the art-buying sort, go buy some! I especially love the new Palm Springs motif he's been iterating on most recently.
One of the best parts of all these ridiculous side quests I accept is that I never run out of new situations to figure out. This week: how to iron my shirt and press my pants in a Japanese business hotel the day before a meeting.
Thankfully, we have YouTube:
Of course, models vary between companies, and I actually had to follow this less entertaining video to figure out what to do with a standing "Twinbird" press (apparently the #1 seller in pants presses):
Matthias Endler wrote up a list of traits he sees in great developers, which I read because Jerod linked to it in Changelog's newsletter. In his blurb, Jerod called back to the conversation he had with yours truly on a recent podcast episode, which is also the first thing I thought of when I read the post.
As lists go, these traits are all great things to look for in developers, even if a lot of it is advice you've seen repeated countless times before. This one on bugs stands out:
Most developers blame the software, other people, their dog, or the weather for flaky, seemingly "random" bugs.
The best devs don't.
No matter how erratic or mischievous the behavior of a computer seems, there is always a logical explanation: you just haven't found it yet!
The best keep digging until they find the reason. They might not find the reason immediately, they might never find it, but they never blame external circumstances.
Something I've always found interesting: when users encounter a bug, most blame themselves; when programmers encounter a bug, most blame anything but themselves. And not because programmers are trying to evade fault (although that's indeed a factor in lots of shitty work environments)! I believe it's because the prospect of spending hours and hours chasing down the cause of a bug—and with no guarantee you'll be successful—is so dreadful. Happens to the best of us: hundreds of times, I've witnessed a novel bug while pairing on something else and told my pair, "let's pretend we didn't just see that," in order to keep our productivity on track.
Anyway, if you're asking me, the single best trait for predicting whether I'm looking at a good programmer or a great one is undoubtedly perseverance: someone who takes to each new challenge like a dog with a bone, and who struggles to sleep until the next obstacle is cleared.
Until mid-2022, you could absolutely have a successful, high-paying career as a programmer if you lacked perseverance, but I'm not sure that's going to be true much longer.
Microsoft CTO Kevin Scott apparently just said:
"95% of code is going to be AI-generated (in the next five years)," Scott said. But before developers start panicking, he reassured that "it doesn't mean that the AI is doing the software engineering job…. authorship is still going to be human."
Panic? Never been a better time to start a company focused on cleaning up bad code and aiding broken organizations and then billing by the hour.
Anyone who still believes the quantity of code one owns is an asset and not a liability is a fool.
I don't mean to pick on Pawel Brodzinski in this blog post, but I stopped reading right at the top:
In its original meaning, Kanban represented a visual signal. The thing that communicated, well, something. It might have been a need, option, availability, capacity, request, etc.
I hate to come off as a pedant here, but something that's always annoyed me about the entire family of Lean practices in the Western world is the community's penchant for uncritically adopting regular-ass nouns and verbs from Japanese. Lean consultants have spent literal decades assigning highly-specific, nuanced meanings to random words, and if you actually listen to anyone introducing Lean, it's hard to go 5 minutes without getting the icky sense that those words are being deployed to trade on a nonsensical Oriental exoticism. I've lost track of how many times I've heard something like, "according to the ancient Japanese art of Kaizen," or similar bullshit.
It's true that Lean owes its existence to the work of luminaries like Deming, Ohno, and Toyoda and their development of the Toyota Production System, but what eventually grew into the sprawling umbrella term "Lean" was based on surprisingly brief and incomplete glimpses of those innovations. As a result, the connective tissue between Lean as it's marketed in the West and anything that ever actually happened in Japan is even more tenuous than most Lean fans probably realize. So the fact that everyone carries on using mundane Japanese words as industry jargon makes even less sense.
For example, here are some words Lean people use and what they actually mean:
- Kanban (看板) - this just means "sign", most often the kind you'd find outside a store, not "a signaling device that gives authorization and instructions for the production or withdrawal"
- Kaizen (改善) - just the word for "improvement"; it doesn't refer to some special methodology. It doesn't even mean "continuous improvement"
- Muda (無駄) - this word means "waste", as in you order a bunch of sushi and don't eat it all. Nothing to do with "creating value for the customer"
- Muri (無理) - just means "unreasonable" or "impossible", not "overburdening equipment or operators"
- Gemba (現場) - literally means "actual location", usually used in broadcast news to convey things like the site of a car accident, not "any place where value-creating work actually occurs"
- Jidouka (自動化) - just means "automation", as opposed to "the ability to detect when an abnormal condition has occurred and immediately stop work"
- Hansei (反省) - means "reflection" (with a gloss of "regret"), rather than "thinking about how a process or personal shortcoming can be improved"
- Hoshin Kanri (方針管理) - this one just means "policy management", which could mean documenting how many smoke breaks employees are allowed to take. Hardly "a strategic framework for building sustained high performance"
And so on.
As an entitled white man, I'll be the first to admit I don't lose much sleep over cultural appropriation. I'm just saying, if you're trying to come up with a name for a specific concept or process, remember that existing words have meaning before cherry-picking a noun from a foreign language textbook and calling it a day.
UPDATE: Just as I was worried I might have been a bit too harsh here, I realized his blog has comments.
This one is just incredible:
A post it note is not a kanban
Theo, you might have to reconsider your idea of "idiocy", potentially in front of a mirror. "Kanban" is not a noun so of course a post-it can't be one. The concept originated from Japan (Toyota factories to be specific) so it makes absolute sense to use the original word. Their method did not use a signboard at all, Kanban is the system, which you would learn with a couple minutes of focused googling.
Of course, open a dictionary and you'll see that kanban (看板) is categorized under meishi (名詞), which (unless the Lean folk have some other made-up definition for it) means noun.
Well, Theo, we use a Japanese name because that's where it came from. Have you ever heard of a tsunami, or kamikaze, or sushi? These are also Japanese words we use in the English which have more nuanced meanings than just googling their "literal translation".
Additionally, I can understand that being as unintelligent as you are must be difficult but if you try your hardest you might be able to google "kanban" and "signboard" to learn that one refers to a methodology and the other does not.
For example, real expert Lean practitioners know that "ahou" (阿呆) refers to observing a mistake repeatedly and forming an expensive twelve-step correction plan, even though its literal translation is "idiot."
Doc Searls (no relation) writes over at searls.com (which is why this site's domain is searls.co) about how the concept of human agency is being lost in the "agentic" hype:
My concern with both agentic and agentic AI is that concentrating development on AI agents (and digital "twins") alone may neglect, override, or obstruct the agency of human beings, rather than extending or enlarging it. (For more on this, read Agentic AI Is the Next Big Thing but I'm Not Sure It's What, by Adam Davidson in How to Geek. Also check out my Personal AI series, which addresses this issue most directly in Personal vs. Personal AI.)
Particularly interesting is that he's doing something about it, by chairing an IEEE spec dubbed "MyTerms":
Meet IEEE P7012, which "identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines." It has been in the works since 2017, and should be ready later this year. (I say this as chair of the standard's working group.) The nickname for P7012 is MyTerms (much as the nickname for the IEEE's 802.11 standard is Wi-Fi). The idea behind MyTerms is that the sites and services of the world should agree to your terms, rather than the other way around.
MyTerms creates a new regime for privacy: one based on contract. With each MyTerm you are the first party. Not the website, the service, or the app maker. They are the second party. And terms can be friendly. For example, a prototype term called NoStalking says "Just show me ads not based on tracking me." This is good for you, because you don't get tracked, and good for the site because it leaves open the advertising option. NoStalking lives at Customer Commons, much as personal copyrights live at Creative Commons. (Yes, the former is modeled on the latter.)
How are the terms communicated? As far as I can tell, MyTerms is expressed as some kind of structured-data codification (JSON? I haven't read the spec) presented by the user's client (HTTP headers or some kind of handshake?), which the server either agrees to or something-something (blocks access?). Then both parties record the agreement:
On your side—the first-party side—browser makers can build something into their product, or any developer can make a browser add-on (Firefox) or extension (the rest of them). On the site's side—the second-party side—CMS makers can build something in, or any developer can make a plug-in (WordPress) or a module (Drupal).
Not answered in Doc's post (and, I suspect, the rub) is how any of this will be enforced. In the late '90s, browser makers added a lock symbol to the location bar to convey a sense of safety to users communicating over HTTPS. Back then, there was a lucrative incentive at play: secure communications were necessary to get people to type their credit card numbers into a website. Today, the largest browser makers have no comparable incentive to promote this. Could you imagine Microsoft, Google, or Apple making any of their EULA terms negotiable?
Maybe the idea is to put forward this spec and hope future regulations akin to the Digital Services Act will force sites to adopt it. I wish them luck with that.
Tuesday, while recording an episode of The Changelog, Adam reminded me that my redirects from possyparty.com to posseparty.com didn't support HTTPS. Naturally, because this was caught live and on air and was my own damn fault, I immediately rushed to cover for the shame I felt by squirreling away and writing custom software. As we do.
See, if you're a cheapskate like me, you might have noticed that forwarding requests from one domain or subdomain to another while supporting HTTPS isn't particularly cheap with many DNS hosts. But the thing is, I am particularly cheap. So I built a cheap solution. It's called redirect-dingus:
What is it? It's a tiny Heroku nginx app that simply reads a couple environment variables and uses them to map request hostnames to your intended redirect targets for cases when you have some number of domains and subdomains that should redirect to some other canonical domain.
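I won't paste the actual config here, but the core nginx trick is roughly this shape (an illustrative sketch, not the repo's real contents; the domains are the ones from this post):

```nginx
# Map each incoming Host header to a canonical redirect target, then
# issue a permanent redirect that preserves the request path. Heroku's
# router terminates TLS out front, so plain HTTP suffices here.
map $host $redirect_target {
    default             "";
    possyparty.com      "https://posseparty.com";
    www.possyparty.com  "https://posseparty.com";
}

server {
    listen 80 default_server;

    if ($redirect_target = "") { return 404; }
    return 301 $redirect_target$request_uri;
}
```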
Check out the README for instructions on setting up your own Heroku app with it for your own domain redirect needs. I recommend forking it (just in case I decide to change the nginx config to redirect to an offshore casino or crypto scam someday), but you do you.
This 6-minute video of Wally explaining how he manages cue cards for SNL was the most stressful day of work I've had in years.
RevenueCat seems like a savvy, well-run business for mobile app developers trying to sell subscriptions in the land of native in-app purchase APIs. Every year, they take the data on their platform and publish a survey of the results. Granted, there's definitely a selection bias at play—certain kinds of developers are surely more inclined to run their payments through a third party as opposed to Apple's own APIs.
That said, it's a large enough sample size that the headline results are, as Scharon Harding at Ars Technica put it, "sobering". From the report itself:
Across all categories, nearly 20 percent reach $1,000 in revenue, while 5 percent reach the $10,000 mark. Revenue drop-off is steep, with many categories losing ~50 percent of apps at each milestone, emphasizing the challenge of sustained growth beyond early revenue benchmarks.
Accepted without argument is that subscription-based apps are the gold standard for making money on mobile, so one is left to surmise that these developers are way better off than the ones trying to charge a one-time, up-front price for their apps. And yet only 5% of subscription apps earn enough revenue to replace a single developer's salary in any given year.
Well, if you've ever wondered why some startup didn't have the budget to hire you or your agency to build them a native mobile app, here you go. Outside free-to-play games, the real money is going to companies that merely use mobile apps as a means of distribution and that generally butter their bread some other way (think movie tickets, car insurance, sports betting).
Anyway, super encouraging thing to read first thing while sitting down to map out this subscription-based iOS app I'm planning to create. Always good to set expectations low, I guess.
Benji Edwards for Ars Technica:
According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The user wasn't having it:
"Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."
If some little shit talked to me that way and expected me to code for free, I'd tell him to go fuck himself, too.