A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)
The vast majority of the discourse around the software industry and AI-based coding tools has fallen into one of these buckets:
- Executives want to lay everyone off!
- Nobody wants to hire juniors anymore!
- Product people are building (shitty) prototype apps without developers at all!
What isn't being covered is how many skilled developers are getting way more shit done, as Tom from GameTorch writes:
If you have software engineering skills right now, you can take any really annoying problem that you know could be automated but is too painful to even start, you can type up a few paragraphs in your favorite human text editor to describe your problem in a well-defined way, and then paste that shit into Cursor with o3 MAX pulled up and it will one shot the automation script in about 3 minutes. This gives you superpowers.
I've written a lot of commentary on posts covering the angles enumerated above, and much less about just how many fucking to-dos and rainy day list items I've managed to clear this year with the help of coding tools like ChatGPT, GitHub Copilot, and Cursor. Thanks to AI, stuff that's been clogging up my backlog for years was done quickly and correctly.
When I write `code org-name/repo`, I now have a script that finds the correct project directory, selects its preferred editor, and launches it with the appropriate environment loaded. When I write `git pump`, I finally have a script that'll pull, commit, and push in one shot (stopping for a `Y/n` confirmation only if the changes appear to be nontrivial). I've also finally implemented a comprehensive 3-2-1 backup strategy and scheduled it to run on our Macs each night, thanks to a script that rsyncs my and Becky's most important directories to a massive SSD RAID array, then to a local NAS, and finally to cloud storage.
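For a flavor of what these one-off scripts look like, here's a stripped-down sketch of the `git pump` idea. To be clear, this isn't my actual script, and the 25-changed-line "nontrivial" threshold is an arbitrary assumption:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of "git pump": commit, rebase on the remote, and push
# in one shot, pausing for a Y/n confirmation only when the diff looks big.
# The 25-line threshold is a made-up heuristic, not my real script's logic.

git_pump() {
  local threshold=${GIT_PUMP_THRESHOLD:-25}
  local message=${1:-"pump: $(date +%F)"}

  # Sum inserted + deleted lines in tracked files to guess "nontrivial".
  local changed
  changed=$(git diff HEAD --numstat | awk '{ n += $1 + $2 } END { print n + 0 }')

  if [ "$changed" -gt "$threshold" ]; then
    local answer
    read -rp "$changed lines changed. Push anyway? [Y/n] " answer
    case "$answer" in [Nn]*) echo "aborted"; return 1 ;; esac
  fi

  git add --all &&
  { git commit --quiet -m "$message" || true; } &&  # tolerate "nothing to commit"
  git pull --rebase --quiet &&
  git push --quiet
}
```

Committing before the `pull --rebase` matters, by the way: `git pull --rebase` refuses to run while you have unstaged changes.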
Each of these was a thing I'd meant to get around to for years but never did, because they were never the most important thing to do. But I could set an agent to work on each of them while I checked my mail or whatever and only spend 20 or 30 minutes of my own time to put the finishing touches on them. All without having to remember arcane shell scripting incantations (since that's outside my wheelhouse).
For now, I only really feel so supercharged when it comes to one-off scripts and the leaf nodes of my applications, but even if that's all AI tools are ever any good for, that's still a fucking lot of stuff. Especially as a guy who used his one chance to give a keynote at RailsConf to exhort developers to maximize the number of leaf nodes in their applications.
This week's Vergecast did a great job summarizing the current state of affairs for web publishers grappling with the more-rapidly-than-they'd-hoped impending arrival of "Google Zero." Don't know what Google Zero is? Basically, it describes a seemingly-inevitable future where the number of times Google Search links out to domains not owned by Google asymptotically approaches zero. This is bad news for publishers, who depend on Google for a huge proportion of their traffic (and they depend on that traffic for making money off display ads).
The whole segment is a good primer on where things stand:
My recollection is that everyone could see the writing on the wall as early as the mid-2010s, when Google introduced "Featured Snippets" and other iterations of instant answers that obviated the need for users to click links. Publishers have had a decade since then to think up some other way to make money, but they appear to have done approximately nothing to prepare for a world where their traffic doesn't come from Google.
To the SEO industry, such a world doesn't make sense—you can increase your PageRank one-hundredfold and one hundred times zero is still zero.
To younger workers in publishing, a world without Google is almost impossible to imagine, as it has come to dominate almost every stage of advertising and distribution.
Old-school publishers who can remember what paper feels like have only recently reached the end of a 20-year journey to migrate from a paid subscription relationship with readers to a free, ad-supported situationship with tech platforms that consider their precious content an undifferentiated commodity. Publishers would love to go back, but the world has changed—nobody wants to pay for articles written by people they don't know.
The only people who are thriving are those who developed a patronage followership based on affinity for their individual identities. They've got a Patreon or a Substack and some of the most well-known journalists are making a 5-20x multiple of whatever their salary at Vox or GameSpot was. But if your income depends on the web publishing dynamic as it (precariously) exists today and you didn't spend the last decade making a name for yourself, you are well and truly fucked. Alas.
None of this is new if you read the news about the news.
What is new is that Google is answering more and more queries with AI summaries (and soon, one-shot web apps generated on the fly). As a result, the transition to Google Zero appears to be happening much more quickly than people expected/feared. Despite reporting on this eventuality for a decade, web publishers appear to have been caught flat-footed and have tended to respond with some combination of interminable layoffs and hopeless doom-saying.
This quote from Hemingway's The Sun Also Rises never gets old and applies well here:
"How did you go bankrupt?" Bill asked.
"Two ways," Mike said. "Gradually and then suddenly."
Fortunately, the monetization strategy for justin.searls.co is immune to these pressures, as I'm happy to do all the shit I do for free for some reason.
Well, I suppose this is one way to fix America's dwindling college enrollment problem:
Across America's community colleges and universities, sophisticated criminal networks are using AI to deploy thousands of "synthetic" or "ghost" students—sometimes in the dead of night—to attack colleges. The hordes are cramming themselves into registration portals to enroll and illegally apply for financial aid. The ghost students then occupy seats meant for real students—and have even resorted to handing in homework just to hold out long enough to siphon millions in financial aid before disappearing.
Bonus points if the chatbots are men, at least.
In 2011, the same month Todd and I decided to start Test Double, Steve Jobs died, and we both happened to watch his incredible 2005 Stanford commencement speech. Among the flurry of remembrances and articles being posted at the time, the video of this speech in particular broke through and became the lodestone for those moved by his passing.
The humble "just three stories" structure, the ephemera described in Isaacson's book, and the folklore about Steve's brooding in the run-up to the speech became almost as powerful as his actual words. The fact that Jobs, the ruthlessly focused product visionary and unflinching pitchman, was himself incredibly nervous about this speech might be the most humanizing thing any of us have ever heard about him.
Well, it's been twenty years, and the Steve Jobs Archive has written something of a coda on it. They've released the e-mails Steve wrote to himself in lieu of proper notes (perhaps the second-most humanizing thing), and they've spruced up and remastered the video of the speech itself on YouTube.
Looking through his e-mails, I found I actually prefer this draft phrasing on the relieving clarity of our impending demise:
The most important thing I've ever encountered to help me make big choices is to remember that I'll be dead soon.
In 2011, Todd and I ran out of good reasons not to take the leap and do what we could to make some small difference in how people wrote software. In 2025, I believe we're now at an inflection point that we haven't seen since then. If you can see a path forward to meet this moment and make a meaningful impact, do it. Don't worry, you'll be dead soon.
I've never regretted failing to succeed; I've only regretted failing to try.
If you just read this month's newsletter, you might have gotten the impression that, "whoa, it's really hard as a foreigner to buy property in Japan." And the fact it took me over a month, mostly on-site, to enter into contract to buy a condo in cash should serve as ample evidence of that.
However, multiple seemingly conflicting things can be true at once, and Bloomberg's Gearoid Reidy calls out several great points in a saucy column (archive link) which he wrote after I got myself into this mess:
But increasingly, the spotlight is falling on foreign buyers, particularly wealthy Chinese, seeking a safe place for their capital and drawn by Japan's political stability and social safety net. Lawmakers and commentators have been raising the lack of restrictions on property in parliament in recent weeks, as well as in the media. Former international soccer-star-turned-investor Keisuke Honda summed up what many think when he recently tweeted that he thought foreigners should not be allowed to buy land here.
Japan wouldn't be alone in seeing foreign non-residents snap up a bunch of attractive real estate—whether to park capital in a stable economy or to exploit increased tourism by flooding the zone with cheap Airbnb listings. What's different is that Japan's government does almost nothing to document, constrain, or tax foreign buyers.
Amazingly, it was only this decade that Japan first began making it harder for foreigners to buy properties even in sensitive areas next to military bases or nuclear plants. Beyond that, it's open season: Buyers don't even have to be resident in the country, there are no additional taxes or stamp duties for foreign purchasers, nor are there extra levies for second or holiday homes.
Japan is an outlier in the region. Singapore doubled its stamp duty on foreign buyers to 60% in 2023 as part of a series of disincentives, while Hong Kong only recently removed a similar curb in an effort to breathe life into the property market. Elsewhere, Australia announced a two-year outright ban on foreigners buying some homes, a step Canada last year extended.
All of this is genuinely surprising when you consider Japan's general hesitation around immigration. The suppressed value of the yen over the last four years has only exacerbated the issue and led to a run on housing inventory since 2021. Nevertheless, over-tourism has gotten far more attention from the media—pointing a camera at throngs of poorly-behaved white people outside Sensoji Temple makes for better TV than footage of largely-empty luxury condominiums popping up on every corner in Nakameguro.
Ultimately, the barriers to buying real estate in Japan have less to do with legal restrictions or taxes and more to do with language, culture, and the lack of comprehensive regulation against discrimination. What this adds up to is that real estate agencies that specialize in serving foreign buyers, of which there are dozens in Tokyo alone (many marketing to a single locale like Singapore or Hong Kong), can do deals all day long while asking almost nothing of the buyer beyond the funds for the purchase. However, there are very few such real estate agents outside Tokyo and a handful of foreign-friendly mid-market metros like Fukuoka.
Once you venture outside Tokyo, if you intend to buy desirable homes or new construction (i.e. not an abandoned house in the middle of nowhere), few realtors will have experience dealing with foreign non-residents and, regardless, many developers will insist on working with buyers directly—which means foreigners are often boxed out entirely. (About 40% of foreign buyers report having been turned away as a result of not being Japanese.)
Anyway, Gearoid describes a very real affordability crisis. Many Japanese workers with well-paid jobs have lost all hope of ever becoming homeowners despite a rapidly-declining population. Personally, I wouldn't be thrilled to have to pay more in tax when our purchase closes, but I'd completely understand and support the policy outcome such a tax would serve.
If you've been following me for the last few years, you've heard all about my move to Orlando* and how great I think it is. If, for whatever reason, this has engendered a desire in you to join me in paradise, now's your chance. An actually great house has hit the market in an area where high-quality inventory is exceptionally limited.
Beautiful house. Gated community. Golf course. Lake access. Disney World.
How else do I know this house is good? Because my brother lives there! He's poured his heart and soul into modernizing and updating it since he moved to Orlando in 2022, and it really shows. I suspect it won't sit on the market very long, so if you're interested you should go do the thing and click through to Zillow and then contact the realtor.
As our good friend/realtor Ken Pozek puts it in his incredible overview video, this house comes with everything except compromises†:
* Specifically, the Disney World-adjacent part of Orlando
† Okay, one compromise: you'll have to put up with living near me.
As some of you know, I moved to Orlando in 2020. But it wasn't so much Orlando as Disney World itself, given our home's relative proximity to the parks and the degree to which we're isolated from most of the "Florida stuff" that comes to mind when I tell people I live in Florida.
One of the great joys of where we live is that I've made a variety of fascinating friends who similarly relocated to central Florida with a degree of intentionality, and one of them is Eric Doggett. Eric is a phenomenally talented photographer, artist, and all-around creative. In fact, if you listen to Breaking Change, a big reason it sounds as good as it does is thanks to Eric!
A couple years ago, Eric was admitted into the Disney Fine Art program, and now he officially has some prints for sale. Check out his announcement video:
If you're a Disney fan and you're the art-buying sort, go buy some! I especially love the new Palm Springs motif he's been iterating on most recently.
One of the best parts of all these ridiculous side quests I accept is that I never run out of new situations to figure out. This week: how to iron my shirt and press my pants in a Japanese business hotel the day before a meeting.
Thankfully, we have YouTube:
Of course, models vary between companies, and I actually had to follow this less entertaining video to figure out what to do with a standing "Twinbird" press (apparently the #1 seller in pants presses):
Matthias Endler wrote up a list of traits he sees in great developers, which I read because Jerod linked to it in Changelog's newsletter. In his blurb, Jerod called back to the conversation he had with yours truly on a recent podcast episode, which is also the first thing I thought of when I read the post.
As lists go, these traits are all great things to look for in developers, even if a lot of it is advice you've seen repeated countless times before. This one on bugs stands out:
Most developers blame the software, other people, their dog, or the weather for flaky, seemingly "random" bugs.
The best devs don't.
No matter how erratic or mischievous the behavior of a computer seems, there is always a logical explanation: you just haven't found it yet!
The best keep digging until they find the reason. They might not find the reason immediately, they might never find it, but they never blame external circumstances.
Something I've always found interesting: when users encounter a bug, most blame themselves; when programmers encounter a bug, most blame anything but themselves. And not because programmers are trying to evade fault (although that's indeed a factor in lots of shitty work environments)! I believe it's because the prospect of spending hours and hours chasing down the cause of a bug—and with no guarantee you'll be successful—is so dreadful. Happens to the best of us: hundreds of times, I've witnessed a novel bug while pairing on something else and told my pair, "let's pretend we didn't just see that," in order to keep our productivity on track.
Anyway, if you're asking me, the single best trait to predict whether I'm looking at a good programmer or a great one is undoubtedly perseverance. Someone who takes to each new challenge like a dog with a bone, and who struggles to sleep until the next obstacle is cleared.
Until mid-2022, you could absolutely have a successful, high-paying career as a programmer if you lacked perseverance, but I'm not sure that's going to be true much longer.
Microsoft CTO Kevin Scott apparently just said:
"95% of code is going to be AI-generated (in the next five years)," Scott said. But before developers start panicking, he reassured that "it doesn't mean that the AI is doing the software engineering job…. authorship is still going to be human."
Panic? Never been a better time to start a company focused on cleaning up bad code and aiding broken organizations and then billing by the hour.
Anyone who still believes the quantity of code one owns is an asset and not a liability is a fool.
I don't mean to pick on Pawel Brodzinski in this blog post, but I stopped reading right at the top:
In its original meaning, Kanban represented a visual signal. The thing that communicated, well, something. It might have been a need, option, availability, capacity, request, etc.
I hate to come off as a pedant here, but something that's always annoyed me about the entire family of Lean practices in the Western world is the community's penchant for uncritically adopting regular-ass nouns and verbs from Japanese. Lean consultants have spent literal decades assigning highly specific, nuanced meanings to random words, and if you actually listen to anyone introducing Lean, it's hard to go 5 minutes without getting the icky sense that those words are being deployed to trade on appeals to nonsensical Oriental exoticism. I've lost track of how many times I've heard something like, "according to the ancient Japanese art of Kaizen," or similar bullshit.
It's true that Lean's existence is owed to the work of luminaries like Deming, Ohno, and Toyoda and their development of the Toyota Production System, but what eventually grew into the sprawling umbrella term "Lean" was based on surprisingly brief and incomplete glimpses of those innovations. As a result, the connective tissue between Lean as it's marketed in the West and anything that ever actually happened in Japan is even more tenuous than most Lean fans probably realize. So the fact that everyone carries on using mundane Japanese words as industry jargon makes even less sense.
For example, here are some words Lean people use and what they actually mean:
- Kanban (看板) - this just means "sign", most often the kind you'd find outside a store, not "a signaling device that gives authorization and instructions for the production or withdrawal"
- Kaizen (改善) - the word for "improvement"; it doesn't refer to some special methodology. It doesn't even mean "continuous improvement"
- Muda (無駄) - this word means "waste", as in you order a bunch of sushi and don't eat it all. Nothing to do with "creating value for the customer"
- Muri (無理) - just means "unreasonable" or "impossible", not "overburdening equipment or operators"
- Gemba (現場) - literally means "actual location", usually used in broadcast news to convey things like the site of a car accident, not "any place where value-creating work actually occurs"
- Jidouka (自働化) - just means "automation", as opposed to, "ability to detect when an abnormal condition has occurred and immediately stop work"
- Hansei (反省) - means "reflect" (with a gloss of "regret"), rather than "thinking about how a process or personal shortcoming can be improved"
- Hoshin Kanri (方針管理) - this one just means "policy management", which could mean documenting how many smoke breaks employees are allowed to take. Hardly "a strategic framework for building sustained high performance"
And so on.
As an entitled white man, I'll be the first to admit I don't lose much sleep over cultural appropriation. I'm just saying, if you're trying to come up with a name for a specific concept or process, remember that existing words have meaning before cherry-picking a noun from a foreign language textbook and calling it a day.
UPDATE: Just as I was worried I might have been a bit too harsh here, I realized his blog has comments.
This one is just incredible:
A post it note is not a kanban
Theo, you might have to reconsider your idea of "idiocy", potentially in front of a mirror. "Kanban" is not a noun so of course a post-it can't be one. The concept originated from Japan (Toyota factories to be specific) so it makes absolute sense to use the original word. Their method did not use a signboard at all, Kanban is the system, which you would learn with a couple minutes of focused googling.
Of course, open a dictionary and you'll see that kanban (看板) is categorized under meishi (名詞), which (unless the Lean folk have some other made-up definition for it) means noun.
Well, Theo, we use a Japanese name because that's where it came from. Have you ever heard of a tsunami, or kamikaze, or sushi? These are also Japanese words we use in the English which have more nuanced meanings than just googling their "literal translation".
Additionally, I can understand that being as unintelligent as you are must be difficult but if you try your hardest you might be able to google "kanban" and "signboard" to learn that one refers to a methodology and the other does not.
For example, real expert Lean practitioners know that "ahou" (阿呆) refers to observing a mistake repeatedly and forming an expensive twelve step correction plan, even though its literal translation is "idiot."
Doc Searls (no relation) writes over at searls.com (which is why this site's domain is searls.co) about how the concept of human agency is being lost in the "agentic" hype:
My concern with both agentic and agentic AI is that concentrating development on AI agents (and digital "twins") alone may neglect, override, or obstruct the agency of human beings, rather than extending or enlarging it. (For more on this, read Agentic AI Is the Next Big Thing but I'm Not Sure It's What, by Adam Davidson in How to Geek. Also check out my Personal AI series, which addresses this issue most directly in Personal vs. Personal AI.)
Particularly interesting is that he's doing something about it, by chairing an IEEE spec dubbed "MyTerms":
Meet IEEE P7012, which "identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines." It has been in the works since 2017, and should be ready later this year. (I say this as chair of the standard's working group.) The nickname for P7012 is MyTerms (much as the nickname for the IEEE's 802.11 standard is Wi-Fi). The idea behind MyTerms is that the sites and services of the world should agree to your terms, rather than the other way around.
MyTerms creates a new regime for privacy: one based on contract. With each MyTerm you are the first party. Not the website, the service, or the app maker. They are the second party. And terms can be friendly. For example, a prototype term called NoStalking says "Just show me ads not based on tracking me." This is good for you, because you don't get tracked, and good for the site because it leaves open the advertising option. NoStalking lives at Customer Commons, much as personal copyrights live at Creative Commons. (Yes, the former is modeled on the latter.)
How are the terms communicated? MyTerms is expressed as some kind of structured-data codification (JSON? I haven't read the spec) presented by the user's client (HTTP headers or some kind of handshake?), to which the server either agrees or something-something (blocks access?). Then both parties record the agreement:
On your side—the first-party side—browser makers can build something into their product, or any developer can make a browser add-on (Firefox) or extension (the rest of them). On the site's side—the second-party side—CMS makers can build something in, or any developer can make a plug-in (WordPress) or a module (Drupal).
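Since the spec isn't public yet, here's a purely hypothetical illustration of what a recorded agreement might look like as JSON. Every field name below is my invention, not anything from P7012:

```json
{
  "term": "NoStalking",
  "firstParty": "some identifier the user controls",
  "secondParty": "example.com",
  "agreedAt": "2025-06-01T12:00:00Z",
  "proof": "a signature or hash each side can retain"
}
```

The interesting part is less the format than the bookkeeping: both sides keep a copy, which is what would make the agreement feel like a contract.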
Not answered in Doc's post (and I suspect, the rub) is how any of this will be enforced. In the late 90s, browser makers added a bold, green lock symbol to the location bar to convey a sense of safety to users that they were communicating over HTTPS. Then, there was a lucrative incentive at play: secure communications were necessary to get people to type their credit cards into a website. Today, the largest browser makers don't have any incentive to promote this. Could you imagine Microsoft, Google, or Apple making any of their EULA terms negotiable?
Maybe the idea is to put forward this spec and hope future regulations akin to the Digital Services Act will force sites to adopt it. I wish them luck with that.
Tuesday, while recording an episode of The Changelog, Adam reminded me that my redirects from possyparty.com to posseparty.com didn't support HTTPS. Naturally, because this was caught live and on air and was my own damn fault, I immediately rushed to cover for the shame I felt by squirreling away and writing custom software. As we do.
See, if you're a cheapskate like me, you might have noticed that forwarding requests from one domain or subdomain to another while supporting HTTPS isn't particularly cheap with many DNS hosts. But the thing is, I am particularly cheap. So I built a cheap solution. It's called redirect-dingus:
What is it? It's a tiny Heroku nginx app that simply reads a couple environment variables and uses them to map request hostnames to your intended redirect targets for cases when you have some number of domains and subdomains that should redirect to some other canonical domain.
Check out the README for instructions on setting up your own Heroku app with it for your own domain redirect needs. I recommend forking it (just in case I decide to change the nginx config to redirect to an offshore casino or crypto scam someday), but you do you.
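For the curious, the core nginx trick is simple enough to sketch in a few lines (this is an illustrative fragment, not redirect-dingus's actual config; the hostnames stand in for whatever your environment variables would supply):

```nginx
# Map each incoming Host header to its canonical destination and
# 301 everything there. In the real app, these pairs are derived
# from Heroku environment variables at boot.
map $host $redirect_target {
    default            "";
    possyparty.com     https://posseparty.com;
    www.possyparty.com https://posseparty.com;
}

server {
    listen 8080;

    # Unknown hostnames get a 404 instead of becoming an open redirect.
    if ($redirect_target = "") {
        return 404;
    }
    return 301 $redirect_target$request_uri;
}
```

Heroku terminates HTTPS in front of the app, which is the whole reason this cheap trick works.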
This 6-minute video of Wally explaining how he manages cue cards for SNL was the most stressful day of work I've had in years.
RevenueCat seems like a savvy, well-run business for mobile app developers trying to offer subscriptions in the land of native in-app purchase APIs. Every year they take the data on their platform and publish a survey of the results. Granted, there's definitely a selection bias at play—certain kinds of developers are surely more inclined to run their payments through a third party as opposed to Apple's own APIs.
That said, it's a large enough sample size that the headline results are, as Scharon Harding at Ars Technica put it, "sobering". From the report itself:
Across all categories, nearly 20 percent reach $1,000 in revenue, while 5 percent reach the $10,000 mark. Revenue drop-off is steep, with many categories losing ~50 percent of apps at each milestone, emphasizing the challenge of sustained growth beyond early revenue benchmarks.
Accepted without argument is that subscription-based apps are the gold standard for making money on mobile, so one is left to surmise that these developers are way better off than the ones trying to charge a one-time, up-front price for their apps. And only 5% of all subscription apps earn enough revenue to replace a single developer's salary in any given year.
Well, if you've ever wondered why some startup didn't have budget to hire you or your agency to build a native mobile app for them, here you go. Outside free-to-play games, the real money is going to companies that merely use mobile apps as a means of distribution and who generally butter their bread some other way (think movie tickets, car insurance, sports betting).
Anyway, super encouraging thing to read first thing while sitting down to map out this subscription-based iOS app I'm planning to create. Always good to set expectations low, I guess.
Benji Edwards for Ars Technica:
According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The user wasn't having it:
"Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."
If some little shit talked to me that way and expected me to code for free, I'd tell him to go fuck himself, too.
The existence of the imminent Oblivion remake was leaked months ago, and maybe I just missed this tidbit, but today Andy Robinson reported for Video Games Chronicle:
The Oblivion remake is reportedly "fully remade" with Unreal Engine 5, with six reworked gameplay systems: stamina, sneaking, blocking, archery, hit reaction and HUD.
If this is the case, then given that Elder Scrolls VI is still being developed on the Gamebryo/Creation Engine, I can't wait to see a side-by-side analysis of image quality, performance, and overall "Bethesda jank" between the two. I've been saying Bethesda needs to ditch its in-house engine since two-thousand-fucking-eight, when Fallout 3 shipped broken and required years of patches to even feel playable. If this Oblivion remake is a lights-out technical success and I were Phil Spencer, I'd be kicking shins left and right to convince Bethesda to take the time to replatform Elder Scrolls VI now, before it ends up becoming another dud like Starfield.
Joe Rossignol at MacRumors:
Apple says turning on Low Power Mode reduces the Mac Studio's fan noise, which is useful for tasks that require a quieter environment, and it also allows for reduced power consumption if the computer is left running continuously.
The reduced fan noise aspect of Low Power Mode requires macOS Sequoia 15.1 or later. The new Mac Studio ships with macOS Sequoia 15.3.
A few Reddit users said macOS Sequoia 15.3 enabled Low Power Mode on the previous-generation Mac Studio with the M2 Max chip, and presumably on M2 Ultra configurations too. This is not reflected in Apple's support document.
I can confirm that a "Low Power Mode" toggle appears in the Energy settings of my M2 Ultra Mac Studio.
I really put this thing through the wringer with video and AI workloads, and I have never been able to hear the fan (even with my ear right up to the back of the thing), so I guess I was lucky to get one whose fan holes don't whistle. I'm always glad to receive new features through software, but I am comfortable promising you that I will never turn this on.
Gurman with the scoop, summarized by MacRumors:
An updated version of the Mac Studio could launch as soon as this week, reports Bloomberg's Mark Gurman. The new machine is expected to be equipped with the M4 Max chip that we first saw in the 2024 MacBook Pro models, but Apple apparently does not have an M4 Ultra chip ready to go. Instead, there could be a version of the Mac Studio that uses an M3 Ultra chip. Apple didn't release an M3 Ultra chip alongside the M3 chip lineup, so it would be a new chip even though it's not part of the current M4 family. The current Mac Studio has an M2 Ultra chip, as does the Mac Pro.
Releasing a previous-generation, higher-end chip is utterly routine from every other manufacturer, but Apple doesn't sell chips, it sells computers.
Offering a Mac Studio in M4 Max and M3 Ultra configurations would give Apple's marketing team a really fucking narrow needle to thread. One imagines the Ultra will be better for massive video exports and the Max will be better for literally every other workflow. Woof.
Welp, I just did the thing I promised I'd never do and read the Hacker News comments for this otherwise lovely post pointing out the durable relevance of Ruby on Rails twenty years later. One comment stood out as so wrong, however, I couldn't help but clap back.
It started when someone wrote, presumably in praise of Rails, "I really like web apps that are just CRUD forms." CRUD is shorthand for the basic operations of "create, read, update, and delete", and those four verbs can express the vast majority of what anyone has ever wanted to do with a computer. It's why the spreadsheet was the killer app of the 80s and early 90s. It's why Rails 2's embrace of REST as a way to encapsulate CRUD over HTTP redefined how pretty much everyone has made web applications ever since.
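For the uninitiated, the resource mapping Rails popularized is almost embarrassingly direct: each CRUD verb pairs off with an HTTP method and a conventional URL. (Rails 2 used PUT for updates; later versions adopted PATCH.)

```
Create → POST          /posts
Read   → GET           /posts/:id
Update → PUT or PATCH  /posts/:id
Delete → DELETE        /posts/:id
```

Once that convention clicks, a framework can generate the routes, controllers, and forms for you, which is most of what "batteries-included" means in practice.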
Anyway, in response to the fellow above who said he enjoys simple CRUD apps, somebody else wrote:
I really like easy problems too. Unfortunately, creating database records is hardly a business. With a pure CRUD system you're only one step away from Excel really. The business will be done somewhere else and won't be software driven at all but rather in people's heads and if you're lucky written in "SOP" type documents.
This struck me as pants-on-head levels of upside down. Father forgive me, but I hit reply with this:
As someone who co-founded one of the most successful Ruby on Rails consultancies in the world: building CRUD apps is a fantastic business.
There are two types of complexity: essential and incidental. Sometimes, a straightforward CRUD app won't work because the product's essential complexity demands it. But at least as often, apps (and architectures, and engineering orgs, and businesses) are really just CRUD apps with a bunch of incidental complexity cluttering up the joint and making everything confusing, painful, and expensive.
I've served dozens of clients over my career, and I can count on one hand the number of times I've found a company whose problem couldn't more or less be solved with "CRUD app plus zero-to-one interesting features." No technologist wants to think they're just building a series of straightforward CRUD apps, so they find ways to complicate it. No businessperson wants to believe their company isn't a unique snowflake, so they find ways to complicate it. No investor wants to pour their money into yet another CRUD app, so they invent a story to complicate it.
IME, >=90% of application developers working today are either building CRUD apps or would be better off if they realized they were building CRUD apps. To a certain extent, we're all just putting spreadsheets on the Internet. I think this—more than anything else—explains Rails' staying power. I remember giving this interview on Changelog and the host Adam asking about the threat Next.js posed to Rails, and—maybe I'd just seen this movie too many times since 2005—it didn't even register as a possible contender.
Any framework that doesn't absolutely nail a batteries-included CRUD feature-set as THE primary concern will inevitably see each app hobbled with so much baggage trying to roundaboutly back into CRUD that it'll fall over on itself.
Anyway, I believe in this point enough that I figured I should write it down someplace that isn't a deeply-nested comment at a place where I'd recommend you not read the comments, so that's how you and I wound up here just now.