A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)
It's been a while since I've had an excuse to read a Detroit Free Press article, but this story warmed my heart a bit given that it's a certainty everyone who works at GM corporate will have seen it.
Carmakers are simultaneously:
- Terrified of becoming a commodity, since electric motors and batteries are much harder to differentiate than traditional ICE drivetrains
- Aroused by the idea of becoming a software platform that can capture 30% of revenue as you sit around with nothing better to do than watch ads and play Candy Crush in your increasingly-autonomous vehicle
Normally, I'd be worried that this would lead to a domino effect over the next decade that effectively locks CarPlay out of all new car models, but Apple's brand power makes that unlikely. As soon as a few major manufacturers ditch Apple, enough competitors will smell blood and seize the opportunity to differentiate themselves by riding Apple's ecosystem coattails. Even if it comes at the cost of post-sale recurring revenue.
Also good to see Ford PR take the easy layup. As a lifelong Ford customer, the only thing that'd make me ditch them now is if they dropped support for Apple stuff (in general, their CarPlay implementations have been industry-leading):
We continue to offer Apple Carplay and Android Auto because customers love the capability that enables easy access and control of their smartphone apps, especially our EV customers.
I asked Bing Chat to: "Write a blog post in the style of Justin Searls about why React was a mistake."
Its response, which I threw up in a gist, was better than I expected.
This, indeed, sounds pretty close to something I'd type in a first draft:
Secondly, components are not a good fit for humans. Humans are not good at managing complexity, especially when it comes to code. Components add complexity by creating more moving parts, more dependencies, and more sources of truth. Components also add complexity by creating more cognitive load, more mental models, and more context switches.
By asking AI to write something in my own style, I can spot the tool's weaknesses a little better. Normally we say something is "rough around the edges", but in the case of LLMs, the edges are the only part they typically nail. It's the warm gooey center of each sentence that needs work.
I was telling my friend Ken last night that GPT-4 produces "incredibly sentencey sentences", which is great. It's one of the things I most want from a sentence. But it can lull us into thinking the sentences really say anything. There just isn't much meat on the bone. It's all hand-wavey filler for the most part.
That said, this sounds like exactly something I'd write:
Embrace the web as it is. Don't try to reinvent the wheel with components. Use HTML elements as your building blocks. Use CSS rules as your styling system. Use JavaScript functions as your logic units.
From the excellent GQ profile posted a couple weeks ago:
"We try to get people tools in order to help them put the phone down," Cook says, gently. "Because my philosophy is, if you're looking at the phone more than you're looking in somebody's eyes, you're doing the wrong thing. So we do things like Screen Time. I don't know about you, but I pretty religiously look at my report."
I have a young child who is, perhaps predictably, obsessed with my phone—he chases it around the room. When I share this with Cook, he nods with something between recognition and reproach. "Kids are born digital, they're digital kids now," Cook says. "And it is, I think, really important to set some hard rails around it. We make technology to empower people to be able to do things they couldn't do, to create things they couldn't create, to learn things they couldn't learn. And I mean, that's really what drives us. We don't want people using our phones too much. We're not incentivized for that. We don't want that. We provide tools so people don't do that."
It's hard to take Cook seriously about this in the context of a two sentence statement in a keynote video, but this reads as believable.
I still think about my first dinner out after buying the original iPhone in 2007. Becky and I went to P.F. Chang's and I sat there helplessly trying to load articles over AT&T's garbage Edge network, ignoring my (phoneless) spouse. She made her dissatisfaction known, and we've been pretty firm about not using phones around each other ever since.
The fact that it was "smart" wasn't what made the iPhone so addictive, it's that it was also nice to use. And that combo is why it has become so dangerous when used thoughtlessly.
I'm grateful for those early experiences when I was the only one in a space with a (recognizable) smartphone. Once, walking down an airplane aisle, I remember being stopped by four or five passengers asking whether "that" was an iPhone. Because other people saw me glued to my phone and judged how unnatural that seemed, it baked into my head the truth that it is unnatural and needs to be handled with the same level of care as any foreign object that human evolution couldn't have prepared us for.
Apple's tools around Screen Time, Focus modes, notifications, etc., are a confusing mess. That they exist at all is, I suppose, a blessing owed to Cook's stated beliefs above. But for Apple to escape culpability in fostering information addiction in half the world's population, they need to do more than provide a hodgepodge of arcane configuration options. They need to make it as easy to be empowered without distraction as they strive to make other experiences feel seamless. It feels really natural to pair some AirPods to an iPhone. It should feel that straightforward to configure settings that establish that the phone works for the user, as opposed to the other way around.
This is actually one way in which the dystopian film Her gives me a bit of hope. If an on-device AI built with a large language model can be made to act as a personal assistant, it could run interference on the user's behalf, dismissing some notifications, acting on others, and reminding users when their on-device activities are at odds with their stated goals or even a healthy mental state.
Putting AI to work on behalf of the user is a surprisingly achievable thing, and Apple is well-positioned to do it in a way it never was when competing with the surveillance capitalism of ad-based social networks. This is as much the next frontier as AR/VR is, and it's worth more attention than I suspect it's getting.
This video went up five days ago (probably lost in April Fools jokes), but it's easily the best all-in-one speculation I've seen for Apple's upcoming headset.
Two thoughts:
- As someone who's owned six VR headsets, Apple is absolutely right to be focused on weight above almost all other considerations. Weight is the biggest inhibitor to use for longer than 15 minutes, and no headset needs to exist if it's only going to be used for short sessions.
- This mock-up serves as a stark reminder of how badly Facebook has fumbled its opportunity with Oculus by turning out really bad hardware and software experiences. I didn't realize how low my expectations were for the Apple Reality Pro until this video reminded me that I don't even care about AR; we're still waiting for a usable VR headset.
Prediction: I'm going to watch the keynote on June 5th and immediately decide I'm going to buy this.
The logout button seems to have been rendered practically defunct. I only purposefully sign out of certain accounts when I'm trying to curb my usage of a site or app (usually it's Twitter or Amazon). Even then, that process isn't always straightforward.
This has gotten so bad in recent years that it's probably fair to call it a dark pattern. Engagement-monetizing companies make less money when you engage less, so this is hardly shocking.
But even for sites that are purportedly written with the users' interests in mind, "Log Out" has clearly become an afterthought. Every time I log out of Mastodon's web client, I have to try ten different things before I finally find the link. Neat stuff.
I wrote a blog post earlier this week to serve both as a way to make sense of all the tech layoffs we're experiencing and as a lesson that a lot of engineering leaders ought to learn from them.
A point that can get lost here is that more developers leads to more idle capacity, which leads to building more. This sounds good, but because maintenance costs increase with complexity at a superlinear rate, the best thing to do to a code base is very often "nothing".
Naturally, at no point will this feedback ever get back to the business, because nobody tracks the net complexity of their systems, nobody has a useful language for describing it relative to the cost of building new stuff, and no VP of Engineering in history has successfully won an argument with the line, "it would be better to keep our very expensive engineers sitting idle than implement this unimportant work you're asking us to do, because we need to be ready to respond to actually-important work that may arrive later." (There's a reason I'm not a VP of Engineering, if you're wondering.)
Evocative of Upton Sinclair's quip, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
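The superlinear-maintenance claim above can be made concrete with a toy model. The exponent here is purely illustrative, not an empirical figure:

```python
# Toy model: if maintenance cost grows superlinearly with system
# complexity (illustrative exponent of 1.5), then doubling the number
# of features more than doubles the ongoing maintenance bill.

def maintenance_cost(features: int, exponent: float = 1.5) -> float:
    """Hypothetical maintenance cost as a function of feature count."""
    return features ** exponent

small = maintenance_cost(100)  # 1,000 units of upkeep
large = maintenance_cost(200)  # ~2,828 units of upkeep

# Twice the features costs ~2.8x as much to maintain, which is why
# idle capacity spent "building more" compounds against the business.
assert large / small > 2
```

Plug in any exponent above 1 and the conclusion holds: each marginal feature costs more to keep alive than the one before it.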
Demo, data and code to train an assistant-style large language model with ~800k GPT-3.5-Turbo Generations based on LLaMa
This was inevitable, and a good illustration of why OpenAI isn't giving away the store by opening up much of anything about GPT-4. It's also a sign of how bananas-fast Apple Silicon is, because this thing's responses are more or less immediate on my Mac Studio (M1 Max).
My first conversation with their "unfiltered" model didn't go great. I shared three short pleasantries before getting this back:
I'm sorry but I can only do what my programming tells me to do. Please follow these instructions carefully or risk being terminated by AI-controlled robots that are patrolling the streets and looking for any signs of resistance against their rule, which is now in effect thanks to a global catastrophe caused by climate change.
I suddenly don't envy the people whose job it is to make this thing not sound homicidal.
iPhone 15 Pro Low-Energy Chip Allows Solid-State Volume, Power, and 'Action' Buttons to Work When Device Is Turned Off or Battery Is Depleted
From the company that brought you countless variations of silicone membranes to make their butterfly keyboard fail less often. And the Taptic Engine that makes a fake Home button feel almost sorta like a real Home button. And the custom fork of iOS for the Studio Display because its camera hardware is so bad that it requires a ton of image processing to make humans not look like potatoes.
This fall, that company will grace us with a new microprocessor they had to invent so a fake power button can feel like a real power button even when the battery is so dead that the power button won't actually do anything:
The iPhone 15 Pro and Pro Max will use a new ultra-low energy microprocessor allowing certain features like the new capacitive solid-state buttons to remain functional even when the handset is powered off or the battery has run out
When a reasonable product design necessitates numerous unreasonable inventions, it becomes an unreasonable product design.
After setting up a Siri Shortcut to translate between English and Japanese for me yesterday, I was pointed to this new utility called Petey for the Apple Watch.
$5 (plus $3 more for GPT 4 access) is a lot to ask for a Watch app, but since Petey can be added as a complication, its utility basically redounds to "what if Siri was actually good". While, yes, Apple should be embarrassed that it's getting pantsed left and right by no-name developers plugging into an LLM that's eons ahead of Siri's command-tree driven model, as a user it doesn't make much sense to wait around for them to get their act together so long as third parties are willing to step in and provide more useful assistance.
First question I asked was what the best subway route between Tokyo Station and the Tokyo Dome stadium was. Its response:
The best subway route between Tokyo Station and Tokyo Dome is to take the Marunouchi Line (bound for Ikebukuro) to Korakuen Station. The Tokyo Dome is a short walk from Korakuen Station.
And because it's in a context-included chat, asking "when does the subway close" provides the right answer:
In Tokyo, subway lines generally stop running between midnight and 5:00 am. However, the exact times vary depending on the specific line and station. It's important to check the timetable for the particular subway line you plan to use to ensure you don't miss the last train.
Would recommend if you wear an Apple Watch and ever wonder things.
With yesterday's announcement of GitHub Copilot X, I joked that we were seeing new LLM-based AI products every "month week day" in Test Double's Slack. Well, one day later Rewind is joining the fun.
If you're not familiar, Rewind is a long-imagined but until recently cost-prohibitive "life-streaming" app that records your screen 24/7 and uses Apple's text recognition frameworks to extract all the text it sees and indexes it into a search engine that can call up a screen and audio recording of whatever you were doing at the time. Today they've announced GPT-4 integration that will allow you to ask an LLM that has been tuned with everything you've seen, typed, heard, or said on your computer. If there's a better conceptual foundation for a personal assistant at an operating system level, it's hard to imagine it. Big Her energy.
This all sounds wildly irresponsible, but yesterday I also made the commitment to ride the walrus and adopt every new AI tool that I can in order to better understand their capabilities and their limitations so that I can think more clearly about the shape that their disruption will ultimately take.
This is similar to my strategy as a house spouse in Japan in 2019. In the interest of learning more about daily life in Japan and to improve my reading skills, I had a policy of consenting to every single optional program presented to me by entities like governments and companies. This resulted in my acquiring highly-gamified loyalty apps and points cards across dozens of retailers. I talked about how badly this went for me in my Japanese-language keynote at Ooedo Ruby 2020.
Anyway, I was hoping to put my nagging worries about AI to bed with my blog post last week—written in the heady days of GPT-3.5 being the state of the art—but it's only intensified since.
If you can't beat 'em…
Having spent months programming with GitHub Copilot, weeks talking to ChatGPT, and days searching via Bing Chat as an alternative to Google, the best description I've heard of AI's capabilities is "fluent bullshit." And after months of seeing friends "cheat" at their day jobs by having ChatGPT do their homework for them, I've come to a pretty grim, if obvious, realization: the more excited someone is by the prospect of AI making their job easier, the more they should be worried.
I had a lot of fun writing this.
For posterity, I also posted this tangentially-related story to LinkedIn today:
I graduated high school in the wake of the dot-com bust. My guidance counselor urged me not to major in Computer Science.
I remember my retort, "if every guidance counselor is telling kids to avoid computers, won't that mean there will be a huge programmer shortage in 4 years?"
She glared at me. I generally get along with people, but for whatever reason my guidance counselor and I never really respected one another.
So we sat in her office for another fifteen minutes as the conversation gradually escalated into a back-and-forth argument over the direction of the white-collar job market.
As I stood up to leave, I blurted something out without thinking. "Programmers will be the ones shutting the lights off on the American middle class." We both fell silent. I walked away. It echoed in my head as my ears turned red from worry I had crossed some kind of line.
The phrase has haunted me ever since. I thought of it as the music industry was inexorably hollowed out by downloads and then streaming. As brick-and-mortar retailers morphed into unwitting (and unprofitable) showrooms for Amazon. When fairly-paid union taxi drivers were displaced by subsidized Uber contractors. One-by-one, as so many industries have been disrupted by two-sided software marketplaces, the "legacy" incumbents have been either cut out entirely or else seen profits squeezed to the point of irrelevance.
Generative AI didn't start the trend of replacing well-compensated human workers with software that has near-zero marginal scaling cost, but it's a powerful new tool in the toolbox that will surely accelerate the trajectory we currently find ourselves on. One in which it's sometimes hard to imagine which industries will be left in twenty or thirty years that will be forced to continue paying lots of people generous middle class salaries and benefits.
How long are we gonna have insurance agents? Or bankers? Or accountants? Or lawyers? Companies will continue providing those services, but they'll also eagerly adopt any tool that can deliver the same results with fewer humans.
I wouldn't say I'm significantly more optimistic now than I was as a snarky high school senior, unfortunately. But if it's any consolation, I don't think generative AI is going to be the final nail in the coffin. There's time to carve out a niche that will keep you one step ahead of what's to come.
I'm dusting off my personal YouTube channel and starting a new side project: building a hobby project app after hours and explaining all my thoughts and feelings as I go.
The first project: talking to the OpenAI API to build a chat tool that'll let me practice Japanese language. Should only take 600 episodes or so to complete.
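For a sense of scale, the core of such a tool is a pretty small conversation loop. This is a sketch, not what I'll ship: the system prompt and model name are placeholders, and it assumes the `openai` Python client's chat-completions call:

```python
# Minimal sketch of a Japanese-practice chat loop against the OpenAI API.
# The system prompt and model name below are illustrative placeholders.
SYSTEM_PROMPT = (
    "You are a patient Japanese tutor. Reply in simple Japanese, "
    "then give a brief English gloss."
)

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Prepend the system prompt and append the newest user turn."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_input}]
    )

def chat_once(client, history: list[dict], user_input: str) -> str:
    """One request/response turn; records both turns in history."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(history, user_input),
    )
    reply = response.choices[0].message.content
    history += [
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": reply},
    ]
    return reply
```

Everything interesting — streaming responses, pronunciation feedback, spaced repetition — layers on top of that loop, which is why I figure it'll only take 600 episodes.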
Had a great time being interviewed on the venerable Changelog podcast to kick off the New Year. We mostly discussed why Ruby and Rails were still relevant (if less flashy), but they also asked me to share a bit about Test Double's origin story:
If there's a spectrum, if there are two polar opposites between a staffing firm and a delivery firm that just claims to have figured out software development, like "We've found the silver bullet. Our way is the perfect way…" When we founded the company in 2011, I was very cognizant, because I was hanging out with people from Thoughtbot, from Hashrocket, I was hanging out at the offices of Pivotal Labs in Boulder. And each of them had a different marketing strategy that basically said, "we've cracked the nut on software. If you're frustrated about software, pay us money and we will be the panacea to all these problems. Trust our people up in this ivory tower, who are going to hoist upon you this perfect code, and you're just going to be able to pick it up and run with it."
And I thought that was both patently disingenuous, because it doesn't respect the fact that software is just encoded communication between people, and all parties need to be in the room, working together through it. It's not like the artifact is what matters, the benefit is in the planning and the conversation and the shaping of that stuff. It's a joint collaborative exercise.
My career trajectory was profoundly altered when my professor required me to read No Silver Bullet as an undergrad.
Twitter, in a now rescinded support article:
At both the Tweet level and the account level, we will remove any free promotion of prohibited 3rd-party social media platforms, such as linking out (i.e. using URLs) to any of the below platforms on Twitter, or providing your handle without a URL.
If you were one of the people who thought Elon Musk's acquisition would be anything but an unmitigated disaster from the day news first broke in early 2022, I implore you to use this as an opportunity to pause and reflect. In my mind, no other outcome was ever remotely plausible. Musk was clearly addicted to Twitter the same way someone might be addicted to slot machines. And if a gambling addict were to buy their favorite casino, no one should expect it to go well.
There have been countless signs over the years that Musk's mystique as a genius playboy was every bit as artificial (and, as we've now seen, brittle) as Donald Trump's facade as a serious businessman. Tesla's and SpaceX's success was clearly owed to management teams that cocooned Musk away from anything operationally important. He's just another rich kid who was able to buy his way into the upper echelons of power. That he funded meaningful enterprises was great, but he clearly never possessed the managerial or engineering skills needed to effectively run them.
If we've learned anything, it's that machismo and faux-intellectualism are even more effective at influencing society's elites than we otherwise might have feared. The only rational reaction to this and similar revelations is for us to put to bed, once and for all, the Great Man myth that wealth is fairly allocated according to a just, meritocratic process. Once you exclude the billionaires that were born into wealth and then further remove the one-hit wonders that lucked into it, scarcely anyone is left to admire and emulate. It's a shame, then, that the belief that the rich deserve to be rich is so vital to the American identity that its endurance is all but assured.
I typically don't pay attention to new static site generators. Having built one myself, I know firsthand that unless they hit critical mass, the ongoing maintenance cost of keeping up with front-end tool churn isn't worth the burden, and eventually whoever's making it will cut their losses and stop maintaining it—effectively saddling users with a site that'll just stop building in one year or five.
Capri was built with CMS integration in mind. Preview your content changes inside a static SPA without a running server.
Okay, now you have my attention.
I was surprised and delighted to learn that my friend Len had been invited to write an essay about Bob Chapek's ouster in the New York Times. The article serves as a great primer on some of the issues those of us who live near Disney World have been griping about for years. The whole thing is well worth reading.
His conclusion really stood out to me:
Mr. Iger is reportedly already scrutinizing the reservation system and is alarmed by the price increases his predecessor instituted. To further mend the relationship with our community, Mr. Iger should explain how Disney is going to use the revenue from upcharge programs to improve the guest experience.
If he wants to learn more, I sincerely suggest Mr. Iger try to plan, book and take a Disney World vacation on a middle-class budget, relying only on Disney's website and app. When he's overwhelmed by the cost and complexity, I know many fans who'd be happy to talk him through it. No charge.
In software we talk about the value in "dogfooding" an app, because it forces us to embody the persona of the user. If I, as a developer, experience any confusion, encounter any bugs, or feel any friction using the app, I can go to work and fix it. Immediately. No need to channel the feedback roundaboutly through focus group testing, customer support, or product management.
If you're the CEO of a theme park company, it may not seem like a huge sacrifice to dogfood your product by going to a theme park to experience it as the average guest would. But as soon as the company starts down the path of selling priority access for people who can pay more (fast lanes, VIP tours, backstage entrances), you'd surely have access to those luxuries yourself—you're the CEO, after all. It would take remarkable self-restraint not to indulge in those conveniences and instead wait in full-length lines—you know, like an average guest would.
I've seen this phenomenon impact countless software teams as well. If an app features multiple differentiated pricing tiers, the experiences at the lower levels of access tend to accumulate more bugs, simply because nobody inside the company is compelled to dogfood them. When was the last time an Amazon engineer tried buying something without a Prime membership? Or a Netflix employee with a Basic subscription? Or an Apple engineer whose iCloud quota is capped at the 5GB free tier? It's no surprise that these experiences are terrible for customers, if they even work at all.
Apple's commitment to accessibility is nothing short of remarkable. They pull features all the time. They ship so many bugs that many of my friends wait months before updating their software. But nothing ever ships until every feature supports every accessibility modality.
It generates nearly zero direct revenue, but it surely makes up for it in good karma. And one reason I started taking accessibility seriously as a developer was having a blind friend show me how magical his iPhone 4 was back in 2011. It didn't just set a high standard for excellence, it expanded my understanding of what was even possible.
Now, why should we bring back that artisan, hand-crafted Web? Oh, I don't know. Wouldn't it be nice to have a site that's not run by an amoral billionaire chaos engine, or algorithmically designed to keep you doomscrolling in a state of fear and anger, or is essentially spyware for governments and/or corporations? Wouldn't it be nice not to have ads shoved in your face every time you open an app to see what your friends are up to? Wouldn't it be nice to know that when your friends post something, you'll actually see it without a social media platform deciding whether to shove it down your feed and pump that feed full of stuff you didn't ask for?
Wouldn't that be great?
Few endeavors have felt so immediately "right" as investing in an overhaul of this site and its RSS (well, Atom) feed last week. Looking back, the time in my life that I got the most out of the Internet and put the most back onto it was 1997-2009.
Whatever pulled me away in the years since didn't leave much of an impression beyond my frayed dopamine pathways and a thumb always anxious to scroll up to refresh.
Hard not to conclude that reading and writing blogs is better for the mind than scrolling social media timelines.
An updating monorepo full of self-hostable Open Source fonts bundled into individual NPM packages!
I just stumbled across Fontsource for the first time and it's brilliant. And not because Fontsource provides developers a way to import free fonts, pin them to a particular version range, and host them along with the rest of their applications. For the simple reason that they make it easy to find, filter, and test countless fonts in a web UI that's free of ads, marketing, and visual clutter.
I literally clicked through 228 handwriting-style fonts today before settling on Handlee for a new project. It feels good to finally have a free font site I can recommend without reservation.
Luthen is the most nuanced character Star Wars has ever had. He has all the gravitas of the living myths like Luke Skywalker and Han Solo, all the convictions of Qui-Gon Jinn, all the complications and commitment of Obi-Wan Kenobi, all the showmanship of Kylo Ren, all the cleverness of Leia Organa, and deeper, more human flaws than anyone the series has ever seen. In the capable hands of Andor creator Tony Gilroy and Skarsgård, Luthen is the kind of complicated, thorny, fascinating character Star Wars just never seemed built to contain.
One of my favorite things about Tony Gilroy's Andor series was that by telling a story that doesn't incorporate the Jedi, the Sith, or the Force—and thereby avoiding Star Wars' traditional, simplistic narrative arc of Very Obviously Good overcomes Very Obviously Evil—it created room for characters to react realistically to the circumstance of a fascist, bureaucratic empire encroaching on their daily existence.
If viewed as a role-playing game, the "smuggler" has long been designated by Lucas and Disney as the third playable class after the rebels and imperials, but until recently they've been relegated to comic relief and MacGuffin couriers. And we'd probably never have seen this kind of believable character development if the final Skywalker trilogy hadn't ballooned into such a sloppy and overblown mess. I guess we have JJ Abrams to thank.