A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)
I got a chance to sit down with the Changelog crew for a second time this year. This time it was to discuss the provocative blog post I wrote last month. It also featured my colleague Landon, who got to represent The Youth in tech today.
It was a great discussion. It's not technical and doesn't require much prior context to follow. I'm also always blown away by how much better the Changelog's audio engineering is than that of any other programming podcast I've heard. Anyone who can make listening to myself bearable is working some kind of magic.
If you use Apple Podcasts (as I do), here's the direct link.
A friend mentioned the Sumitomo corporation yesterday and it prompted a conversation about Japanese-style conglomerates and how they've gone out of fashion in the West.
One thing that never came into fashion in the West? Educational comic books about your conglomerate that you link off your corporate home page.
132 pages! Things really start cooking about 12 pages in.
Incredibly relatable content in this provocative post by Thorsten Ball:
Many times in your day-to-day programming life you have to wait. Wait for your development environment to boot up, wait for the formatting-on-save command to finish, wait for the website you just opened to load, wait for tests to run, wait for the CI build to finish.
The waiting doesn't really cause me physical pain, but it does evoke a physical reaction alright. I just can't stand it. Maybe it's because I'm impatient by nature. Maybe it's knowing that things could be faster that causes it. When I have to wait ten seconds for a test to finish that I plan to run many times over the next hour, I tell you, it feels as if I'm about to lose my mind.
"Impatience" has been considered a virtue in software for literal decades, because nearly every single action a programmer redounds to a call-and-response with a computer that can't be considered complete until the computer has delivered its result and a human has interpreted it.
Imagine yourself texting with someone. If the other party replies quickly, it will promote focus and secure your attention—you'll stare at your phone and reply promptly as well. If many seconds or minutes go by between responses, however, you'll rationally lose interest, go do something else, and return to the conversation whenever you happen to come back to it. Most importantly, a fast-paced chat results in many more total messages exchanged than a slow-paced conversation, because time is stubbornly finite.
No one has any problem conceptualizing the above, but perhaps because we tend not to conceive of programming as a two-way conversation between a human and a computer, developers often lack a keen sense of this issue's salience.
I wish I saw more people wince when a website takes longer than two seconds to load. There are very few good reasons for most websites to take that long. Yet many times, while watching a website we built take longer than two seconds to load, I'd tell colleagues "something's off, I bet there's an N+1 query here" and turn out to be right; nobody else had noticed anything.
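If you haven't had the dubious pleasure of debugging one, an "N+1 query" is when code runs one query to fetch a list and then one more query per item in it. Here's a minimal sketch in Python with sqlite3 (my illustration, with made-up table names; neither post includes code):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
""")

# The N+1 pattern: 1 query for the posts, then N more queries (one per post).
posts = db.execute("SELECT id, title FROM posts").fetchall()
for post_id, title in posts:
    comments = db.execute(
        "SELECT body FROM comments WHERE post_id = ?", (post_id,)
    ).fetchall()

# The fix: a single JOIN fetches the same data in one round trip.
rows = db.execute("""
    SELECT posts.id, posts.title, comments.body
    FROM posts LEFT JOIN comments ON comments.post_id = posts.id
""").fetchall()
```

Each of those N round trips is fast on its own, which is exactly why the aggregate slowdown hides from everyone who isn't counting queries.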
Over the course of my career, very few programmers have seemed as constitutionally impatient as I am with slow computer responses. I've only become more radical in my impatience over time, as my understanding of programming as a two-way "conversation" has deepened.
Here's one way to think about it.
The upper bound of a programmer's productivity is the speed, fidelity, and correctness of the answers they're able to extract from each "feedback loop" they complete with a computer:
- Speed: if your page is slow to load, you can't refresh it as many times in a given working session, so you can't iterate on it quickly
- Fidelity: if you run a command that pulls down far too much or far too little information to answer your question, you'll spend additional time parsing and interpreting its results
- Correctness: if you have the wrong question in mind, you'll run the wrong commands, and you'll probably also waste feedback cycles to ask the wrong follow-up questions, too
I wrote a post with a click-bait title referencing 10x developers a couple weeks ago. That post was careful to minimize value judgments and to avoid venturing into offering advice. Well, if you want some advice, here you go: take to heart the compounding nature of the effects that feedback loops have on productivity, and you'll set yourself apart as a programmer.
To illustrate, compare the potential productivity of two programmers, using a script (a reconstruction of which appears below) that computes an upper bound on the feedback loops they can complete in the 480 minutes that comprise an 8-hour workday.
- Programmer A completes one feedback loop with their computer every 45 seconds. 1 in 10 of their commands asks the wrong question and results in the next 5 questions also being wrong. 1 in 3 of their commands produces low-fidelity results that take an extra 5 minutes to interpret. They complete 85 productive feedback loops per day.
- Programmer B completes one feedback loop with their computer every 15 seconds. 1 in 25 of their commands asks the wrong question and results in the next 3 questions also being wrong. 1 in 10 of their commands produces low-fidelity results that take an extra 2 minutes to interpret. They complete 902 productive feedback loops per day.
85 vs 902. There you go, a 10x difference in productivity.
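The script itself isn't reproduced here, so here's a toy reconstruction of how such a simulation might work (the function name and accounting rules are my assumptions, so its totals won't match the numbers above exactly; the point is the shape of the math, not the digits):

```python
import random

def productive_loops(loop_seconds, wrong_rate, wrong_cascade,
                     lowfi_rate, lowfi_minutes, workday_minutes=480):
    """Count productive feedback loops completed in one simulated workday."""
    random.seed(1)  # deterministic, purely for illustration
    seconds_left = workday_minutes * 60
    productive = 0
    doomed = 0  # loops already doomed by an earlier wrong question
    while seconds_left > 0:
        cost = loop_seconds
        if random.random() < lowfi_rate:
            cost += lowfi_minutes * 60  # low-fidelity result: extra time to interpret
        seconds_left -= cost
        if doomed > 0:
            doomed -= 1  # still chasing the wrong question
        elif random.random() < wrong_rate:
            doomed = wrong_cascade  # this loop and the next several are wasted
        elif seconds_left >= 0:
            productive += 1
    return productive

# Programmer A: 45s loops, 1-in-10 wrong (dooming 5 more), 1-in-3 low fidelity (+5 min)
print(productive_loops(45, 1 / 10, 5, 1 / 3, 5))
# Programmer B: 15s loops, 1-in-25 wrong (dooming 3 more), 1-in-10 low fidelity (+2 min)
print(productive_loops(15, 1 / 25, 3, 1 / 10, 2))
```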
It would be very fair to quibble over which numbers to measure, whether the numbers I chose are feasible, and so forth. This is only meant to illustrate that the difference between waiting a few hundred milliseconds versus a few seconds versus multiple minutes really adds up, especially when you factor in that expertise and focus can be learned and practiced to get better at asking the right questions and maintaining a clear mindset.
Something else this silly script doesn't capture is the human element: what it feels like to constantly be waiting. Beyond a certain point, people develop habits of tabbing away to more responsive user interfaces like Slack or social media, losing minutes to distraction and minutes more to attention residue. There are also reinforcing social effects of working in an organization where these phenomena are normalized that people rarely consider.
If I could go back and change one thing about how I learned to program, it would have been to emphasize the importance of internalizing this lesson and seizing control of the feedback loop between myself and my computer. I've been beating this drum for a while (and it was the primary thrust of my RailsConf 2017 keynote), but it still doesn't feel like the industry is getting any closer to acknowledging its importance or applying it to how we teach people programming, manage programmers, and design systems.
Perks of working at Google in 2007:
"Let me pull this up because there are so many," he says. When his computer produces a list a moment later, Kallayil makes his way down the screen and continues: "The free gourmet food, because that's a daily necessity. Breakfast, lunch and dinner I eat at Google. The next one is the fitness center, the 24-hour gym with weights. And there are yoga classes."
There is a pause before he adds that he also enjoys the speaker series, the in-house doctor, the nutritionist, the dry cleaners and the massage service. He has not used the personal trainer, the swimming pool and the spa — at least not yet, anyway. Nor has he commuted to and from the office on the high-tech, wi-fi equipped, bio-diesel shuttle bus that Google provides for employees, but that is only because he lives nearby and can drive without worrying about a long commute.
Let's check in on how 2023's going:
Being banned from the entire Internet would be tough, but Googlers in the high-security program will still get access to "Google-owned websites," which is actually quite a bit of the Internet. Google Search would be useless, but you could probably live a pretty good Internet life, writing documents, sending emails, taking notes, chatting with people, and watching YouTube.
Somewhere along the way—even by the time I visited the Googleplex in 2007—Google lost their way on this. Employment policies that promote people's autonomy and agency can pay companies like Google massive dividends in increased creativity, productivity, and loyalty. But perks that attempt to squeeze blood from the stone by making the office feel more luxurious than home always obfuscate the nature of the work itself and will inevitably distract everyone involved—a recipe for resentment between workers and management the minute the going gets tough.
Now that the going's gotten tough, it's too bad Google could never tell the difference between the two.
Maybe Sorkin has a "The Neural Network" treatment left in him.
Stability AI is being sued by a co-founder, who claims he was deceived into selling his 15% stake in one of the hottest startups in the sector for $100 to CEO Emad Mostaque, months before the company raised millions at a $1 billion valuation.
Should've run the contract terms through ChatGPT for a summary first.
Gotta appreciate ingenuity on the platforms when you see it. Because this Amazon seller's name is "Shopping cart", what a user sees before adding an item to their cart is:
Ships from Amazon
Sold by Shopping cart
Almost got me.
This is worth a read. If you've been harboring any illusions that machine learning and AI are cleanroom scientific breakthroughs, this should dispel them.
There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don't get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.
If you sit with the thought that AI models are only valuable when they're fed painstaking and voluminous feedback from poorly-paid workers, the associated "intelligence" begins to evoke thoughts of the mechanical Turk (the one from history, not the Amazon product).
Wow, they're really not kidding around with that Last of Us haunted house Universal announced yesterday.
TODO: insert "when did they start?" joke here.
It's better to fix the root cause of software problems, but seeing as Bethesda has chosen to continue using their in-house, 26-year-old Gamebryo engine—which is famous for literally nothing other than its signature "Bethesda jank" flavor of bugginess—I guess Phil Spencer and Microsoft have made the calculated decision to send wave after wave of QA employees at it until the NPCs reach their internal jank limit:
You can either read (NYT Paywall) or listen (in a brisk, 18-minute podcast rendition), but whichever you choose, this piece feels like a triumphant synthesis of several ideas Ezra Klein has been slow-cooking over hours upon hours of interviews on the various podcasts he's hosted over the last decade. If you have an Internet connection or participate in an economy, I strongly recommend you read and reflect on this column.
Many of the themes Ezra hits on are things that I've felt compelled to write about here, even back in the mid-2010s when I only blogged semi-annually. Like the "mysterious" reason that productivity is flat despite so many breakthroughs in information technology. Or my various exhortations that the best thing Apple could do for humanity is help users save themselves from notification hell. And more recently, the kinds of work that AI will both replace and create for us.
Anyway, Ezra's diagnosis is bang on, and I'm in such violent agreement with his take here that I struggle to even imagine a counterargument. It seems to me the reason such a profound and clear truth can fail to take the world by storm is that these are mostly systemic issues that represent collective action problems. After all, if your boss wants you to reply to e-mails within 5 minutes, or if your coworkers will judge you for not being a green bubble in Slack all day, what can you really do to reclaim your individual ability to focus?
Still, each of us probably has more control over how much of our focus we cede to technology than we admit to ourselves. Always worth taking a moment to think about.
This may be a humble forum post, but it's a great example of the sort of entertainment that:
- Would be greatly enhanced by immersive visuals and audio
- Would leverage Apple's dominant position in the music industry
- Hasn't really been done before
Apple gave up on trying to host in-person iTunes/Apple Music Festivals, but they never really made sense for any other reason than strengthening ties with the recording industry. Something like this makes more sense. More intimate. Easier to manage. Cross-sells their headset by leveraging their platforms and services in a way "only Apple could do".
Neat idea. Wouldn't be surprised to hear something like this tomorrow.
Becky and I often debate what I'll really be like in retirement. How I picture myself in retirement sure sounds a lot like how it seems to be going for John Calhoun.
Maybe it's because this came out of a reputable institution like Stanford, but this project feels wildly irresponsible. Not only because OpenAI's language models frequently hallucinate complete nonsense, but because the app uploads your entire aggregate health data to OpenAI.
Apple went to pretty absurd lengths to keep individuals' health data private and secure. While various apps that integrate with Apple Health access more data than they need and surely phone too much of it home to their application servers, the idea of just blasting all your health data carte blanche at OpenAI seems… bad, maybe?
This disclaimer doesn't inspire much confidence:
HealthGPT is provided for general informational purposes only and is not intended as a substitute for professional medical advice, diagnosis, or treatment. Large language models, such as those provided by OpenAI, are known to hallucinate and at times return false information. The use of HealthGPT is at your own risk. Always consult a qualified healthcare provider for personalized advice regarding your health and well-being. Aggregated HealthKit data for the past 14 days will be uploaded to OpenAI. Please refer to the OpenAI privacy policy for more information.
Hopefully the fact that you have to jump through a bunch of hoops to build and install this thing means few people will use it and nobody will end up hurting themselves, but if it were in the App Store, it's hard to imagine this not leading to a lot of really bad health outcomes. (And that's the best-case scenario—imagine if OpenAI one day changes their privacy policy and starts selling a version of the language model to insurance underwriters that's tuned with user-submitted stuff like this.)
It's been a while since I've had an excuse to read a Detroit Free Press article, but this story warmed my heart a bit given that it's a certainty everyone who works at GM corporate will have seen it.
Carmakers are simultaneously:
- Terrified of becoming a commodity, since electric motors and batteries are much harder to differentiate than traditional ICE drivetrains
- Aroused by the idea of becoming a software platform that can capture 30% of revenue as you sit around with nothing better to do than watch ads and play Candy Crush in your increasingly-autonomous vehicle
Normally, I'd be worried that this would lead to a domino effect over the next decade that effectively locks CarPlay out of all new car models, but Apple's brand power makes that unlikely. As soon as a few major manufacturers ditch Apple, enough competitors will smell blood and seize the opportunity to differentiate themselves by riding Apple's ecosystem coattails. Even if it comes at the cost of post-sale recurring revenue.
Also good to see Ford PR take the easy layup. As a lifelong Ford customer, the only thing that'd make me ditch them now is if they dropped support for Apple stuff (in general, their CarPlay implementations have been industry-leading):
We continue to offer Apple Carplay and Android Auto because customers love the capability that enables easy access and control of their smartphone apps, especially our EV customers.
I asked Bing Chat to: "Write a blog post in the style of Justin Searls about why React was a mistake."
Its response, which I threw up in a gist, was better than I expected.
This, indeed, sounds pretty close to something I'd type in a first draft:
Secondly, components are not a good fit for humans. Humans are not good at managing complexity, especially when it comes to code. Components add complexity by creating more moving parts, more dependencies, and more sources of truth. Components also add complexity by creating more cognitive load, more mental models, and more context switches.
By asking AI to write something in my own style, I can spot the tool's weaknesses a little better. Normally we say something is "rough around the edges", but in the case of LLMs, the edges are the only part they typically nail. It's the warm gooey center of each sentence that needs work.
I was telling my friend Ken last night that GPT-4 produces "incredibly sentencey sentences", which is great. It's one of the things I most want from a sentence. But it can lull us into thinking the sentences really say anything. There just isn't much meat on the bone. It's all hand-wavey filler for the most part.
That said, this sounds like exactly something I'd write:
Embrace the web as it is. Don't try to reinvent the wheel with components. Use HTML elements as your building blocks. Use CSS rules as your styling system. Use JavaScript functions as your logic units.
From the excellent GQ profile posted a couple weeks ago:
"We try to get people tools in order to help them put the phone down," Cook says, gently. "Because my philosophy is, if you're looking at the phone more than you're looking in somebody's eyes, you're doing the wrong thing. So we do things like Screen Time. I don't know about you, but I pretty religiously look at my report."
I have a young child who is, perhaps predictably, obsessed with my phone—he chases it around the room. When I share this with Cook, he nods with something between recognition and reproach. "Kids are born digital, they're digital kids now," Cook says. "And it is, I think, really important to set some hard rails around it. We make technology to empower people to be able to do things they couldn't do, to create things they couldn't create, to learn things they couldn't learn. And I mean, that's really what drives us. We don't want people using our phones too much. We're not incentivized for that. We don't want that. We provide tools so people don't do that."
It's hard to take Cook seriously about this in the context of a two-sentence statement in a keynote video, but this reads as believable.
I still think about my first dinner out after buying the original iPhone in 2007. Becky and I went to P.F. Chang's and I sat there helplessly trying to load articles over AT&T's garbage Edge network, ignoring my (phoneless) spouse. She made her dissatisfaction known, and we've been pretty firm about not using phones around each other ever since.
The fact that it was "smart" isn't what made the iPhone so addictive; it's that it was also nice to use. And that combo is why it has become so dangerous when used thoughtlessly.
I'm grateful for those early experiences where I was the only one with a (recognizable) smartphone in a space. Once, walking down an airplane aisle, I remember being stopped by four or five passengers asking if "that" was an iPhone. Because other people saw me glued to my phone and judged me for how unnatural it seemed, the truth got baked into my head: this is unnatural, and it needs to be handled with the same level of care as any foreign object that human evolution couldn't have prepared us for.
Apple's tools around Screen Time, Focus modes, notifications, etc., are a confusing mess. That they exist at all is, I suppose, a blessing owed to Cook's stated beliefs above. But for Apple to escape culpability in fostering information addiction in half the world's population, they need to do more than provide a hodgepodge of arcane configuration options. They need to make it as easy to be empowered without distraction as they strive to make their other experiences feel seamless. It feels really natural to pair AirPods to an iPhone; it should feel that straightforward to configure settings that establish the phone works for the user, as opposed to the other way around.
This is actually one way in which the dystopian film Her gives me a bit of hope. If an on-device AI built on a large language model can be made to act as a personal assistant, it could run interference on the user's behalf: dismissing some notifications, acting on others, and reminding users when their on-device activities are at odds with their stated goals or even a healthy mental state.
Putting AI to work on behalf of the user is a surprisingly achievable thing, and Apple is well-positioned to do it in a way it never was to compete with the surveillance capitalism of ad-based social networks. This is as much the next frontier as AR/VR is, and it's worth more attention than I suspect it's getting.
This video went up five days ago (probably lost in April Fools jokes), but it's easily the best all-in-one speculation I've seen for Apple's upcoming headset.
Two thoughts:
- As someone who's owned six VR headsets, I think Apple is absolutely right to focus on weight above almost all other considerations. Weight is the biggest inhibitor to using a headset for longer than 15 minutes, and no headset needs to exist if it's only going to be used for short sessions.
- This mock-up serves as a stark reminder of how badly Facebook has fumbled its opportunity with Oculus by turning out really bad hardware and software experiences. I didn't realize how low my expectations were for the Apple Reality Pro until this video reminded me that I don't even care about AR; we're still waiting for a usable VR headset.
Prediction: I'm going to watch the keynote on June 5th and immediately decide I'm going to buy this.
The logout button seems to have been rendered practically defunct. I only purposefully sign out of certain accounts when I'm trying to curb my usage of a site or app (usually it's Twitter or Amazon). Even then, that process isn't always straightforward.
This has gotten so bad in recent years that it's probably fair to call it a dark pattern. Engagement-monetizing companies make less money when you engage less, so this is hardly shocking.
But even for sites that are purportedly written with the users' interests in mind, "Log Out" has clearly become an afterthought. Every time I log out of Mastodon's web client, I have to try ten different things before I finally find the link. Neat stuff.
I wrote a blog post earlier this week to serve as both a way to make sense of all the tech layoffs we're experiencing and a lesson that a lot of engineering leaders ought to learn from them.
A point that can get lost here is that more developers lead to more idle capacity, which leads to building more. This sounds good, but because maintenance costs increase with complexity at a superlinear rate, it often means that the best thing to do to a code base is "nothing".
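One back-of-the-napkin way to see why that growth is superlinear: every module you add can potentially interact with every module already there, so the number of interactions to reason about grows roughly quadratically. A tiny sketch (my framing, not from the post):

```python
# Potential pairwise interactions among n modules: n * (n - 1) / 2.
# Doubling the system roughly quadruples what there is to maintain.
def interactions(n_modules: int) -> int:
    return n_modules * (n_modules - 1) // 2

for n in (10, 20, 40):
    print(n, interactions(n))  # 10 -> 45, 20 -> 190, 40 -> 780
```

Which is how a team that "only" doubled its codebase can find its maintenance burden quadrupled.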
Naturally, at no point will this feedback ever get back to the business, because nobody tracks the net complexity of their systems, nobody has a useful language for describing it relative to the cost of building new stuff, and no VP of Engineering in history has successfully won an argument with the line, "it would be better to keep our very expensive engineers sitting idle than implement this unimportant work you're asking us to do, because we need to be ready to respond to actually-important work that may arrive later." (There's a reason I'm not a VP of Engineering, if you're wondering.)
Evocative of Upton Sinclair's quip, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."