A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)
This video gives a pretty good perspective on what gaming-oriented VR enthusiasts think of Meta's Quest 3 headset. Reading the coverage of their event, it seems like Zuck and company are embracing the device as more of a game console than a metaverse client.
This seems smart, because more people want immersive VR games than want a metaverse client.
But it's really interesting to consider the Quest 3 rollout with knowledge of how Apple carefully framed the Vision Pro as being built for computing, going well out of their way to never use the phrase "virtual reality" and treating gaming as a total afterthought. In fact, visionOS doesn't allow for immersive roomscale apps, which dramatically decreases the number of games that could be ported to it.
What consumers want from VR, so far: games.
What Meta is giving them: games (with the hope that they will convert people into social metaverse experiences).
What Apple is giving them: a computing platform (with an express deemphasis on gaming, to the point of kneecapping its capabilities).
Watching this play out will be a great test of Apple's ability to "skate where the puck is going". I suspect later revisions will allow for better game capabilities but (like the rest of their platforms) never enough to excite gamers. Meta, meanwhile, happily lets gamers use their sold-at-a-loss headset as a wireless or wired dumb terminal connected to a more powerful gaming PC.
My 2¢:
- Meta will indeed capture a lot of market share, but (similar to the Wii) most devices will play a couple games and then collect dust. And the people who buy Quest units won't be the same people who want the experiences Meta really wants to offer. The most likely case seems to be that they'll start to look more and more like a traditional game console platform holder like Sony or Microsoft. And, unlike their current business, that's at least an honest one that can turn a tidy profit without selling out their customers.
- Apple Vision Pro will sell in such low volumes for the first several years that we'll keep hearing premature obituaries from the media until, 7 or 8 years from now, Apple is able to break through with a mainstream computing platform by sheer force of will.
If you ever had to settle for "Sudafed PE" instead of "Actually works Sudafed", you will want to know about the FDA's vote validating everyone's perception that phenylephrine is useless as a decongestant:
Advisers for the Food and Drug Administration this week voted unanimously, 16 to 0, that oral doses of phenylephrine—found in brand-name products like Sudafed PE, Benadryl Allergy Plus Congestion, Mucinex Sinus-Max, and Nyquil Severe Cold & Flu—are not effective at treating a stuffy nose.
I got sick in Greece and noticed straight away that low-dose pseudoephedrine is widely available over the counter, without any ID or registration requirements. This ruling makes one wonder how the industry will respond when the only decongestant that actually works sells in low volume because it's kept behind the counter to prevent people from using it to cook meth.
Great reader e-mail over at TPM:
Musk's behavior has been atrocious. But he shouldn't have been allowed to be in that position in the first place. That's on the Pentagon and the US government more generally. In the first rush of enthusiasm and support for Ukraine, Musk shipped a bunch of free Starlink devices to Ukraine and agreed to cover the cost of the service. Later when he cooled on Ukraine he started threatening to shut the service off if the Pentagon didn't pick up the tab. That's standard mercurial behavior from Musk. But of course the Pentagon and more broadly the US should be picking up the tab. Much as I loathe the person Musk has turned out to be, I remember thinking at the time, how can this even be a question? Of course they should pick up the tab. The idea that we'd leave it to the whim of someone like Musk to be covering the cost of mission-critical technology for an ally at war is crazy.
This was my thought at the time: that Musk's offer to blanket Ukraine with Starlink satellites and terminals was "free as in blackmail", especially if it bypassed (what would surely have been expedited) defense procurement processes that would have mandated their availability and security.
Now Musk has half the US government over a barrel, with no real way out until credible competitors to SpaceX emerge.
DeSantis's campaign manager, emphasis mine:
"Iowa is a real state for us because of its education — it's a highly educated state — because of income, because of Bible reading," said Jeff Roe, in audio obtained by POLITICO. "New Hampshire is a terrible state for Donald Trump. That's a terrible state for him. He's gonna get like 28 percent. Now there is more people who will have a slice of that and some people are just betting on New Hampshire overall. But he's going to lose the first two states. We're going to beat him in Iowa."
When you're a conservative who doesn't believe in school and for whom universities are bastions of woke ideology, the two hallmarks of the highly educated are apparently income and Bible reading. TIL.
This essay almost exactly mirrors my feelings about the AI innovations we've seen spring forth over the last year. If you know me at all, you know that I've made my career by sticking my head in the sand and ignoring new technology trends in favor of following the market's fundamentals, even when I'm very excited about those innovations personally. (It's why I'm still making full-stack web apps instead of building everything with native Apple SDKs.)
That said, I've been dismayed to see so many of my friends who reside along the same pessimistic-bordering-on-cynical gradient continue to be stubbornly dismissive of AI as just another fad. This isn't crypto. The real-world economic impact of only the most basic tools has already been profound, and humans are nowhere close to catching up with its implications for a huge swath of jobs in the knowledge economy.
Sure. Few claim that LLMs possess human-like intelligence, "think" like humans, or exhibit self-awareness. Then again, there are also schools of thought that argue that humans are also just glorified response generators. Regardless of philosophical dispositions, it is important to note that a large number of white collar jobs on top of which economies are built involve just reasonable amounts of comprehension, bounded decision making, and text generation—paper pushing, "code monkeying" and whatnot. Really though, probabilistic text generation that maintains context in numerous computer and human languages while being meaningful, useful, and conversational, at the same time exhibiting at least an illusion of reason, in what world is that a trivial achievement that ought to be dismissed with a "just"!? Those goal posts have shifted hilariously fast.
Ever since my post about AI and jobs in March, I have felt my take was overly optimistic. The obvious limitations of the tools we see today (e.g. LLM hallucination) do indeed limit the practical application of AI, but the potential for composability to address these concerns is sobering (e.g. a supervisory model that audits an LLM's answers and corrects its hallucinations) and should distress anyone who would prefer that AI didn't devour the middle-class economy.
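To make the composability point concrete, here is a minimal sketch of what such a supervisory arrangement might look like. The `generate` and `audit` callables are hypothetical stand-ins for calls to two separately-prompted models, not any particular vendor's API:

```python
from typing import Callable, Optional

def answer_with_supervision(
    prompt: str,
    generate: Callable[[str], str],     # hypothetical: call to the primary LLM
    audit: Callable[[str, str], bool],  # hypothetical: supervisory LLM that vets the answer
    max_attempts: int = 3,
) -> Optional[str]:
    """Return an answer only if a second, supervisory model signs off on it."""
    for _ in range(max_attempts):
        candidate = generate(prompt)
        if audit(prompt, candidate):
            return candidate  # the auditor found no apparent hallucination
    return None  # better to return nothing than an unvetted answer
```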
The rumors in the run-up to the iPhone 15 have been particularly maddening when it comes to the inevitable switch to the USB-C connector.
Speculation therefore remains rife about the USB-C port capabilities of the iPhone 15 lineup, and nothing is certain beyond the switch from Lightning. Rumors suggest the cables supplied in iPhone 15 boxes are limited to USB 2.0 data transfer speeds at a rate of 480 Mbps, which is the same as Lightning.
In contrast, the iPhone 15 Pro models are expected to be supplied with cables capable of USB 3.2 or Thunderbolt 3 transfer speeds.
Here's what I would have predicted a year ago: iPhone 15 would get a USB-C connector with USB data speeds and fast-ish charging speeds, and the Pro models would get a Thunderbolt/USB 4 port with typical transfer speeds and slightly faster-than-the-15-but-nothing-like-a-MacBook-fast charging speeds.
Simple, straightforward, consistent with other products as well as Apple's strategy of widening the segmentation between the phone lines.
If that's what you'd been expecting, the rumors have confirmed it at every step. But as far as I can tell, everyone reporting on Apple rumors seems to be befuddled. Last week it was Thunderbolt 3, which has effectively been discontinued across the rest of Apple's line. I realize they need to work to put clicks on the table like everyone else, but they almost seem intentionally dense on this one.
I got a chance to sit down with the Changelog crew for a second time this year. This time it was to discuss the provocative blog post I wrote last month. It also featured my colleague Landon, who got to represent The Youth in tech today.
It was a great discussion. It's not technical and doesn't require much prior context to follow. I'm also always blown away by how much better the Changelog's audio engineering is than any other programming podcast I've heard. Anyone who can make me able to listen to myself is working some kind of magic.
If you use Apple Podcasts (as I do), here's the direct link.
A friend mentioned the Sumitomo Corporation yesterday and it prompted a conversation about Japanese-style conglomerates and how they've gone out of fashion in the West.
One thing that never came into fashion in the West? Educational comic books about your conglomerate that you link off your corporate home page.
132 pages! Things really start cooking about 12 pages in.
Incredibly relatable content in this provocative post by Thorsten Ball:
Many times in your day-to-day programming life you have to wait. Wait for your development environment to boot up, wait for the formatting-on-save command to finish, wait for the website you just opened to load, wait for tests to run, wait for the CI build to finish.
The waiting doesn't really cause me physical pain, but it does evoke a physical reaction alright. I just can't stand it. Maybe it's because I'm impatient by nature. Maybe it's knowing that things could be faster that causes it. When I have to wait ten seconds for a test to finish that I plan to run many times over the next hour, I tell you, it feels as if I'm about to lose my mind.
"Impatience" has been considered a virtue in software for literal decades, because nearly every single action a programmer redounds to a call-and-response with a computer that can't be considered complete until the computer has delivered its result and a human has interpreted it.
Imagine yourself texting with someone. If the other party replies quickly, it will promote focus and secure your attention—you'll stare at your phone and reply promptly as well. If many seconds or minutes go by between responses, however, you'll rationally lose interest, go do something else, and return to the conversation whenever you happen to come back to it. Most importantly, a fast-paced chat results in many more total messages exchanged than a slow-paced conversation, because time is stubbornly finite.
No one has any problem conceptualizing the above, but perhaps because we tend not to conceive of programming as a two-way conversation between a human and a computer, developers often lack a keen sense of this issue's salience.
I should see more people wince when a website takes longer than two seconds to load. There are very few reasons most websites should take long to load. Yet many times when, together with colleagues, I'd watch a website that we built load for longer than two seconds and say "something's off, I bet there's an N+1 query here" and turn out to be right – nobody else noticed anything.
Over the course of my career, very few programmers have seemed as constitutionally impatient as I am with slow computer responses. I've only become more radical in my impatience over time, as my understanding of programming as a two-way "conversation" has deepened.
Here's one way to think about it.
The upper bound of a programmer's productivity is set by the speed, fidelity, and correctness of the answers they're able to extract from each "feedback loop" they complete with a computer:
- Speed: if your page is slow to load, you can't refresh it as many times in a given working session, so you can't iterate on it quickly
- Fidelity: if you run a command that pulls down far too much or far too little information to answer your question, you'll spend additional time parsing and interpreting its results
- Correctness: if you have the wrong question in mind, you'll run the wrong commands, and you'll probably also waste feedback cycles to ask the wrong follow-up questions, too
I wrote a click-bait title referencing 10x developers a couple weeks ago. That post was careful to minimize value judgments and to avoid venturing into offering advice. Well, if you want some advice, here you go: take to heart the compounding nature of the effects that feedback loops have on productivity, and you'll set yourself apart as a programmer.
To illustrate, compare the potential productivity of two programmers, given a script (sketched below) that computes an upper bound on the feedback loops each can complete in the 480 minutes that comprise an 8-hour workday.
- Programmer A completes one feedback loop with their computer every 45 seconds. 1 in 10 of their commands ask the wrong question and result in the next 5 questions also being wrong. 1 in 3 of their commands produce low-fidelity results that take 5 minutes to interpret. They complete 85 productive feedback loops per day.
- Programmer B completes one feedback loop with their computer every 15 seconds. 1 in 25 of their commands ask the wrong question and result in the next 3 questions also being wrong. 1 in 10 of their commands produce low-fidelity results that take 2 minutes to interpret. They complete 902 productive feedback loops per day.
85 vs 902. There you go, a 10x difference in productivity.
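Here's a minimal sketch, in Python, of the kind of script I mean. It models the probabilities as deterministic counters ("1 in 10" fires on every 10th loop) and lets a loop begun before quitting time finish; different modeling choices would nudge the totals, but not the order of magnitude:

```python
# Back-of-the-envelope simulation of the two programmers above.
WORKDAY_SECONDS = 480 * 60  # an 8-hour workday

def productive_loops(loop_seconds, wrong_every, wrong_chain, lofi_every, lofi_minutes):
    elapsed = 0
    loop = 0
    wasted_until = 0  # loops at or below this index are spoiled by a wrong question
    productive = 0
    while elapsed < WORKDAY_SECONDS:  # a loop begun before quitting time may finish
        loop += 1
        cost = loop_seconds
        if loop % lofi_every == 0:
            cost += lofi_minutes * 60  # low-fidelity result takes extra time to interpret
        elapsed += cost
        if loop % wrong_every == 0:
            wasted_until = loop + wrong_chain  # this question and the next N are wrong
        if loop > wasted_until:
            productive += 1
    return productive

# Programmer A: 45s loops; 1 in 10 wrong (next 5 also wrong); 1 in 3 low-fidelity (5 min)
print(productive_loops(45, 10, 5, 3, 5))   # => 85
# Programmer B: 15s loops; 1 in 25 wrong (next 3 also wrong); 1 in 10 low-fidelity (2 min)
print(productive_loops(15, 25, 3, 10, 2))  # => 902
```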
It would be very fair to quibble over which numbers to measure, whether the numbers I chose are feasible, and so forth. This is only meant to illustrate that the difference between waiting a few hundred milliseconds versus a few seconds versus multiple minutes really adds up, especially when you factor in that expertise and focus can be learned and practiced to get better at asking the right questions and maintaining a clear mindset.
Something else this silly script doesn't capture is the human element of what it feels like to be constantly waiting. Beyond a certain point, people will develop habits of tabbing away to more responsive user interfaces like Slack or social media, resulting in minutes lost to distraction and minutes more to attention residue. There are also reinforcing social effects of working in an organization where these phenomena are normalized that people rarely consider.
If I could go back and change one thing about how I learned to program, it would have been to emphasize the importance of internalizing this lesson and seizing control of the feedback loop between myself and my computer. I've been beating this drum for a while (and it was the primary thrust of my RailsConf 2017 keynote), but it still doesn't feel like the industry is getting any closer to acknowledging its importance or applying it to how we teach people programming, manage programmers, and design systems.
Perks of working at Google in 2007:
"Let me pull this up because there are so many," he says. When his computer produces a list a moment later, Kallayil makes his way down the screen and continues: "The free gourmet food, because that's a daily necessity. Breakfast, lunch and dinner I eat at Google. The next one is the fitness center, the 24-hour gym with weights. And there are yoga classes."
There is a pause before he adds that he also enjoys the speaker series, the in-house doctor, the nutritionist, the dry cleaners and the massage service. He has not used the personal trainer, the swimming pool and the spa — at least not yet, anyway. Nor has he commuted to and from the office on the high-tech, wi-fi equipped, bio-diesel shuttle bus that Google provides for employees, but that is only because he lives nearby and can drive without worrying about a long commute.
Let's check in on how 2023's going:
Being banned from the entire Internet would be tough, but Googlers in the high-security program will still get access to "Google-owned websites," which is actually quite a bit of the Internet. Google Search would be useless, but you could probably live a pretty good Internet life, writing documents, sending emails, taking notes, chatting with people, and watching YouTube.
Somewhere along the way—even by the time I visited the Googleplex in 2007—Google lost their way on this. Employment policies that promote people's autonomy and agency can pay companies like Google massive dividends in increased creativity, productivity, and loyalty. But perks that attempt to squeeze blood from the stone by making the office feel more luxurious than home always obfuscate the nature of the work itself and will inevitably distract everyone involved—a recipe for resentment between workers and management the minute the going gets tough.
Now that the going's gotten tough, it's too bad Google could never tell the difference between the two.
Maybe Sorkin has a "The Neural Network" treatment left in him.
Stability AI is being sued by a co-founder, who claims he was deceived into selling his 15% stake in one of the hottest startups in the sector for $100 to CEO Emad Mostaque, months before the company raised millions at a $1 billion valuation.
Should've run the contract terms through ChatGPT for a summary first.
Gotta appreciate ingenuity on the platforms when you see it. Because this Amazon Seller's name is "Shopping cart", what a user sees before adding an item to their cart is:
Ships from Amazon
Sold by Shopping cart
Almost got me.
This is worth a read. If you've been harboring any illusions that machine learning and AI are cleanroom scientific breakthroughs, this should dispel them.
There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don't get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.
If you sit with the thought that AI models are only valuable when they're fed painstaking and voluminous feedback from poorly-paid workers, the associated "intelligence" begins to evoke thoughts of the mechanical Turk (the one from history, not the Amazon product).
Wow they're really not kidding around with that Last of Us haunted house Universal announced yesterday.
TODO: insert "when did they start?" joke here.
It's better to fix the root cause of software problems, but seeing as Bethesda has chosen to continue using their in-house, 26-year-old Gamebryo engine—which is famous for literally nothing other than its signature "Bethesda jank" flavor of bugginess—I guess Phil Spencer and Microsoft have made the calculated decision to send wave after wave of QA employees until the NPCs reach their internal jank limit.
You can either read (NYT Paywall) or listen (in a brisk, 18-minute podcast rendition), but whichever you choose, this piece feels like a triumphant synthesis of several ideas Ezra Klein has been slow-cooking over hours upon hours of interviews on the various podcasts he's hosted over the last decade. If you have an Internet connection or participate in an economy, I strongly recommend you read and reflect on this column.
Many of the themes Ezra hits on are things that I've felt compelled to write about here, even back in the mid-2010s when I only blogged semi-annually. Like the "mysterious" reason that productivity is flat despite so many breakthroughs in information technology. Or my various exhortations that the best thing Apple could do for humanity is help users save themselves from notification hell. And more recently, the kinds of work that AI will both replace and create for us.
Anyway, Ezra's diagnosis is bang on and I'm in such violent agreement with his take here that I struggle to even imagine a counterargument. It seems to me the reason why such a profound and clear truth can fail to take the world by storm is that these are mostly systemic issues that represent collective action problems. After all, if your boss wants you to reply to e-mails within 5 minutes, or if your coworkers will judge you for not being a green bubble in Slack all day, what can you really do to reclaim your individual ability to focus?
Still, we each probably have more control over how much of our focus we cede to technology than we admit to ourselves. Always worth taking a moment to think about.
This may be a humble forum post, but it's a great example of the sort of entertainment that:
- Would be greatly enhanced by immersive visuals and audio
- Would leverage Apple's dominant position in the music industry
- Hasn't really been done before
Apple gave up on trying to host in-person iTunes/Apple Music Festivals, but they never really made sense for any other reason than strengthening ties with the recording industry. Something like this makes more sense. More intimate. Easier to manage. Cross-sells their headset by leveraging their platforms and services in a way "only Apple could do".
Neat idea. Wouldn't be surprised to hear something like this tomorrow.
Becky and I often debate what I'll really be like in retirement. How I picture myself in retirement sure sounds a lot like how it seems to be going for John Calhoun.
Maybe it's because this came out of a reputable institution like Stanford, but this project feels wildly irresponsible. Not only because OpenAI's language models frequently hallucinate complete nonsense, but because the app uploads your entire aggregate health data to OpenAI.
Apple went to pretty absurd lengths to keep individuals' health data private and secure. While various apps that integrate with Apple Health access more data than they need and surely phone too much of it home to their application servers, the idea of just blasting all your health data carte blanche at OpenAI seems… bad, maybe?
This disclaimer doesn't inspire much confidence:
HealthGPT is provided for general informational purposes only and is not intended as a substitute for professional medical advice, diagnosis, or treatment. Large language models, such as those provided by OpenAI, are known to hallucinate and at times return false information. The use of HealthGPT is at your own risk. Always consult a qualified healthcare provider for personalized advice regarding your health and well-being. Aggregated HealthKit data for the past 14 days will be uploaded to OpenAI. Please refer to the OpenAI privacy policy for more information.
Hopefully the fact that you have to jump through a bunch of hoops to build and install this thing means few people will use it and nobody will end up hurting themselves, but if it were in the App Store, it's hard to imagine this not leading to a lot of really bad health outcomes. (And that's the best-case scenario—imagine if OpenAI one day changes their privacy policy and starts selling a version of the language model to insurance underwriters that's tuned with user-submitted stuff like this.)