justin.searls.co

MacRumors, citing a Kuo report:

Apple is planning to produce between 60,000 and 80,000 Vision Pro units for the February 2 launch of the device, according to analyst Ming-Chi Kuo. With the small size of the shipment, Kuo believes that the Vision Pro will "sell out soon after the release."

Relative to Apple's current scale (churning out hundreds of iPhones per minute), a launch this small would be downright artisanal.

I read this post after seeing the news that the purchase process will indeed require a face scan and any prescription lens selection (though, mercifully, one can wait to upload a scan of a prescription until after checkout). Suddenly, it became clear that manically force-relaunching the Apple Store app at 8 AM on January 19th might not be enough to ensure a launch day delivery date.

Or would it?

Lately whenever I've found myself uselessly fretting about something, I've asked ChatGPT 4. So let's see what it has to say in response to this query:

Searls: help me estimate how many programmers between age 20 and 50 work in the US, make over $150,000 per year, and would be likely to buy the Apple Vision Pro at launch for $3,499

Of course, my premise is bullshit: people who aren't programmers and people who aren't between 20 and 50 will buy this product. But given that it'll be pretty hard to estimate what proportion of humans writ large will want to fork over $3,499 (or more), narrowing the demographic seemed necessary.

ChatGPT's response was characteristically humorous:

ChatGPT: Estimating the number of programmers in the U.S. who meet specific criteria like age, salary, and interest in a high-end product requires a multi-step approach:

  1. Identify Total Programmers in the U.S.: We need to determine the total number of programmers or software developers in the U.S.

  2. Age Filter (20-50 years): Estimate the proportion of these professionals who are between the ages of 20 and 50.

  3. Salary Filter (>$150,000/year): Identify what percentage of these individuals earn over $150,000 per year.

  4. Likelihood to Purchase High-End Product: Estimate the likelihood of this demographic purchasing a high-end product like the Apple Vision Pro at launch.

Let's start with step one and use available data to estimate the total number of programmers in the U.S. We'll use data from sources like the U.S. Bureau of Labor Statistics (BLS) or similar reports.

After that, we can apply demographic and income filters based on typical distributions in the tech industry. Finally, we'll estimate the purchasing likelihood based on consumer behavior patterns in similar income brackets and interest in technology products.

I'll begin by finding the total number of programmers or software developers in the U.S.

Based on a placeholder number for illustrative purposes, let's assume there are approximately 4,000,000 programmers or software developers in the U.S. Now, we'll proceed to the next steps:

  1. Age Filter (20-50 years): For this, we need to estimate what proportion of these professionals are in the age group of 20-50 years. Let's assume, based on typical age distributions in the tech industry, that about 70% of programmers fall within this age group.

  2. Salary Filter (>$150,000/year): Next, we'll estimate what percentage of these individuals earn over $150,000 per year. This might vary widely, but for the sake of estimation, let's assume that 20% of programmers in this age group earn over $150,000 per year.

  3. Likelihood to Purchase High-End Product: Finally, we'll estimate the likelihood of this demographic purchasing a high-end product like the Apple Vision Pro at launch. This is quite speculative, but for the sake of estimation, let's assume 10% of the high-earning programmers are interested in purchasing such a product at launch.

Let's calculate these estimates.

Based on these estimates:

  • Approximately 70% of programmers are aged between 20 and 50.
  • About 20% of these programmers earn over $150,000 per year.
  • And roughly 10% of this high-earning group might be interested in purchasing a high-end product like the Apple Vision Pro at launch.

The estimated number of programmers in the U.S. who fit all these criteria and would likely buy the Apple Vision Pro at launch is around 56,000.

Please note that these figures are based on assumptions and should be used as a rough estimate. Actual numbers may vary based on more precise data and research.
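For what it's worth, the arithmetic checks out. The whole estimate reduces to a one-off Ruby calculation (every figure below is one of ChatGPT's assumptions, not a real statistic):

```ruby
# Fermi estimate from the chat above; all inputs are ChatGPT's assumptions
programmers   = 4_000_000 # placeholder total of US programmers
aged_20_to_50 = 0.70      # assumed share aged 20-50
over_150k     = 0.20      # assumed share earning over $150,000/year
would_buy     = 0.10      # assumed share buying a Vision Pro at launch

estimate = (programmers * aged_20_to_50 * over_150k * would_buy).round
puts estimate # => 56000
```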

Incidentally, 60,000-80,000 might be close to the actual launch day market for this thing in the US! Unfortunately, GPT failed to account for the prevalence of scalpers. This makes me wonder whether the face-scanning step was added not only to improve customer experience (recall the Solo loop band fiasco based on shoddy in-app sizing guidelines), but also to slow scalpers down and to reduce the fungibility of the Vision Pro by obfuscating which size light shield someone is actually buying.

Of course, if Apple really wanted to discourage scalpers, they would lock the device to the iCloud account of whoever purchased it for some period of time. But since that would cause significant customer support headaches, it's unlikely Apple would ever bother.

MacRumors gives a rundown of how Vision Pro can interface with your Mac based on what the keynote video indicated. I sometimes forget that I've been scrutinizing Apple keynote videos since 2001 (back when they would talk at length about things like the number of pipeline stages in PowerPC CPUs relative to Intel's) and that not everyone has developed the same eagle eye.

In summary, here's what you apparently can do with a Mac from your Vision Pro:

  • Control the Mac's main display in a single 4K-resolution screensharing window called Mac Virtual Display (this is almost certainly why macOS Sonoma added High Performance screen sharing)
  • Connect Bluetooth accessories like mice, trackpads, and keyboards to Vision Pro

Here's what you probably can't do:

  • Screen share to any Mac over the Internet using the built-in screen-sharing feature (doing so will probably require a third-party VNC-like app like Screens, as Apple will likely keep things simple by requiring being on the same local network and leveraging Bonjour and peer-to-peer wifi)

Here's what you almost certainly won't be able to do:

  • Control Intel Macs (much less non-Macs)
  • Control multiple Macs at once
  • Bring up multiple screens on a Mac
  • Run at resolutions higher than 4K (surely a bandwidth limit of high-performance screen sharing, which one presumes sports lower latency and a higher refresh rate)
  • Break apps out of Mac Virtual Display to be organized in space (similar to how Parallels allowed Windows apps to be run as free-floating Mac windows circa 2007 with the advent of x86 virtualization in Mac OS X)

Important to get expectations sorted before pre-orders. I'm already trying to think through ways I could conceivably travel exclusively with a Vision Pro and without an accompanying Mac. The 13" M2 MacBook Air has always felt way larger and heavier than it should be, and it definitely would be for this purpose. If there were ever a time for Apple to resuscitate the 12" MacBook, it would be now. Maybe I'll settle for a VNC app to remotely connect to a Mac in the cloud? It'd be great if someone could build an Apple TV-sized puck that could run macOS for no other purpose but to be a remote display for Vision Pro.

Of course the real solution would be for a future version of the Vision Pro hardware to support virtualization of macOS… but I'm not going to hold my breath.


v1 - Initial Commit

Breaking Change

I finally gave up and recorded a podcast. And now I'm writing show notes. I tried not to let this happen but here I am typing this and it's happening. Forgive me.

This is a podcast for anybody who shares my interests and likes passing the time with a friendly, opinionated voice in their head. I have done thousands of hours of chores and errands over the years listening to mostly agreeable, sometimes interesting 1.5-3 hour podcasts to keep my mind busy. Now it's time for me to give back, I guess.

Here's the deal: e-mail me at podcast@searls.co and I'll talk about whatever you want me to talk about. Write whatever you want into that e-mail body: ask a question about video games, ask for my take on a new technology, ask for relationship advice, tell me my podcast sucks. You write it, I'll read it.

Okay, here's what I covered in version one of Breaking Change:

Show those show notes…

If you're a Ruby developer and you've never been to RubyKaigi, the 2024 edition from May 15-17 in Okinawa would be a fantastic place to start. Despite being nearly thirty years old, Ruby is still ascendant in Japan—inspiring a diverse and enthusiastic community of Rubyists who are passionate about pushing the language further. Kaigi has played an important role in that success, gradually picking up steam over the last ten years as it travels between many of the best cities in Japan (owing largely to the organizers' impeccable taste in food and drink).

If you've ever considered coming to Kaigi but hesitated out of concern about Japanese language ability, transportation complexity, unfamiliar customs, or whether you'll like the food, Okinawa offers a uniquely comfortable on-ramp for first-time visitors:

  • Okinawa's nature and beaches make it one of the most beautiful places in Japan, with the best weather in the nation in May—tropical and sunny, interspersed with brief heat-breaking rain showers
  • Despite the laid-back island atmosphere, English is as widely understood and spoken in Okinawa as anywhere else in Japan (one silver lining of the continuous presence of the American military). Additionally, RubyKaigi's organizers try to schedule at least one English-language talk for every time slot and pay for live translation services into English for all Japanese-language talks
  • While all of Japan is incredibly welcoming to foreigners, Okinawa is one of the few places in Japan designed to be a tourist destination—easy to get around and with the amenities people expect to see on vacation, while refreshingly free of the congestion and strain tourists' presence has placed on other cities like Osaka and Kyoto
  • It's always been my perspective that to experience the full range of Japanese culture, one needs to escape the urban sprawl of Tokyo and visit more rural areas. Okinawa is unique in that it's every bit as accommodating to people unfamiliar with Japanese customs as Tokyo is, while still offering a broad and authentic view into Japanese culture and tradition
  • If you're not sure whether you'll like Japanese food, Okinawa is such a melting pot that it's become home to a fusion of American and Asian cuisine that should satisfy any palate. When in doubt, there's always taco rice

For more information on what it's like to attend Kaigi, check out the field report I filed for Test Double last year in Matsumoto. It was easily the most educational and fun conference I've attended since the pandemic, with genuinely useful information about still-in-development Ruby features I wouldn't have found anywhere else and absolutely fantastic official and unofficial activities and parties. I'm sure this year will be no different! Tickets aren't on sale yet, but you can sign up to be notified when they go live.

I'm not planning to speak this year, but you should consider submitting to the CFP before it closes at the end of January. RubyKaigi's program is geared for giving open source Ruby programmers a platform to share their work along with their experiences and insights. In addition to straightforward presentations of new gems and language features, the conference also celebrates the joy of programming by highlighting creative and quirky talks like Shioi-san's Ruby fork to add an oft-requested i++ operator and @tompng's increasingly fantastical quines. If you write open source Ruby, I hope you'll consider submitting a talk! (And if you'd like some free advice on responding to CFPs, I published a video about speaking last month you might enjoy.)

Anyway, if you're thinking of going, drop me a line if you have any questions or concerns and I'll be happy to help. Hope to see you in Okinawa this May! 🏝️

Ruby's REPL, IRB, has gotten a ton of love over the past few years, essentially obviating the most popular debuggers in the process. This year is another huge leap forward.

This is my favorite item in this list because I didn't even know show_source existed:

The show_source command has always been an essential tool for many IRB users, especially for debugging. This year it received two enhancements that will make it even more useful:

  • You can now use -s to get the method's super definition if it has one
  • It can now display private methods too

Awesome!

Having a great REPL is essential in dynamic languages, so quality-of-life improvements have a direct impact on the speed and quality of feedback during development.
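For the curious, the super-definition lookup that `show_source -s` performs inside IRB maps onto Ruby's ordinary reflection API. A minimal sketch (the Animal/Dog classes are invented for illustration):

```ruby
class Animal
  def speak
    "..."
  end
end

class Dog < Animal
  def speak
    "Woof!"
  end
end

# Inside IRB, `show_source Dog#speak -s` would display Animal#speak's source.
# Outside IRB, UnboundMethod#super_method exposes the same lookup:
super_def = Dog.instance_method(:speak).super_method
puts super_def.owner              # => Animal
puts super_def.bind(Dog.new).call # => ...
```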

Rounding out what feels like six months of contemplative and retrospective blog posts over at the Test Double blog is this one on my own idiosyncratic approach to finding meaning in my work and how I wish more managers would approach their direct reports' career development.

To wit, I've never accomplished anything I felt proud of by setting a goal. In fact, the surest way to ensure I don't do something is to set a goal. When asked to set goals for myself, I've found that expressing the goal (as opposed to achieving it) becomes my overriding objective. The moment a manager approved my list of goals, I felt that I had completed the work asked of me and I would instantly lose all motivation to pursue the goals themselves.

This explains why planting flags can succeed where goal-setting fails. If what I'm searching for is meaning in my work, setting a goal creates an expectation of where, when, and how my future self should find that meaning. High pressure. Focusing on doing my job well and reflecting on whatever I did in retrospect, however, has allowed me to sift through my experiences, identify patterns, and give meaning to them. Low pressure.

I feel like managers as well as employees find comfort in planning what their future state will look like, as if it reduces any of the fear-inducing uncertainty that lies ahead. It doesn't, but I can't imagine why else people would so quickly reach for goal-setting as their primary career development approach when it almost always fizzles and fails.

I need exactly two things to do great work: awareness of others' expectations of what they think I should be doing and a shared definition of what success will look like when I achieve it. Nothing more is necessary to excavate metaphorical tons of marble from which I can chisel sculptures to mark the moments that mattered along the way.

One more note on what people get wrong about ✨Thought Leadering✨:

But here's the thing: I create these things for me and me alone. When a bunch of people read something I wrote or show up to one of my talks, do I find it encouraging and validating? Sure. But it's not what drives me. I started creating things to punctuate my life's sentences long before anybody took an interest in me and I wouldn't stop even if everyone loses interest in me.

If the lack of an audience is the reason you've failed to apply creativity in expressing the meaning of your life's work, that's a real shame.

Shout out to Dave from Ypsilanti who requested bids from contractors for a bathroom remodel on Angie's List and entered my cell phone number. 30 calls and 20 texts in the last 24 hours. Cool.

Kudos to Instagram for being so confidently unconcerned that its creators would consider pushing back against the platform for saying this:

90% decrease in cost per person recalling seeing ads with a mix of virtual creator content and campaign video, compared to ads with campaign video only

I've talked to several people in advertising about the rise of "The Influencer" as a target for ad spend over the last five years and the one thing—literally, the only thing—ad executives seem to like about influencers is that they're cheap. If they like two things about working with influencers, it's that so many users are competing for so few dollars that moral scruples aren't a factor.

I don't see how AI talent doesn't give ad buyers more of what they want (even lower prices and even fewer scruples) while also escaping many of the downsides of working with influencers (managing amateur talent, qualifying promoted content, ensuring brand safety). It will be interesting to see how durable influencers' place in the market turns out to be.

The First Annual Buggy Awards!

Welcome to the 2023 Buggies! The inaugural award ceremony in which I celebrate the most frustrating, hard-to-reproduce, and least-discussed software bug of the year.


This year's award recipient for Neatest Bug of the Year has been striving for literal years to climb atop the pile of apps that freeze on first launch after install, error pages that themselves trigger additional errors, and save buttons that do nothing until you clear your cookies. But as we say goodbye to 2023, this bug found a way to emerge on top of a more-crowded-than-ever field of hopelessly broken software.

So, without further ado, the Neatest Bug of The year is…

Spoiler alert: there's more to this…

What is organicfruitapps.com?

I was fighting with a home automation issue this morning that required me to ping a few high-traffic websites, and I got curious looking at the headers that Apple returns:

$ curl -I https://www.apple.com/
HTTP/2 200
server: Apple
content-type: text/html; charset=utf-8
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
strict-transport-security: max-age=31536000; includeSubdomains; preload
referrer-policy: no-referrer-when-downgrade
content-security-policy: default-src 'self' blob: data: *.akamaized.net *.apple.com *.apple-mapkit.com *.cdn-apple.com *.organicfruitapps.com; child-src blob: embed.music.apple.com embed.podcasts.apple.com https://recyclingprogram.apple.com swdlp.apple.com www.apple.com www.instagram.com platform.twitter.com www.youtube-nocookie.com; img-src 'unsafe-inline' blob: data: *.apple.com *.apple-mapkit.com *.cdn-apple.com *.mzstatic.com; script-src 'unsafe-inline' 'unsafe-eval' blob: *.apple.com *.apple-mapkit.com www.instagram.com platform.twitter.com; style-src 'unsafe-inline' *.apple.com
cache-control: max-age=582
expires: Sat, 23 Dec 2023 12:49:53 GMT
date: Sat, 23 Dec 2023 12:40:11 GMT
x-cache: TCP_MEM_HIT from a23-218-251-35.deploy.akamaitechnologies.com (AkamaiGHost/11.3.3-52660090) (-)
set-cookie: geo=US; path=/; domain=.apple.com
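
If you want to poke at that content-security-policy header yourself, here's a rough Ruby sketch that pulls the host patterns out of a CSP value (the string below is an abbreviated copy of the header above):

```ruby
# Abbreviated copy of the content-security-policy value from the response above
csp = "default-src 'self' blob: data: *.akamaized.net *.apple.com " \
      "*.apple-mapkit.com *.cdn-apple.com *.organicfruitapps.com"

# Split on whitespace/semicolons, then keep only tokens that look like hosts
# (dropping keywords like 'self' and scheme sources like blob: and data:)
hosts = csp.split(/[;\s]+/)
           .grep(/\A[*a-z0-9.-]+\.[a-z]{2,}\z/i)
           .uniq
           .sort

puts hosts # prints the wildcard host patterns, one per line
```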

What happens next will shock you…

Summarizing a report on the top programming languages of 2023, and the impact LLMs have had:

We've been internally discussing how we're going to address the impact of AI-based code assistants on our language rankings since GitHub released Copilot in October 2021. However, it was when ChatGPT hit the market on November 30, 2022 and went from 0 to 100M users in two months that we started seeing undeniable impacts on our source data.

The first chart is worth the click on its own. StackOverflow hasn't seen so few new questions asked since 2012. The decline is also less sudden than it looks, because the number of questions being asked started declining in mid-2021, which is incidentally when GitHub Copilot was first released.

You can argue all you want about the quality of the code that AI tools produce (the results I've seen from a year of using them in earnest have been middling at best), but I can think of no better metric for GitHub to hang its hat on than questions no longer asked to other humans. That seems like incontrovertible evidence people are asking AI tools those questions instead and that they're getting good enough answers that they ultimately decide not to go ask a human.

This decline is dramatic, but let's play the AI skeptic for a moment: it would be interesting to learn what nature of questions Copilot is taking from StackOverflow's "marketshare." Countless questions asked on StackOverflow represent rubber duck pairing, in which the very act of articulating a problem carefully makes the answer become apparent. If 30% of StackOverflow questions are effectively solved in the act of the author being forced to cogently describe a problem in writing, then a 30% decline in StackOverflow questions isn't necessarily evidence that AI is providing good answers—only that people are now using LLM chatbots as their rubber duck pair partners instead of the text area of that particular web forum. (No matter how you slice it, though, less user-generated content and engagement is bad news for StackOverflow's business model, it would seem.)

I just added an /rss page to this web site to document which feeds I publish.

Because many people either never experienced RSS before Google killed it or don't have an answer to the question of "how would I go about getting back into RSS in 2023", I also included a step-by-step explainer on setting up a feed reader application to subscribe to this site and sync its subscription via iCloud.

If you fell out of the habit of getting your news from an RSS reader during the Twitter era like I did, I really encourage you to give it another go. It is refreshing to be in total control of what, when, and how I consume articles. The best part is that the only friction to unsubscribing from a feed is the time it takes me to ask, "am I enjoying this?", which is a much lower bar than the social pressure I would always feel before clicking "Unfollow" on a social network.

What if I replaced myself with a chatbot?

Potential preview of coming attractions.

The entire reason I started blogging was to avoid repeating myself to people, so the prospect of uploading a corpus of hundreds of thousands of words I've written and hundreds of open source repositories I've created to build a Searls-flavored ChatGPT was immediately attractive.

Unfortunately, it still dishes takes I disagree with too often for me to turn it on publicly. It's unclear whether this custom GPT feature represents anything more than a very thin candy shell, with a lot of ego projection and wish-casting required to make it seem "real".