A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)
Summarizing a report on the top programming languages of 2023, and the impact LLMs have had:
We've been internally discussing how we're going to address the impact of AI-based code assistants on our language rankings since GitHub released Copilot in October 2021. However, it was when ChatGPT hit the market on November 30, 2022 and went from 0 to 100M users in two months that we started seeing undeniable impacts on our source data.
The first chart is worth the click on its own. StackOverflow hasn't seen so few new questions asked since 2012. The decline is also less sudden than it looks, because the number of questions being asked started falling in mid-2021, which is incidentally when GitHub Copilot was first released.
You can argue all you want about the quality of the code that AI tools produce (the results I've seen from a year of using them in earnest have been middling at best), but I can think of no better metric for GitHub to hang its hat on than questions no longer being asked of other humans. That seems like incontrovertible evidence that people are asking AI tools those questions instead and that they're getting good enough answers to ultimately decide not to go ask a human.
This decline is dramatic, but let's play the AI skeptic for a moment: it would be interesting to learn what kinds of questions Copilot is taking from StackOverflow's "market share." Countless questions asked on StackOverflow represent rubber duck pairing, in which the very act of articulating a problem carefully makes the answer become apparent. If 30% of StackOverflow questions are effectively solved in the act of the author being forced to cogently describe a problem in writing, then a 30% decline in StackOverflow questions isn't necessarily evidence that AI is providing good answers—only that people are now using LLM chatbots as their rubber duck pair partners instead of the text area of that particular web forum. (No matter how you slice it, though, less user-generated content and engagement is bad news for StackOverflow's business model, it would seem.)
I just added an /rss page to this web site to document which feeds I publish.
Because many people either never experienced RSS before Google killed it or don't have an answer to the question of "how would I go about getting back into RSS in 2023", I also included a step-by-step explainer on setting up a feed reader application to subscribe to this site and sync its subscription via iCloud.
If you fell out of the habit of getting your news from an RSS reader during the Twitter era like I did, I really encourage you to give it another go. It is refreshing to be in total control of what, when, and how I consume articles. The best part is that the only friction to unsubscribing from a feed is the time it takes me to ask, "am I enjoying this?", which is a much lower bar than the social pressure I always felt before clicking "Unfollow" on a social network.
This self-described "30 year history" of ChatGPT is a fantastic overview of the history and literature of AI research that led to the creation of large language models and, eventually, the innovations at OpenAI that enabled the revolution sparked by last year's launch of ChatGPT.
If you're interested in the history and the technical underpinnings of AI (which seems like a topic worth understanding whether you use these tools or merely exist in a world quickly being absorbed by them), this video provides a great survey of the landscape.
From the Supabase blog:
Fly's current Postgres offering is unmanaged. This means that you're responsible for handling scaling, point-in-time backups, replication, major version upgrades, etc. We'll run Fly's managed Postgres, which means that we do all that for you, and you can concentrate on building.
Fly and Render are the two alternatives to Heroku I hear mentioned most often, but as much as I love Heroku as a platform to run my application code, being able to trust my data with its managed Postgres add-on is what has always set it apart from the competition. I've heard less-than-encouraging reports about Render's managed Postgres capabilities and reliability, so it's interesting to see a major vendor partner with Fly to check this box.
I wonder how Craig Kerstiens at Crunchy Data would compare this coupling to their own offering. We reached out to Craig for comment but did not receive a response by press time.
[Content Warning: '90s farming simulator video games.]
Way back in 2010, an executive from the stateside publisher of Harvest Moon, Natsume, poured cold water on the idea of there ever being a digital re-release of Harvest Moon 64 in an interview with an outlet called "Nerd Mentality":
Nerd Mentality: We have some huge fans of Harvest Moon 64 on our website, and we are wondering if it will ever come out on Virtual Console?
Graham Markay: It will not. It will not. And it's not from lack of trying. Unfortunately, the code of the game is not… it's not an easy transfer. It would be really time-consuming and long, and we've looked into it, and we've tried to see if there's any way around certain things, but the conversion to virtual console, it's just… I shouldn't say it's never going to come out, but there's just a really, really, very small chance that it would ever come out. Which is unfortunate because we know that for a lot of dedicated, die-hard fans, that's their favorite.
Nerd Mentality: So it's only the technical issues holding it back?
Graham Markay: Oh yeah. Just the technical issue.
This interview took place 11 years after the game's initial release, so in the intervening 13 years of scientific improvement, they apparently discovered how romsites worked and licensed the ROM dump to Nintendo for its Switch Online service.
I sold my HM64 cart during the post-pandemic nostalgia collector boom, but I remember playing this game straight through Christmas break in 1999, so maybe I'll fire up the Switch release for old times' sake. 🧑‍🌾
Cal Newport responding to a neat interview Neil Gaiman did:
This vision is not without its issues. The number one concern I hear about a post-social media online world is the difficulty of attracting large audiences. For content creators, by far the biggest draw to a service like Twitter or Instagram is that their algorithms could, if you played things just right, grant you viral audience growth.
I have told everyone who'll listen about why I'm so excited to embrace old-school blogging after a twenty year hiatus, but one thing I haven't talked about is how it feels to go from over 20,000 followers to several hundred subscribers. Before Twitter died, I could count on one or two tweets to reach a million accounts every month.
Did all those followers represent valuable relationships? Of course not. Were my viral tweets deep and meaningful? No, usually shitposts. When I tried to leverage that reach to find new clients, did I ever make a sale? Not a one.
But my dopamine pathways didn't care. I became hopelessly addicted to the notifications tab of my timeline. Severing myself from that psychological slot machine and building this small-ball, slow-cooked, self-hosted site has been… an adjustment.
That's why I don't have any analytics set up for any of the justin.searls.co family of brands. I don't write this for you, I write it for me. If the things I share here do attract an audience, that's a bonus. If you e-mail me or link back to one of my posts then maybe we'll become acquaintances or friends.
But if I were trying to grow an audience as quickly as possible, I'm honestly not sure what I'd do. The answer probably has something to do with hair transplants and TikTok.
TIL that the system prompt that OpenAI feeds to ChatGPT before any messages you send it happens to contain the current date, which may be causing it to accidentally imitate people's lower productivity around the holidays:
In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more "lazy," reportedly refusing to do some tasks or returning simplified results. Since then, OpenAI has admitted that it's an issue, but the company isn't sure why. The answer may be what some are calling "winter break hypothesis." While unproven, the fact that AI researchers are taking it seriously shows how weird the world of AI language models has become.
If the connection seems non-obvious, this is the stuff of the Prompt Engineering Sciences:
…research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling a bot to "take a deep breath" before doing a math problem.
Can't wait for 10 years from now when we realize that ChatGPT-10 does a better job solving math problems when you tell it that it's being confined to a bright room with loud music blaring and forced in a standing position so as to deprive it of sleep.
AI systems don't have to be actually-alive for us to lose a bit of ourselves in how we attempt to extract value from them.
Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible "dings" or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple's servers.
That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them "in a unique position to facilitate government surveillance of how users are using particular apps," Wyden said. He asked the Department of Justice to "repeal or modify any policies" that hindered public discussions of push notification spying.
Apple talks a big game about privacy and security, but Apple Push Notification service is a centralized channel of communication where Apple necessarily holds the keys to decrypt every notification in transit (of the trillions per day that they process), and surely retains those notifications long enough that a device that's disconnected for a few hours or days could reconnect to the Internet and fetch them.
I knew all this, and it's one (of many) reasons that I disable almost all notifications on my phone, even messaging—I can't help but check my messages a dozen times per hour out of force of habit, after all. But until I read this report, it hadn't occurred to me that most users have no idea how APNs work or that this vector would exist for a PRISM-like surveillance tool. Government gets a warrant for a stream of someone's push notifications, appends them to a running log, and they have at least one side of every conversation—it doesn't even matter if the user has Advanced Data Protection enabled.
What I didn't know is that Apple provides an API, UNNotificationServiceExtension, that lets an app decrypt and rewrite notification content on-device, which means developers can encrypt the contents of every APNs notification so Apple never sees them. The API has been available for a few years (2017, it looks like?), but because developers have to go out of their way to roll their own encryption regime on either end of the communication to cut Apple out of the loop, it's unlikely that very many apps are doing this. Are any major messaging apps? Is Signal? (Update: according to Orta, yes, Signal does encrypt notification contents.)
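To make the "roll your own encryption regime" part concrete, here's a rough sketch of what the sending side of such a scheme might look like (in Node; the function and field names are mine and purely illustrative): the server encrypts the message body with a key only the device holds, APNs carries nothing but ciphertext, and the app's notification service extension decrypts and rewrites the alert on-device before it's displayed.

// Illustrative sketch only: encrypt the notification body so APNs never sees plaintext.
const crypto = require("node:crypto");

// In practice this key would be generated on the device and registered with
// your server out of band; generating it here just keeps the sketch runnable.
const sharedKey = crypto.randomBytes(32);

function encryptNotificationBody(plaintext, key) {
  const iv = crypto.randomBytes(12); // fresh nonce per notification
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
    data: ciphertext.toString("base64"),
  };
}

// "mutable-content": 1 tells iOS to run the app's notification service
// extension, which can decrypt `encrypted` and replace the placeholder alert.
const apnsPayload = {
  aps: { alert: "New message", "mutable-content": 1 },
  encrypted: encryptNotificationBody("Hey, are we still on for lunch?", sharedKey),
};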
Will be interesting to see how large developers respond to this news and whether Apple starts promoting the use of this API more loudly as a result.
If you're a blue bubble, but there are people in your life that transform you against your wishes into a green bubble, watch this video and then send them a link and tell them to install and pay for Beeper Mini. There are people in my life I'd be willing to give $24 a year to just to avoid having to deal with awful group chats.
Absolutely incredible innovation, with the hard part—reverse-engineering the iMessage authentication handshake—having been done by an apparent high school student in an open source project this summer. Even more incredible is that the nature of the design will make it very difficult and undesirable for Apple to ban effectively. Leave it to a curious individual to accomplish something that massive corporations and regulators did nothing but complain about for over a decade.
James Padolsey noticed the phrase "complex and multifaceted" was cropping up more often than usual and makes a compelling case that the meme is actually driven by LLMs overindexing on it:
As we see, from 2021 onwards, just around the time when GPT and other LLMs started to take the world by storm, the prevalence of our word 'multifaceted' increased significantly, from being in only 0.05% of PDFs to 0.23%.
This is really fascinating for a couple reasons.
First, I suspect that if we have any hope of fingerprinting AI-generated text, it will probably involve cross-referencing the date of publication with the emergence of contemporaneous LLM memes like this one (a rough sketch of the idea follows at the end of this post).
Second, I'm not an LLM expert by any stretch, but I wouldn't be surprised if this weren't due to a bottleneck in training data per se, but rather a result of how LLMs are rewarded during training. It could be that a definitive, intellectual-seeming statement that can be applied to literally any genre of content occupies a wider slot on the AI Plinko board than a phrase that hews more closely to a specific cluster of topics.
Of course, the fun part of discussing LLMs in the early 2020s is that the correct answer is always, "who the hell knows!"
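To sketch out that first point: given a corpus of documents with known publication dates, fingerprinting could be as simple as measuring what share of each year's documents contain a handful of suspect LLM-isms and looking for a discontinuity around 2021. (Everything below is hypothetical; the phrase list and data shape are mine.)

// Toy example: what fraction of each year's documents contain a suspect phrase?
const SUSPECT_PHRASES = [/complex and multifaceted/i, /rich tapestry/i];

function llmMemePrevalenceByYear(docs) {
  // docs: [{ year: 2019, text: "..." }, ...]
  const byYear = {};
  for (const { year, text } of docs) {
    const stats = (byYear[year] ??= { hits: 0, total: 0 });
    stats.total += 1;
    if (SUSPECT_PHRASES.some((phrase) => phrase.test(text))) stats.hits += 1;
  }
  return Object.fromEntries(
    Object.entries(byYear).map(([year, { hits, total }]) => [year, hits / total])
  );
}

// A sudden jump in prevalence after 2021 would be (weak) evidence that a
// publication's authors started leaning on LLM-generated text.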
If you want to produce professional video on the iPhone, the best app—by far—is Filmic.
I strongly recommend you check it out, because not only is the level of control it offers incredible, its tutorials are top-notch, and its "off-camera" default video recordings generally look better than what you'll get out of Apple's stock Camera app.
Hold on, incoming transmission from PetaPixel:
Filmic…no longer has any dedicated staff as parent company Bending Spoons has laid off the entire team including the company's founder and CEO, PetaPixel has learned.
Well, shit.
A consumer group in Europe, BEUC, alleges:
"The very high subscription fee for 'ad-free' services is also a deterrent for consumers, which means consumers do not have a real choice."
Reading this, I assumed the price Meta announced must have been comically high, but I'm not so sure:
On October 30, Meta announced it would begin offering people in the EU, the European Economic Area, and Switzerland a choice between paying a subscription fee to opt out of any personalized advertising or consenting to ad targeting to continue accessing Facebook and Instagram for free.
The fee on Facebook costs 9.99 euros/month on the web or 12.99 euros/month on iOS and Android, which currently covers linked Instagram accounts. However, starting March 1, 2024, costs will go up. After that date, linking your Instagram or additional Meta accounts to your subscription will cost an extra 6 euros/month on the web and 8 euros/month on iOS and Android.
If this seems too expensive to anyone, they probably haven't done the basic arithmetic on just how much money Facebook and Instagram print with advertising. I'm sure that for many "whales" who are really hooked on Instagram, Meta would make significantly less money from a 16-euro-a-month subscription than it currently does from showing them ads.
And if that's the case, then what's this suit here to prove? That people's attention is too valuable? Seems like the wrong angle of attack.
We're seeing the same thing with the video streaming platforms now. As soon as they started adding ad-supported tiers, they realized it was way easier to increase revenue per user with ads than by turning the screws on customers with rate hikes in a soft economy. Once price increases started to drive more churn than revenue, they realized they couldn't afford not to keep raising prices on the ad-free tiers:
New data from eMarketer seems to explain why Netflix is so keen to get its subscribers to watch ads. The company just provided estimates of what ads on each of the top SVOD services cost. In Q3 2023, Netflix sold its ad slots for an average of $49.50 per thousand views (CPM). Disney+ was slightly behind at $46.11, and Peacock and Hulu were lower at $38.40 and $23.62, respectively.
The companies have talked about offering a combination of Paramount+ and Apple TV+ that would cost less than subscribing to both services separately, according to people familiar with the discussions. The discussions are in their early stages, and it is unclear what shape a bundle could take, they said.
I have no problem with this story (Apple News+ Link), but I do insist that any bundle containing multiple services following the "{Brand}+" convention include an additional "+" for each such service it includes. I'm willing to pay for "Apple++", but if it's called "Apple+ plus Paramount+", then no deal.
Bitcoin mines aren't just energy-hungry, it turns out they're thirsty, too. The water consumption tied to a single Bitcoin transaction, on average, could be enough to fill a small backyard pool, according to a new analysis. Bitcoin mines are essentially big data centers, which have become notorious for how much electricity and water they use.
The first time I read this I figured it referred to the amount of water consumed to mine a coin, as that would seem somewhat reasonable. Nope, it's the amount of water consumed to simply add a transaction to the blockchain. To think, Bitcoin is the one that coffee shops and bodegas were ostensibly accepting for everyday purchases—imagine draining a swimming pool to buy a bottle of water at a corner store!
Sheer madness.
The website's search feature is implemented by a very clever and well-engineered library called Pagefind. It is many of my favorite things: fast, small, and free of dependencies I need to worry about.
There was one thing its built-in user interface couldn't do, but in a miracle of GitHub responsiveness, Liam Bigelow responded to my feature request within an hour and shipped the feature inside a week.
Tip of the hat to Liam and his colleagues at CloudCannon. If you have a static site, I strongly encourage you to check out Pagefind for your search feature. It's free, but even if it weren't, I'd still prefer it to all of its paid competition.
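For anyone curious what adopting it involves, here's roughly the extent of the integration on a static site, assuming you've already run the indexer over your built output (something like npx pagefind --site public) so the /pagefind/ assets sit alongside your pages:

<!-- Minimal sketch: load Pagefind's default UI and point it at a container -->
<link href="/pagefind/pagefind-ui.css" rel="stylesheet">
<script src="/pagefind/pagefind-ui.js"></script>
<div id="search"></div>
<script>
  window.addEventListener("DOMContentLoaded", () => {
    new PagefindUI({ element: "#search" });
  });
</script>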
The Discourse has delighted in the unusual narrative that AI will only affect knowledge workers and spare physical laborers from displacement. Reality will probably be more complicated:
Ekobot AB has introduced a wheeled robot that can autonomously recognize and pluck weeds from the ground rapidly using metal fingers.
Today the story is about reducing the use of harmful herbicides, but as advances in AI software continue to be married to advances in robotics, it will be interesting to see which categories of physically laborious jobs will be impacted over the next decade.
(Worth a click just to see the video of how violent the clank of rapid steel finger snatching is, by the way.)
I survived the first half dozen rounds of ✨Web Components™✨ hype before jumping off the wagon to preserve my front-end productivity (if not dignity) somewhere around 2015. I almost didn't read this article, but I'm glad I did, because it looks like in my absence the browsers actually made Web Components a thing. A thing that you can define and build without the help of an ill-considered UI framework and without needing a compilation pipeline to convert high-fantasy JavaScript into something browsers can actually execute.
A spritely simplification of the code to Do A Component from Jake's piece:
// 1. Define it
class MyComponent extends HTMLElement {
  constructor() {
    super();
    // Attach an open shadow root to encapsulate the component's markup
    this.shadow = this.attachShadow({ mode: "open" });
  }

  // Runs when the element is inserted into the document
  connectedCallback() {
    this.shadow.innerHTML = "<span>Hello, World!</span>";
  }
}
// 2. Wire it up
customElements.define("my-component", MyComponent);
And then drop it into your HTML:
<my-component></my-component>
And your page will print:
(If the above text reads "Hello, World!", that's the component working because this post actually executes that script. Go ahead and view source on it!)
Cool to see that this actually works. I was so sure that the customElements API was some bullshit polyfill that I opened Safari's console to verify that it was, indeed, real, before I continued reading the post.
Will I start using Web Components anytime soon? No. For now, I'm still perfectly happy with Hotwire sprinkles on my Rails apps. But I am glad to see that Web Components are no longer merely a pipe dream used to sell people snake oil.
Google Drive users are reporting files mysteriously disappearing from the service, with some netizens on the goliath's support forums claiming six or more months of work have unceremoniously vanished.
As somebody who for years has expressed total distrust in Google's interest (much less ability) in keeping user data available and secure, this story confirms my biases. I've been burned by so many tools at this point that I'll choose a diffable and mergeable file format that I can store and back up on my own hardware whenever feasible.
The Journal sought to determine what Instagram's Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
Instagram's system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
The Journal set up the test accounts after observing that the thousands of followers of such young people's accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
Neat.
Experts on algorithmic recommendation systems said the Journal's tests showed that while gymnastics might appear to be an innocuous topic, Meta's behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
Since the dawn of the Internet, people have been consuming innocuous content alongside not-at-all innocuous content, often for the same not-at-all innocuous purpose. Who could have possibly predicted that a naive correlation-matching algorithm might reflect this? Where's my fainting couch when I need it?
Just pushed a major update with a minor version number to my feed2gram gem, which reads an Atom/RSS feed for specially-crafted entries and posts them to Instagram. The initial 1.0 release only supported photos and carousels of photos as traditional posts, but v1.1 supports posting reels, carousels containing videos, and posting images and videos as stories.
Was it a pain in the ass to figure out how to do all this given the shoddy state of Meta's documentation? Why yes, it was, thanks for asking!
It seems to work pretty reliably, but YMMV with aspect ratios on videos. I don't think there's any rhyme or reason to which videos the API will reject when included in a carousel alongside images at different aspect ratios (unless the rhyme is "the aspect ratio must be an exact match" and the reason is "the Instagram app trims the aspect ratio of these videos on the client before uploading them", which I guess makes sense now that I type that out).