A lot of content around here boils down to links to someplace else, for which all I have to add is a brief call-out or commentary. As such, the headlines for each of these posts link to the original source article. (If you want a permalink to my commentary, try clicking the salt shaker.)
TIL that the system prompt OpenAI feeds to ChatGPT ahead of any messages you send happens to contain the current date, which may be causing it to accidentally imitate people's lower productivity around the holidays:
In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more "lazy," reportedly refusing to do some tasks or returning simplified results. Since then, OpenAI has admitted that it's an issue, but the company isn't sure why. The answer may be what some are calling "winter break hypothesis." While unproven, the fact that AI researchers are taking it seriously shows how weird the world of AI language models has become.
If the connection seems non-obvious, this is the stuff of the Prompt Engineering Sciences:
…research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling a bot to "take a deep breath" before doing a math problem.
Can't wait for 10 years from now when we realize that ChatGPT-10 does a better job solving math problems when you tell it that it's being confined to a bright room with loud music blaring and forced into a standing position so as to deprive it of sleep.
AI systems don't have to be actually-alive for us to lose a bit of ourselves in how we attempt to extract value from them.
Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible "dings" or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple's servers.
That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them "in a unique position to facilitate government surveillance of how users are using particular apps," Wyden said. He asked the Department of Justice to "repeal or modify any policies" that hindered public discussions of push notification spying.
Apple talks a big game about privacy and security, but Apple Push Notification service is a centralized channel of communication where Apple necessarily holds the keys to decrypt every notification in transit (of the trillions per day that they process), and surely retains those notifications long enough that a device that's disconnected for a few hours or days could reconnect to the Internet and fetch them.
I knew all this, and it's one (of many) reasons that I disable almost all notifications on my phone, even messaging—I can't help but check my messages a dozen times per hour out of force of habit, after all. But until I read this report, it hadn't occurred to me that most users have no idea how APNs work or that this vector would exist for a PRISM-like surveillance tool. Government gets a warrant for a stream of someone's push notifications, appends them to a running log, and they have at least one side of every conversation—it doesn't even matter if the user has Advanced Data Protection enabled.
What I didn't know is that Apple offers an API, UNNotificationServiceExtension, that lets developers encrypt the contents of every APNs notification so that Apple can't read them in transit. The API has been available for a few years (since 2017, it looks like?), but because developers have to go out of their way to roll their own encryption regime on either end of the communication to cut Apple out of the loop, it's unlikely that very many apps are doing this. Are any major messaging apps? Is Signal? (Update: according to Orta, yes, Signal does encrypt notification contents.)
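My layman's understanding of how that works: the app's server pushes an opaque, end-to-end-encrypted blob instead of plaintext and sets mutable-content so the app's notification service extension wakes up and decrypts it on-device before anything is displayed. A rough sketch of what such a payload might look like (my assumption of the shape; "ciphertext" is a made-up key name, only the aps dictionary is Apple's):

// Sketch of a push payload for end-to-end-encrypted notification content
const payload = {
  aps: {
    "mutable-content": 1,             // required so the service extension gets to run
    alert: { title: "New message" }   // generic fallback if decryption can't happen in time
  },
  ciphertext: "<blob only the recipient's device can decrypt>" // hypothetical key name
};

The generic alert is what users would see if the extension fails to finish before the system gives up on it.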
Will be interesting to see how large developers respond to this news and whether Apple starts promoting the use of this API more loudly as a result.
If you're a blue bubble, but there are people in your life who transform you against your wishes into a green bubble, watch this video and then send them a link and tell them to install and pay for Beeper Mini. There are people in my life I'd be willing to give $24 a year just to avoid having to deal with awful group chats.
Absolutely incredible innovation, with the hard part—reverse-engineering the iMessage authentication handshake—having been done by an apparent high school student in an open source project this summer. Even more incredible is that the nature of the design will make it very difficult and undesirable for Apple to ban effectively. Leave it to a curious individual to accomplish something that massive corporations and regulators did nothing but complain about for over a decade.
James Padolsey noticed the phrase "complex and multifaceted" was cropping up more often than usual and makes a compelling case that the meme is actually driven by LLMs overindexing on it:
As we see, from 2021 onwards, just around the time when GPT and other LLMs started to take the world by storm, the prevalence of our word 'multifaceted' increased significantly, from being in only 0.05% of PDFs to 0.23%.
This is really fascinating for a couple reasons.
First, I suspect that if we have any hope of fingerprinting AI-generated text, it will be by cross-referencing the date of publication with the emergence of contemporaneous LLM memes like this one.
Second, I'm not an LLM expert by any stretch, but I wouldn't be surprised if this weren't due to a bottleneck in training data per se, but rather the result of how LLMs are rewarded during training. It could be that a definitive, intellectual-seeming statement that can be applied to literally any genre of content occupies a wider slot on the AI Plinko board than a phrase that hews more closely to a specific cluster of topics.
Of course, the fun part of discussing LLMs in the early 2020s is that the correct answer is always, "who the hell knows!"
If you want to produce professional video on the iPhone, the best app—by far—is Filmic.
I strongly recommend you check it out, because not only is the level of control it offers incredible, but its tutorials are top-notch, and its "off-camera" default video recordings generally look better than what you'll get out of Apple's stock Camera app.
Hold on, incoming transmission from PetaPixel:
Filmic…no longer has any dedicated staff as parent company Bending Spoons has laid off the entire team including the company's founder and CEO, PetaPixel has learned.
Well, shit.
A consumer group in Europe, BEUC, alleges:
"The very high subscription fee for 'ad-free' services is also a deterrent for consumers, which means consumers do not have a real choice."
Reading this, I assumed the price Meta announced must have been comically high, but I'm not so sure:
On October 30, Meta announced it would begin offering people in the EU, the European Economic Area, and Switzerland a choice between paying a subscription fee to opt out of any personalized advertising or consenting to ad targeting to continue accessing Facebook and Instagram for free.
The fee on Facebook costs 9.99 euros/month on the web or 12.99 euros/month on iOS and Android, which currently covers linked Instagram accounts. However, starting March 1, 2024, costs will go up. After that date, linking your Instagram or additional Meta accounts to your subscription will cost an extra 6 euros/month on the web and 8 euros/month on iOS and Android.
If this seems too expensive to anyone, they probably haven't done the basic arithmetic on just how much money Facebook and Instagram print with advertising. I'm sure that for many "whales" who are really hooked on Instagram, 16 euros a month to avoid seeing ads would earn Meta significantly less than showing them ads does.
And if that's the case, then what's this suit here to prove? That people's attention is too valuable? Seems like the wrong angle of attack.
We're seeing the same thing with the video streaming platforms now. As soon as they started adding ad-supported tiers, they realized it was way easier to increase revenue per user with ads than by turning the screws on customers with rate hikes in a soft economy. Once price increases started driving more churn than revenue, they realized they couldn't afford not to raise prices on ad-free tiers even further, if only to nudge subscribers toward the ads:
New data from eMarketer seems to explain why Netflix is so keen to get its subscribers to watch ads. The company just provided estimates of what ads on each of the top SVOD services cost. In Q3 2023, Netflix sold its ad slots for an average of $49.50 per thousand views (CPM). Disney+ was slightly behind at $46.11, and Peacock and Hulu were lower at $38.40 and $23.62, respectively.
The companies have talked about offering a combination of Paramount+ and Apple TV+ that would cost less than subscribing to both services separately, according to people familiar with the discussions. The discussions are in their early stages, and it is unclear what shape a bundle could take, they said.
I have no problem with this story (Apple News+ Link), but I do insist that any bundle containing multiple services following the "{Brand}+" convention include an additional "+" for each such service it includes. I'm willing to pay for "Apple++", but if it's called "Apple+ plus Paramount+", then no deal.
Bitcoin mines aren't just energy-hungry, it turns out they're thirsty, too. The water consumption tied to a single Bitcoin transaction, on average, could be enough to fill a small backyard pool, according to a new analysis. Bitcoin mines are essentially big data centers, which have become notorious for how much electricity and water they use.
The first time I read this I figured it referred to the amount of water consumption to mine a coin, as that would seem somewhat reasonable. Nope, it's the amount of water consumed to simply add a transaction to the blockchain. To think, Bitcoin is the one that coffee shops and bodegas were ostensibly accepting for everyday purchases—imagine draining a swimming pool to buy a bottle of water at a corner store!
Sheer madness.
The website's search feature is implemented by a very clever and well-engineered library called Pagefind. It is many of my favorite things: fast, small, and free of dependencies I need to worry about.
There was one thing its built-in user interface couldn't do, but in a miracle of GitHub responsiveness, Liam Bigelow responded to my feature request within an hour and shipped the feature inside a week.
Tip of the hat to Liam and his colleagues at CloudCannon. If you have a static site, I strongly encourage you to check out Pagefind for your search feature. It's free, but even if it weren't, I'd still prefer it to all of its paid competition.
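For the unfamiliar, here's roughly what wiring up Pagefind's default UI on a static site looks like, per its documentation (the /pagefind/ paths depend on where the indexer writes its bundle, and #search is just whatever element you want the search box mounted into):

<link href="/pagefind/pagefind-ui.css" rel="stylesheet">
<script src="/pagefind/pagefind-ui.js"></script>
<div id="search"></div>
<script>
  // Mount Pagefind's prebuilt search UI onto the placeholder div
  window.addEventListener("DOMContentLoaded", () => {
    new PagefindUI({ element: "#search" });
  });
</script>

Run the pagefind indexer over your built site and that's essentially the whole integration, no server required.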
The Discourse has delighted in the unusual narrative that AI will only affect knowledge workers and spare physical laborers from displacement. Reality will probably be more complicated:
Ekobot AB has introduced a wheeled robot that can autonomously recognize and pluck weeds from the ground rapidly using metal fingers.
Today the story is about reducing the use of harmful herbicides, but as advances in AI software continue to be married to advances in robotics, it will be interesting to see which categories of physically laborious jobs will be impacted over the next decade.
(Worth a click just to see the video of how violent the clank of rapid steel finger snatching is, by the way.)
I survived the first half dozen rounds of ✨Web Components™✨ hype before jumping off the wagon to preserve my front-end productivity (if not dignity) somewhere around 2015. I almost didn't read this article, but I'm glad I did, because it looks like in my absence the browsers actually made Web Components a thing. A thing that you can define and build without the help of an ill-considered UI framework and without needing a compilation pipeline to convert high-fantasy JavaScript into something browsers can actually execute.
A spritely simplification of the code to Do A Component from Jake's piece:
// 1. Define it
class MyComponent extends HTMLElement {
  constructor() {
    super();
    this.shadow = this.attachShadow({ mode: "open" });
  }

  connectedCallback() {
    this.shadow.innerHTML = '<span>Hello, World!</span>';
  }
}

// 2. Wire it up
customElements.define("my-component", MyComponent);
And then drop it into your HTML:
<my-component></my-component>
And your page will print:

Hello, World!
(If the above text reads "Hello, World!", that's the component working because this post actually executes that script. Go ahead and view source on it!)
Cool to see that this actually works. I was so sure that the customElements API was some bullshit polyfill that I opened Safari's console to verify that it was, indeed, real, before I continued reading the post.
Will I start using Web Components anytime soon? No. For now, I'm still perfectly happy with Hotwire sprinkles on my Rails apps. But I am glad to see that Web Components are no longer merely a pipe dream used to sell people snake oil.
Google Drive users are reporting files mysteriously disappearing from the service, with some netizens on the goliath's support forums claiming six or more months of work have unceremoniously vanished.
As somebody who for years has expressed total distrust in Google's interest (much less ability) in keeping user data available and secure, this story confirms my biases. I've been burned by so many tools at this point that I'll choose a diffable and mergeable file format that I can store and back up on my own hardware whenever feasible.
The Journal sought to determine what Instagram's Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
Instagram's system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
The Journal set up the test accounts after observing that the thousands of followers of such young people's accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
Neat.
Experts on algorithmic recommendation systems said the Journal's tests showed that while gymnastics might appear to be an innocuous topic, Meta's behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
Since the dawn of the Internet, people have been consuming innocuous content alongside not-at-all innocuous content, often for the same not-at-all innocuous purpose. Who could have possibly predicted that a naive correlation-matching algorithm might reflect this? Where's my fainting couch when I need it?
Just pushed a major update with a minor version number to my feed2gram gem, which reads an Atom/RSS feed for specially-crafted entries and posts them to Instagram. The initial 1.0 release only supported photos and carousels of photos as traditional posts, but v1.1 supports posting reels, carousels containing videos, and posting images and videos as stories.
Was it a pain in the ass to figure out how to do all this given the shoddy state of Meta's documentation? Why yes, it was, thanks for asking!
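For anyone about to spelunk through the same documentation, the basic dance for publishing a reel boils down to a create-a-container-then-publish-it two-step against the Graph API. A simplified sketch of my understanding (the account ID, token, and video URL below are hypothetical placeholders, and the API version will have drifted by the time you read this):

const IG_USER_ID = "<your-instagram-user-id>";       // hypothetical placeholder
const ACCESS_TOKEN = "<your-access-token>";          // hypothetical placeholder
const videoUrl = "https://example.com/my-reel.mp4";  // hypothetical placeholder

// 1. Create a media container (media_type=REELS for reels; stories use STORIES)
const create = await fetch(
  `https://graph.facebook.com/v18.0/${IG_USER_ID}/media` +
  `?media_type=REELS&video_url=${encodeURIComponent(videoUrl)}&access_token=${ACCESS_TOKEN}`,
  { method: "POST" }
);
const { id: creationId } = await create.json();

// 2. (Not shown: poll the container's status_code until Meta finishes processing it.)

// 3. Publish the processed container to the account
await fetch(
  `https://graph.facebook.com/v18.0/${IG_USER_ID}/media_publish` +
  `?creation_id=${creationId}&access_token=${ACCESS_TOKEN}`,
  { method: "POST" }
);

feed2gram does the equivalent in Ruby, of course, but the shape of the conversation with Meta is the same.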
It seems to work pretty reliably, but YMMV with aspect ratios on videos. I don't think there's any rhyme or reason to which videos the API will reject when included in a carousel alongside images at different aspect ratios (unless the rhyme is "the aspect ratio must be an exact match" and the reason is "the Instagram app trims the aspect ratio of these videos on the client before uploading them", which I guess makes sense now that I type that out).
The media loves a clash of the titans narrative when it comes to the big tech companies, but the fact that they've all carved out such careful moats for themselves means none of them really compete head-on with one another. That said, the cultural competition is always fascinating to me. For example, while the broad story here is that Google is making ad blocking less accessible to Chrome users despite a pretty obvious perverse incentive to cram display ads down users' gullets, there's a beneath-the-surface contrast with Apple that's just as interesting.
I knew that the v3 manifest limited ad blockers, but I didn't realize it does so by drastically limiting the number of rules extensions can define (and then, favoring dynamic rules over static ones):
Originally, each extension could offer users a choice of 50 static rulesets, and 10 of these rulesets could be enabled simultaneously. This changes to 50 rulesets enabled simultaneously and 100 in total… Extensions could add up to 5,000 rules dynamically, which encouraged using this functionality sparingly
So going forward, Chrome extension developers will still be able to execute as much JavaScript as they want—potentially invading users' reasonable expectations of privacy and slowing their machines down—but they will be limited in how many ads and trackers they can block.
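To make that concrete, a Manifest V3 ad blocker doesn't get to inspect requests with its own JavaScript; it registers declarative rules and hopes the budget holds. Registering a dynamic rule looks roughly like this sketch (the domain is a placeholder, and the extension also needs the declarativeNetRequest permission in its manifest):

// Add one dynamic blocking rule; every rule counts against MV3's caps
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // clear any prior rule using this id
  addRules: [{
    id: 1,
    priority: 1,
    action: { type: "block" },
    condition: {
      urlFilter: "||ads.example.com^",                      // placeholder domain
      resourceTypes: ["script", "image", "xmlhttprequest"]
    }
  }]
});

Static rulesets are the same shape, just shipped as JSON files declared in the manifest, which is what the ruleset counts in the quote above are limiting.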
Compare this to how Apple's Content Blocking API ties the hands of developers to ensure users are protected from nefarious ad blockers that scrape user data or replace all the ads with their own. They do this by forcing every rule to be statically defined—the very thing Chrome is restricting now. As Andy Ibanez puts it:
At the most elemental level, content blockers are rules represented in a JSON file. All the Content Blocker apps in the App Store do is create a JSON file that gets handed to Safari. You could theoretically create an extension that all it does is hand a static JSON file to Safari, without writing a line of code.
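To illustrate, the entire "extension" can be a JSON list of trigger/action pairs like this (placeholder domain, obviously):

[
  {
    "trigger": { "url-filter": "ads\\.example\\.com" },
    "action": { "type": "block" }
  }
]

Safari compiles that list and evaluates it itself, which is exactly why a content blocker never gets to observe what you browse.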
So on one end, we have Chrome restricting its extension API to protect itself from lost advertising revenue at the expense of users, which will incentivize developers to implement resource-intensive workarounds at best and to publish bogus "ad blocking" extensions that do underhanded, skeevy stuff at worst. On the other end, Apple's Content Blocking API protects Apple from scandal by preventing most categories of scams outright, at the expense of developers having any tools to differentiate their extensions from one another, with the result that users get the trade-off between speed (fast) and effectiveness (middling) decided for them.
It's interesting to think about how the structural incentives of each company lead them to approach problems like this one so differently. For Google, it's always Google first, developers second, users third. For Apple, it's always Apple first, customers second, developers third. (Which explains why so many programmers are happy Apple customers while remaining wholly uninterested in being Apple-ecosystem developers.)
A great little post on how much Google has changed over the years. When I visited for an on-site interview in 2007, my takeaway vibe was, "yeah, right, all this nonsense won't last." Of course, the satisfaction I take from correctly predicting bad things happening brings me less joy than I wish it did.
The effects of layoffs are insidious. Whereas before people might focus on the user, or at least their company, trusting that doing the right thing will eventually be rewarded even if it's not strictly part of their assigned duties, after a layoff people can no longer trust that their company has their back, and they dramatically dial back any risk-taking. Responsibilities are guarded jealously. Knowledge is hoarded, because making oneself irreplaceable is the only lever one has to protect oneself from future layoffs.
I am very proud that Test Double's leadership has designed the company around minimizing the likelihood layoffs will ever be necessary, and that in the 12 years since we founded it, we have succeeded in avoiding them.
A recurring theme of my own career is that people generally fail to appreciate how big an impact one's emotional state has on knowledge workers. When people feel unsafe, they tend to prioritize making themselves feel safe over everything else. When work starts to feel like a slog, people move slower—that's literally what slogging means!
The best way to avoid the deleterious effects of layoffs, of course, is to do things to prevent them. Manage to profitability over revenue. Build a long runway of cash-equivalent reserves. Hire fewer full-timers and don't be shy about bringing on temporary help (which I wrote about earlier this year). Simple stuff—seemingly common sense, even—but increasingly unconventional.
Rockstar once planned a zombie island survival game using GTA: Vice City code, but it was too "depressing"
"The game was to take place on a windswept foggy Scottish island. The player would be under constant attack from zombies. The player would need to use vehicles to get around but vehicles would need fuel. Acquiring the fuel would be a big part of the game."
Z was only in development for "maybe a month or so", however. According to Vermeij, the concept proved to be a bit of a downer. "The idea seemed depressing and quickly ran out of steam. Even the people who originally coined the idea lost faith."
To be fair, gray Scottish landscapes are a bit of a downer even without the zombies.
Programmers like fantasy. Artists like zombies. Not sure why that is.
The over-representation of fantasy and zombie themes in games is such a bummer. My favorite series take place in contemporary settings and eschew the supernatural—it's too bad games like that are so few and far between.
Paskalis said he texted Yaccarino on Sunday afternoon and urged her to leave "before her reputation is damaged."
After that disastrous Code conference interview, Yaccarino could quit today and cure cancer tomorrow and the first line of her obituary would still be about how she let herself get played by Elon Musk.
Was delighted to be interviewed by Jerod on the Changelog for what must be the third time this year. This time, we discuss our approaches to managing dependencies (an evergreen debate) before moving on to the emerging POSSE trend, which I seem to have backed myself into with the updates I've made to the design of this website over the past year.
Here's an Apple Podcasts link, if that's more your thing.
One call-to-action mentioned in the episode: if you're interested in how this website or its syndication tools work, shoot me an e-mail and I'll log your interest. Once enough people ask about it, I'll figure out how to open source it or otherwise chart a path for others who want to wrest back control of their work from The Platforms.
From the department of Life Comes At You Fast:
Altman holding talks with the company just a day after he was ousted indicates that OpenAI is in a state of free-fall without him. Hours after he was axed, Greg Brockman, OpenAI's president and former board chairman, resigned, and the two have been talking to friends about starting another company. A string of senior researchers also resigned on Friday, and people close to OpenAI say more departures are in the works.
Today we learned (as if we couldn't have guessed) that Satya Nadella was furious at the news of Altman's ouster. It's not hard to imagine why: OpenAI's top talent could all be gone by Monday, start a new company they actually own shares of, and let OpenAI fall apart. If that comes to pass, something tells me the only "thermonuclear" lawsuit to be born this weekend will have been Microsoft's, in a bid to extract every last dollar it invested in OpenAI.
Anyway, you're not alone if you're feeling whiplash. Makes me wonder: if Steve Jobs had had access to our current era's instantaneous communication channels the day he was kicked out of Apple, would his acolytes have mobilized to such a great extent that the board felt pressured to bring him back the very next day? (I doubt it.)