justin.searls.co

Joe Rossignol at MacRumors:

Apple says turning on Low Power Mode reduces the Mac Studio's fan noise, which is useful for tasks that require a quieter environment, and it also allows for reduced power consumption if the computer is left running continuously.

The reduced fan noise aspect of Low Power Mode requires macOS Sequoia 15.1 or later. The new Mac Studio ships with macOS Sequoia 15.3.

A few Reddit users said macOS Sequoia 15.3 enabled Low Power Mode on the previous-generation Mac Studio with the M2 Max chip, and presumably on M2 Ultra configurations too. This is not reflected in Apple's support document.

I can confirm, a "Low Power Mode" toggle appears in the Energy settings of my M2 Ultra Mac Studio.

I really put this thing through the wringer with video and AI workloads and I have never been able to hear the fan (even with my ear right to the back of the thing), so I guess I was lucky to get one whose fan holes don't whistle. I'm always glad to receive new features through software, but am comfortable promising you that I will never turn this on.

ChatGPT has become my search engine

Perhaps this is partly because the Japanese Internet continues to be needlessly opaque (it has now been zero days since I encountered a UI built around HTML image maps), but ChatGPT has become an indispensable planning partner whenever Google would normally let me down. In the case above, I just typed "search for Indigo La End concerts in Japan this June", and it got me back only the June dates of a tour I didn't know was going on, and for which tickets only went on sale two days ago. From there it was literally two more clicks to be where I needed to be in order to buy a paper ticket I could pick up at any 7-Eleven nationwide.

These days, whenever a traditional Google-oriented search workflow would have been particularly fraught—whether that's due to out-of-date information (past concerts) or SEO spam (ticket resellers)—the fact that ChatGPT will jump through two or three extra hoops for me has made a huge difference in whether I find the answer I'm looking for or get lost in a forest of tabs and give up.

The 12" MacBook was announced 10 years ago

On March 9, 2015, Apple announced a redesigned MacBook, notable for a few incredible things:

  • 2 pounds (the lightest computer Apple currently sells is 35% heavier at 2.7 pounds)
  • 13.1mm thin
  • A 12-inch retina screen (something the MacBook Air wouldn't receive until late 2018)
  • The Force Touch trackpad (which came to the MacBook Pro line the same day)

It also became infamous for a few less-than-incredible things:

  • A single port for charging and data transfer, heralding the dawn of the Dongle Era
  • That port was USB-C, which most people hadn't even heard of, and which approximately zero devices supported
  • A woefully-underpowered 5-watt TDP Intel chip
  • The inadvisably-thin butterfly keyboard, which would go on to hobble Apple's entire notebook line for 5 years (though my MacBooks never experienced any of the issues I had with later MacBooks Pro)

Still, the 2015 MacBook (and the 2016 and 2017 revisions Apple would go on to release) was, without a doubt, my favorite computer ever. When I needed power, I had uncompromised power on my desktop. When I needed portability, I had uncompromised portability in my bag.

It was maybe Phil Schiller's best pitch for a new Mac, too. Here's the keynote video, scrubbed to the MacBook part:

Literally the worst thing about traveling with the 12" MacBook was that I'd frequently panic—oh shit, did I forget my computer back there?—when in fact I had just failed to detect its svelte 2-pound presence in my bag. I lost track of how many times I stopped in traffic and rushed to search for it, only to calm down once I confirmed it was still in my possession.

I started carrying it in this ridiculous-looking 12-liter Osprey pack, because it was the only bag I owned that was itself light enough for me to feel the weight of the computer:

This strategy backfired when I carelessly left the bag (and computer) on the trunk of our car, only for Becky to drive away without noticing it (probably because it was barely taller than the car's spoiler), making the 12" MacBook the first computer I ever lost. Restoring my backup to its one-port replacement was a hilarious misadventure in retrying repeatedly until the process completed before the battery gave out.

I have many fond memories programming in the backyard using the MacBook as a remote client to my much more powerful desktop over SSH, even though bright sunlight on a cool day was all it took to discover Apple had invented a new modal overheating screen just for the device.

Anyway, ever since the line was discontinued in 2019, I've been waiting for Apple to release another ultraportable, and… six years later, I'm still waiting. The 11-inch MacBook Air was discontinued in late 2016, meaning that if your priority is portability, the 13" MacBook Air is the best they can offer you. Apple doesn't even sell an iPad and keyboard accessory that, in combination, weigh less than 2.3 pounds. Their current lineup of portable computers is just nowhere near light enough.

More than the raw numbers, none of Apple's recent Macs have sparked the same joy in me that the 11" Air and 12" MacBook did. Throwing either of those in a bag had functionally zero cost. No thicker than a magazine. Lighter than a liter of water. Today, when I put a MacBook Air in my bag, it's because I am affirmatively choosing to take a computer with me. In 2015, I would regularly leave my MacBook in my bag even when I didn't expect to need it, and often because I was too lazy to take it out between trips. That is the benchmark for portable computing, and Apple simply doesn't deliver it anymore. Hopefully that will change someday.

Delta: "Instead of waiting on hold you can message us on the go. Current messaging hold time is 11 hours 12 minutes."

Delta, after I decline: "Current call hold time is 2 minutes."

Arguing about divisive political issues in the open at work (esp. in Slack/Teams/etc) is generally not a good idea.

Interjecting to say it's a bad idea is not the same as banning it.

I don't know who needs to know this but Meta's Threads app counts characters based on Unicode code points and not grapheme clusters. The documentation doesn't say how it counts the 500-character limit, so there you go.
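A quick way to see the difference — this Ruby sketch counts the same family emoji three ways (Ruby's String#length counts code points, which is apparently what Threads counts):

```ruby
# One "character" to a human, but seven Unicode code points:
# four person emoji joined by three zero-width joiners (U+200D).
family = "\u{1F468}\u200D\u{1F469}\u200D\u{1F467}\u200D\u{1F466}" # 👨‍👩‍👧‍👦

puts family.bytesize                 # => 25 (bytes in UTF-8)
puts family.length                   # => 7  (code points — what Threads counts)
puts family.grapheme_clusters.length # => 1  (what a human would count)
```

So a post full of emoji will hit the 500 limit far sooner than it looks like it should.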

If the tariff-coaster is stressing you out, I guess it's a small kindness that the US manufactures more than enough liquor domestically.

How to run Claude Code against a free local model

Last night, Aaron shared the week-old Claude Code demo, and I was pretty blown away by it:

I've tried the "agentic" features of some editors (like Cursor's "YOLO" mode) and have been woefully disappointed by how shitty the UX always is. They often break on basic focus changes, hang at random, and frequently require fussy user intervention with a GUI. Claude Code, however, is a simple REPL, which is all I've ever really wanted from a coding assistant. Specifically, I want to be able to write a test in my editor and then tell a CLI to go implement code to pass the test and let it churn as long as it needs.

Of course, I didn't want to actually try Claude Code, because it would have required a massive amount of expensive API tokens to accomplish anything, and I'm a cheapskate who doesn't want to have to pay someone to perform mundane coding tasks. Fortunately, it took five minutes to find an LLM-agnostic fork of Claude Code called Anon Kode and another five minutes to contribute a patch to make it work with a locally-hosted LLM server.

Thirty minutes later and I have a totally-free, locally-hosted version of the Claude Code experience demonstrated in the video above working on my machine (a MacBook Pro with M4 Pro and 48GB of RAM). I figured other people would like to try this too, so here are step-by-step instructions. All you need is an app called LM Studio and Anon Kode's kode CLI.

Running a locally-hosted server with LM Studio

Because Anon Kode needs to make API calls to a server that conforms to the OpenAI API, I'm using LM Studio to install models and run that server for me.

  1. Download LM Studio
  2. When the onboarding UI appears, I recommend unchecking the option to automatically start the server at login
  3. After onboarding, click the search icon (or hit Command-Shift-M) and install an appropriate model (I started with "Qwen2.5 Coder 14B", as it can fit comfortably in 48GB)
  4. Once downloaded, click the "My Models" icon in the sidebar (Command-3), then click the settings gear button and set the context length to 8192 (this is Anon Kode's default token limit and it currently doesn't seem to respect other values, so increasing the token limit in LM Studio to match is the easiest workaround)
  5. Click the "Developer" icon in the sidebar (Command-2), then in the top center of the window, click "Select a model to load" (Command-L) and choose whatever model you just installed
  6. Run the server (Command-R) by toggling the control in the upper left of the Developer view
  7. In the right sidebar, you should see an "API Usage" pane with a local server URL. Mine is (and I presume yours will be) http://127.0.0.1:1234
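Before moving on, it's worth confirming the server actually responds. OpenAI-compatible servers expose a GET /v1/models endpoint, so a few lines of Ruby can sanity-check it — the base URL below is an assumption, so substitute whatever your API Usage pane reports:

```ruby
require "net/http"
require "json"

# Assumed default for LM Studio — use the URL from the API Usage pane
BASE_URL = "http://127.0.0.1:1234"

# Ask the server which models it has loaded; the {"data": [...]} shape
# comes from the OpenAI API's list-models response format
def list_model_ids(base_url)
  response = Net::HTTP.get_response(URI("#{base_url}/v1/models"))
  JSON.parse(response.body).fetch("data").map { |model| model["id"] }
end

# With the server running, something like:
# puts list_model_ids(BASE_URL)
```

If that errors out or hangs, fix the server before blaming Anon Kode.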

Configuring Anon Kode

Since Claude Code is a command-line tool, getting this running will require basic competency with your terminal:

  1. First up, you'll need Node.js (or an equivalent runtime) installed. I use homebrew and nodenv to manage my Node installation(s)
  2. Install Anon Kode (npm i -g anon-kode)
  3. In your terminal, change into your project directory (e.g. cd ~/code/searls/posse_party/)
  4. Run kode
  5. Use your keyboard to go through its initial setup. Once prompted to choose between "Large Model" and "Small Model" selections, hit escape to exit the wizard, since it doesn't support specifying custom server URLs
  6. When asked if you trust the files in this folder (assuming you're in the right project directory), select "Yes, proceed"
  7. You should see a prompt. Type /config and hit enter to open the configuration panel, using the arrow keys to navigate and enter to confirm
    1. AI Provider: toggle to "custom" by hitting enter
    2. Small model name: set to "LM Studio" or similar
    3. Small model base URL: http://127.0.0.1:1234/v1 (or whatever URL LM Studio reported when you started your server)
    4. API key for small model: provide any string you like, it just needs to be set (e.g. "NA")
    5. Large model name: set to "LM Studio" or similar
    6. API key for large model: again, enter whatever you want
    7. Large model base URL: http://127.0.0.1:1234/v1
    8. Press escape to exit
  8. In my case, setting a custom base URL resulted in Anon Kode failing to append v1 to the path of its requests to LM Studio until I restarted it (if this happens to you, press Ctrl-C twice and run kode again)
  9. Try asking it to do stuff and see what happens!

That's it! Now what?

Is running a bootleg version of Claude Code useful? Is Claude Code itself useful? I don't know!

I am hardly a master of running LLMs locally, but the steps above at least got things working end-to-end so I can start trying different models and tweaking their configuration. If you try this out and have landed on a configuration that works really well for you, let me know!

Gurman with the scoop, summarized by MacRumors:

An updated version of the Mac Studio could launch as soon as this week, reports Bloomberg's Mark Gurman. The new machine is expected to be equipped with the M4 Max chip that we first saw in the 2024 MacBook Pro models, but Apple apparently does not have an M4 Ultra chip ready to go. Instead, there could be a version of the ‌Mac Studio‌ that uses an M3 Ultra chip. Apple didn't release an M3 Ultra chip alongside the M3 chip lineup, so it would be a new chip even though it's not part of the current M4 family. The current ‌Mac Studio‌ has an M2 Ultra chip, as does the Mac Pro.

Releasing a previous-generation, higher-end chip is utterly routine from every other manufacturer, but Apple doesn't sell chips, it sells computers.

Offering a Mac Studio in M4 Max and M3 Ultra configurations would give Apple’s marketing team a really fucking narrow needle to thread. One imagines the Ultra will be better for massive video exports and the Max will be better for literally every other workflow. Woof.


v32 - Large Lemon Model

Breaking Change

Turns out I'd accidentally been letting some of your e-mail get filtered as junk mail. I want to be clear that I don't think you're junk, Apple does. Write in with your feelings to podcast@searls.co and I'll read it, assuming I get it.

Video of this edition of the show is up on YouTube.

Without further ado, links!


When I drive a Tesla with Full Self-Driving enabled it feels just like riding a bike with no hands—I'm simultaneously amazed by what is happening but so nervous it'll kill me that I'm ready to take over in an instant.

I've found real value in OpenAI's "Deep Research" mode, so today I added the 10th(!) multimedia type to my website (called "Slops") to embed AI-generated content, including my prompt, stylized citations, and metadata indicating which content is AI-generated justin.searls.co/slops/

What to do when you can't sleep in hot, stale air?

Prompt:

I have a real struggle that is hugely impacting my quality of life: whenever I'm in a room with less than stellar airflow, higher than ideal temperature, or a bed that traps heat, I will often wake up within a couple hours of going to sleep in a panic that I'm not getting enough air to breathe.

I've been tested for apnea and it appears to not be that. I have an Eight Sleep at home and that's also very helpful. When I'm traveling, I try to bring a USB-powered fan or borrow one from a hotel, but neither is foolproof.

What is going on here and what can I do?

Here's what the computer said:
(Generated by OpenAI o3-mini-high)

Waking up suddenly feeling you “can’t get enough air” can be scary. This nighttime air hunger often results from a mix of physiological factors (like how your body senses CO₂ or airway resistance), environmental conditions (stuffy room, temperature, humidity), and psychological triggers (anxiety or learned associations). Below, we’ll explore possible causes and practical strategies to manage and prevent these episodes, both at home and while traveling.

Physiological Factors Behind Nighttime Air Hunger

  • CO₂ Sensitivity (False Suffocation Alarm): Some people have an over-sensitive response to rising CO₂ levels in the blood. Even a mild buildup of CO₂ (for example, in a poorly ventilated room) can trigger the brain’s “suffocation alarm,” causing you to wake up gasping. Research on panic disorder shows that patients with prominent respiratory symptoms are more sensitive to CO₂ and prone to nocturnal panic attacks (scielo.br). In other words, your body might overreact to normal fluctuations in CO₂ at night, jolting you awake with a feeling of dread or need for air. One theory (Klein’s false suffocation alarm) suggests this sensitivity leads to a cycle of over-breathing and breath-holding that triggers panic during sleep (scielo.br).

Spoiler alert: there's more to this…

The thing I can't stand about Vegas is that even when I eat right, exercise, and go to bed early, I still feel like absolute garbage by the end.

If my body's gonna feel this wrecked, I should at least get to enjoy way harder drugs and kinkier shit than I actually get up to.

Calling private methods without losing sleep at night

Today, I'm going to show you a simple way to commit crimes with a clean conscience.

First, two things I strive for when writing software:

  1. Making my code do the right thing, even when it requires doing the wrong thing
  2. Finding out that my shit is broken before all my sins are exposed in production

Today, I was working on a custom wrapper of Rails' built-in static file server, and deemed that it'd be wiser to rely on its internal logic for mapping URL paths (e.g. /index.html) to file paths (e.g. ~/app/public/index.html) than to reinvent that wheel myself.

The only problem? The method I need, ActionDispatch::FileHandler#find_file is private, meaning that I really "shouldn't" be calling it. But also, it's a free country, so whatever. I wrote this and it worked:

filepath, _ = @file_handler.send(:find_file,
  request.path_info, accept_encoding: request.accept_encoding)

If you don't know Ruby, send is a sneaky backdoor way of calling private methods. Encountering send is almost always a red flag that the code is violating the intent of whatever is being invoked. It also means the code carries the risk that it will quietly break someday. Because I'm calling a private API, no one on the Rails team will cry for me when this stops working.
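For anyone who hasn't seen it before, here's a minimal sketch of what send does (the class and method names are made up):

```ruby
class Vault
  private

  # Private: only callable from inside the object… in theory
  def combination
    "1234"
  end
end

vault = Vault.new
# vault.combination       # => NoMethodError (private method called)
vault.send(:combination)  # => "1234" — send bypasses the access check
```

Ruby also offers public_send, which exists precisely because plain send ignores method visibility.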

So, anyway, I got my thing working and I felt supremely victorious… for 10 whole seconds. Then, the doubts crept in. "Hmm, I'm gonna really be fucked if 3 years from now Rails changes this method signature." After 30 seconds of hemming and hawing over whether I should inline the functionality and preemptively take ownership of it—which would separately run the risk of missing out on any improvements or security fixes Rails makes down the road—I remembered the answer:

I can solve this by codifying my assumptions at boot-time.

A little thing I tend to do whenever I make a dangerous assumption is to find a way to pull forward the risk of that assumption being violated as early as possible. It's one reason I first made a name for myself in automated testing—if the tests fail, the code doesn't deploy, and nothing breaks. Of course, I could write a test to ensure this method still works, but I didn't want to give this method even more of my time. So instead, I codified this assumption in an initializer:

# config/initializers/invariant_assumptions.rb
Rails.application.config.after_initialize do
  next if Rails.env.production?

  # Used by lib/middleware/conditional_get_file_handler.rb
  unless ActionDispatch::FileHandler.instance_method(:find_file).parameters == [[:req, :path_info], [:keyreq, :accept_encoding]]
    raise "Our assumptions about a private method call we're making to ActionDispatch::FileHandler have been violated! Bailing."
  end
end

Now, if I update Rails and try to launch my dev server or run my tests, everything will fail immediately if my assumptions are violated. If a future version of Rails changes this method's signature, this blows up. And every time I engage in risky business in the future, I can just add a stanza to this initializer. My own bespoke early warning system.
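The whole trick hinges on Method#parameters, which describes each parameter as a [type, name] pair. A toy example (class and method invented for illustration):

```ruby
class Greeter
  # One required positional, one optional positional, one optional keyword
  def greet(name, volume = 1, loud: false)
    loud ? name.upcase : name
  end
end

Greeter.instance_method(:greet).parameters
# => [[:req, :name], [:opt, :volume], [:key, :loud]]
```

A required keyword argument (like find_file's accept_encoding:) shows up as :keyreq instead of :key, which is exactly the shape the initializer above asserts against.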

Writing this note took 20 times longer than the fix itself, by the way. The things I do for you people.

Welp, I just did the thing I promised I'd never do and read the Hacker News comments for this otherwise lovely post pointing out the durable relevance of Ruby on Rails twenty years later. One comment stood out as so wrong, however, I couldn't help but clap back.

It started when someone wrote, presumably in praise of Rails, "I really like web apps that are just CRUD forms." CRUD is shorthand for the basic operations of "create, read, update, and delete", and those four verbs can express the vast majority of what anyone has ever wanted to do with a computer. It's why the spreadsheet was the killer app of the 80s and early 90s. It's why Rails 2's embrace of REST as a way to encapsulate CRUD over HTTP redefined how pretty much everyone has made web applications ever since.
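For the unfamiliar, Rails expresses that CRUD-over-HTTP mapping in a single routing macro (:posts here is just a stand-in resource name):

```ruby
# config/routes.rb — one line yields the full CRUD surface over HTTP
resources :posts
# GET    /posts     → posts#index   (Read: list)
# GET    /posts/:id → posts#show    (Read: one record)
# POST   /posts     → posts#create  (Create)
# PATCH  /posts/:id → posts#update  (Update)
# DELETE /posts/:id → posts#destroy (Delete)
# (plus GET /posts/new and /posts/:id/edit, which render the forms)
```
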

Anyway, in response to the fellow above who said he enjoys simple CRUD apps, somebody else wrote:

I really like easy problems too. Unfortunately, creating database records is hardly a business. With a pure CRUD system you're only one step away from Excel really. The business will be done somewhere else and won't be software driven at all but rather in people's heads and if you're lucky written in "SOP" type documents.

This struck me as pants-on-head levels of upside down. Father forgive me, but I hit reply with this:

As someone who co-founded one of the most successful Ruby on Rails consultancies in the world: building CRUD apps is a fantastic business.

There are two types of complexity: essential and incidental. Sometimes, a straightforward CRUD app won't work because the product's essential complexity demands it. But at least as often, apps (and architectures, and engineering orgs, and businesses) are really just CRUD apps with a bunch of incidental complexity cluttering up the joint and making everything confusing, painful, and expensive.

I've served dozens of clients over my career, and I can count on one hand the number of times I've found a company whose problem couldn't more or less be solved with "CRUD app plus zero-to-one interesting features." No technologist wants to think they're just building a series of straightforward CRUD apps, so they find ways to complicate it. No businessperson wants to believe their company isn't a unique snowflake, so they find ways to complicate it. No investor wants to pour their money into yet another CRUD app, so they invent a story to complicate it.

IME, >=90% of application developers working today are either building CRUD apps or would be better off if they realized they were building CRUD apps. To a certain extent, we're all just putting spreadsheets on the Internet. I think this—more than anything else—explains Rails' staying power. I remember giving this interview on Changelog and the host Adam asking about the threat Next.js posed to Rails, and—maybe I'd just seen this movie too many times since 2005—it didn't even register as a possible contender.

Any framework that doesn't absolutely nail a batteries-included CRUD feature-set as THE primary concern will inevitably see each app hobbled with so much baggage trying to roundaboutly back into CRUD that it'll fall over on itself.

Anyway, I believe in this point enough that I figured I should write it down someplace that isn't a deeply-nested comment at a place where I'd recommend you not read the comments, so that's how you and I wound up here just now.

I used to think I loved to travel, but it turns out I just hated where I lived.

Now that I’m where I want to be, travel has just become an expensive way to feel inconvenienced and come home sick.