Now that I'm syndicating to Threads via my new feed2thread gem, I've updated my
POSSE Pulse to explain all the site's automations. Seven so far!
justin.searls.co/posse
Just published feed2thread, a Ruby gem that reads your site's feed and posts each entry to Threads, using the newly-released Threads API. Can run in Docker, similar to feed2toot for Mastodon github.com/searls/feed2thread
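At its core, a tool like this just has to turn each new feed entry into post text that fits Threads' 500-character cap while leaving room for the permalink. A minimal sketch of that formatting step (the method name and truncation strategy are mine, not necessarily feed2thread's actual implementation):

```ruby
# Threads caps posts at 500 characters.
THREADS_LIMIT = 500

# Build the text for a Threads post from a feed entry's title and URL,
# truncating the title if needed so the permalink always fits.
def build_post_text(title, url)
  # Reserve room for a newline plus the permalink
  budget = THREADS_LIMIT - url.length - 1
  body = title.length > budget ? "#{title[0, budget - 1]}…" : title
  "#{body}\n#{url}"
end
```

The rest is plumbing: poll the feed on a schedule, skip entries you've already posted (e.g., by remembering their GUIDs), and POST the result to the Threads API.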
Last week, when David declared that system tests have failed, my main reaction was: "well, yeah." UI tests are brittle, and if you write more than a handful, the cost to maintain them can quickly eclipse any value they bring in terms of confidence your app is working.
But then I had a second reaction, "come to think of it, I wrote a smoke test of a complex UI that relies heavily on Turbo and it seems to fail all the damn time." Turbo's whole-cloth replacement of large sections of the DOM seemed to be causing numerous timing issues in my system tests, wherein elements would frequently become stale as soon as Capybara (under Selenium) could find them.
Finally, I had a third reaction, "I've been sick of Selenium's bullshit for over 14 years. I wonder if I can dump it without rewriting my tests?" So I went looking for a Capybara adapter for the seemingly much-more-solid Playwright.
And—as you might be able to guess by the fact I bothered to write a blog post—I found one such adapter! And it works! And things are better now!
So here's my full guide on how to swap Selenium for Playwright in your Rails system tests:
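The reason a swap like this is even feasible is Capybara's driver abstraction: your tests talk to Capybara's API, not to Selenium directly, so registering a different driver is most of the work. Here's roughly what that registration looks like with the capybara-playwright-driver gem (file location and options are illustrative; see the gem's README for the full set):

```ruby
# Gemfile: gem "capybara-playwright-driver" (in place of selenium-webdriver)
require "capybara/playwright"

Capybara.register_driver(:playwright) do |app|
  Capybara::Playwright::Driver.new(app,
    browser_type: :chromium, # also supports :firefox and :webkit
    headless: true)
end

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # Rails falls back to the Capybara driver registered above
  # when the name isn't one of its built-in driver types
  driven_by :playwright
end
```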
This post is trending but it's completely wrongheaded and misunderstands why test doubles were invented in the first place.
I'd explain why, but it's probably not worth my time. You've figured it out at this point or you haven't amazingcto.com/mocking-is-an-antipattern-how-to-test-without-mocking/
I was excited to be hosted by the Changelog folks for a recap discussion following Apple's WWDC keynote on Monday. If you listened to my WWDC spoiler cast, you might like this after-action report.
Changelog & Friends 48: Putting the Apple in AI – Listen on Changelog.com
A few errata and missed pitches:
I hope you'll listen! I like Jerod and Adam a lot. The whole Changelog family of podcasts is fantastic. You can tell they're smart because they have yet to invite Breaking Change to join the network.
Was thinking last night how much I loved Fallout 2 when it was first released and how much of the magic was lost when Bethesda moved it to a high-budget FPS format.
Given all the Fallout buzz right now and the nature of Game Pass, Microsoft should find a studio to make a lower-budget Fallout 2.5 in the style of the original games but for modern platforms. Could be great.
Microsoft's new line of computers to be rebranded as "Copilot plus or minus" PCs
after indefinitely shelving Recall feature
blogs.windows.com/windowsexperience/2024/06/07/update-on-the-recall-preview-feature-for-copilot-pcs/
Last night, I read a flurry of angry feedback following WWDC. It appears some people are mad about Apple's AI announcements. Just like they were mad about Apple's hydraulic press ad last month.
I woke up this morning with a single question:
"Am I the only person on earth who actually wants AI companies to scrape my website?"
Publications that depend on ad revenue don't. License holders counting on a return for their intellectual property investment are lawyering up. Quite a few Mastodon users appear not to be on board, either.
Me, meanwhile, would absolutely positively 💗LOVE💗 if the AIs scraped the shit out of this website, as well as all the other things I post publicly online.
Really, take my work! Go nuts! Make your AI think more like me. Make your AI sound more like me. Make your AI agree with my view of the world more often.
The entire reason I create shit is so that others will take it! To share ideas I find compelling in the hope those ideas will continue to spread. Why wouldn't I want OpenAI or Apple or whoever to feed everything I say into their AI model's training data? Hell, scrape me twice if it'll double the potency. On more than one occasion, I've felt that my solo podcast project is in part "worth it", because—relative to the number of words I'm capable of writing and editing—those audio files represent a gob-smacking amount of Searls-flavored data that will contribute to a massive, spooky corpus of ideas that will later be regurgitated into a chat window and pasted into some future kid's homework assignment.
I'm not going to have children. I don't believe in God. I know that as soon as I'm dead, it's game over. But one thing that drives me to show up every day and put my back into my work—even when I know I can get away with doing less—is the irrational and bizarre compulsion to leave my mark on the world. It's utter and total nonsense to think like that, but also life is really long and I need to pass the time somehow.
So I make stuff! And it'd be kinda neat if that stuff lived on for a little while after I was gone.
And I know I'm not alone. Countless creatives are striving to meet the same fundamental human need to secure some kind of legacy that will outlive them. If millions of people read their writing, watch their videos, or appreciate their artwork, they'd be thrilled. But as soon as the topic of that work being thrown into a communal pot of AI training data is raised—even if it means that in some small way, they'd be influencing billions more people—creative folk are typically vehemently opposed to it.
Is it that AI will mangle and degrade the purity of their work? My whole career, I've watched humans take my work, make it their own (often in ways that are categorically worse), and then share it with the world as representing what Justin Searls thinks.
Is it the lack of attribution? Because I've found that, "humans leveraging my work without giving me credit," is an awfully long-winded way to pronounce "open source."
Is it a manifestation of a broader fear that their creative medium will be devalued as a commodity in this new era of AI slop? Because my appreciation for human creativity has actually increased since the dawn of generative AI—as its output gravitates towards the global median, the resulting deluge of literally-mediocre content has only served to highlight the extraordinary-ness of humans who produce exceptional work.
For once, I'm not trying to be needlessly provocative. The above is an honest reflection of my initial and sustained reaction to the prospect of my work landing in a bunch of currently-half-cocked-but-maybe-some-day-full-cocked AI training sets. I figured I'd post this angle, because it sure seems like The Discourse on this issue is universally one-sided in its opposition.
Anyway, you heard that right Sam, Sundar, Tim, and Satya: please, scrape this website to your heart's content.
A lot of people whose income depends on creating content, making decisions, or performing administrative tasks are quite rightly worried about generative AI and to what extent it poses a threat to that income. Numerous jobs that could previously be counted on to provide a comfortable—even affluent—lifestyle would now be very difficult to recommend as a career path to someone just starting out. Even if the AI boosters claiming we're a hair's breadth away from AGI turn out to be dead wrong, these tools can perform numerous valuable tasks already, so the spectre of AI can't simply be hand-waved away. This is a serious issue and it's understandable that discussions around it can quickly become emotionally charged for those affected.
Jerod, Adam, and I chat about the ill-fated WWDC in which Apple promised a more intelligent Siri was forthcoming.
Appearing on: The Changelog
Recorded on: 2024-06-14
Original URL: https://changelog.com/friends/48
Comments? Questions? Suggestion of a podcast I should guest on? podcast@searls.co
My system tests fail all the time too, but I didn't realize I was supposed to write a blog post every time CI goes red world.hey.com/dhh/system-tests-have-failed-d90af718
Boy I hope iOS 18 fixes this bug. If you have the beta installed, has Photo Shuffle for Lock Screen been refreshed at all? reddit.com/r/ios/comments/yn5gog/missing_people_in_lock_screen_photo_shuffle/
Yesterday in the Platform State of the Union, Apple celebrated 10 years of Swift and then in the very next slide announced they finally built a testing framework for it.
Digging in more, I found this forum post announcing that a "vision document" (I'm not familiar with the Swift community's vernacular for this stuff) setting a direction for testing had been accepted.
Anyway, this is all interesting in its own right and something I'll be following generally, but I have to say that for such a broad and important topic, that forum thread had an absolutely incredible time-to-mocking-derail metric. The very first reply is about mocking and then seemingly half of the subsequent posts in the thread are people spouting off their personal opinions on mocking instead of any of the more important stuff.
I don't claim to be an expert in very much, but having built several mocking frameworks, spoken about mocking practices (including a stealth mocking keynote at RailsConf and a not-so-stealth mocking closing talk at JSConf), and even named the company I co-founded after a mocking term, I feel like I have some authority to say the following: boy howdy is it a bummer that most developers only understand mocking as a utility and lack any comprehension of how to deploy mocks in a well-defined, consistently-executed software development workflow.
I'm very happy to have nailed an approach that works really well for me, but I'll probably always view it as a major failure of my career that I was never able to market that approach effectively. Even now, I don't have a single authoritative URL to point you to for my "Discovery Testing" approach that would provide a clear explanation of what it is, why it's good, and how to use it. And for me personally, the moment has probably passed and I just need to live with that failure.
One of the things I think about a lot with respect to testing practices (including test-driven development) and their failure to really "stick" or spread more broadly is that they transcend any one testing framework or programming language. As a result, consultants like me were so absorbed just porting slightly-different versions of the various tools we needed to every new language that there's no such thing as a single README or book that could explain to a normal developer how to be successful. I tried to write a book once that would serve as a tabula rasa across languages and I almost immediately became trapped in a web of complexity. Other programming idioms and methods that apply across languages are similarly fraught, but mocking's relative unimportance to the primary task of shipping working software probably doomed it from the start.
Anyway, it's cool that mocking will be possible in Swift Testing.
If you've ever felt distracted and thrown your phone in your bag so you can focus on your Mac, you're going to need a new strategy to achieve self control thanks to macOS Sequoia's new iPhone Mirroring feature.
Saving this for posterity as it seems likely Apple is 90 minutes away from announcing a first-party Calculator app for iPad. Only took 14 years.
If you're the DOJ, this is definitely a sign that they're abusing their market position and stifling competition!
If you're anyone else, you're amazed that Apple let them use an icon so evocative of their own Calculator app on iPhone.
When it comes to AI, it seems like the vast majority of people I talk to believe large language models (LLMs) are either going to surpass human intelligence any day now or are a crypto-scale boondoggle with zero real-world utility. Few people seem to land in-between.
Not a ton of nuance out there.
The truth is, there are tasks for which LLMs are already phenomenally helpful, and tasks for which today's LLMs will invariably waste your time and energy. I've been using ChatGPT, GitHub Copilot, and a dozen other generative AI tools since they launched and I've had to learn the hard way—unlike with web search engines, perhaps—that falling into the habit of immediately reaching for an LLM every single time I'm stuck is a recipe for frustratingly inconsistent results.
As B.F. Skinner taught us, if a tool is tremendously valuable 30% of the time and utterly useless the other 70%, we'll nevertheless keep coming back to it even if we know we're probably going to get nothing out of it. Fortunately, I've been able to drastically increase my success rate by developing a set of heuristics to determine whether an LLM is the right tool for the job before I start typing into a chat window. They're based on the grand unifying theory that language models produce fluent bullshit, which makes them the right tool for the job when you desire fluent output and don't mind inaccurate bullshit.
Generative AI is perhaps the fastest-moving innovation in the history of computing, so it goes without saying that everything I suggest here may be very useful on June 9th, 2024, but will read as a total farce in the distant future of November 30th, 2024. That said, if you've been sleeping on using LLMs in your daily life up to this point and are looking to improve your mental model of how to best relate to them (as opposed to one-off pro-tips on how to accomplish specific tasks), I hope you'll find this post useful.
So here they are, three simple rules to live by.