justin.searls.co

This time, it feels different

This essay almost exactly mirrors my feelings about the AI innovations we've seen spring forth over the last year. If you know me at all, you know that I've made my career by sticking my head in the sand and ignoring new technology trends in favor of following the market's fundamentals, even when I'm very excited about those innovations personally. (It's why I'm still making full-stack web apps instead of building everything with native Apple SDKs.)

That said, I've been dismayed to see so many of my friends who reside along the same pessimistic-bordering-on-cynical gradient remain stubbornly dismissive of AI as just another fad. This isn't crypto. The real-world economic impact of even the most basic tools has already been profound, and humans are nowhere close to catching up with the implications for a huge swath of jobs in the knowledge economy.

Sure. Few claim that LLMs possess human-like intelligence, “think” like humans, or exhibit self-awareness. Then again, there are also schools of thought that argue humans are just glorified response generators themselves. Philosophical disputes aside, it's important to note that a large number of the white-collar jobs on top of which economies are built involve only modest amounts of comprehension, bounded decision making, and text generation: paper pushing, “code monkeying,” and the like. And really, probabilistic text generation that maintains context across numerous human and computer languages while remaining meaningful, useful, and conversational, all while exhibiting at least the illusion of reason, in what world is that a trivial achievement to be dismissed with a “just”!? Those goal posts have shifted hilariously fast.

Ever since my post about AI and jobs in March, I have felt my take was overly optimistic. The obvious limitations of the tools we see today (e.g. LLM hallucination) do indeed limit the practical application of AI, but the potential for composability to address these concerns is sobering (e.g. a supervisory model that audits an LLM's output and catches its hallucinations), and it should distress anyone who would prefer that AI didn't devour the middle-class economy.
