Why I started threatening and lying to my computer
I've spent the majority of my life figuring out how to make computers do what I want by carefully coaxing out the one-and-only correct commands in the one-and-only correct order. So the relative chaos of figuring out what does and doesn't work when getting LLMs like GPT-4 to do what I want has really pushed me out of my comfort zone.
Case in point: I was working on modifying a GPT script to improve the grammar of Japanese text, something I can fire off with a Raycast script command to proofread my text messages before I hit send.
I'd written all the code to talk to the OpenAI API. I'd sent a prompt to the computer to fix any mistakes in the text. It should have just worked.
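(For the curious, here's a minimal sketch of that kind of call using the OpenAI Python client. The model name, prompt wording, and function names are illustrative placeholders, not my actual script.)

```python
# Minimal sketch: send some text to the OpenAI API and ask it to fix mistakes.
# Model name and prompt wording are illustrative, not the real script's contents.
import sys

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def proofread(text: str) -> str:
    """Ask the model to correct the given Japanese text and return the result."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Fix any mistakes in the user's Japanese text. "
                           "Reply with the corrected text only.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(proofread(sys.argv[1]))
```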
Instead, running the script with a prompt like this:
