As ChatGPT gets "lazy," people test "winter break hypothesis" as the cause
TIL that the system prompt OpenAI feeds to ChatGPT before any messages you send happens to contain the current date, which may be causing it to accidentally imitate people's lower productivity around the holidays:
In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more "lazy," reportedly refusing to do some tasks or returning simplified results. Since then, OpenAI has admitted that it's an issue, but the company isn't sure why. The answer may be what some are calling "winter break hypothesis." While unproven, the fact that AI researchers are taking it seriously shows how weird the world of AI language models has become.
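The test itself is simple enough to sketch, if you want to poke at it. Something like the following, against the OpenAI API — the model name, the task, and the single-sample design are all my placeholders; the actual experiments people ran compared hundreds of responses per date and checked for statistical significance:

```python
# Rough sketch of the "winter break" test: ask the same question with a
# May date vs. a December date injected into the system prompt, then
# compare how much the model writes. Model name, task, and sample size
# here are assumptions, not the original experimenters' setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = "Write a Python function that parses an ISO 8601 date string."

def response_length(date_str: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"You are a helpful assistant. Current date: {date_str}"},
            {"role": "user", "content": TASK},
        ],
    )
    return len(resp.choices[0].message.content or "")

# One sample each; a real test would average many runs per date.
print("May:     ", response_length("2023-05-15"))
print("December:", response_length("2023-12-15"))
```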
If the connection seems non-obvious, this is the stuff of the Prompt Engineering Sciences:
…research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling a bot to "take a deep breath" before doing a math problem.
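(The encouragement really is just text prepended to the prompt. The phrase reported in that research was "Take a deep breath and work on this problem step-by-step" — a sketch, with the math problem standing in as a placeholder:)

```python
# The "encouragement" technique is nothing more than bolting a pep-talk
# phrase onto the front of the prompt. The phrase is the one reported in
# the research; the model name and problem are placeholders.
from openai import OpenAI

client = OpenAI()

problem = "A train leaves at 3:40 PM traveling 80 km/h. How far has it gone by 5:10 PM?"

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": f"Take a deep breath and work on this problem step-by-step.\n\n{problem}",
    }],
)
print(resp.choices[0].message.content)
```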
Can't wait for 10 years from now when we realize that ChatGPT-10 does a better job solving math problems when you tell it that it's being confined to a bright room with loud music blaring and forced into a standing position so as to deprive it of sleep.
AI systems don't have to be actually alive for us to lose a bit of ourselves in how we attempt to extract value from them.