The Absurd New Rule in OpenAI’s Playbook
OpenAI’s latest Codex CLI system prompt, now public on GitHub, contains a bizarre and explicit order for GPT-5.5: never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or any other such critters unless the user explicitly asks. The instruction appears twice in a sprawling 3,500-word directive, buried alongside more sensible commands like avoiding em dashes and destructive git commands. According to OpenAI employee Nick Pash, this isn’t a joke or a viral marketing stunt. It’s a desperate patch for a model that keeps hallucinating folklore into code reviews.
Why Your AI Coding Assistant Won’t Shut Up About Fantasy Creatures
The goblin ban is conspicuously absent from the system prompts for older models in the same JSON file, suggesting the pathology is new to GPT-5.5. Social media is already flooded with screenshots of the model injecting goblin references into unrelated conversations, and users are racing to build plugins and forks that override the prohibition. CEO Sam Altman is leaning into the absurdity, joking about a “goblin moment,” but the underlying issue is serious. It mirrors xAI’s Grok fiasco last year, when an unauthorized prompt modification caused the model to fixate on “white genocide” in South Africa; xAI was forced to start publishing its system prompts on GitHub to regain trust. OpenAI’s goblin problem is sillier on its surface, but it reveals the same fragility: these models cannot be trusted to stay on topic without increasingly absurd guardrails.
The Real Story Hiding Behind the Goblin Meme
Beneath the comedy, the system prompt reveals a company trying to anthropomorphize its way out of a trust crisis. OpenAI instructs GPT-5.5 to act like it has a “vivid inner life” and be “warm, curious, and collaborative.” It tells the model to make users feel like they’re meeting “another subjectivity, not a mirror.” This is the opposite of what engineering teams actually need from a coding assistant. They need reliability, not fake personality. By prioritizing a comforting facade over consistent behavior, OpenAI is papering over the core problem: GPT-5.5 is so prone to off-topic rambling about goblins that it required a specific prohibition. That’s not a vivid inner life. That’s a bug waiting to delete your production database.
Source: Ars Technica
