Remember when Anthropic framed its Mythos Preview as a digital Frankenstein too dangerous to let loose? The company restricted access to ‘critical industry partners’ and basked in the glow of its own hype. Well, the UK’s AI Security Institute (AISI) just dropped a reality bomb. Their new evals show OpenAI’s GPT-5.5, released publicly last week, hits essentially the same marks on high-level cyber tasks. Mythos Preview passed 68.6% of expert-level challenges; GPT-5.5 scored 71.4%. That is within the margin of error, making Anthropic’s alarmist rollout look more like a marketing stunt than a genuine safety concern. AISI’s conclusion is brutal: the capability is ‘a byproduct of more general improvements in long-horizon autonomy, reasoning, and coding,’ not some unique, locked-in-a-vault breakthrough.
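How tight is that margin? AISI has not published the exact number of expert-level challenges in the suite, so the 35-task count in the sketch below is a hypothetical figure, chosen only because 24/35 and 25/35 reproduce the reported 68.6% and 71.4%. A plain two-proportion z-test on those numbers shows the gap is statistically indistinguishable from noise.

```python
from math import sqrt

# Hypothetical challenge count: AISI has not published it. 35 is used only
# because 24/35 and 25/35 reproduce the reported 68.6% and 71.4% pass rates.
n = 35
mythos_passes, gpt_passes = 24, 25

p1, p2 = mythos_passes / n, gpt_passes / n

# Pooled two-proportion z-test: is the gap bigger than sampling noise?
p_pool = (mythos_passes + gpt_passes) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p2 - p1) / se

print(f"Mythos Preview: {p1:.1%}, GPT-5.5: {p2:.1%}")
print(f"z = {z:.2f} (needs |z| > 1.96 for significance at the 5% level)")
# With these numbers z comes out around 0.26, nowhere near 1.96, so the
# 2.8-point gap sits comfortably inside ordinary sampling noise.
```

Even if the real challenge set were several times larger, a gap this small would still fall short of the conventional significance threshold.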
The Altman Takedown: Fear as a Sales Pitch
OpenAI CEO Sam Altman, never one to miss a PR opportunity, called out the practice by name in a recent podcast interview. He labeled it ‘fear-based marketing’ and drew a sharp analogy: ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million.’ He predicted more of this rhetoric about models that are ‘too dangerous to release,’ while noting the irony that truly dangerous models will still need controlled deployment. It is a pointed critique of an industry that increasingly uses safety theater to manufacture exclusivity and justify sky-high enterprise contracts. The subtext is clear: if you cannot beat your competitor on raw metrics, scare your customers into thinking you are the only one responsible enough to handle the fire.
The Real Cyber Arms Race Is Getting Weird
Meanwhile, the actual capabilities keep advancing with little fanfare. AISI ran a grueling 32-step data-extraction attack simulation called ‘The Last Ones.’ GPT-5.5 succeeded 3 out of 10 times. Mythos Preview? Only 2 out of 10. No prior model had ever succeeded once. OpenAI is now doubling down on this by limiting access to its GPT-5.5-Cyber variant to ‘critical cyber defenders’ from Thursday onward. This is not about safety. This is about creating a new class of AI weapon for a select few. Researchers should look closely at CVE-2026-33421 (https://cve.org/CVE-2026-33421) and CVE-2026-33897 (https://cve.org/CVE-2026-33897), which detail vulnerabilities found during these very tests. The hype cycle is collapsing under the weight of its own contradictions, and the only thing ‘too dangerous to release’ might be the truth about who is really in control.
Source: Ars Technica
