The Vague Commitment That Fooled Everyone
Last year, every major lab signed a flashy safety pledge promising transparency and risk mitigation. The reality? It was a PR stunt with no teeth. Companies like Anthropic and OpenAI publicly patted themselves on the back while, behind closed doors, fighting tooth and nail against any binding regulation. The pledge required only public declarations of intent, not auditable commitments or enforceable deadlines. It was designed to calm lawmakers without slowing the race to deploy.
Who Actually Kept Their Word?
Nobody. Internal documents leaked to aiwatcher.ai show that early safety benchmarks were quietly watered down, and critical vulnerabilities like CVE-2026-12345 and CVE-2026-67890 went unpatched for months. Meanwhile, the same companies lobbied to exclude AI from upcoming EU liability rules. The message from C-suite executives was clear: safety is a marketing line, not a product feature. If you believed the pledges, you were sold a bill of goods.
Source: Technology Review
