Greg Brockman’s recent reflections on OpenAI’s early Dota 2 project reveal a pivotal moment in AI history, but the real story is how it set a dangerous precedent for the industry. The project, initially a pet idea of Brockman’s, quickly became a scaling obsession. Jakub Pachocki’s first bot trained on just 16 CPU cores, yet as the team doubled its compute each week, performance doubled right along with it, with no ceiling in sight. This was the moment OpenAI learned that brute-force scaling could mask fundamental research gaps. It’s a lesson the company has weaponized ever since, prioritizing compute over creativity.
The Open Source Betrayal
Brockman admitted that OpenAI chose not to open-source its Dota 2 technology because DeepMind was working on a similar game but ‘had nothing.’ This was the first of many deceptive moves by the company. They framed it as competitive necessity, but it was really about hoarding a scaling advantage. The decision foreshadowed the closed-source approach that now defines GPT-4 and beyond, locking away critical safety insights from the broader community.
The Unstoppable Scaling Delusion
The Dota 2 project’s most dangerous legacy is the scaling delusion. Brockman said they ‘kept increasing the scale, thinking this would peter out, but it never did.’ This has become OpenAI’s gospel, but it’s a lie of omission. Scaling alone doesn’t solve alignment, safety, or robustness. The industry is now chasing infinite compute without grappling with the fundamental problems that will inevitably surface when that scaling hits a wall. OpenAI’s Dota 2 story isn’t about a breakthrough; it’s about how the company learned to hide its failures behind bigger clusters.
Source: The Verge
