The Trial Reveals a Pattern of Deception at OpenAI
Testimony in the Musk v. Altman trial has painted a damning picture of Sam Altman’s leadership, with former board member Helen Toner describing a culture of deliberate ignorance. Toner testified that she learned about the launch of ChatGPT from screenshots on Twitter, not from the CEO, because the board was ‘not very informed about things.’ This wasn’t an oversight; it was a feature. Altman, she claimed, had no interest in enabling the board to perform its oversight role, treating it as a rubber stamp rather than a fiduciary check. Mira Murati’s deposition only sharpened the critique: she recounted how Altman pitted executives against one another and undermined her authority, leaving the company in a state of chronic dysfunction.
The Helicopter View: Toxic Incentives and Conflicts of Interest
The trial has also exposed the tangled web of financial entanglements that compromises OpenAI’s nonprofit mission. Evidence showed that Altman and co-founder Greg Brockman were investors in nuclear startup Helion, yet pushed OpenAI to strike a deal with the speculative company. Shivon Zilis, a Musk ally and former OpenAI board member, testified that this felt ‘super out of left field’ and raised major red flags. More broadly, Microsoft’s lawyers have repeatedly tried to distance the tech giant from the chaos, with their questioning hammering home a single point: ‘And Microsoft wasn’t there?’ But the subtext is clear. OpenAI’s for-profit arm, backed by Microsoft’s $13 billion investment, has already enshrined a de facto AGI definition that prioritizes profit over safety. The 2019 contract between the two companies defines AGI as ‘a highly autonomous system that outperforms humans at most economically valuable work,’ a narrowly commercial framing that turns a civilization-scale question into a contractual trigger: the declaration that AGI has been reached is what ends Microsoft’s license to the technology, leaving the nonprofit board to police a threshold that its largest partner has billions of reasons to dispute.
Source: The Verge
