As first reported by The Guardian, Microsoft’s chief scientific officer, Dr. Eric Horvitz, has cautioned that Donald Trump’s proposed 10-year ban on US states enacting their own artificial intelligence laws could stifle innovation rather than protect it. Speaking at a meeting of the Association for the Advancement of Artificial Intelligence, Horvitz warned that blocking regulatory frameworks would hinder both the scientific advancement of AI and its safe implementation. “This is going to hold us back,” he said, emphasizing that regulation and reliability controls are essential to the field’s progress.
Despite his public concerns, Microsoft is reportedly part of a powerful Silicon Valley lobbying effort—alongside Google, Meta, and Amazon—aimed at supporting the ban. The Financial Times recently reported that the proposed moratorium has been embedded in Trump’s federal budget bill, which he has urged Congress to pass by July 4. Tech giants argue that uniform federal oversight is preferable to a patchwork of state laws, which they say could complicate innovation and investment.
Trump and his allies frame the ban on state regulation as a strategic necessity to win the AI race against China. Venture capitalist and Trump supporter Marc Andreessen has described it as a “two-horse race,” while Vice President JD Vance recently suggested that any delay in US AI development could lead to dominance by “China-mediated AI.” Critics warn, however, that this deregulatory push prioritizes geopolitical posturing and profit over safety and the public interest.
Other experts have echoed Horvitz’s concerns. UC Berkeley professor Stuart Russell questioned why society would release a technology whose own creators estimate a 10% to 30% chance of human extinction. OpenAI CEO Sam Altman, meanwhile, has predicted humanoid robots walking among us within a decade, and Meta and others are pouring billions into the pursuit of artificial general intelligence (AGI), despite disagreement on when—or whether—it will arrive.
Conclusion
The clash between corporate lobbying, political agendas, and scientific caution highlights the deep divide over how to manage the rise of powerful AI systems. While industry leaders push for freedom from state oversight, voices from within the same institutions urge guardrails to prevent catastrophic outcomes. The direction US policy takes may have lasting consequences—not only for innovation but for global safety.
