White House AI Regulation Plan Stalls as Internal Divisions Surface Over FDA Model
Trump administration officials publicly contradict each other on proposed AI safety framework, exposing policy paralysis as industry awaits clarity on model approval process.

The Trump administration's effort to establish a regulatory framework for artificial intelligence has stalled amid internal disagreements, with senior officials publicly contradicting one another over the scope and structure of proposed oversight mechanisms.
White House chief of staff Susie Wiles moved to walk back comments from National Economic Council Director Kevin Hassett, who had told reporters the administration was developing an FDA-style approval system for AI models. Hassett's proposal would require companies to demonstrate safety before releasing new systems "to the wild," according to The Wall Street Journal. Wiles' intervention suggests the administration remains divided on whether to pursue pre-deployment testing requirements.
The policy confusion follows briefings in which Vice President JD Vance was reportedly alarmed by demonstrations of Anthropic's Mythos model, particularly its ability to autonomously identify software vulnerabilities. Officials cited concerns that advanced AI systems could target critical infrastructure managed by local governments lacking defensive capabilities, according to accounts from both the Journal and The Washington Post.
One administration official told the Post that implementation details are "still being hashed out," a phrase that typically signals unresolved internal disputes rather than routine policy development. The lack of consensus comes as AI companies await clarity on regulatory expectations, with some firms already adjusting release schedules in anticipation of new requirements.
(The Trump administration rescinded Biden-era AI oversight measures on its first day in office, only to begin reconstructing similar frameworks weeks later after security briefings. The policy reversal has created uncertainty for companies that had begun adapting to the previous regulatory environment.)
The FDA comparison carries significant implications for the AI industry. Pharmaceutical approval processes can take years and cost hundreds of millions of dollars, though Hassett did not specify whether AI oversight would mirror that timeline or intensity. The analogy also raises questions about liability frameworks, as drug manufacturers face strict product liability standards that currently do not apply to software companies.
Britain's approach, which involves government officials verifying that AI systems meet safety standards without formal pre-approval requirements, has emerged as an alternative model under discussion, according to reporting by The New York Times. That framework allows faster deployment while maintaining government oversight authority.
Israel has pursued what it terms "soft regulations" for AI, establishing ethical principles without mandatory compliance mechanisms. Innovation Ministry officials launched a national expert forum in September 2024 to develop strategy, emphasizing facilitation of development alongside rights protection. That approach contrasts sharply with the pre-deployment testing regime Hassett described, illustrating the range of regulatory philosophies governments are considering as AI capabilities advance faster than policy frameworks.
Sources
https://www.washingtonpost.com/wp-intelligence/ai-tech-brief/2026/05/08/ai-tech-brief-white-houses-tug-of-war-ai-policy/
Frames administration disagreement as 'tug-of-war,' highlighting Wiles' walkback of FDA comparison as evidence of internal conflict
https://www.jpost.com/business-and-innovation/tech-and-start-ups/article-895605
Emphasizes Vance's alarm over Mythos capabilities and local infrastructure vulnerability concerns driving regulatory push
https://www.nytimes.com/2026/05/05/business/dealbook/trump-ai-regulation.html
Introduces UK safety standards model as alternative regulatory framework under administration consideration
https://www.washingtonpost.com/wp-intelligence/ai-tech-brief/2026/05/04/ai-tech-brief-dangers-fine-tuning-ai/
Explores technical challenges of AI safety customization as backdrop to broader regulatory debate
