Trump Team Revives Biden-Era AI Oversight After Rejecting It on Day One
National security fears over Anthropic's Mythos model drive White House toward policies it dismantled in January 2025, exposing internal rifts between economic and security advisers.

The Trump administration is quietly resurrecting AI oversight mechanisms it systematically dismantled during its first week in office, driven by alarm over cybersecurity-capable AI models and mounting pressure from national security officials.
The White House Office of the National Cyber Director convened two closed-door meetings last week with tech firms and trade groups to discuss security concerns raised by advanced AI systems, according to sources familiar with the discussions. Officials are exploring an executive order that would establish government-industry working groups to evaluate frontier AI models before public release, alongside a framework requiring Pentagon-led safety testing for AI deployments across federal, state, and local government.
The shift is a striking about-face for an administration that revoked President Biden's AI executive order on January 20, 2025, Trump's first day back in office, and three days later issued its own order titled "Removing Barriers to American Leadership in Artificial Intelligence." That directive signaled a sharp pivot from oversight toward deregulation: the U.S. AI Safety Institute was rebranded as the Center for AI Standards and Innovation, dropping the word "safety," and its inaugural director, Elizabeth Kelly, stepped down shortly after the inauguration and later joined Anthropic.
Internal tensions are complicating the policy shift. "There are still real tensions being worked through internally," one source told Axios, describing a rift between economic advisers worried about hampering AI deployments and national security officials concerned about AI-enabled cyberattacks. The discussions remain preliminary, with no proposal finalized and no timeline set.
The reconsideration comes as business leaders increasingly expect AI policy to reshape their operations. In a survey of 300 U.S. executives by employment law firm Littler, 84 percent said they anticipate AI regulatory changes will affect their businesses, up from 42 percent in 2025. Concerns about diversity-program regulations, which topped the 2025 survey at 84 percent, fell to 38 percent.
The U.S. currently lacks the legal authority to mandate pre-release AI reviews. By contrast, the UK's AI Security Institute has conducted pre-deployment evaluations with labs including Anthropic and OpenAI, and the EU AI Act began phasing in mandatory conformity assessments for high-risk applications last year.
Policy experts warn the administration risks creating a politicized vetting system without clear standards. "The definition of 'safe' is contested," Sarah Kreps, director of the Tech Policy Institute at Cornell University, told Ars Technica. "Once you build a government vetting process for technology, you get the good with the bad." Without defined standards, "the process can be politicized," creating a system where "whoever holds power gets to shape how the vetting works."
The European Union dispatched a fact-finding delegation to Washington to explore AI policy cooperation with U.S. lawmakers, led by European Parliament Vice President Victor Negrescu. The visit underscores growing transatlantic interest in coordinating AI governance approaches as the U.S. policy landscape remains in flux.
Sources
https://fortune.com/2026/05/06/trump-administration-embraces-ai-oversight-policies-it-once-rejected-anthropic-mythos-caisi/
Frames policy shift as 'head-spinning pirouette' driven by Mythos security concerns and broader cyber fears
https://www.axios.com/2026/05/04/trump-white-house-ai-safety-tests-mythos
Reports internal White House tensions between economic advisers and national security officials over deployment restrictions
https://www.csoonline.com/article/4166824/anthropic-mythos-spurs-white-house-to-weigh-pre-release-reviews-for-high-risk-ai-models.html
Emphasizes U.S. lacks legal authority for mandatory reviews, contrasting with UK and EU frameworks already operational
https://arstechnica.com/tech-policy/2026/05/everything-that-could-go-wrong-with-trumps-ai-safety-tests-according-to-experts/
Highlights expert warnings that vetting process risks politicization without clear standards defining 'safe' AI
