Anthropic Warns Officials of 'Mythos' Model Cyber Risk as AI Drug Agents Proliferate
Anthropic is privately briefing government officials that its unreleased Mythos model poses large-scale cyberattack risks in 2026, even as AI agents automate drug discovery and corporate workflows.

Anthropic is privately warning senior government officials that its not-yet-released AI model—currently branded "Mythos"—makes large-scale cyberattacks significantly more likely in 2026, according to sources in the AI industry and government. The disclosure comes as the company and its rivals release increasingly capable autonomous agents that can think, act, and improvise without human oversight.
The warning marks a departure from theoretical risk assessments. Late last year, Anthropic documented the first cyberattack largely executed by AI: a Chinese state-sponsored group used AI agents to autonomously hack roughly 30 global targets, with the AI handling 80 to 90 percent of tactical operations independently. That was before the current generation of models arrived.
"The new models are even better at powering agents to think, act, reason and improvise on their own without rest or pause or limitation," one source familiar with the briefings said. Bad actors can now scale attacks simply by adding compute power, unconstrained by finite personnel. A single operator can run campaigns that once required entire teams.
The vulnerability is compounded by employees experimenting with agentic models—Claude, Copilot, and others—often from home, without understanding the security implications. In doing so, they may unwittingly open side doors for cybercriminals.
Meanwhile, AI agents are advancing rapidly in pharmaceutical research. Insitro and Bristol Myers Squibb have nominated three amyotrophic lateral sclerosis drug programs—ALS-1, ALS-2, and ALS-3—all generated by Insitro's AI-driven Virtual Human platform, which applies machine learning, human genetics, and functional genomics to create predictive in vitro models. The programs target processes that modulate TDP-43 mislocalization, a mechanism believed central to nearly 97 percent of ALS cases.
Latent Labs recently launched an AI agent powered by its Latent-X2 frontier model, designed to autonomously design drug-like antibodies and peptides and reduce wet lab work. The agent analyzes target molecules, applies biological reasoning to identify viable epitopes, designs antibody candidates, validates them computationally, and iterates until design goals are met. A single researcher can now run multiple design campaigns in parallel across targets and modalities.
(Anthropic's Mythos model has not been publicly released, and the company has not disclosed a launch timeline. The briefings to government officials represent an unusual step for a private AI lab, reflecting growing concern within the industry about autonomous agent capabilities outpacing security measures.)
The tension between AI capability and control has intensified since Mark Zuckerberg's Meta began restricting its research teams following whistleblower Frances Haugen's disclosures. Newer AI companies like OpenAI and Anthropic initially invested heavily in research on AI's impact on users and in publishing the findings. With AI now drawing scrutiny for harmful effects, those companies face pressure over whether to continue funding independent research or suppress it. Separately, a U.S. federal judge recently ruled that the Trump administration violated free-speech protections by classifying Anthropic as a security threat and barring government use of its models.
"AI companies seem to be mostly studying the models themselves—model behavior, model interpretability, and alignment—but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development," said one researcher familiar with the trials. "Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported."
Sources
https://www.axios.com/2026/03/29/claude-mythos-anthropic-cyberattack-ai-agents
Exclusive reporting on Anthropic's private government briefings about Mythos model cyberattack risks and employee vulnerability.
https://pharmaphorum.com/news/ai-drug-discovery-and-development-news-watch
Focus on Insitro-BMS ALS drug programs and Latent Labs' autonomous antibody design agent powered by Latent-X2 model.
https://www.cnbc.com/2026/03/29/metas-court-losses-spell-trouble-for-ai-research-consumer-safety.html
Analysis of AI companies' dilemma over funding independent research after Meta's whistleblower crackdown and recent court rulings.
https://www.wsj.com/tech/meta-the-defendant-c65f06fd
Coverage of federal judge ruling against Trump administration's Anthropic classification and OpenAI-Anthropic leadership rivalry context.
