AI Accelerates Cyber Exploitation and Research Output, Raising Systemic Questions
New data shows AI tripling researcher productivity while cutting vulnerability exploitation time from weeks to hours, forcing institutions to confront whether speed metrics reflect progress or risk.

Artificial intelligence is compressing timelines across two critical domains—scientific research and cybersecurity—in ways that amplify both opportunity and systemic risk, according to multiple recent assessments.
A January study published in Nature, which analyzed 41.3 million research papers, found that scientists using AI publish 3.02 times as many papers, receive 4.84 times as many citations, and become research project leaders 1.37 years earlier than peers who do not use the technology. The findings, produced by academics at Tsinghua University and the University of Chicago, raise questions about whether AI is improving the quality of science or merely turbocharging problematic incentive structures tied to publication volume and citation counts.
In cybersecurity, the acceleration is even starker. Confirmed exploitation of newly disclosed high-severity vulnerabilities rose 105 percent year-over-year, from 71 in 2024 to 146 in 2025. The time required to move from vulnerability disclosure to active exploitation has collapsed from days or weeks to mere hours, according to security researchers tracking the trend.
"Tenzai now showing how their agents win at 99% of six CTFs shows a maturity of the capability in the market, even though the proliferation of such capabilities to pretty much everybody is already there, and growing," said Gadi Evron, cofounder and CEO of Knostic. His firm tracks offensive AI capabilities, which he says have reached a "singularity moment" for hackers.
