
This question asks about a concern raised by Dario Amodei in a Senate subcommittee hearing:
Dario Amodei, CEO of Anthropic, told a Senate Judiciary subcommittee that the prospect of AI helping people develop and deliver biological weapons is a medium-term risk that his company is grappling with today.
"Over the last six months, Anthropic, in collaboration with world-class biosecurity experts, has conducted an intensive study on the potential for AI to contribute to the misuse of biology," he said.
"Today, certain steps in bioweapons production involve knowledge that can’t be found on Google or in textbooks and requires a high level of specialized expertise — this being one of the things that currently keeps us safe from attacks," he added.
He said today’s AI tools can help fill in "some of these steps," though only "incompletely and unreliably." Even so, he said today’s AI already shows "nascent signs of danger," and his company believes systems just a few years from now will come much closer to filling in these steps completely.
"A straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks," he said. "We believe this represents a grave threat to U.S. national security."
Amodei added that Anthropic has briefed government officials on this assessment, "all of whom found the results disquieting."
This question asks whether the concerns Amodei raises here will be borne out: whether, within three years, an AI will have knowledge sufficient to "fill in all the missing pieces," enabling rogue actors to plan and "carry out large-scale biological attacks." For this question to resolve YES, the AI in question does not need to be publicly released; internal experiments with a private model can be sufficient demonstration. An AI that can provide a high-level plan but cannot supply important low-level details would not be enough to resolve YES.