In the LW discussion of "Will releasing the weights of large language models grant widespread access to pandemic agents?" (pdf), one of the main questions was whether open-source models were uniquely dangerous: could hackathon participants have made similar progress towards learning how to obtain infectious 1918 flu even without access to an LLM, using traditional sources (Google, YouTube, reading papers, etc.)?
Resolves YES if the authors run a similar no-LLM experiment and find that yes-LLM hackathon participants are far more likely to find key information than no-LLM participants.

Resolves NO if the authors run a similar no-LLM experiment and find that yes-LLM hackathon participants are not far more likely to find key information than no-LLM participants.

Resolves N/A if the authors don't run a similar no-LLM experiment.
Disclosure: I work for SecureBio, as do most of the authors. I work on a different project within the organization and don't have any inside information on whether they intend to run a no-LLM experiment or how it might look if they decide to run one.
If at close (currently 2024-06-01) the authors say they're working on a no-LLM version but haven't finished yet, I'll extend until they do, to a maximum of one year from the opening date (i.e., until 2024-10-31).