LW - Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk by 1a3orn


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk, published by 1a3orn on November 2, 2023 on LessWrong.
0: TLDR
I examined all the biorisk-relevant citations from a policy paper arguing that we should ban powerful open source LLMs.
None of them provide good evidence for the paper's conclusion. The best of the set is evidence from statements from Anthropic -- which rest upon data that no one outside of Anthropic can even see, and on Anthropic's interpretation of that data. The rest of the evidence cited in this paper ultimately rests on a single extremely questionable "experiment" without a control group.
In all, citations in the paper provide an illusion of evidence ("look at all these citations") rather than actual evidence ("these experiments are how we know open source LLMs are dangerous and could contribute to biorisk").
A recent further paper on this topic (published after I had started writing this review) continues this pattern of being more advocacy than science.
Almost all the bad papers that I look at are funded by Open Philanthropy. If Open Philanthropy cares about truth, then they should stop burning the epistemic commons by funding "research" that is always going to give the same result no matter the state of the world.
1: Principles
What could constitute evidence that powerful open-source language models contribute or will contribute substantially to the creation of biological weapons, and thus that we should ban them?
That is, what kind of anticipations would we need to have about the world to make that a reasonable thing to think? What other beliefs are a necessary part of this belief making any sense at all?
Well, here are two pretty-obvious principles to start out with:
Principle of Substitution: We should have evidence of some kind that LLMs can (or will) provide information that humans cannot also easily access through other means -- i.e., through the internet, textbooks, YouTube videos, and so on.
Blocker Principle: We should have evidence that the lack of information that LLMs can (or will) provide is in fact a significant blocker to the creation of bioweapons.
The first of these is pretty obvious. As an example: there's no point in preventing an LLM from telling me how to make gunpowder, because I can find out how to do that from an encyclopedia, a textbook, or a novel like Blood Meridian. If you can substitute some other source of information for an LLM with only a little inconvenience, then an LLM does not contribute to the danger.
The second is mildly less obvious.
In short, it could be that most of the blocker to creating an effective bioweapon is not knowledge -- or the kind of knowledge that an LLM could provide -- but something else. This "something else" could be access to DNA synthesis; it could be the process of culturing a large quantity of the material; it could be the necessity of a certain kind of test; or it could be something else entirely.
You could compare this to atomic bombs -- the chief obstacle to building an atomic bomb is probably not the actual knowledge of how to do it, but access to refined uranium. Thus, rather than censor every textbook on atomic physics, we can simply control access to refined uranium.
Regardless, if this other blocker constitutes 99.9% of the difficulty in making an effective bioweapon, and lack of knowledge only constitutes 0.1% of the difficulty, then an LLM can only remove that 0.1% of the difficulty, and so open source LLMs would only contribute marginally to the danger. Thus, bioweapons risk would not be a good reason to criminalize open-source LLMs.
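To put that arithmetic in concrete terms, here is a minimal sketch (mine, not a model from any cited paper), treating the 99.9% / 0.1% split as a purely illustrative assumption and modeling total difficulty as a simple additive budget of blockers:

```python
# Illustrative sketch only: difficulty modeled as an additive budget of
# blockers, with a made-up 99.9% / 0.1% split from the argument above.

other_blockers = 0.999      # e.g., DNA synthesis access, culturing, testing
knowledge_blocker = 0.001   # the most an LLM could plausibly remove

total_difficulty = other_blockers + knowledge_blocker

# Best case for the attacker: the LLM eliminates the knowledge blocker entirely.
difficulty_with_llm = total_difficulty - knowledge_blocker

reduction = 1 - difficulty_with_llm / total_difficulty
print(f"Difficulty removed by the LLM: {reduction:.1%}")  # -> 0.1%
```

Even granting the LLM full credit for eliminating the knowledge blocker, overall difficulty drops by only 0.1% under these assumptions.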
(I am not speaking theoretically here -- a paper from a researcher at the Future of Humanity Institute argues that the actual product development cycle involved in creating a bioweapon is far, far more of an obstacle to its cre...