Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Executive Order, published by Zvi on November 1, 2023 on LessWrong.
Or: I read the executive order and its fact sheet, so you don't have to.
I spent Halloween reading the entire Biden Executive Order on AI.
This is the pure 'what I saw reading the document' post. A companion post will cover reactions to this document, but I wanted this to be a clean reference going forward.
Takeaway Summary: What Does This Do?
It mostly demands a lot of reports, almost entirely from within the government.
A lot of government employees will be writing a lot of reports.
After they get those reports, others will then write additional reports.
There will also be a lot of government meetings.
These reports will propose paths forward to deal with a variety of AI issues.
These reports indicate which agencies may get jurisdiction over various AI issues.
Which reports are requested indicates what concerns are most prominent now.
A major goal is to get AI experts into government, to put government in a position where it can deploy AI itself, and to bring AI talent into the USA.
Another major goal is ensuring the safety of cutting-edge foundation (or 'dual use') models, starting with knowing which ones are being trained and what safety precautions are being taken.
Other ultimate goals include: protecting vital infrastructure and cybersecurity, safeguarding privacy, preventing discrimination in many domains, protecting workers, guarding against misuse, guarding against fraud, ensuring identification of AI content, integrating AI into education and healthcare, and promoting AI research and American global leadership.
There are some other tangible actions, but they seem trivial, with two exceptions:
Changes to streamline the AI-related high-skill immigration system.
The closest thing to a restriction is a set of actions to figure out safeguards for the physical supply chain for synthetic biology against use by bad actors, which seems clearly good.
If you train a model with more than 10^26 flops, you must report that you are doing so and what safety precautions you are taking, but you can otherwise do what you want. (For a rough sense of scale on these thresholds, see the sketch after this list.)
If you have a data center capable of 10^20 integer or floating-point operations per second, you must report that, but you can do what you want with it.
If you are selling IaaS (infrastructure as a service, i.e. cloud compute) to foreigners, you need to report that, KYC-style.
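For a rough sense of scale on those two reporting thresholds, here is a minimal back-of-the-envelope sketch. This is my own illustration, not anything from the order itself: it uses the standard compute ≈ 6 × parameters × tokens approximation for dense transformer training, and the model size, token count, chip count, and per-chip throughput in the examples are all assumed for illustration.

```python
# Back-of-the-envelope check against the EO's two reporting thresholds.
# The 6 * N * D approximation and all example numbers are assumptions
# for illustration, not figures from the order itself.

TRAINING_THRESHOLD_OPS = 1e26         # 10^26 operations per training run
CLUSTER_THRESHOLD_OPS_PER_SEC = 1e20  # 10^20 operations per second per cluster

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute for a dense transformer (6ND rule of thumb)."""
    return 6.0 * n_params * n_tokens

def training_run_must_report(n_params: float, n_tokens: float) -> bool:
    """Would this (estimated) training run cross the 10^26-operation threshold?"""
    return estimated_training_ops(n_params, n_tokens) > TRAINING_THRESHOLD_OPS

def cluster_must_report(num_chips: int, ops_per_sec_per_chip: float) -> bool:
    """Would this cluster's theoretical peak cross the 10^20 ops/sec threshold?"""
    return num_chips * ops_per_sec_per_chip > CLUSTER_THRESHOLD_OPS_PER_SEC

# Hypothetical frontier-scale run: 1.8e12 parameters trained on 1e13 tokens.
ops = estimated_training_ops(1.8e12, 1e13)
print(f"{ops:.2e} training ops, report: {training_run_must_report(1.8e12, 1e13)}")
# -> 1.08e+26 training ops, report: True (just over the line)

# Hypothetical cluster: 120,000 accelerators at an assumed ~1e15 ops/sec each.
print(f"cluster report: {cluster_must_report(120_000, 1e15)}")
# -> 1.2e20 ops/sec total, report: True
```

What the sketch makes concrete is that both thresholds sit at roughly today's frontier or beyond: only the very largest training runs and data centers would trigger a report, and even then the order requires reporting, not any change in behavior.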
What are some things that might end up being regulatory requirements in the future, if we go in the directions these reports are likely to lead?
Safety measures for training and deploying sufficiently large models.
Restrictions on foreign access to compute or advanced models.
Watermarks for AI outputs.
Privacy enhancing technologies across the board.
Protections against unwanted discrimination.
Job protections of some sort, perhaps, although it is unclear how or what.
Essentially, this is the prelude to potential government action in the future. Perhaps you do not like that, for various reasons, some of them certainly reasonable. Or you could be worried in the other direction: that this does not do anything on its own, and that it might be confused for actually doing something and crowd out other action. No laws have yet been passed, no rules of substance put into place.
One can of course be reasonably concerned about slippery-slope or regulatory-ratcheting arguments over the long term. I would love to see the energy brought to such concerns here applied to essentially every other issue ever, where such dangers have indeed often materialized. I will almost always be there to support it.
If you never want the government to do anything to regulate AI, or you want it to wait many years before doing so, and you are unconcerned about frontier models, the EO should make you sad versus no EO.
If you do want the government to do things to regulate AI within the next few years, or if you are concerned about existen...