The Nonlinear Library

LW - On the UK Summit by Zvi


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the UK Summit, published by Zvi on November 7, 2023 on LessWrong.
In the eyes of many, Biden's Executive Order somewhat overshadowed the UK Summit. The timing was unfortunate. Both events were important milestones. Now that I have had time, here is my analysis of what happened at the UK Summit.
As is often the case with such events, there was a lot of talk relative to the amount of action. There was a great deal of diplomatic talk, talk of what everyone already agrees upon, relative to talk of real substance. There were days of meetings that resulted in rather unspicy summaries and resolutions. The language around the issues that matter most was softened, leaving the actual mission in danger of being compromised.
And as usual, the net result was reason for optimism: a highly positive event on net versus not having it, while also in some ways disappointing compared to what might have been. A declaration was signed, including by China, but it neglected existential risk. Sunak's words on AI were not as strong as his words have been previously.
We got promises for two additional summits, in South Korea and France. Given that, I am willing to declare this a success.
One area of strong substance was the push for major AI labs to give substantive safety policies addressing a variety of issues, sometimes largely called Responsible Scaling Policies (RSPs). The biggest labs all did so, even Meta. Now we can examine their responses, know who is being how responsible, and push for better in the future or for government action to fix issues or enshrine progress. This was an excellent development.
This post will look at the rest of what happened at the Summit. I will be writing about the RSPs and other safety policies of the labs in a distinct post next week.
Looking Back at People's Goals for the Summit and Taskforce

- Jack Clark's proposal from July 5 for what the Foundation Model Taskforce might do, making evaluation of frontier models its priority, and how it might prioritize that, along with Simeon's response emphasizing the need for a good way to know whether a proposal is safe enough to allow it to proceed.
- Navigating AI Risks asked on July 17 what the Taskforce should do, advising a focus on interventions to impact policy at labs and other governments. The suggested focus was risk assessment methodology, demonstrating current risks and assessing current state-of-the-art models, while avoiding direct alignment work.
- Lennart Heim's (GovAI) July 10 proposal of what the summit should try to accomplish, which he reviewed after the summit.
- Matt Clifford from the PM's office shared on September 10 their objectives for the summit: a shared understanding of the risks posed by frontier AI and the need for action, a forward process for international collaboration, measures for organizations, finding areas for safety collaboration, and showcasing how safe AI development can enhance global good.
AI Safety Summit Agenda

What has the UK Taskforce been up to in advance of the summit (report)?
Ian Hogarth (Chair UK AI Frontier Model Taskforce): The Taskforce is a start-up inside government, delivering on the mission given to us by the Prime Minister: to build an AI research team that can evaluate risks at the frontier of AI. We are now 18 weeks old and this is our second progress report.
The frontier is moving very fast. On the current course, in the first half of 2024, we expect a small handful of companies to finish training models that could produce another significant jump in capabilities beyond state-of-the-art in 2023.
As these AI systems become more capable, they may augment risks. An AI system that advances towards expert ability at writing software could increase cybersecurity threats. An AI system that becomes more capable at modelling biology could escalate biosecurity threats.
We believe it is critical that f...