Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to lead a project during the next virtual AI Safety Camp, published by Linda Linsefors on September 13, 2023 on The AI Alignment Forum.
Do you have AI Safety research ideas that you would like others to work on? Is there a project you want to do, and would you like help finding a team to work with you? AI Safety Camp could be the solution for you!
Summary
AI Safety Camp Virtual is a 3-month-long online research program running from January to April 2024, where participants form teams to work on pre-selected projects. We want you to suggest the projects!
If you have an AI Safety project idea and some research experience, apply to be a Research Lead.
If you are accepted, we offer some assistance in developing your idea into a plan suitable for AI Safety Camp. When project plans are ready, we open up team member applications. You get to review the applications for your team and select who joins as a team member. From there, it's your job to guide work on your project.
Your project is totally in your hands. We, Linda and Remmelt, are just there at the start.
Who is qualified?
We require that you have some previous research experience. If you are at least one year into a PhD, have completed an AI Safety research program (such as a previous AI Safety Camp, Refine, or SERI MATS), or have done a research internship with an AI Safety org, then you are qualified already. Other research experience can count too.
More senior researchers are of course also welcome, as long as you think our format of leading an online team that inquires into your research questions suits you and your research.
Apply here
If you are unsure or have any questions, you are welcome to:
Book a call with Linda
Message Linda Linsefors on the Alignment Slack
Send an email
Choosing project idea(s)
AI Safety Camp is about ensuring that future AI is safe. This round, we split the work into two areas:
To not build uncontrollable AI
Focussed work toward restricting corporate-AI scaling, given reasons why 'AGI' cannot be controlled sufficiently (in time) to stay safe.
Everything else
Open to any other ideas, including any work toward controlling/value-aligning AGI.
We welcome diverse projects! Last round, we accepted 14 projects, spanning theoretical research, machine learning experiments, deliberative design, governance, and communication.
If you already have an idea for what project you would like to lead, that's great. Apply with that one!
You don't need to come up with an original idea, though. What matters is that you understand the idea you want to work on, and why. If you base your proposal on someone else's idea, make sure to cite them.
Primary reviewers:
Remmelt reviews uncontrollability-focussed projects.
Linda reviews everything else.
We will also ask previous Research Leads, and up to a handful of other trusted people, to help review and suggest improvements to your project proposals.
You can submit as many project proposals as you want. However, we will not let you lead more than two projects, and we don't recommend leading more than one.
Use this template to describe each of your project proposals. We want one document per proposal.
Team structure
Every team will have:
one Research Lead
one Team Coordinator
other team members
To make progress on your project, every team member is expected to work at least 5 hours per week (though the Research Lead can choose to favour applicants who can commit more time when selecting their team). This includes time spent in weekly team meetings and communicating regularly between meetings with other team members about their work.
Research Lead (RL)
The RL suggests one or several research topics. If a group forms around one of their topics, the RL will guide the project and keep track of relevant milestones. When things inevitably don't go as planned (this is research, after all), the RL is in charge...