Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EffiSciences’ AI Safety Unit, published by WCargo on June 30, 2023 on LessWrong.
This post was written by Léo Dana, Charbel-Raphaël Ségerie, and Florent Berthet, with the help of Siméon Campos, Quentin Didier, Jérémy Andréoletti, Anouk Hannot, Angélina Gentaz, and Tom David.
In this post, you will learn about EffiSciences’ most successful field-building activities, along with our advice, reflections, and takeaways for field-builders. We also include our roadmap for the next year. Voilà.
What is EffiSciences?
EffiSciences is a non-profit based in France whose mission is to mobilize scientific research to overcome the most pressing issues of the century and ensure a desirable future for generations to come.
EffiSciences was founded in January 2022 and is now a team of ~20 volunteers and 4 employees.
At the moment, we are focusing on three topics: AI safety, biorisks, and climate change. In the rest of this post, we will only present our AI safety unit and its results.
TL;DR: In one year, EffiSciences created and ran several AIS bootcamps (ML4Good), taught accredited courses, and organized hackathons and conferences at France’s top research universities. We reached 700 students, 30 of whom are already orienting their careers toward AIS research or field-building. We found that our impact came as much from kickstarting students as from upskilling them, and we are in a good position to become an important stakeholder on these key topics in French universities.
Field-building programs
- Machine Learning for Good bootcamp (ML4G): parts of MLAB and AGISF condensed into a 10-day bootcamp (very intense). 2 ML4Gs held, 36 participants, 16 of whom are now highly involved. This program was reproduced in Switzerland and Germany with the help of EffiSciences.
- Turing Seminar: an AGISF-adapted accredited course taught through talks, workshops, and exercises. 3 courses in France’s top 3 universities: 40 students attended, 5 are now looking to upskill, and 2 will teach the course next year.
- AIS Training Days: the Turing Seminar compressed into a single day (new format). 3 iterations, 45 students.
- EffiSciences’ educational hackathons: a hackathon introducing robustness to distribution change and goal misgeneralization. 2 hackathons, 150 students, 3 of whom are now highly involved.
- Apart Research’s hackathons: we hosted several Apart Research hackathons, mostly with people already onboarded. 4 hackathons hosted, 3 prizes won by EffiSciences’ teams.
- Conferences: introductions to AI risks. 250 students reached, ~10 of whom are still in contact with us.
- Lovelace program: self-study groups on AIS. 4 groups of 5 people each; this format did not work well for upskilling.
Results
In order to assess the effectiveness of our programs, we estimated how many people became highly engaged thanks to each program, using a single metric that we call “counterfactual full-time equivalent”. This is our estimate of how many full-time equivalents of work these people will put into AI safety in the coming months, counterfactually thanks to us. Note that some of these programs have instrumental value that is not reflected in the numbers below.
Counterfactual full-time equivalent by activity:
- Founding the AI safety unit (founders & volunteers): 1 event, 6.0 FTE total (6.0 per occurrence)
- French ML4Good bootcamp: 2 events, 7.4 FTE total (3.7 per occurrence)
- Word-of-mouth outreach: 1 event, 2.9 FTE total (2.9 per occurrence)
- Training Day: 2 events, 1.9 FTE total (1.0 per occurrence)
- Hackathon: 4 events, 2.5 FTE total (0.6 per occurrence)
- Turing Seminars (AGISF adaptations): 3 events, 1.3 FTE total (0.4 per occurrence)
- Research groups in uni: 4 events, 0.4 FTE total (0.1 per occurrence)
- Frid’AI (coworking on Fridays): 5 events, 0.5 FTE total (0.1 per occurrence)
- Conference: 5 events, 0.1 FTE total (0.0 per occurrence)
- Total: 30 events, 23.0 counterfactual FTE
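In other words, the per-occurrence figure is simply each activity’s total counterfactual FTE divided by its number of events; for example, for the ML4Good bootcamps:

$$\frac{7.4\ \text{FTE}}{2\ \text{bootcamps}} = 3.7\ \text{FTE per occurrence}$$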
In total, these numbers aggregate 43 people who are highly engaged, i.e. who have been convinced of the problem and are working on solving it through upskilling, writing blog posts, facilitating AIS courses, doing AIS internships, attending SERI MATS, doing policy work in various orgs, etc. The time spen...