Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 1-year update on impactRIO, the first AI Safety group in Brazil, published by João Lucas Duim on June 30, 2024 on The Effective Altruism Forum.
This post reflects my own personal perspective as president and does not necessarily represent other organisers' opinions. Special thanks to David Solar and Zoé Roy-Stang for their valuable feedback on the draft.
TL;DR
There are important universities in Rio de Janeiro that offer undergraduate, master's, and doctoral programs in AI and adjacent areas to some of the most talented students in the country.
We can list only about 5 Brazilian researchers who work on AI safety, and we're the first AI safety group in Brazil[1], so we face challenges like building awareness of and engagement with AI safety from scratch and helping members network with more experienced people.
impactRIO has offered 3 fellowships (Governance, Alignment, and Advanced Alignment) based on AI Safety Fundamentals courses.
I estimate that the group has 8 highly-engaged people among members and organisers.
Organisers and members have accumulated at least 27 attendances at international EA and AI safety events.
Impactful outcomes have been noticed and reported, including 7 warm impact stories and 9 people changing their career plans in order to mitigate AI risks.
I summarise some lessons learned regarding logistics, methodology, bureaucracy and engagement.
There are many uncertainties regarding the future of the group, but we foresee a huge opportunity for the AI safety community to grow and thrive in this city.
Overview
This post is aimed at those doing community building and those interested in having more context on the AI safety community in Rio de Janeiro, Brazil. I expect the section about lessons learned to be the most valuable for the former and the section about impactful outcomes to be the most valuable for the latter.
impactRIO is an AI safety group in Rio de Janeiro, Brazil. The group was founded in July 2023 with the support of UGAP, Condor Initiative, and EA Brazil, and during the last semester we also had the support of OSP AIS (now named FSP). We remain an unofficial club at a university that has very ambitious plans to become an AI hub in Latin America. Last month, the university expressed disagreement with the existence of our group and stopped giving us any kind of support.
There are at least 2 projects that fully fund students to live in Rio. These students usually come from many different parts of the country and have won medals at scientific olympiads. Rio therefore probably has the biggest talent pool in Brazil, and you can find very bright students around.
Given all this context, we decided to focus mainly on AI safety: we would likely have more students engage with the group, and we could shed light on AI safety for researchers and professors, giving us a larger expected impact.
26 people completed at least one of the fellowships we offered. Only around 7 of them are seniors or graduates; the vast majority are currently sophomores. We estimate that 8 people among organisers and members are taking significant actions motivated by EA principles. We had 2 guest speakers who work full-time on safety-related research, and they enriched the content and inspired participants!
What we've been doing
We can list only about 5 Brazilian researchers who work full-time on AI safety, and no other students at our university were aware of AI safety before we founded the group. Our group therefore faces challenges like building awareness of and engagement with AI safety from scratch and helping members network with more experienced people. Given this context, it was natural to offer introductory fellowships and meetups.
In the second semester of 2023, we started the AI Safety Fellowship by running an Alignment cohort based on BlueDot's Alignment curriculum, since we had a stronger technical background and little governance know-how.