Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective: PIBBSS Fellowship 2023, published by DusanDNesic on February 16, 2024 on The AI Alignment Forum.
Between June and September 2023, we (Nora and Dusan) ran the second iteration of the PIBBSS Summer Fellowship. In this post, we share some of our main reflections about how the program went, and what we learnt about running it.
We first provide some background information about (1) the theory of change behind the fellowship, and (2) a summary of key program design features. In the second part, we share our reflections on (3) how the 2023 program went, and (4) what we learned from running it.
This post builds on an extensive internal report we produced back in September. We focus on information we think is most likely to be relevant to third parties, in particular:
People interested in forming opinions about the impact of the PIBBSS fellowship, or similar fellowship programs more generally
People interested in running similar programs, looking to learn from mistakes that others made or best practices they converged on
Also see
our reflections on the 2022 fellowship program. If you have thoughts on how we can improve, you can use
this name-optional feedback form.
Background
Fellowship Theory of Change
Before focusing on the fellowship specifically, we will give some context on PIBBSS as an organization.
PIBBSS overall
PIBBSS is a research initiative focused on leveraging insights and talent from fields that study intelligent behavior in natural systems to help make progress on questions in AI risk and safety. To this end, we run several programs focusing on research, talent and field-building.
The focus of this post is our fellowship program - centrally a talent intervention. We ran the second iteration of the fellowship program in summer 2023, and are currently in the process of selecting fellows for the 2024 edition.
Since PIBBSS' inception, our guesses for what is most valuable to do have evolved. Since the latter half of 2023, we have started taking steps towards focusing on more concrete and more inside-view driven research directions. To this end, we started hosting several full-time research affiliates in January 2024. We are currently working on a more comprehensive update to our vision, strategy and plans, and will be sharing these developments in an upcoming post.
PIBBSS also pursues a range of other efforts aimed more broadly at field-building, including (co-)organizing a range of topic-specific AI safety workshops and hosting semi-regular
speaker events which feature research from a range of fields studying intelligent behavior and exploring their connections to the problem of AI Risk and Safety.
Zooming in on the fellowship
The Summer Research Fellowship pairs fellows (typically PhDs or Postdocs) from disciplines studying complex and intelligent behavior in natural and social systems with mentors from AI alignment. Over the course of the 3-month program, fellows and mentors work on a collaborative research project, and fellows are supported in developing proficiency in skills relevant to AI safety research.
One of the driving rationales behind our decision to run the program is that a) we believe there are many areas of expertise (beyond computer science and machine learning) that have useful (if not critical) insights, perspectives and methods to contribute to mitigating AI risk, and b) to the best of our knowledge, no other programs exist that specifically aim to provide an entry point into technical AI safety research for people from such fields.
What we think the program can offer:
To fellows
increased understanding of the AI risk problem, as well as potential avenues for reducing these risks.
the opportunity to explore how they can usefully apply their expertise, including identifying promising lines of ...