The Nonlinear Library: Alignment Forum

AF - Apply to the PIBBSS Summer Research Fellowship by Nora Ammann



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the PIBBSS Summer Research Fellowship, published by Nora Ammann on January 12, 2024 on The AI Alignment Forum.
TLDR: We're hosting a 3-month, fully-funded fellowship to do AI safety research drawing on inspiration from fields like evolutionary biology, neuroscience, dynamical systems theory, and more. Past fellows have been mentored by John Wentworth, Davidad, Abram Demski, Jan Kulveit, and others, and have gone on to work at places like Anthropic and Apart Research, or as full-time PIBBSS research affiliates.
Apply here: https://www.pibbss.ai/fellowship (deadline Feb 4, 2024)
'Principles of Intelligent Behavior in Biological and Social Systems' (PIBBSS) is a research initiative focused on supporting AI safety research by making a specific epistemic bet: that we can understand key aspects of the alignment problem by drawing on parallels between intelligent behaviour in natural and artificial systems.
Over the past few years, we've financially supported around 40 researchers through 3-month full-time fellowships, and we are currently hosting 5 affiliates for a 6-month program while seeking funding to support even longer roles. We also organise research retreats and speaker series, and maintain an active alumni network.
We're now excited to announce the 2024 round of our fellowship series!
The fellowship
Our Fellowship brings together researchers from fields studying complex and intelligent behavior in natural and social systems, such as evolutionary biology, neuroscience, dynamical systems theory, economic/political/legal theory, and more.
Over the course of 3 months, you will work on a project at the intersection of your own field and AI safety, under the mentorship of experienced AI alignment researchers. In past years, mentors have included John Wentworth, Abram Demski, Davidad, and Jan Kulveit, and a handful of new mentors join us every year.
In addition, you'll get to attend in-person research retreats with the rest of the cohort (past programs have taken place in Prague, Oxford, and San Francisco), and you can join our regular speaker events, where we host scholars working in areas adjacent to our epistemic bet, such as Michael Levin, Alan Love, and Steve Byrnes; we also co-organised an event with Karl Friston.
The program is centrally aimed at Ph.D. or Postdoctoral researchers. However, we encourage interested individuals with substantial prior research experience in their field of expertise to apply regardless of their credentials.
Past scholars have pursued projects with titles ranging from "Detecting emergent capabilities in multi-agent AI systems" to "Constructing Logically Updateless Decision Theory" and "Tort law as a tool for mitigating catastrophic risk from AI". You can meet our alumni here, and learn more about their research by checking out the PIBBSS summer symposium talks on our YouTube channel.
Our alumni have gone on to work at a range of organisations, including OpenAI, Anthropic, ACS, the AI Objectives Institute, and Apart Research, or as full-time researchers on our own PIBBSS research affiliate program.
Apply!
For any questions, you can reach out to us at [email protected], or join one of our information sessions:
Jan 27th, 4pm Pacific (01:00 Berlin)
Link to register
Jan 29th, 9am Pacific (18:00 Berlin)
Link to register
Feel free to share this post with others who might be interested in applying!
Apply here: https://www.pibbss.ai/fellowship (deadline Feb 4, 2024)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
The Nonlinear Library: Alignment Forum, by The Nonlinear Fund