Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the London Initiative for Safe AI (LISA), published by James Fox on February 3, 2024 on LessWrong.
The LISA Team consists of James Fox, Mike Brozowski, Joe Murray, Nina Wolff-Ingham, Ryan Kidd, and Christian Smith.
LISA's Advisory Board consists of Henry Sleight, Jessica Rumbelow, Marius Hobbhahn, Jamie Bernardi, and Callum McDougall.
Everyone has contributed significantly to the founding of LISA, believes in its mission & vision, and assisted with writing this post.
TL;DR: The London Initiative for Safe AI (LISA) is a new AI Safety research centre. Our mission is to improve the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. We opened in September 2023, and our office space currently hosts several research organisations and upskilling programmes, including Apollo Research, Leap Labs, the MATS extension, ARENA, and BlueDot Impact, as well as many individual and externally affiliated researchers.
LISA is open to different types of membership applications from other AI safety researchers and organisations.
(Affiliate) members can access talks by high-profile researchers, workshops, and other events. Past speakers have included Stuart Russell (UC Berkeley, CHAI), Tom Everitt & Neel Nanda (Google DeepMind), and Adam Gleave (FAR AI), amongst others.
Amenities for LISA Residents include 24/7 access to private & open-plan desks (with monitors, etc), catering (including lunches, dinners, snacks & drinks), and meeting rooms & phone booths. We also provide immigration, accommodation, and operational support; fiscal sponsorship & employer of record (upcoming); and regular socials & well-being benefits.
Although we host a limited number of short-term visitors for free, we charge long-term residents to cover our costs at varying rates depending on their circumstances. Nevertheless, we never want financial constraints to be a barrier to leading AI safety research, so please still get in touch if you would like to work from LISA's offices but aren't able to pay.
If you or your organisation are interested in working from LISA, please apply here.
If you would like to support our mission, please visit our Manifund page.
Read on for further details about LISA's vision and theory of change. After a short introduction, we motivate our vision by arguing why there is an urgency for LISA. Next, we summarise our track record and unpack our plans for the future. Finally, we discuss how we mitigate risks that might undermine our theory of change.
Introduction
London stands out as an ideal location for a new AI safety research centre:
Frontier Labs: It is the only city outside of the Bay Area with offices from all major AI labs (e.g., Google DeepMind, OpenAI, Anthropic, Meta).
Concentrated and underutilised talent (e.g., researchers & software engineers), many of whom are keen to contribute to AI safety but are reluctant or unable to relocate to the Bay Area due to visas, partners, family, culture, etc.
UK Government connections: The UK government has clearly signalled the importance it places on AI safety by hosting the first AI Safety Summit, establishing an AI Safety Institute, and introducing favourable immigration requirements for researchers. Moreover, policy-makers and researchers are all within 30 minutes of one another.
Easy transport links: LISA is ideally located to act as a short-term base for AI safety researchers visiting from the US, Europe, and other parts of the UK who want to meet with researchers (and policy-makers) at companies, universities, and government bodies in and around London, as well as those elsewhere in Europe.
Regular cohorts of the MATS program (scholars and mentors) come to London because of the above (in particular, the favourable immigration requirements compared with the US).
Despite this favourable setting, so far little c...