Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Japan AI Alignment Conference Postmortem, published by Chris Scammell on April 20, 2023 on LessWrong.
The goal
Conjecture collaborated with ARAYA to host a two-day AI Safety conference in Japan, the first Japan AI Alignment Conference ("JAC2023"). Our aim was to put together a small, 30-40 person event to generate excitement around alignment among researchers in Japan and fuel new ideas for research topics. Wired Japan covered the event and interviewed Ryota Kanai (CEO of ARAYA), who co-organized it with us, here (original in Japanese).
The conference agenda was broken into four sections that aimed to progress deeper into alignment as the weekend went on (full agenda available here):
Saturday morning focused on creating common knowledge about AI safety and alignment.
Saturday afternoon focused on clarifying unaddressed questions participants had about AI alignment and moving towards thematic discussions.
Sunday morning focused on participant-driven content, with research talks in one room and opportunities for open discussion and networking in the other.
Sunday afternoon focused on bringing all participants together to discuss concrete takeaways.
While AI Safety is a subject of discussion in Japan, AI alignment ideas have received very little attention there. We organized the conference because we were optimistic about the reception of alignment ideas in Japan, having found on previous trips that researchers there were receptive and interested in learning more. In the best case, we hoped the conference could plant seeds for an organic AI alignment conversation to start in Japan. In the median case, we hoped to meet 2-3 sharp researchers who were eager to work directly on the alignment problem and contribute new ideas to the field.
Now that the conference is over, we can reflect on how successful we were in raising awareness of alignment issues in Japan and fostering new research directions.
What went well?
By the aims above, the event was a success.
We had a total of 65 participants, including 21 from the West, 27 from Japan, and 17 online attendees. We were pleasantly surprised by the amount of interest generated by the event, and had to turn down several participants as we reached capacity. We are grateful to LTFF for having supported the event via a grant, which allowed us to cover event costs and reimburse travel and accommodation for some participants who would not otherwise have come.
While it is too early to know whether or not the conference had a lasting impact, there seems to be some traction. CEA organizers Anneke Pogarell and Moon Nagai and other conference participants created the AI Alignment Japan Slack channel, which has nearly 150 members. Some participants have begun working on translating alignment-related texts into Japanese. Others have begun to share more alignment-related content on social media, or indicated that they are discussing the subject with their organizations. Some participants are planning to apply for grant funding to continue independent research. Conjecture is in talks with two researchers interested in pursuing research projects we think are helpful, and ARAYA has hired at least one researcher to continue working on alignment full-time.
As for the event itself, we conducted a survey after the event and found that 91% of respondents would recommend the conference to a friend, and that overall participant satisfaction was high. The "networking" aspect of the conference was rated as the most valuable component, but all other sections received majority scores of 4 out of 5, indicating that the content was received positively. Nearly all respondents from Japan indicated that their knowledge of alignment had improved as a result of the event. When asked how the conference had impacted their thoughts on the subject, the majority expressed a sense of urgen...