Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the AI Fables Writing Contest!, published by Daystar Eld on July 12, 2023 on The Effective Altruism Forum.
TL;DR: Writing Contest for AI Fables
Deadline: Sept 1
Prizes: $1500/1000/500, consideration for writing retreat
Word length: <6,000
How: Link a Google Doc in the replies
Purpose: Help shape the future by helping people understand relevant issues.
Hey everyone, I'd like to announce a writing competition to follow up on this post about AI Fables!
Like Bard, I write fiction and think it has a lot of power not just to expand our imaginations beyond what we believe is possible, but also to educate and inspire. For generations, the idea of what Artificial General Intelligence could or would look like has been shaped by fiction, for better and for worse, and that will likely continue even as what once seemed purely speculative becomes more and more real in our everyday lives.
But there's still time for good fiction to help shape the future. On this particular topic, with the world changing so quickly, I want to help fill the empty spaces waiting for stories that can help people grapple with the relevant issues, and I'd like to encourage those stories to be as good as possible: both engaging and well-informed.
To that end, I'm calling for submissions of short stories or story outlines that involve one or more of the "nuts and bolts" covered in the above post, as well as some of my own tweaks:
Basics of AI
Neural networks are black boxes (though interpretability might help us to see inside).
AI "Psychology"
AI systems are alien in how they think. Even AGIs are unlikely to think like humans or to value things we'd assume they would.
Orthogonality and instrumental convergence might provide insight into likely AI behaviour.
AGI systems might be agents, in some relatively natural sense. They might also simulate agents, even if they are not agents themselves.
Potential dangers from AI
Outer misalignment is a potential danger, but in the context of neural networks so too is inner misalignment (related: reward misspecification and goal misgeneralisation).
Deceptive alignment might lead to worries about a treacherous turn.
The possibility of recursive improvement might influence views about takeoff speed (which might influence views about safety).
Broader Context of Potential Risks
Different challenges might arise in the case of a singleton, when compared with multipolar scenarios.
Arms races can lead to outcomes that no-one wants.
AI rights could become a genuine issue, but incorrectly attributing rights to non-sapient AI could itself pose a risk by restricting society's ability to ensure safety.
Psychology of Existential Risk
Characters whose perspectives and philosophies show what it's like to take X-risks seriously without being overwhelmed by existential dread.
Stories showing the social or cultural shifts that might be necessary to improve coordination and will to face X-risks.
...or are otherwise in some way related to unaligned AI or AGI risk, such that readers would come away better understanding some aspect of the potential worlds we might end up in. Black Mirror is a good example of the "modern Aesop's Fables or Grimm fairy tales" style of commentary-through-storytelling, but I'm particularly interested in stories that don't moralize at readers, and instead help people understand and emotionally process issues related to AI.
Though unrelated to AI, Truer Love's Kiss by Eliezer Yudkowsky and The Cambist and Lord Iron by Daniel Abraham are good examples of "modern fables" that I'd like to see more of. The setting doesn't matter, so long as it reasonably clearly teaches something related to the unique challenges or opportunities of creating safe artificial intelligence.
At least the top 3 stories will receive at least $1500, $1000, and $500 in reward...