Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [About Me] Cinera's Home Page, published by DragonGod on February 7, 2023 on LessWrong.
I aspire to become an alignment theorist; all other details are superfluous, but I leave them here anyway for historical purposes.
Introduction
I have a set of questions I habitually ask online acquaintances who pique my interest or whom I want to get to know better. Many want to know my answers to those same questions.
It would be nice to have a central repository introducing myself that I can keep up to date.
Questions
A. What do you care about?
What are you passionate about?
What animates you?
B. What do you think is important?
C. What do you want/hope to do with your life?
D. What do you want/hope to get out of life?
E. Where are you coming from?
Context on who you are/what made you the person you are today.
F. How do you spend your time?
Work, volunteer, education, etc.
Basically, activities that aren't primarily for leisure/pleasure purposes.
G. What do you do for recreation/leisure/pleasure/fun time?
What are your hobbies?
Answers
A.
What do you care about?
What are you passionate about?
What animates you?
I care about creating a brighter future for humanity. I believe a world far better than any known to man is possible, and I am willing to fight for it.
I want humanity to be fucking awesome. To take dominion over the natural world and remake it in our own image, better configured to serve our values.
I want us to be as gods.
I outlined what godhood means for me here.
I think that vision is largely what drives me, what pushes me forward and keeps me going.
B.
What do you think is important?
Mitigating Existential Risk/Pursuing Existential Security
The obvious reasons are obvious.
But I am personally swayed by astronomical waste. I don't want us to squander our cosmic endowment. Especially because our future can be so wonderful, I think it would be very sad if we never realise it.
Promoting Existential Hope
I want to give people a positive vision of the future they can rally around and get excited by. Something that makes them glad to be alive. Eager to wake up each day. A goal to yearn for and aspire to.
To reach out to with relentless determination.
I'd like to communicate that:
The current state of the world is immensely better than even just three centuries ago:
  Life expectancy has doubled
  Economic progress
    The poverty rate has drastically fallen
    Material abundance and comfort
    Much faster and more reliable transport and communication
    Etc.
  Social progress
    Slavery abolition
    Women's suffrage
    Spread of liberal democracy
    Etc.
  Etc.
Vastly better world states are yet possible
We can take actions that would make us significantly more likely to reach those vastly better states
We should do this
AI Safety
I believe that safely navigating the development of transformative artificial intelligence may be the most important project of the century.
Transformative AI could plausibly induce a paradigm shift in the human condition.
To give a sense of what I mean by "paradigm shift in the human condition": I think we may see GDP doubling multiple times a year later this century.
(Depending on timelines and takeoff dynamics, doubling periods of a month or even shorter seem plausible.)
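As a rough illustration of what such doubling times imply (my own back-of-the-envelope arithmetic, not figures from the original post): a sustained doubling time of T years corresponds to an annual growth factor of $2^{1/T}$, so

\[
2^{1} = 2\times \;\text{(annual doubling)}, \qquad
2^{4} = 16\times \;\text{(quarterly doubling)}, \qquad
2^{12} \approx 4096\times \;\text{(monthly doubling) per year.}
\]

For comparison, recent world GDP growth of roughly 3% per year corresponds to a doubling time of about $\ln 2 / \ln 1.03 \approx 23$ years.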
I'd like to approach AI safety from an agent foundations perspective (I think agent foundations work is neglected relative to its potential value and is a better fit for me). In particular, agent foundations solutions to alignment seem more likely to be:
Robust to arbitrary capability amplification
  "Treacherous turns" seems less likely to be a challenge.
  An agent becoming more capable wouldn't make it any more able to violate theorems.
Propagated across an agent's genealogy
  Aligned agents could only create children that they believed to be "agent foundations aligned".
  By induction this is propagated across all an agent's descen...