The Nonlinear Library: Alignment Forum

AF - 3. Premise three and Conclusion: AI systems can affect value change trajectories and the Value Change Problem by Nora Ammann


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3. Premise three & Conclusion: AI systems can affect value change trajectories & the Value Change Problem, published by Nora Ammann on October 26, 2023 on The AI Alignment Forum.
In this post, I introduce the last of three premises: the claim that AI systems are (and will become increasingly) capable of affecting people's value change trajectories. With all three premises in place, we can then go ahead and articulate the Value Change Problem (VCP) in full. I will briefly recap the full account, and then give an outlook on what is yet to come in posts 4 and 5, where we discuss the risks that come from failing to take the VCP seriously.
Premise three: AI systems can affect value change trajectories
The third and final premise required to put together the argument for the Value Change Problem is the following: AI systems are (and will become increasingly) capable of affecting people's value change trajectories.
I believe the case for this is relatively straightforward. In the previous post, we saw several examples of how external factors (e.g. other individuals, societal and economic structures, technology) can influence an individual's trajectory of value change, and how they can do so in ways that may or may not be legitimate. The same is true for AI systems.
Value change typically occurs as a result of moral reflection/deliberation, or of learning new information/having new experiences. External factors can affect these processes - e.g. by affecting what information we are exposed to, or by biasing our reflection processes towards some conclusions rather than others - thereby influencing an individual's trajectory of value change. AI systems are another such external factor capable of similar effects. Consider, for example, the use of AI systems in media, advertisement or education, as personal assistants, or to help with learning or decision making. From here, it is not a big step to recognise that, with the continued increase in the capabilities and deployment of these systems, the overall effect AI systems come to have on our value change trajectories will grow as well.
Posts 4 and 5 will discuss all of this in more detail, including by proposing specific mechanisms by which AIs can come to affect value change trajectories, as well as the question of when such effects are and aren't legitimate.
As such, I will leave the discussion of the third premise at that and swiftly move on to putting together the full case for the Value Change Problem:
Putting things together: the Value Change Problem
Let us recap the arguments so far. First, I have argued that human values are malleable rather than fixed. In defence of this claim, I have argued that humans typically undergo value change over the course of their lives; that human values are sometimes uncertain, underdetermined or open-ended, and that some of the ways in which humans typically deal with this involve value change; and, finally, that transformative experiences (as discussed by Paul (2014)) and aspiration (as discussed by Callard (2018)), too, represent examples of value change.
Next, I have argued that some cases of value change can be (il)legitimate. In support of this claim, I have made an appeal to intuition by providing examples of cases of value change which I argue most people would readily accept as legitimate and illegitimate, respectively. I then strengthened the argument by proposing a plausible evaluative criterion - namely, the degree of self-determination involved in the process of value change - which lends further support and rational grounding to our earlier intuition.
Finally, I argued that AI systems are (and will become increasingly) capable of affecting people's value change trajectories. (While leaving some further details to posts 4 and 5.)
Putting these together, we can argue that ethical design of AI systems must b...