When people think about AI risk, they think about an artificial superintelligence malevolently and implacably pursuing its goals, with humans standing by powerless to stop it. But what if AI risk is more subtle? What if the same processes that have been so successfully used for image recognition are turned toward increasing engagement? What if that engagement ends up looking a lot like addiction? What if the easiest way to create that addiction is by eroding the mental health of the user? And what if it's something we're doing to ourselves?