

When people think about AI Risk they think about an artificial superintelligence malevolently and implacably pursuing its goals with humans standing by powerless to stop it. But what if AI Risk is more subtle? What if the same processes which have been so successfully used for image recognition are turned towards increasing engagement? What if that engagement ends up looking a lot like addiction? And what if the easiest way to create that addiction is by eroding the mental health of the user? And what if it's something we're doing to ourselves?
By Jeremiah · 4.7 (1818 ratings)