AI won't kill us all — but that doesn't make it trustworthy. Rather than getting distracted by speculative existential risks, AI ethics researcher Sasha Luccioni argues we should focus on the technology's current harms: its carbon emissions, copyright infringement and spread of biased information. She offers practical solutions for regulating our AI-filled future so that it's inclusive and transparent.
Want to help shape TED’s shows going forward? Fill out our survey!
Hosted on Acast. See acast.com/privacy for more information.
4.3 · 384 ratings