A group of 13 current and former employees from prominent AI companies, including OpenAI and Google DeepMind, have spoken out about the dangers of AI technology, warning of its potential to exacerbate inequality, spread misinformation, and even cause significant loss of life. They urge companies to prioritize transparency and foster a culture of public debate to ensure accountability. The employees propose four principles to achieve this: allowing criticism of risks, establishing an anonymous process for raising concerns, supporting a culture of criticism, and promising not to retaliate against whistleblowers.
By Dr. Tony Hoang · 4.6 (99 ratings)