

OpenAI, a leading artificial intelligence research organization, has dissolved its Superalignment team, which aimed to mitigate long-term risks from advanced AI systems. The team was established just a year ago and had been promised 20% of the company's computing power over four years. The departure of two high-profile leaders, Ilya Sutskever and Jan Leike, who were instrumental in shaping the team's mission, has raised concerns about how the company prioritizes safety and societal impact. Leike believes more of OpenAI's resources should be allocated to security and safety.
By Dr. Tony Hoang
