OpenAI, a leading artificial intelligence research organization, has dissolved its Superalignment team, which aimed to mitigate long-term risks posed by advanced AI systems. The team was established just a year ago and had been promised 20% of the company's computing power over four years. The departure of two high-profile leaders, Ilya Sutskever and Jan Leike, who were instrumental in shaping the team's mission, has raised concerns about how the company prioritizes safety and societal impact. Leike has said more resources should be allocated to security and safety.