OpenAI, a leading artificial intelligence research organization, has dissolved its Superalignment team, which aimed to mitigate long-term risks from advanced AI systems. The team was established just a year ago and had been promised 20% of the company's computing power over four years. The departure of two high-profile leaders, Ilya Sutskever and Jan Leike, who were instrumental in shaping the team's mission, has raised concerns about the company's priorities regarding safety and societal impact. Leike believes more resources should be allocated to security and safety.
By Dr. Tony Hoang