
Dr. Timnit Gebru was the co-lead of Google’s Ethical AI research team – until she raised concerns about bias in the company’s large language models and was forced out in 2020.
Her departure sent shockwaves through the AI and tech community and raised fundamental questions about how companies safeguard against bias in their own AI. Should in-house ethics research continue to be led by researchers who best understand the technology, or must ethics and bias be monitored by more objective researchers who aren’t employed by companies?
Harvard Business School professor Tsedal Neeley discusses how companies can approach the problem of AI bias in her case, “Timnit Gebru: ‘SILENCED No More’ on AI Bias and The Harms of Large Language Models.”
By HBR Presents / Brian Kenny · 4.5 (190 ratings)