
Earlier this month, OpenAI released its newest and most powerful chatbot, GPT-4, along with a technical paper summarizing the testing the company did to ensure its product is safe. That testing involved asking the chatbot how to build weapons of mass destruction or prompting it to engage in antisemitic attacks. In the cybersecurity world, this process is known as red teaming: experts look for vulnerabilities, security gaps and anything else that could go wrong before a product launches. Marketplace’s Meghan McCarty Carino spoke with Aviv Ovadya, a researcher at Harvard University’s Berkman Klein Center who was on the red team for GPT-4. He said this kind of testing needs to go further.
By Marketplace