
Earlier this month, OpenAI released its newest and most powerful chatbot, GPT-4, along with a technical paper summarizing the testing the company did to ensure the product is safe. That testing involved, for example, asking the chatbot how to build weapons of mass destruction or prompting it to engage in antisemitic attacks. In the cybersecurity world, this process is known as red teaming: experts probe a product for vulnerabilities, security gaps and anything else that could go wrong before it launches. Marketplace’s Meghan McCarty Carino spoke with Aviv Ovadya, a researcher at Harvard University’s Berkman Klein Center who served on the red team for GPT-4. He said this kind of testing needs to go further.
