
Earlier this month, OpenAI released its newest and most powerful chatbot, GPT-4, along with a technical paper summarizing the testing the company did to ensure its product is safe. The testing involved asking the chatbot how to build weapons of mass destruction or how to engage in antisemitic attacks. In the cybersecurity world, this testing process is known as red teaming: experts look for vulnerabilities, security gaps and anything else that could go wrong before the product launches. Marketplace’s Meghan McCarty Carino spoke to Aviv Ovadya, a researcher at Harvard University’s Berkman Klein Center who was on the red team for GPT-4. He said this kind of testing needs to go further.
By Marketplace · 4.5 (1,247 ratings)
