
Earlier this month, OpenAI released its newest and most powerful chatbot, GPT-4, along with a technical paper summarizing the testing the company did to ensure its product is safe. The testing involved asking the chatbot how to build weapons of mass destruction or how to carry out antisemitic attacks. In the cybersecurity world, this testing process is known as red teaming: experts look for vulnerabilities, security gaps and anything else that could go wrong before the product launches. Marketplace’s Meghan McCarty Carino spoke to Aviv Ovadya, a researcher at Harvard University’s Berkman Klein Center who was on the red team for GPT-4. He said this kind of testing needs to go further.