
Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse.
Read the story here.
By WIRED