
Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse.
Read the story here.
By WIRED
