

Earlier this week, Variety and other Hollywood publications reported that Matt and Ross Duffer, the brothers who created “Stranger Things” (and wrote and directed many of its episodes), were in talks to sign an exclusive deal with Paramount (now under the ownership of David Ellison’s Skydance). Then on Friday evening, Puck’s Matthew Belloni posted that the Duffers had in fact “made their choice” and were going to Paramount.

Also: do A.I. chatbots packaged inside cute-looking plushies offer a viable alternative to screen time for kids? That’s how the companies selling these A.I.-powered kiddie companions are marketing them, but The New York Times’ Amanda Hess has some reservations.

And Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the A.I. model itself.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
By TechCrunch · 3.8 (2,525 ratings)