
Earlier this week, Variety and other Hollywood publications reported that Matt and Ross Duffer, the brothers who created “Stranger Things” (and wrote and directed many episodes), were in talks to sign an exclusive deal with Paramount (now under the ownership of David Ellison’s Skydance). Then on Friday evening, Puck’s Matthew Belloni posted that the Duffers had in fact “made their choice” and were going to Paramount. Also, do A.I. chatbots packaged inside cute-looking plushies offer a viable alternative to screen time for kids? That’s how the companies selling these A.I.-powered kiddie companions are marketing them, but The New York Times’ Amanda Hess has some reservations. And, Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the A.I. model itself.
Learn more about your ad choices. Visit podcastchoices.com/adchoices