We are in the midst of the first major U.S. election of the generative AI era. The people who want to win your vote have easy access to tools that can create images, video or audio of real people doing or saying things they never did — and slap on weird appendages or other make-believe effects along with targeted slogans. But the potential to deceive has led about two dozen states to enact some form of regulation requiring political ads that use artificial intelligence to include a label. So how do voters respond when they know a campaign has used AI? That’s what Scott Brennen and his team at New York University’s Center on Technology Policy set out to answer in a recent study.
By Marketplace · 4.5 (1,245 ratings)