We are in the midst of the first major U.S. election of the generative AI era. The people who want to win your vote have easy access to tools that can create images, video or audio of real people doing or saying things they never did — and slap on weird appendages or other make-believe effects along with targeted slogans. But the potential to deceive has led about two dozen states to enact some form of regulation requiring political ads that use artificial intelligence to include a label. So how do voters respond when they know a campaign has used AI? That’s what Scott Brennen and his team at New York University’s Center on Technology Policy set out to answer in a recent study.