

Why Do Humans Anthropomorphize AI?
Artificial intelligence has become more sophisticated in a short period of time. Even though we may understand that when ChatGPT spits out a response, there’s no human behind the screen, we can’t help but anthropomorphize—imagining that the AI has a personality, thoughts, or feelings.
How exactly should we understand the bond between humans and artificial intelligence?
Guest host Sophie Bushwick talks to Dr. David Gunkel, professor of media studies at Northern Illinois University, to explore the ways in which humans and artificial intelligence form emotional connections.
When you want to look at the microbial health of a city, there are a variety of ways to go about it. You might look at medical records, or air quality. In recent years, samples of wastewater have been used to track COVID outbreaks. Studies of urban subway systems have involved painstaking swabs of patches of subway muck. But now, researchers are offering another approach to sample a city’s environment—its beehives.
A report recently published in the journal Environmental Microbiome used the bees foraging in a city to provide information about the town’s bacteria and fungi. The researchers found that by looking at the debris in the bottom of a beehive, they could learn about some of the environments in the blocks around the hives. The microbes they collected weren’t just species associated with flowers and plant life, but included organisms associated with ponds and dogs. The team found that the hive samples could reveal microbial differences from one neighborhood to another within a city, as well as between different cities—samples taken in Venice, for instance, contained signals associated with rotting wood that were not seen in samples from Tokyo.
Elizabeth Henaff, an assistant professor at the NYU Tandon School of Engineering and a co-author of the report, joins SciFri’s Kathleen Davis to talk about what bees and microbes can tell us about the cities we share.
What happens after you pick up a book, or pull up some text on your phone?
What occurs between the written words hitting your eyes and your brain understanding what they represent?
Scientists are trying to better understand how the brain processes written information—and how a primate brain that evolved to make sense of twisty branches and forking streams adapted to comprehend a written alphabet.
Researchers used electrodes implanted in the brains of patients being evaluated for epilepsy treatment to study what parts of the brain were involved when those patients read words and sentences. They found that two different parts of the brain are activated, and interact in different ways when you read a simple list of unrelated words, compared to when you encounter a series of words that builds up a more complex idea.
Dr. Nitin Tandon, a professor of neurosurgery at UTHealth Houston and one of the authors of a report on the work published in the Proceedings of the National Academy of Sciences, joins guest host Sophie Bushwick to talk about the study, and what scientists are learning about how the brain allows us to read.
Transcripts for each segment will be available the week after the show airs on sciencefriday.com.
Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
By Science Friday and WNYC Studios
4.4 stars, 5,943 ratings
