The World Between Us

Digital Discourse, Harmful Content, and Presidential Social Media Interactions



The interaction between prominent political figures and the public on social media is characterized by specific engagement metrics, controversial messaging, and the psychological predispositions of users. Research into "ratiometrics" defines a "ratio" as the balance between retweets and replies; a post is "ratioed" when replies significantly outnumber retweets or likes, which often serves as a proxy for public controversy or negative sentiment. Data shows that Donald Trump's Twitter account was frequently ratioed, reflecting a more contentious presence than Barack Obama's, whose posts consistently maintained a higher retweet-to-reply ratio.

A major example of this controversial engagement occurred in February 2026, when President Trump shared a video on Truth Social depicting former President Barack Obama and Michelle Obama as primates. The 62-second video, which promoted conspiracy theories about the 2020 election, featured a scene in which the Obamas' faces were superimposed onto the bodies of apes. While White House press secretary Karoline Leavitt initially defended the clip as a "Lion King" meme and dismissed the backlash as "fake outrage," the post was eventually deleted, with the White House claiming a staffer had shared it "erroneously." The post drew sharp condemnation from Republican lawmakers, including Senator Tim Scott, who described it as "the most racist thing I've seen out of this White House."

The public's receptivity to such content is often tied to political orientation and the internal motivation to control prejudice (MCP). An exploratory study indicates that liberal individuals and those with high internal MCP are significantly more likely to reject racist social media content. In contrast, the study found that conservative individuals and those with low internal MCP were less likely to distinguish between racist and egalitarian posts, making them potentially more susceptible to being drawn into racist echo chambers.
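The "ratio" heuristic defined earlier can be sketched in a few lines of Python. The threshold and the choice of comparing replies against the larger of retweets and likes are illustrative assumptions, not a published standard:

```python
# Minimal sketch of the "ratioed" heuristic: a post is flagged when
# replies outnumber its positive engagement (retweets or likes).
# Threshold and field choices are illustrative assumptions.

def is_ratioed(replies: int, retweets: int, likes: int,
               threshold: float = 1.0) -> bool:
    """Return True when replies exceed the larger of retweets and
    likes by more than `threshold` times, a rough proxy for a
    controversial or negatively received post."""
    engagement = max(retweets, likes)
    if engagement == 0:
        # Any reply with zero positive engagement counts as a ratio.
        return replies > 0
    return replies / engagement > threshold
```

On these assumptions, a post with 500 replies but only 100 retweets and 200 likes would be flagged, while one with 50 replies against 900 likes would not.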
Interestingly, "external MCP" (the desire to avoid appearing prejudiced to others) did not reliably inhibit supportive behavior toward racist posts in the self-selected, anonymous environment of social media.

To manage the spread of harmful content, platforms are increasingly exploring the use of Large Language Models (LLMs) to assist human moderators. Research using models like PaLM 2 indicates that AI can identify hate speech, harassment, and election misinformation with over 90% accuracy compared with human expert verdicts. These LLMs can be deployed in various "collaborative design patterns," such as pre-filtering non-violative content or providing human raters with keyword-based explanations for why a post might violate policy. In real-world pilot tests, such AI assistance improved the precision and recall of human moderators by 9–11% while significantly optimizing rater capacity.

Hosted on Acast. See acast.com/privacy for more information.
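The "pre-filtering" collaborative pattern described above can be illustrated with a minimal sketch. The scoring function, threshold, and field names here are hypothetical stand-ins for a real LLM classifier, not the pipeline used in the research:

```python
# Sketch of the pre-filtering pattern: a model scores each post for
# violation likelihood, clearly benign posts are auto-cleared, and
# only the ambiguous or likely-violative remainder reaches humans.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Post:
    id: int
    text: str

def route_posts(
    posts: List[Post],
    score_fn: Callable[[str], float],
    clear_below: float = 0.2,  # assumed threshold, tuned in practice
) -> Tuple[List[Post], List[Post]]:
    """Split posts into auto-cleared and human-review queues.

    score_fn returns a violation likelihood in [0, 1]; posts scoring
    below `clear_below` never reach a human rater, shrinking the
    review queue that moderators must work through.
    """
    auto_cleared: List[Post] = []
    for_review: List[Post] = []
    for post in posts:
        if score_fn(post.text) < clear_below:
            auto_cleared.append(post)
        else:
            for_review.append(post)
    return auto_cleared, for_review

# Stub classifier standing in for an LLM verdict (illustrative only).
def stub_score(text: str) -> float:
    return 0.9 if "slur" in text else 0.05
```

In this pattern, human raters spend their capacity only on the uncertain tail of the queue, which is consistent with the reported gains in precision, recall, and rater capacity; `stub_score` would be replaced by an actual model call.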

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-world-between-us--6886561/support.

The World Between Us, by Norse Studio