Neural intel Pod

AI Persuasion Through Reinforcement Learning and Rhetoric


This research paper examines the ethical and societal implications of Reinforcement Learning from Human Feedback (RLHF) in generative large language models (LLMs) such as ChatGPT and Claude. It argues that RLHF subtly persuades users by embedding human values and motives into AI-generated text. The authors use procedural rhetoric to analyze how these underlying mechanisms shape language conventions, information-seeking practices, and human-AI relationships. Ultimately, the paper raises concerns about transparency, trust, and bias in these increasingly "human-like" AI systems.

By Neural Intelligence Network