


Driven by technology's inability to address societal issues rooted in biases, Gurwinder "G" Bhogal shifted his career from technology to debugging the human mind. He discusses biases in large language models (LLMs), their manipulation by left-leaning initiatives, and the challenge of testing and correcting AI biases. G, Christina Buttons, and Peter Boghossian analyze G's Twitter/X thread titled "10 WAYS TO AVOID BEING FOOLED," in which G offers "10 heuristics that will make you smarter." They discuss the importance of understanding opposing viewpoints, critically evaluating information sources, truth-seeking, news consumption habits, and more! Watch this episode on YouTube.
By Peter Boghossian · 4.7 (206 ratings)
