
Read the full transcript here.
How much should we trust social science papers in top journals? How do we know a paper is trustworthy? Do large datasets mitigate p-hacking? Why doesn't psychology as a field seem to be working towards a grand unified theory? Why aren't more psychological theories written in math? Or are other scientific fields mathematized to a fault? How do we make psychology cumulative? How can we create environments, especially in academia, that incentivize constructive criticism? Why isn't peer review pulling its weight in catching errors and constructively criticizing papers? What kinds of problems simply can't be caught by peer review? Why is peer review saved for the very end of the publication process? What is "importance hacking"? On what bits of psychological knowledge is there consensus among researchers? When and why do adversarial collaborations fail? Is admission of error a skill that can be taught and learned? How can students be taught that p-hacking is problematic without over-correcting into a failure to explore their problem space thoroughly and efficiently?
Daniel Lakens is an experimental psychologist working at the Human-Technology Interaction group at Eindhoven University of Technology. In addition to his empirical work in cognitive and social psychology, he works actively on improving research methods and statistical inferences, and has published on the importance of replication research, sequential analyses and equivalence testing, and frequentist statistics. Follow him on Twitter / X at @Lakens.