#046 The Great ML Stagnation (Mark Saroufim and Dr. Mathew Salvaris)

03.06.2021 - By Machine Learning Street Talk (MLST)


Academics think of themselves as trailblazers, explorers — seekers of the truth.
Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work, it moves from the realm of research to engineering. Unfortunately, this also means that most research careers will invariably be failures, at least if failure is measured via "objective" metrics like citations. Today we discuss the recent article from Mark Saroufim, Machine Learning: The Great Stagnation. We discuss the rise of gentleman scientists, fake rigor, incentives in ML, SOTA chasing, "graduate student descent", the distribution of talent in ML, and how to learn effectively.

With special guest interviewer Mat Salvaris. 

Machine Learning: The Great Stagnation [00:00:00]
Main show kick off [00:16:30]
Great stagnation article / Bad incentive systems in academia [00:18:24]
OpenAI is a media business [00:19:48]
Incentive structures in academia [00:22:13]
SOTA chasing [00:24:47]
F You Money [00:28:53]
Research grants and gentlemen scientists [00:29:13]
Following your own gradient of interest and making a contribution [00:33:27]
Marketing yourself to be successful [00:37:07]
Tech companies create the bad incentives [00:42:20]
GPT-3 was SOTA chasing but it seemed really... "good"? Scaling laws? [00:51:09]
Dota / game AI [00:58:39]
Hard to go it alone? [01:02:08]
Reaching out to people [01:09:21]
Willingness to be wrong [01:13:14]
Distribution of talent / tech interviews [01:18:30]
What should you read online and how to learn? Sharing your stuff online and finding your niche [01:25:52]

