Natasha Jaques is a Research Scientist at Google Brain and a postdoctoral fellow at UC Berkeley, where her research focuses on social reinforcement learning: designing multi-agent RL algorithms that improve generalization, coordination between agents, and collaboration between humans and AI agents. She received her Ph.D. from MIT, where she focused on Affective Computing and deep/reinforcement learning. She has received multiple awards for her research at venues like ICML and NeurIPS, has interned at DeepMind and Google Brain, and is an OpenAI Scholars mentor.
00:00 Introductions
01:25 Can you tell us a bit about the projects you are currently working on at Google? And what does your work routine look like as a Research Scientist?
06:25 You have done research at many diverse institutions that are leading in the domain of machine learning: MIT, Google Brain, DeepMind. What key differences have you noticed between doing research in academia, in industry, and at a research lab?
10:00 About your paper, "Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning": can you tell us more about how you leverage intrinsic rewards for better coordination?
12:00 Game Theory and Reinforcement Learning: discussion
16:00 What was the intuition behind that approach? Did you turn to cognitive psychology for the idea and later model it using standard deep RL principles, or was it something else?
20:00 The "crackpot-y" motivation behind the intuition of modeling social influence in MARL
24:00 What applications did you have in mind while working on that approach? What are the potential domains where you see people using it?
25:35 Do you think generalization in RL is close enough to have an ImageNet moment?
28:35 Inspiration from social animals for better architectures - Yay/Nay?
30:20 How far are we from using deep RL systems in day-to-day life? Or are there any such applications already in use?
34:40 Do you think these deep RL systems can be made interpretable to some extent?
39:00 What motivated you to pursue a Ph.D. after your master's rather than taking a job?
40:30 How did you go about deciding the topic for your Ph.D. thesis?
47:40 How do you typically go about breaking a research topic into smaller segments, from the initial stage, when it's more of an abstract idea with no connection to theory, to something much more implementable?
50:00 What are you currently exploring and optimistic about?
Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com
About the Host:
Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
Jay Shah: https://www.linkedin.com/in/shahjay22/
You can reach out at https://www.public.asu.edu/~jgshah1/ for any queries.
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml