


Check out our interview with Sonia Joseph, a member of South Park Commons and researcher at Mila, Quebec's preeminent AI research community.
Topics:
- India's Joan of Arc, Rani of Jhansi [[wiki](https://en.wikipedia.org/wiki/Rani_of...)]
- Toxic Culture in AI
- The Bay Area cultural bubble
- Why Montreal is a great place for AI research
- Why we need more AI research institutes
- How doomerism and ethics come into conflict
- The use and abuse of rationality
- Neural foundations of ML
Links:
Mila: https://mila.quebec/en/
Follow Sonia on Twitter: https://twitter.com/soniajoseph_
Follow your hosts:
John: https://twitter.com/johnvmcdonnell
Bryan: https://twitter.com/GilbertGravis
And read their work:
Interview Transcript
Hi, I'm Bryan and I'm John. And we are hosting the Pioneer Park Podcast where we bring you in-depth conversations with some of the most innovative and forward-thinking creators, technologists, and intellectuals. We're here to share our passion for exploring the cutting edge of creativity and technology. And we're excited to bring you along on the journey. Tune in for thought-provoking conversations with some of the brightest minds of Silicon Valley and beyond.
John: Okay, so today I'm super excited to invite Sonia onto the podcast. Sonia is an AI researcher at Mila, the Quebec AI Institute, and co-founder of Alexandria, a frontier tech publishing house. She's also a member of South Park Commons, where she co-chaired a forum on AGI, which just wrapped up in December.
We're looking forward to the public release of the curriculum later this year, so keep an eye out for that. Sonia, welcome to the podcast.
Sonia: Hi John. Thanks so much for having me. [00:01:00] It's a pleasure to be here.
Bryan: Yeah, welcome.
Sonia: Hi, Bryan.
Bryan: Yeah, so I guess for full transparency, John and I were both attendees of this AGI forum.
And I was waiting for every week's session with bated breath. I thought the discussions in the forum were super interesting. There were a bunch of really prominent, interesting guests that came through. And yeah, it was a really interesting intersection of practical questions with sci-fi,
and a lot of things that used to be sci-fi are getting far more practical than perhaps we ever anticipated.
John: All right. So Sonia, I feel like the question that's on everyone's mind is: who is Rani of Jhansi?
Sonia: Oh my gosh. Yeah. Yeah. So basically like I grew up on a lot of like Indian literature and Indian myth.
And she's considered to be India's Joan of Arc. So she's a female leader who has a place in feminist scholarship if you look at the literature. And I [00:02:00] believe she led a rebellion in India against the British. I actually wanna fact-check that.
John: Yeah, no, that's really cool. We loved the recent blog post that you worked on with S, and you pointed out how these kinds of influences really enabled you to succeed at your current endeavors.
So we're just curious about how your background made you who you are.
Sonia: Yeah. Yeah. No, I appreciate that question a lot. I would say I had a kind of culturally schizophrenic background in some ways, where I spent a lot of time as a child in India, but the other half of my life was in Massachusetts,
which was very much a lot of Protestantism and growing up on a lot of American history. So I saw things through a combination of various cultures and religions, and that has very much impacted my entry into AI and how I'm conceiving of AI.
John: Yeah. Something that we loved about the AGI forum is that you have this [00:03:00] really critical eye towards the culture of how AI is practiced and the way that research is going forward.
And I think you really brought this kind of unique perspective that was super valuable.
Bryan: Yeah, I'm curious: are there any points at which you think there are current problems, either in the way that research is being done or in the moral framework in which that research is being done?
Sonia: It's a really interesting question. I would say the AI world is very big, first of all, so it's hard to critique the entire thing. But parts of it have some of the problems that physics had in the 1990s, or still has, in being male dominated or focused on certain cultures.
And the culture will generate a certain type of research. So your scientific conclusions and the community or culture you're in have this reciprocal relationship. For example, there's this amazing book called The Trouble with Physics by Lee [00:04:00] Smolin that goes into the anthropology of the physics community.
In the 1990s, the physics community was deeply obsessed with string theory. If you weren't working on string theory, you just weren't cool at all, and you probably weren't gonna get tenure track. The book goes into how string theory wasn't empirically proven. It was mathematically, internally consistent, but it was by no means a theory of everything.
And the monoculture of physics and the intellectual conclusion of string theory would feed off each other in that cycle. Lee Smolin basically created his own institute to deal with this problem, because he got just very frustrated.
I don't think AI is quite so bad. But there are pockets of AI where I do notice similar dynamics, in particular the parts of AI that were previously more influenced by effective altruism and LessWrong, like the AI safety and alignment camp. I don't think these fields have as bad a problem anymore.
There have been recent [00:05:00] attempts, call it the reform attempt, which Scott Aaronson wrote a very good blog post about, to make AI safety a legitimate science that's empirically grounded and has mathematical theory. But I did notice that more classical AI safety definitely had these 1990s-style string theory problems, both in the science being not empirically verified but dogmatic, and in the community that was generating it not being very healthy. And with the caveat, I'll say I have been either adjacent to or in these communities since I was basically 12.
So I have seen a very long history. And I also don't mean to unilaterally critique these communities. I think they have done a lot of good work and made a lot of contributions to the field, in terms of frameworks, talent, and funding, but I am looking at these communities with a critical eye as we move forward.
Because the question is what's coming, both as a scientific paradigm and as the research community that generates that paradigm.
Bryan: I'm curious. To me there seem to be two issues, and I don't know if they're orthogonal: the scientific integrity of a community and the ability of that community to [00:07:00] generate and falsify hypotheses, and the culture of that community and whether or not it's a healthy culture to be in, whether it's a nice place to work and all that sort of stuff. And I guess my hypothesis is that none of us wanna work in a shitty culture, and none of us wanna be part of communities where insults or abusive behavior are tolerated at all.
But I think that a lot of scientific communities can be interpreted as quite dogmatic, because there's an insistence on a specific intellectual lens that you need to adopt to participate in the discussion. And for me, it always seems like there's a balance there.
Because, for instance, if you wanna be a biologist, you'd better accept evolution; you have to meet that criterion. And I'm curious: do you think there is some sort of almost intellectual kowtowing, or basically a tip of the hat, that one needs to do when studying artificial intelligence to make it into the room and be taken seriously?
Sonia: That's a great question. Yeah, and evolution is an interesting example, because that's one that has been empirically [00:08:00] verified in various places, and maybe the exact structure of evolution is open to debate. Like, we don't know if it's more gradual or happens in leaps and bursts.
But the example in some AI communities is accepting that oncoming AI is gonna be bad, or a more apocalyptic culture. And this is prevalent in a lot of AI safety communities, where in order to get your research taken seriously, or to even be viewed as an ethical person,
it becomes about character. You have to view AI as inevitable: it's coming fast, and it's more likely than not to be incredibly disastrous. And to be clear, I think we should be thinking about the safety behind incoming technologies. That's obvious and good.
If AI ends the world, that would be terrible. And even if there's a very small chance that could happen, we should make sure it doesn't happen. But I do think that some of these communities overweight that and make it almost part of the dogma, when it's not empirically proven that this is gonna happen.
We have no evidence this is going to happen. It's an a priori argument [00:09:00] that's actually mimicking a lot of doomsday cults and death cults that have been seen throughout history. And it's absolutely fascinating, though much less so now than it was before.
A lot of AI safety has become modern alignment, or is practiced in more professional spheres, where I think views are a lot more nuanced and balanced. But there is still a shadow of Bostrom and Yudkowsky and these original thinkers who were influential, even more influential like 10 to 15 years ago.
John: Sonia, sometimes when I talk to people who are really into the alignment problem, there's a kind of view that the philosophical argument being made is just very strong.
People just view it as a very strong argument that these systems are very dangerous. When I think about Holden Karnofsky's PASTA, I kind of imagine, okay, I think if the system was that powerful, it seems like it would be dangerous.
I don't know exactly how likely I think that exact version of it is to be [00:10:00] created. When you think about the content of that alignment argument, do you think the argument is strong, or do you feel like it's actually overrated?
I guess, what's your view on that?
Sonia: Yeah, remind me... my memory of PASTA is that there's some math or AI system that starts executing on experiments, or using the results of the experiments as feedback.
John: That's right. Yeah.
Sonia: Yeah. Yeah. This is fascinating. I love PASTA.
I think it's absolutely fascinating as a thought experiment. My pushback here would be that all of these scenarios strike me as slow takeoff, as opposed to someone developing an agent, like a single lab develops an agent, and the agent starts recursively self-improving and takes over the world, which is often the classic scenario presented.
John: Yeah.
Sonia: The reason this doesn't make sense to me is that there are so many limitations in the physical world. For example, just the speed of molecules in biology: we're gonna be limited by that. The speed of a robot [00:11:00] traveling across the country: we're going to be limited by that.
There's one argument that computers think so fast that they're gonna be able to outthink us. I think this is true, but ultimately, for the computer to interface with the physical world, it is going to be dealing with the slowness of the physical world.
And that is not something the computer can artificially speed up. There are also various other constraints, like the government has a lot of red tape and bureaucracy. In order to actually run any study, you have to go through a certain approval process. Maybe the AI figures out how to bypass that.
That's possible. Maybe the AI has a physical army and it doesn't care; that's also possible. But I do think that the real world has enough red tape and constraints that we're not gonna wake up one day and see drones everywhere and some AI has taken over. I think it'll be slower and more subtle than that.
This is also not to say there's nothing to worry about. Having some sort of superhuman scientist that gets out of our control sounds objectively bad, but I don't actually think PASTA...