Pigeon Hour

Vegan Hot Ones | EA Twitter Fundraiser 2024


A great discussion between my two friends Max Alexander of Scouting Ahead and Robi Rahman (in response to a fundraiser that we wrapped up more than 13 months ago). Tweet with context: https://x.com/AaronBergman18/status/1999918243205779864?s=20

Transcript

(AI-generated, likely imperfect)

MAX

Hello to the internet, maybe.

ROBI

Hey internet.

MAX

Um, I’m Max.

ROBI

I’m Robi.

MAX

Um, thank you all for donating, especially you. Um, so we’re gonna do a vegan version of Hot Ones. I actually don’t know if the camera can properly see. I mean, we took a photo as well, so someone will see it eventually. Um, but I have some very not spicy questions for you, and I hope.

ROBI

They get spicy, right?

MAX

Do you think it’s a little spicy?

ROBI

Um, I don’t know. Anyway.

MAX

Yeah, they’re not, you know, I’m sure someone will judge me greatly for this online.

ROBI

Um, yeah, the, uh, the food is spicy at least, or it gets a bit spicy. So, um, we’ve got, um, uh, we’ve got field roast, uh, buffalo wings without the buffalo sauce. We’ve got some spices on them. We’ve got, uh, Jack and Annie jackfruit nuggets and Impossible fake chicken nuggets, uh, with— my god, Sriracha, um, spicy chili.

MAX

Crisp.

ROBI

Calabrian hot chili powder, habanero hot salsa, Scotch bonnet puree, Elijah’s Extreme Regret Screamin’ Hot, um, Scorpion Reaper hot sauce.

MAX

Cool.

ROBI

And, um, some, uh, Dave’s Hot Chicken.

MAX

Reaper seasoning and Carolina Reaper. I’m going to have a much worse time than you are.

ROBI

I’m looking forward to this.

MAX

Yeah, uh, I guess, in the tradition of Hot Ones, um, the guest, um, introduces themselves and, like, says a background. So I don’t know if you want—

ROBI

To— okay, yeah, um, let’s see. Um, I’ve been involved in EA for— well, I think the first meetup I went to was in 2017. Um, EA was much smaller then, and, uh, we didn’t have our own meetups. The, um, DCEA meetup group was, uh, combined with a vegan feminist environmentalist—

MAX

That’s cool.

ROBI

—something meetup.

MAX

Yeah, nice.

ROBI

Eventually we had enough EAs that we, you know, spun off our own, uh, effective-altruism-only thing.

MAX

Cool.

ROBI

Yeah, um, yeah, but, uh, that was fun. Um, that was also the first year I played Giving Games. And then, uh, I was kind of a global health person back then, but, um, Matt Ginsel was way ahead of his time. In the Giving Games, you get to— you, like, play all the games, like poker or whatever, and you win chips, and then at the end you put the chips into the box for whatever charity you think should get the money. And, um, he surprised me by donating to pandemic prevention, which wasn’t even on my radar then. And then, like, three years later, he was totally right.

MAX

Yeah, unfortunately.

ROBI

Yeah. Uh, yeah.

MAX

And now you work at Epoch.

ROBI

I work at Epoch. Yeah. Um, I do AI forecasting, basically. My job is kind of to figure out when everyone else’s job will be automated. Delightful.

MAX

You know? Yeah. Cool. Um, yeah, I guess maybe our very lukewarm, uh, question is, uh, which do you think is better, Huel or Soylent?

ROBI

Um, I think I prefer Soylent for the drinks.

MAX

Interesting.

ROBI

But, um, Huel Hot & Savory was great. They’ve recently rebranded, right?

MAX

I don’t know.

ROBI

Hot & Savory to, um, Instant Meals or something like that? I haven’t bought it in a while.

MAX

I, yeah, I bought some for the fundraiser.

ROBI

Should we eat some lukewarm nuggets to go with the lukewarm questions?

MAX

Yeah, yeah, exactly.

ROBI

So let’s start off with the chili crisp, um, uh, buffalo wing.

MAX

Okay.

ROBI

Cheers.

MAX

Yeah, that’s not that spicy.

ROBI

Eat the whole thing.

MAX

Oh no.

ROBI

I’m sorry. So far, it’s a chicken nugget. Yeah, um, yeah, I don’t think I would— I don’t know if I would notice that’s not chicken.

MAX

Oh yeah. For sure.

ROBI

I mean, I’m not a huge fan of chicken nuggets anyway, but yeah. Um.

MAX

Cool. Okay, um, let’s see.

ROBI

Uh.

MAX

Okay, well, this one’s a little spicy at least. Uh, what’s one thing you think everyone in EA is getting wrong?

ROBI

Um, I’m kind of, like, very EA-orthodox, and I think EA is, like, basically right about everything. Um, the— the thing I think EAs get wrong— I don’t believe in the, like, perils of maximizing stuff. Or, like— maximizing does have the problems that they point out, but, like, I don’t think anyone has a good argument that, like, you should not maximize.

MAX

Sure.

ROBI

I think all of the, like— I don’t know, I just bite the bullet. I’m taking everything to the— like, if the principles are right and you have the facts, yeah, the conclusion is what it is.

MAX

Okay, well, that’s good. I think I have a question later that’s like, “Is the repugnant conclusion actually repugnant?”

ROBI

I’ll have some thoughts on that. Yeah, I think I basically disagree with Holden Karnofsky and Scott Alexander on, like, you should get off the crazy train if it seems too weird. Like, no, if the reasoning checks out, you should do what you should do.

MAX

Cool.

ROBI

Yeah, I kind of think— this might be a bit spicy— Okay. I kind of think, um, they are— I slightly suspect they’re just saying that as cover, like after the FTX scandal and whatnot. Like, no, no, no, no, we don’t really believe in that stuff where you like take it to the extreme and like, yeah, yeah, yeah.

MAX

That is plausible. I don’t know Holden, so I cannot say for sure.

ROBI

Neither do I, but I’d like to think he’s smarter than to— sure.

MAX

Yeah, yeah. Um, cool. Yeah, though— yeah, I mean, EA is a whole big thing, so, you know, um, cool, that’s a good one. That’s a— if you brought that to a party, you know, you would start a 3-hour discussion, sort of.

ROBI

No, I think that would be like, um, a 30th percentile EA spicy opinion.

MAX

Well, yeah, but then the other people, you like start the whole thing and they, uh, yeah, cool.

ROBI

Um, cool.

MAX

Oh wait, should we eat another thing first?

ROBI

Yeah, how many questions are there?

MAX

16? I have 16, but some of them are, like, not— yeah, two or three questions. Okay, cool. Um, yeah, so you spoke at EAG once, right?

ROBI

I— not quite— I wasn’t quite a speaker. I, um, I ran a session. Yeah, but it was, um, it was like a forecasting interactive exercise. So it was a, like, short presentation, and then we did a workshop.

MAX

Cool.

ROBI

Yeah, I think the EAG team has been trying to move away from static content and lectures, because EA has this meme of, like, you don’t go for the content, you go for the one-on-ones. Or a lot of people say, like, well, why should I watch a talk when my time is scarce and I could just watch it on YouTube anyway at 2x speed, thereby saving all this time? I don’t think people would— I don’t think the counterfactual is actually watching. I think it’s just never seeing the talk. Exactly.

MAX

Yeah.

ROBI

But, um, and there have been some really good talks at the EAGs. Kevin Esvelt at EAGxBoston was incredible. Yeah, very, very good biosecurity presentation. But yeah, so I offered to— or, like, was, you know, talking to the content team; they might have wanted a presentation, but they didn’t want it to just be a lecture. I could just give an Epoch spiel, but I think it was more fun with, you know, getting people’s current views.

MAX

Cool. Yeah, I guess if you were to do it now, has anything changed, or is it mostly the—

ROBI

Well, I would fix— one of my forecasting questions had a loophole. I think we were— so Matthew Barnett is another AI forecasting guy. He has just left Epoch to form a startup. Spicier than anything I’m doing. I can talk about that later.

MAX

Yes, that’s a good question actually.

ROBI

Well, I’ll finish. Um, Matthew and I, you know, uh, had some questions. We adapted them for the EAG format. Um, I think I made some last-minute changes and then overlooked a loophole. I don’t remember what it was exactly, but it was something like— so there were three big questions in different domains. One was, like, superhuman in math, one was, like, do all household tasks by inventing robotics, and one was, um, synthetic biology capabilities. And the last question was something like, um, when will it be possible, with the aid of AI, to invent a virus at least— like, synthesize a virus at least as dangerous as COVID, or something. But I think I edited it last minute and left some loophole where someone raised their hand and was like, “Well, you can already acquire a sample of a virus at least as dangerous as COVID by getting a sample of COVID.” Simply have someone sneeze and then deliver it. So AI can already do that. But that’s not the point of the question. No, it was something like, “When will a rogue terror— when will it be possible for a rogue terrorist group, with the aid of AI, to get a sample of a virus at least as dangerous as COVID?” And they can already get COVID. Yeah, yeah, yeah.

MAX

Uh, yeah, cool.

ROBI

Uh, that wasn’t the exact question, but something like that. Yeah, nice.

MAX

Um, cool, that’s very fun. Yeah.

ROBI

Um.

MAX

Let’s see. Uh, I guess, yeah, so if you kind of weren’t in EA now, is there, like, a career you would— do you have, like, a dream career that you’re like, ah, it’s just not impactful enough?

ROBI

So, um, that is a great question. I really like data science. Um, this is a little suspicious. Um, like Maybe I would do the same thing anyway. But yeah, I mean, I previously had a different job. I was like a construction engineer. But it was kind of boring and I wanted to switch to data science anyway. And then I found out— well, I was already considering it and then 80K was also a factor. Work on AI and all this stuff. It’s going to be really impactful. I had an old non-EA job and was just like earning to give. Um, but it like wasn’t much direct impact, and I wasn’t earning that much money, so like, um, and also I was like bored at my job, so I probably would have quit anyway. Um, but like that, uh, I ended up quitting I think in 2020, um, around when the principles came out, and that like influenced me a bit. Like that also spurred me to, you know, get into this.

MAX

Yeah, cool. Uh, should we do another one?

ROBI

Sure, yeah. Can I interest you in, uh, an Italian spicy chicken nugget? Wait, this is, um— oh yes, yeah, this is the, uh, Field Roast nugget, but with Calabrian hot chili powder.

MAX

Cool.

ROBI

Um, which is, uh, I believe it’s the spiciest thing from Europe. Um, nope, Scotch bonnets are not from Scotland. They’re, they’re named that because the pepper is in the shape of that, like, Scottish hat. Oh, okay. I’m putting some more spice on mine, but, uh, this stuff tastes really good. Like, well, apart from spiciness, but, uh, cool. Yeah.

MAX

Cheers.

ROBI

Cheers.

MAX

I can see how that’s the spiciest thing in here. Do you think it’s spicy, or—

ROBI

No. But yeah, um, this chili tastes so good, I put it on everything. It’s, like, great on risotto, arancini, pizza, pasta.

MAX

Yeah, yeah, I mean, I could see why you would do that. And that would— I don’t need a lot of spice, and it’s not, like, super, uh— yeah, but it’s nice. Cool. Um, let’s see. Yeah, so, well, this one’s a little spicy. What do you think is a big mistake people in AI safety are making right now?

ROBI

Oh, I don’t know if I have any. Um, I think it’s become— at some point it was, like, low status to be too doomer. Like, I think AI safety didn’t— or, like, EAs didn’t want to be associated with, like, having a high P(doom). Um, because, I don’t know, maybe government officials— like, maybe it’s not put into the government, so, like, policy people didn’t want to, like, be one of those rabid doomer people; to start to gain credibility, they went with the angle of, “I think it’s only 1% or 5%, but regardless, you should still take it very seriously.” Which I agree with. I totally think even if you have only a 1% or 5% P(doom), this is possibly the most important issue. And the government is, like, sleeping on this and has no plan. But, um, no, I don’t think these people— I don’t think there’s enough evidence to be, like, 95% confident this won’t cause doom, basically. Yeah.

MAX

I guess, do you have a P(doom)?

ROBI

It’s hard to define. I think it really depends what negative outcomes are included. Um, so I guess I, I don’t see humanity existing in its current form in like centuries from now.

MAX

Cool.

ROBI

But, like— so a lot of people might think, if we, um, upload ourselves to cyborg bodies, and then every 10 years there are more and more advantages of, like, having a robot arm instead of a human arm— they just get better and better— then people who are old-fashioned, like, die out or are outcompeted. Even if everyone at every step is happy with, like, “Oh, I would rather have robot hands instead of regular hands because robot hands are better.” Some people, if you look 100 years in the future and see that humans have turned into these cyborg monstrosities, might think, “Oh god, that’s horrid, that’s human extinction, there are no biological humans left.” We’ve been destroyed. Even if it happened in a good way, where everyone is happier and happier each year— I think I would not count that as doom. But, um, yeah, if I had to put a number on it, maybe 20-30%.

MAX

Okay, well, you know, it never makes me happy to hear anyone’s numbers. But thankfully I’m good at processing it using, um, what’s the word— all those things you read when you get into EA, where it’s like, ah, scope insensitivity and stuff.

ROBI

So yeah, you know, uh, I’ve never heard that one before. Luckily I’m very scope insensitive, so I’m not as worried. I’m not freaking out as much as I should be.

MAX

Okay, yeah. How many work trials have you done in your life?

ROBI

How many work trials have I done in my life? Um, at least one. I mean, I worked for Epoch, and, um, I mean, I solved the thing we were trying to do in the work trial, so I got the job that way. Um, what other places have I done work trials at— oh, um, I applied to Open Phil. Their hiring process is really long. I made it to the third round of work trials, and then I think they, uh, hired someone else for the position.

MAX

Yeah, yeah.

ROBI

Um, man, the EA job market is rough.

MAX

Yes. This might be more of a— everyone’s super overqualified. Yes, yeah, uh, you know, as a young EA, you do a lot of work trials.

ROBI

Yeah.

MAX

What do you think the optimal number of work trials is?

ROBI

The optimal number of work trials?

MAX

Yeah, in a hiring round, I guess. Um, but maybe in your life.

ROBI

Oh, for an employer to have?

MAX

Yeah.

ROBI

Or for you to do before picking a job?

MAX

Um, you know, either.

ROBI

So I, um— when you said that, I interpreted the question as, like, what is the right number of work trials to do before you— I thought this was, like, a secretary problem question: how many job offers should you go through before you settle down on a job? The optimal number of work trials to do is, I think, 20 or 30, because that’s how many the guy did when he wrote that famous post about how getting an EA job is really, really, really hard. So, um, for that sweet, sweet forum karma, you should do 20 or 30 work trials.

MAX

Well, you gotta do like 10 more or something, see if I keep one up.

ROBI

I mean, you can’t just— you can’t just— that’s, like, 2020 talk. Yes, yeah, that was 5 years ago. The standards are, uh, yeah, much higher now. Yeah, cool. Um, should we eat another? Sure, yeah. Um, so this is just some, um, Tostitos habanero salsa on a, um, Jack and Annie jackfruit nugget. What do you think of the jackfruit or the habanero?

MAX

It’s spicier.

ROBI

I honestly, I’m not noticing any spice.

MAX

That— you know, well, um.

ROBI

Yeah.

MAX

I think the nuggets first though.

ROBI

I feel like you don’t like the nugget as much. Yeah, yeah. Um, my favorite is the Impossible Nuggets, which are the last two. Um, yeah, I haven’t eaten much jackfruit. Um, I think they had it at EAG, maybe in, like, a salad or something. It was, like, kind of a meaty option. Um, but I looked at the macros of this. Unfortunately, jackfruit isn’t very protein dense, so, um, it’s not my, uh, chicken replacement of choice.

MAX

That makes sense. Yeah. Uh, what do you think the best EAG venue is?

ROBI

Ooh, best EAG venue. Yeah, I don’t know how many you’ve been to. I freaking loved, um, London 2021, which was in— what’s the housing project called? Um, can you look up EAG London 2021?

MAX

Yeah. Oh goodness.

ROBI

What’s it called? Uh, the Barbican.

MAX

Okay.

ROBI

Um, it’s, like, public housing, but it’s, like, freakishly nice. They have a museum and, like, an opera hall and bookstores and cafes and, like, an indoor tropical jungle.

MAX

Okay, yeah, it seems like it would win, you know.

ROBI

And they had a conference, and, um— that was my— was that my first EAG? Yeah, it was. And then I was blown away at the delicious vegan food. Maybe I had low standards for vegan food back then, but yeah, it was so good.

MAX

I’ve heard it’s gotten much better. I wasn’t really engaging with it that much in the past. I guess it got better than, like, 20 years ago or something.

ROBI

Yeah, sure.

MAX

So, you know, on some time horizon.

ROBI

I’m so glad that veggie burgers are good now. They used to be just like bean paste. Like, that’s not a burger substitute. Anyway.

MAX

Yes, yeah, yeah. Maybe we save that. Cool. The 80K podcast or the Dwarkesh Podcast?

ROBI

Ooh, I really like both of them. I think I would— so if you took all of the 80K episodes I haven’t listened to and all the Dwarkesh episodes I haven’t listened to and randomly picked one of each without me seeing what they were, I would rather listen to the 80K episode just because— I think my reasoning is wrong, so I have to reconsider. I was going to say, because Dwarkesh mostly does AI stuff and I hear enough about— like, I have enough AI in my, uh, podcast ecosystem diet, uh, so I don’t need any more. Um, but actually Dwarkesh’s, uh, episodes on, like, history and, like, um, anthropology have been really good. So, um, yeah, now I’m torn. Um, gotta pick. Okay, I’m picking— I’m actually— now that I’ve remembered, he does non-AI stuff that’s also very good. Like extremely good. Um, no, I’m gonna say Dwarkesh, actually. Um, partly because I think I’ve already listened to most of the 80— like, I’ve gone through the 80K, like, episode list and listened to all the ones that seemed interesting. Um, so the ones that are left are, like, stuff I don’t really care.

MAX

Here and there, yeah. Um, this one’s a little spicy, I guess. I was going to save this, but we’ll just do it because it’s on theme. Uh, what do you think about the 80K, uh, pivot?

ROBI

Kind of— the 80K pivot to AI? Yeah. Um, I really respect them for doing it. They’re doing what Holden is too chicken to do.

MAX

Well, he’s at Anthropic now, right?

ROBI

Oh, sure. I meant, um, with the maximization.

MAX

Ah, yeah, fair.

ROBI

If you think— if you have done the research, and you think you have figured out what the best thing to do is, just friggin’ do it. Don’t waffle about how, “Oh, but we don’t want to maximize, we don’t want to—” optimize too hard, it would be too optimal, you can’t have that. Um, no— their mission is to, like, um, maximize impact, and they think AI is just, like, much more pivotal in the next few years than every other issue they could steer people to focus on, which I think is correct. Um, it absolutely makes sense to go for it.

MAX

Nice.

ROBI

Um, yeah, but— and, and I mean, they’re leaving the career guides up on the other cause areas, so for people who are like not, you know, AI true believers or are, um, animal welfare fanatics, they’re still— yeah, yeah.

MAX

Do you, um— what do you think about, like, uh— there’s kind of a thing where, like, EA is very young; we have an oversupply of, you know, ambitious 22-year-olds. Do you think we need to, like, uh, switch our recruiting? Yeah, recruiting, you know, now that— yeah.

ROBI

I think I rubbed spice in my eye. Um, switch the recruiting? Uh, I don’t know. Um, I think I understand and mostly agree with most of EA’s, like, past decisions. Like, they get criticized for focusing on, like, um, fancy universities or something, but, like, the kids who go there are the ones who have the most opportunities to go into these, like, competitive, like, tech or consulting jobs, which are, like, the kind of things that were needed by the movement. Um, yeah, I guess the shorter your timelines are, the more important it is to not do stuff like— so, like, Horizon Fellowship makes sense, right? They’re incubating people with maybe technical or policy expertise and getting them placed in influential positions for policy. If you have shorter timelines, maybe this, like, long-setup stuff doesn’t make sense, and maybe you should just, like, directly pitch to— just, like, try to convince the senator, instead of, like, this galaxy-brain plan where you, like, train the staffers who will then be in the office of, like, the next candidate who wins, and then, like, when the House switches back to Democrats, they’ll have these people in a position of power, and then there’ll be the singularity in, like, 2032, and by then we will have, like, gotten all the— yeah.

MAX

Nice, cool.

ROBI

Yeah.

MAX

Um, should we get another one?

ROBI

Sure, yep. Um, okay, I’ve got a spicy, spicy take for you along with a spicy nugget. So this is, um, the last jackfruit nugget, and it’s got Scotch bonnet puree. Um, yep, enjoy. I wasn’t thinking about this, but I put a lot of Scotch bonnet on mine. It might hurt my stomach later, but it’s delicious. How are you doing?

MAX

Oh, you know, okay. It’s just spicier than what I would put on my food, but you know. I eat a lot of cereal, so, you know, it’s like, yeah, all right.

ROBI

You know, I haven’t heard that one in, uh— I mean, like, yeah—

MAX

Not, like, an insane amount of cereal, but, you know, probably more than average.

ROBI

I can’t remember the last time I ate cereal.

MAX

Like, maybe it’s not surprising. I only know the cereal-eating habits of people I’m around, and, like, if you live in a house where people— it’s like, you know, no one likes cereal. Yeah, I don’t know what the market cap is.

ROBI

I don’t know, I guess we’re just not a cereal household.

MAX

Yeah. So what’s your spicy take?

ROBI

Uh, well, I think, um— you know “Politics Is the Mind-Killer,” from LessWrong? Yeah.

MAX

Um, I don’t know if I’ve actually read it, to be honest.

ROBI

Sure, but I’ve heard people say, like— like, having political opinions biases you. My spicy take is that, um, I think EAs are too Democrat, and they’re, like, discriminating against Republicans. Like, we’re leaving billions of dollars of donations and, like, tons of political influence on the table because EAs won’t put aside their, like, partisanship.

MAX

Yeah, that I don’t know. I feel like you’re probably right there, though also, being liberal-leaning, I’m like, I don’t know.

ROBI

Like, you know, like, uh, we could get twice as many donations for bed nets. But on the other hand, Republicans are just so— like, if you met MAGA supporters. Yeah, yeah, yeah. But, um, yeah, I think we’re seeing maybe some bad effects of this kind of situation in EA— um, EAs being too left-wing, I guess— where, with this administration, there are just, like, no EAs in power right now. And I think we’re missing impactful opportunities to have the policy and implementation be less terrible if we just had people in the system who could, like, you know— if we just built up the deep state in both parties. Like, if there were any EA Republicans, they would be in the government right now, and they would be, you know, throwing a spanner in the works of the disastrous tariffs and USAID cancellations.

MAX

I’ve heard people say that DOGE is EA, though I don’t think it is.

ROBI

Okay, there was this— there was this, uh— what’s a tweet but on Bluesky?

MAX

I just call it a tweet.

ROBI

A Bluesky tweet or post.

MAX

Yeah.

ROBI

Um, it got, like, 50,000 likes. And it was, like, quote-tweeting an article about how DOGE had just cut billions of dollars from USAID, and I’m like, millions of people are dying in Africa without the foreign aid. And the tweet or whatever on Bluesky is like, “This is horrible. Effective altruism always leads to disasters. Don’t you see how horrible these people are?” Yeah. This is the exact opposite of effective altruism. Like, what do you think effective altruism is?

MAX

Yeah, there’s a— we’ll never live down that Elon Musk went to, like, one EAG one time. Yeah, it’s gotten worse every year. Oh man.

ROBI

Um.

MAX

Yeah, uh, what’s the most embarrassing thing you’ve donated to? I don’t know if this is like—.

ROBI

Most embarrassing thing—

MAX

I kind of mean, this is, like, you know— I don’t know if you gave to the— what’s it called— the PlayPumps?

ROBI

Right, um, honestly I don’t think I’ve got any funny answer here. No. Um, no, nothing. Everything I donate to is super effective.

MAX

That’s great to hear.

ROBI

Um, well, okay. Okay, maybe a spicy opinion. So you know how, like, for example, an effective charity is, like, 2-plus orders of magnitude as effective as some average charity? So if you just donate to random charities and you look at that portfolio and what you’ve achieved with those, like, 99% of donations, you’re just completely wasting your money, basically. And people get offended if you point this out. Like, how dare you imply that my cute neighborhood kitten shelter is not the most effective thing to donate to. Yeah, I think the Robin Hanson take is that, at the scale of an individual donor, for whatever your values are, you should not diversify. You should research as much as you have time for, or, like, bounded rationality, and then all of your donations should go to one thing, and anything else you do is just, like, sabotaging your own effectiveness.

MAX

So you’re not a hits-based giving guy?

ROBI

No, no— even if you’re— oh my God. Okay, you should just— you should simply— okay, you have an assessment of each different charity. You have an expected value and the variance. But at the level of an individual donor, you’re not saturating— like, you’re not donating enough that the charity you donate to fulfills its most urgent funding need and then becomes less effective than something else. Um, like, hits-based giving is good, but at the level of an individual, you should not be splitting it up. You should not be trying to, like, do the hits. Um, yeah, uh, my roommate— she really, like, intuitively resists this, um, even though we’ve tried to explain it to her many times, but it’s just very unintuitive to her. Um, like, she wants to donate to global health and animal welfare and AI. It’s just like, “But what if I’m wrong and the, like, animals are, like, more important than, um—” Well, okay, if you think the animals are more important, then with your, like, $10,000 of donations, you should put all that money into animal welfare. Um, and then she said, “But no, what about the, the people? I should at least donate something to—” Yeah.

MAX

I’m confident in that.

ROBI

It’s just like— yeah. But it’s just, like, a mathematical fact. You should not be splitting up.

MAX

Have you seen the meme that’s like, “I’m doing a billion calculations a second, but all of them are wrong”?

ROBI

No.

MAX

Okay. It’s like— maybe one day you’ll see it. Like, people reply with it and that sort of thing, but that’s, you know, maybe. So we have to hedge a little. Yeah.

ROBI

Another bad argument I hear in favor of diversifying individual donations is, like, “Well, you diversify your portfolio, why wouldn’t you diversify your donations?” But this is also wrong. Going from everything donated to one charity to everything donated to a couple of charities is not like going from a portfolio that consists entirely of one stock to an index fund— it’s like you already have an index. Let’s say, uh, stocks on average return 7% per year and bonds return 3% and savings accounts yield 1%. Um, this would be like you already have a diversified account with, um, like, 70% stocks and, like, 20% bonds and 10% cash, and then you go, “Oh, but I want to diversify, so I’m going to take a dollar out of stocks and put it into something with lower EV.” Like, that doesn’t actually help. Um, it’s just reducing your, um, expected value without improving the risk balance.
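
(A minimal numeric sketch of the portfolio analogy above, in Python, using the illustrative 7%/3%/1% returns and 70/20/10 allocation mentioned in the conversation; these are just the example figures, not recommendations.)

# Expected annual return under the illustrative figures from the conversation.
returns = {"stocks": 0.07, "bonds": 0.03, "cash": 0.01}

def expected_return(weights):
    # Weighted average of the assumed asset returns.
    return sum(weights[asset] * returns[asset] for asset in weights)

already_diversified = {"stocks": 0.70, "bonds": 0.20, "cash": 0.10}
# "Diversifying" further by shifting weight from the highest-EV asset to the lowest:
shifted = {"stocks": 0.60, "bonds": 0.20, "cash": 0.20}

print(f"{expected_return(already_diversified):.3f}")  # 0.056
print(f"{expected_return(shifted):.3f}")              # 0.050 -- lower expected value, same point for donations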

MAX

Yeah. Sure.

ROBI

Um.

MAX

Let’s see, what remaining questions do I have?

ROBI

How many do we have left?

MAX

Uh, I haven’t gone fully in order, so that actually makes it annoying to, uh, count.

ROBI

I can make more nuggets, but I don’t know if we have, um, more hot sauces.

MAX

No, I think we have like 4 questions left.

ROBI

Uh, um.

MAX

Should we do another nugget?

ROBI

Uh, sure. Okay, so, um— oh, I encourage you to, um, eat them, like, sauce-side down so you can taste it better. You want— like, flip it? Yeah, that’s what I’m doing. Okay, um, so this is Elijah’s Extreme Regret.

MAX

Okay, fantastic.

ROBI

Scorpion and Carolina Reaper. Okay, and, um, it’s on an Impossible Chicken Nugget, so enjoy it.

MAX

Yeah, that’s much hotter.

ROBI

Are you, um, extremely regretting eating this?

MAX

Not yet, but we’ll see how it— uh, it’s still, you know, you’ll get.

ROBI

Some hiccups in the podcast increasing.

MAX

Yeah, in spice.

ROBI

You didn’t put very much, uh, of the last few sauces on yours. I thought they were supposed to be, like, you know, doused in the sauce.

MAX

Well, you know, I’m not the guest.

ROBI

All right, well, yeah, but, um, that was delicious. I love that.

MAX

That was a good nugget. I’m glad it was hot. That is beyond where I would, uh.

ROBI

You know— Well, you’ve got some oat milk here.

MAX

I’m gonna drink some of this, actually.

ROBI

What do we got left?

MAX

Okay, um, what’s an essay or blog— I guess you’ve already kind of said one, but what’s another one? Forum post that you, uh, find really annoying but everyone keeps sharing? Ooh, um.

ROBI

An essay or forum post that I find really annoying but everyone keeps sharing? Yeah. I don’t know, um, I don’t think I have anything like that. Um, closest might be— so, you know Bentham’s Bulldog? Um, I mean, he’s really smart. Um, I respect his, um, moral philosophy takes. Prolific writer, like— yeah. Um, but he believes in God, and this opinion is just, like, really stupid. Like, um, yeah, and I think some of the stuff he’s written about, like, reasons for God is dumb. And he’s way more into philosophy than I am. Like, he’s got lots of arguments where I’m like, I can’t refute this specific argument, but I don’t think it’s worth my time to, like, dig into this. Um, yeah, I guess that’s the only stuff I’ve been annoyed with among, you know, the EA blogosphere.

MAX

Yep, that’s fair. Yeah, um, cool. Let’s see, um, what’s the best moral philosophy? I mean, I feel like I don’t know why I put that one down, because, I don’t know, what are you going to say?

ROBI

Yeah, um, so I have most of my, like, probability mass on utilitarianism being, like, the best way to act. Or— well, actually, um, I’m a moral anti-realist. I’m, like, 95-plus-percent confident. Like, I just don’t see how moral realism could be true, or, like, what it would even mean if it were true. Um, I’ve also never heard an argument— or, like, never seen any evidence— in favor of it. Like, as far as I’m aware, the only argument I know of in favor of moral realism is, uh, if God exists— like, divine command theory. Like, if God exists and he set the laws of the universe, fine, uh, in that case, sure, that would make sense. Um, every other argument I’ve heard is just, like, moral intuition. Like, oh, it seems like there probably are— like, clearly it’s wrong to murder, because I intuitively know it’s wrong to murder.

MAX

Yeah.

ROBI

Um, which is just like, okay, interesting claim. Uh, do you have any evidence for that? Also, like, evolution is a much better, uh, explanation for people having this intuition than, like, moral facts existing in some metaphysical sense that, like, interacts with your mind and then causes you to believe this. I don’t know.

MAX

Yeah.

ROBI

But, so conditional on— conditional on moral realism being true, I’m actually a deontologist. The only moral philosophy that seems viable to me if moral facts exist would be something like the non-aggression principle. I think conditional on moral realism being true, I’m, like, very deontologically libertarian: it’s immoral to, like, harm another sentient being, basically. Um, a lot of other moral philosophies don’t really make sense to me if moral realism is true. So, like, uh, I think I said this on Aaron’s podcast, but suppose moral realism is true and utilitarianism is true. This would be kind of— actually, this would be really weird. So for example, you meet a stranger, and unbeknownst to you, they really, really, really love the color yellow, but they really, really hate the color purple. In fact, purple was like—

MAX

Purple killed my father.

ROBI

A guy, a murderer wearing purple, genocided their ancestors or something, and seeing anything in the color purple will cause them vast anguish. You’re going to meet them, so you bring a gift. It’s, like, a thank-you card or a flower or something. If moral realism is true and utilitarianism is true, then you’re either being extremely moral or committing some heinous atrocity based on the random happenstance of this card you give them being purple or yellow, which strikes me as a bit nonsensical. So the only viable moral-realism kind of ethics I’ve thought of is, like: if someone is a sentient, conscious being, just, like, don’t hurt them. Um, or, like, don’t destroy their atoms or, uh, inflict pain on their mind or something. I don’t know.

MAX

Yeah, discount utilitarianism.

ROBI

Or discount— no, no, no, but it would be like, hurting people is immoral. Yeah. Um, and then anything else is supererogatory. Great. Thanks. Cool.

MAX

Um, do I have another question? Oh yeah, um, I guess maybe let’s eat the, the final nugget, and then you can say those, uh, repugnant conclusion takes you, uh, mentioned a little bit earlier.

ROBI

Um, so, um, we’ve got another Impossible Nugget, uh, my favorite kind of nugget, and, um, it’s got some Dave’s Hot Chicken Reaper seasoning and some— this is, um— did we put any of the Carolina Reaper batter on? Oh, basically it’s Carolina Reaper batter. Yeah.

MAX

I think I’m going to regret this.

ROBI

Cheers. I don’t think we put that much spice on it.

MAX

I don’t think I did.

ROBI

Yeah, the, um, Reaper-flavored chicken or cauliflower at Dave’s Hot Chicken is, um, that is beyond my spice tolerance. They put, um, I think capsaicin extract on it. It’s basically like getting pepper sprayed when you take a bite.

MAX

Yeah, that doesn’t really sound fun to me. I know someone will be like, “Ah!”.

ROBI

It is very fun, but, um, I threw up after taking 2 bites.

MAX

Okay, yeah, so, you know.

ROBI

Delicious though, in my— [LAUGHTER] masochistic opinion.

MAX

Yeah, I, uh— ooh, that does taste good.

ROBI

Are you sure you don’t want some more of— more of Reaper spice?

MAX

No, I have some on my tongue. I can feel that. Mmm, that’s tasty. Oh man. I should not be a permanent host for the show. I guess maybe that’d be funny, but usually I think the conceit is, like, the other person— yeah, the guest— but maybe it’s funnier if, uh, the host is just dying every time. Yeah, yeah, cool. Yeah, so what’s your, uh, repugnant conclusion take?

ROBI

Well, isn’t there the— do you know the Hilary Greaves paper about, like, um— [the paper being referred to is “Population Axiology,” by Hilary Greaves]

MAX

I feel like she’s, uh, you know, underrated in EA, sort of, uh—

ROBI

I feel like she was everywhere in EA at some point; like, every meetup I’d go to was, like, talking about a Hilary Greaves something-something.

MAX

Yeah, of course, that was, you know, when I was around.

ROBI

The one I was thinking of was, um— is it, um, impossibility theorems for, um— do you know what I’m talking about? So, like, if you have these assumptions, you get the repugnant conclusion, but if you try to behave any differently or construct some other axiology, it has other, worse—

MAX

Yep.

ROBI

Yeah. So I agree with this. I think if you assume anything other than utilitarianism, you end up with even worse problems than the repugnant conclusion. And also, I just bite the bullet. Like— I disagree with Scott on this. The repugnant conclusion is not repugnant. There’s the classic situation: imagine a planet with a million happy people, and then imagine a series of slightly modified planets where you take the people and replace them with twice as many people who are slightly more than half as happy. And then you go all the way down a very long series of planets until you get to people who are, like, just infinitesimally happy, and their lives are just barely worth living. Um, they, like, eat boiled potatoes and listen to elevator music. Um, but there are so many of them that, adding up all their epsilon happiness, you have a better planet than the million blissfully idyllic happy people. And people say, “Oh, this is obviously not better. No number of these people could be better than this.” Yeah, I think they’re just neglecting— or not taking the premises seriously at each step. If you are really replacing people with more than half as much happiness and doubling the population, you do have more total happiness. And I think people are underestimating, or, like, they have this vision in their head of those lives on the last planet being, like, worthless or negative. Actually, you have to remember, if you set it up like this, as stipulated, they are a little bit happy. Like, they have more total happiness. And I don’t think there’s anything wrong with, like, many people with a little happiness.
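
(A minimal numeric sketch of the chain of planets described above, in Python; the starting happiness level and the “slightly more than half as happy” factor of 0.51 are illustrative assumptions, as is the number of steps.)

# Each step doubles the population and keeps slightly more than half the
# per-person happiness, so total happiness keeps rising even as individual
# lives approach "barely worth living".
population, happiness = 1_000_000, 100.0

for step in range(10):
    total = population * happiness
    print(f"step {step}: pop={population:,} per-person={happiness:.3f} total={total:,.0f}")
    population *= 2      # twice as many people
    happiness *= 0.51    # each slightly more than half as happy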

MAX

Yeah, fair. Um, you know, I guess if you were behind the veil of ignorance, you’d prefer to be on the, uh, the smaller, happier planet, but if you were—

ROBI

No, no, no, no, okay.

MAX

Well, because if you knew, you wouldn’t be one of them.

ROBI

No, okay, excuse me. You’re not doing the veil of ignorance properly. So, like— so, Planet A, a million people with, um, 10 happiness, or Planet B, um, a billion people with 1 happiness each. So Planet B has more total happiness: a billion. Planet A has less, 10 million, but the average happiness is 10. You’re saying the veil of ignorance would, um— you would choose—

MAX

Well, what I was going to say is that, you know, if you’re behind the veil of ignorance, you’ve kind of already assumed that you’re going to exist.

ROBI

That’s right, yeah, yeah, that’s where I was going with this. So, um, if you had to choose between being a random person on Planet A and a random person on Planet B, obviously you pick Planet A. But I think the apples-to-apples choice is, um, a million chances to draw a ticket where you exist and have 10 happiness and 999 million chances to just not exist and have no happiness, versus a billion tickets where you exist with 1 happiness. In that case, if you set it up fairly, actually you should pick Planet B.
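
(A minimal expected-value sketch of the “fair” veil-of-ignorance comparison above, in Python, using the planet sizes and happiness levels from the conversation; the one-billion-ticket framing follows the setup described here.)

# Draw one of a billion "tickets" behind the veil of ignorance.
TICKETS = 1_000_000_000

# Planet A: 1 million people at happiness 10; the other 999 million tickets never exist (0).
expected_a = (1_000_000 * 10) / TICKETS   # 0.01

# Planet B: all one billion tickets exist at happiness 1.
expected_b = (TICKETS * 1) / TICKETS      # 1.0

print(expected_a, expected_b)  # Planet B wins in expectation, matching the total view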

MAX

Yep, yeah, nice. Um, have you heard of the Very Repugnant Conclusion?

ROBI

Uh, yes, but remind me what this is.

MAX

Uh, it’s the— so there’s suffering: it’s lots of small-happiness people plus suffering people, versus a happy planet, and the one with suffering is higher in total because there are just so many.

ROBI

Oh yeah.

MAX

Do you have, like— I don’t know, I feel like the problem here is something like, ah, I can bite the bullet that if you have just a little bit of happiness, you know, like, that’s fine, but now you’ve, like, also introduced all these people who suffer.

ROBI

This, like, kind of— yeah, if you were, like, a negative utilitarian. Yeah, I think there are a lot of problems with negative utilitarianism. Um, I don’t agree with that. I’m just, like, a total utilitarian— yeah.

MAX

Uh, I was gonna ask you if you have, uh, hot takes on—

ROBI

Mechanize?

MAX

Yeah, yeah.

ROBI

Um, yeah, well, um, Tamay and Ege have much longer timelines than I do. Um, Matthew I think has similar timelines to mine, uh, I think. Um, but I think they all have much lower P(doom). And so, um, while I wish they wouldn’t, like, you know, go do capabilities, um, I can’t fault them for it. Like, basically, if you think AI is almost certainly going to go well and you see this opportunity to earn trillions of dollars by automating the whole economy— and again, you think it’s going to go well— um, yeah, that makes sense to do. And I don’t disagree with them on factual premises. They are completely right that there is this opportunity here to automate the economy and earn trillions of dollars. I guess hate the game, not the player.

MAX

Sure.

ROBI

Yeah, um, yeah, but I wish people would stop doing capabilities until we can, like, you know, figure out alignment and whatnot.

MAX

Yeah, so true. Uh, cool. Well, yeah, thank you for donating and for, you know, recording.

ROBI

Yeah, um, happy to save the chickens. I hope, um, I hope maybe a few hundred or a few thousand of them are, you know, not, not suffering.

MAX

Yeah, cool.



Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

Pigeon Hour, by Aaron Bergman