Based Camp | Simone & Malcolm Collins

Is AI Overhyped? Is AI a Bubble? (New Models Don't "Feel" That Much Better, Why?)



In this episode, we delve into the current state of AI and discuss whether it's merely hype or a transformative force in society. The conversation touches on the economic impacts of AI, its advancements in fields like genetics and drug development, and how it's being adopted across various industries. The episode also addresses some criticisms and misconceptions about AI's capabilities and market value. Discussions include insights from industry leaders, practical applications, and the potential for AI to reshape the economy and everyday life.

Malcolm Collins: [00:00:00] Hello Simone. I'm excited to be talking to you today. Today we are going to be talking about whether AI is hype, whether AI is plateauing, whether AI is over. And by that, what I mean is: the head of our development team, Bruno, who comments a lot on the Discord and in the comments here.

So fans of the show will know him. He sent me an email that we're gonna go over as part of this sort of being like, okay, so here's some evidence that AI doesn't seem to be making the systems level changes in society that you had predicted it would make in the past, and that many other people are predicting it will make.

And when I, and we're seeing other people say this, when I go out and I interact with AI today. I really struggle to see how having this thing I can chat with, is that useful? It may be fun as like a chat bot or something like that. But I don't see its wider utility yet. Now we'll be going into the arguments around this because I think that there are some strong arguments, like the AI industry is making almost no money [00:01:00] right now.

You know, this industry is making not almost nothing, but almost nothing when contrasted with the investment that's going into it, right? And with the amount that we and other people talk about it as mattering. And then you've gotta think about all of this in the context of: yeah, but like 80 thou... well, oh, sorry.

Around 90,000 people just in the US tech sector had their jobs cut due to AI this year. You know? So come on, don't tell me it did not matter to them. Yeah. But so what we're going to be seeing here is, I think, that the way people are looking at AI and expecting AI to transform the economy is different from the way it actually is.

They're looking at how AI is useful to them instead of how AI will replace them. I'd also note here, on this question: Sam Altman, literally the guy who runs one of the largest AI companies, has said AI is a bubble right now. Right. And so people will come to me and they'll be like, well, you know, even he's saying it's a bubble.

And I'm like, I would say it's a [00:02:00] bubble right now. It is a

Simone Collins: bubble. It's obviously a bubble. He's a bubble right

Malcolm Collins: now. But the fact that a thing is a bubble doesn't mean it's not gonna transform society. Exactly. So if you go to the dot-com boom, right? The dot-com boom was a bubble, right? But the internet still transformed society.

The companies that won weren't the ones at the beginning of the dot-com boom; they were formed in the middle, like Amazon and Google and stuff like that. If you made the right bets on those companies... if anything, if you wanted to make the best bets possible, you'd wait for the AI bust and then invest in whatever survives.

If there is a traditional bust, you know. Keep in mind, what I mean by a bubble now is that a lot of people are investing in AI companies without understanding, in the same way as in the early dot-com boom, what the technology is actually good for and good at.

Simone Collins: Well, what kind of sucks is also that the AI companies I think are coming out of this,

they're hopefully not gonna be publicly traded. A big shift in AI tech booms, as far as [00:03:00] I see it, is that they're not something you're gonna see on a stock market. They're small, they don't have a lot of staff, they're not public. So our ability to participate in the upside is severely limited.

Malcolm Collins: Well, the other thing about AI development and, and you can, you know, back me on this, is we can see all these metrics that say that AI is supposedly getting better and smarter.

And yet when you consider, like, the latest model of Grok versus the last model of Grok, you don't go, "this is like 50% better." It doesn't feel that way to you. Same with OpenAI's models, same with a lot of these models. You interact with the most cutting-edge thing and you're like, this is marginally, like three or 4%, better.

But all the metrics are showing that it's massively better. So is this a problem in how we develop AI, how we measure AI, everything like that? I'm gonna be talking about that in this as well. I'm also gonna be talking about the study that Facebook put out saying that, basically, AI really isn't that smart.

No, no, sorry, not Facebook; I wanna say Apple maybe put this out. But if it was Apple, it shows why Apple has not [00:04:00] developed anything good in the AI space: because the people they have working on it are just not that bright.

Simone Collins: I do have to say though, I'm really excited about their smart house play.

I think if they're gonna have any win, it's them being the ones that make everything that's AI in your smart home connected, working seamlessly, and really pretty. They're the ones capable of pulling that off.

Malcolm Collins: So we're gonna go into an article. Start by going into an article in Futurism.

Nice. So you can see that this isn't just Bruno making these claims. The article was titled "Scientists Are Getting Seriously Worried That They've Already Hit Peak AI." Speaking to The New Yorker, Gary Marcus, a neuroscientist and longtime critic of OpenAI, said what many have been coming to suspect: despite many years of development at a staggering cost, AI doesn't seem to be getting much better.

Though GPT technically performs better on AI industry benchmarks, an already unreliable measure of progress, as experts have argued, the critic argues that its use beyond anything other than a virtual chat buddy remains unlikely. Worse yet, the rate at which new models are improving against these dubious benchmarks appears to be slowing down.

"I don't hear a lot of companies using AI saying 2025 models are a lot more useful to them than the 2024 models, even though the 2025 models perform better on a bunch of benchmarks," Marcus told the magazine. Now here I'd note: when he says he doesn't see AI being used for anything other than a chat bot, which is one of the things we're gonna get into later in this episode, I'm gonna be like, well, then that's just because you're not familiar with any industry other than your own.

AI has already invented drugs that are going through the production cycle that are likely to save millions of lives, and not just one drug; countless drugs at this point. Yeah. AI models trained on the human genome have already made numerous genetic discoveries. It's about the way you're using AI, and we'll go through these discoveries in a bit.

It almost feels to me, with some of these people, like that Monty Python sketch, you know: what have the Romans [00:06:00] ever done for us? What has AI ever done for us? And it's like, well, okay, yes, they invent lots of drugs, and yes, they help with drug discovery, and yes, they help with

Speaker: But apart from the sanitation, the aqueduct, and the roads... Irrigation, medicine, education. Yeah. All right, fair enough. And the wine. Yeah. All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us? Brought peace. Oh, peace.

Malcolm Collins: Actually, a fun thing here, just if you're like, what is a way I'm not thinking about using AI?

I'll talk really quickly about the way I use AI in running my company. Brita is actually the one who implements this: whenever we assign a task to one of the programmers, we ask an AI how long the task should take to complete. And then we benchmark how long it takes them to complete the task against how long the AI thinks it will take.

And [00:07:00] we can create weighted averages to see sort of how productive a person is. Obviously this isn't gonna be perfect, and AI is gonna overestimate a lot, but it does create a relatively accurate benchmark that we can use to normalize who the best performers on our team are, which is very interesting.
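As a rough sketch of what that weighted-average benchmarking could look like in code (the numbers and the exact weighting scheme here are illustrative assumptions, not our actual system):

```python
def productivity_score(tasks):
    """tasks: list of (ai_estimate_hours, actual_hours) pairs.
    Returns a weighted ratio; below 1.0 means faster than the AI's estimates."""
    total_weight = sum(est for est, _ in tasks)
    if total_weight == 0:
        return None
    # Weight each task's (actual / estimate) ratio by the estimate itself,
    # so one tiny task can't dominate the score. Algebraically this reduces
    # to total actual hours / total estimated hours.
    return sum(est * (actual / est) for est, actual in tasks) / total_weight

# Hypothetical data: (AI estimate in hours, actual hours taken).
dev_a = productivity_score([(8, 6), (2, 3), (5, 4)])   # mostly beats estimates
dev_b = productivity_score([(8, 12), (2, 2), (5, 9)])  # mostly misses them
print(round(dev_a, 2), round(dev_b, 2))  # 0.87 1.53
```

Estimate-weighting is one reasonable choice here; it keeps a developer from gaming the score by clearing many trivial tickets.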

I haven't heard people using AI in this way. And this is from Bruno's email: Ed Zitron has focused heavily on the financial side. He notes OpenAI's current annualized revenue sits around 5.26 billion and Anthropic's at around 1.5 billion.

But expenses remain outsized. OpenAI alone may be spending roughly 6 billion annually on servers, 1.5 billion on staff, and another 3 billion on training. Investors have been asked to sign acknowledgements that profitability may never materialize. Against that backdrop, valuations in the hundreds of billions look speculative at best. So.

This is really important. [00:08:00] 5.26 billion and 1.5 billion for two of the major AI companies are laughably small. Yeah. In terms of what they're making, how are they getting to hundreds of billions in valuation, right? Like, why aren't they making more money? And why is this true sort of across the industry?

'cause we are seeing this across the industry. Before I go further, I'm gonna explain this phenomenon because this is actually an important phenomenon. The reason why they're making so little actual money is because their long-term potential is so high.

Simone Collins: Hmm.

Malcolm Collins: This is, well this is

Simone Collins: how it's been for pretty much every tech startup over the past

20-plus years.

Malcolm Collins: Yeah. Yeah. So for people who understand how VC works: VC comes in, it floods a space that it thinks is gonna be worth a lot in the future. And then, because it's flooding the space with so much money, like because OpenAI and Anthropic are getting so much money to compete against each other, they have to be rock bottom in terms of the prices they're offering, [00:09:00] or even offer things for essentially free versus the cost to produce 'em.

And people can be like, wait, but then why would they even do that? Right? They're trying to win in a market where you, the customer, are not actually the primary customer; the venture capitalist is actually the primary customer. And where, for them, you as a user are providing them value.

You know how, like with Google, you provide them value as a user because they get like data from you that they sell to other people and they get like, ads from you. But like the data from you is more important. Like some companies like make their money off of the data they collect from you. Okay? You as a user are actually a data point that these companies are able to trade for cash from venture capitalists.

That is why, actually, it's a fairer deal than it looks. It's not like they're cheating you or you're cheating them. They are trading the fact [00:10:00] that you are using them to say: look, I am beating out the other major companies. Hey, you investors know eventually somebody's going to eat this industry.

Okay. So that's important to note. This is something you'd actually expect if things are going well. So, to go back to his email here. And none of this is a stupid thing to ask; I'm not saying Bruno is stupid for asking this question, right? It's easy to ask: why is it valued so much when it's making so little?

Why haven't tons of profits accumulated in this industry yet? To go back to his email: for context, compare this with Amazon Web Services. AWS launched in 2006, reached cost-revenue parity in just three years, and in its first decade accumulated roughly 70 billion in costs. By contrast, Amazon itself spent around 105 billion in just the last year on AI, per Zitron.

So in AI, the biggest company in the space is making 5.26 billion a year, okay? And he's pointing out here that in its first [00:11:00] decade, Amazon Web Services accumulated 70 billion. And then he points out that what Amazon has spent on its AI, 105 billion, is more than what Amazon Web Services accumulated in its first decade.

Gosh. Right. Zitron underscores that the entire generative AI field, including OpenAI, Anthropic, Midjourney, and others, produces only about 39 billion in revenue. Mm-hmm. That's less than half the size of the mobile gaming market and slightly above the smartwatch market at 32 billion. These comparisons illustrate the scale mismatch between AI valuations and demonstrated ability to generate revenue.

Mm-hmm.

And so this is a really apt comparison. It's bringing in about as much as the smartwatch market.

Simone Collins: That's, yeah. Wow. Putting that into perspective, that's pretty sobering. Though smartwatches are so pervasive now, maybe not as sobering as you might initially

Malcolm Collins: think. Yeah. Well, I mean, it should be more, given how afraid people are of it, how much people are talking about it.

Right. [00:12:00] There is also the issue of product-market fit. LLM-based tools are not meaningfully differentiated from one another. The average user tries one, thinks "this is kind of cool," and then stops using it. This raises a concrete question: how would one sell these products in a way that justifies ongoing subscription fees?

How do you think, did they really stop

Simone Collins: using them? Like, here again is where I question it and also like the,

Malcolm Collins: the smartwatch market. No, no, this is factually wrong. If we look at usership rates, they are shockingly high.

Simone Collins: Okay. Okay. Because I'm just like, you're not questioning this. And I'm like, ah, wait a second.

Like,

Malcolm Collins: no, but I, I understand how somebody could feel that way if they're just thinking about like, especially if AI hasn't caught you or you haven't found product market fit for AI within your life.

Simone Collins: Yeah.

Malcolm Collins: You're just gonna walk away from it, right? You're gonna be like, what's the point? Right. Yeah. It's also really unfair

Simone Collins: to compare this to the smartwatch market as it is today, because [00:13:00] the smartwatch market is in its "now we make money from this" era.

So, a lot of people wear Oura rings, right? When they first came out, you didn't pay a subscription for them. Now you have to; you can't have one without it. I wear a Fitbit. I don't pay a monthly subscription for it, but I'm constantly upsold on it. So now they've switched into the monetization phase, but at the beginning it was just: no, get this on people's bodies, and then try to make money from them.

And right now AI is in its "get this into people's workflows and lifestyles" phase. So of course it's not making that much money.

Malcolm Collins: Yeah. Well, and because that's what VCs are trying to do. 'Cause they're trying to capture the market and then make the money by being the giant company that's left. Yeah. They're not being foolish about this.

Like, this actually makes economic sense. Yeah. I mean, it also happened

Simone Collins: with Uber and Lyft. They used to pay drivers really well and only have really nice cars; they were probably running at a loss. They charged so little, they were running at a huge

Malcolm Collins: loss for a really long Yeah. And

Simone Collins: so we had this generation,

which was so nice, where you had this [00:14:00] VC-subsidized luxury lifestyle with super affordable food delivery and Ubers and smartwatches. But that was because of the growth phase. And right now we're enjoying this short period where we don't have to pay a lot for these AI services. I'm

Malcolm Collins: actually going to argue that the VCs here might be making a mistake.

And the mistake that they might be making, and this depends on stickiness to particular AI models, is they think that they're developing something like the next search engine or the next Uber or Lyft, when what they're actually developing is a commodity. By that, what I mean is, I think the way that most people are going to interact with the very best

deployments of AI is going to be through skins, basically through windows, like what we're building with our Fab AI. Right. Oh,

Simone Collins: oh. And then, yeah, people aren't necessarily going to be loyal to ChatGPT or Grok. They're gonna use a variety of different services that will interchange Grok and ChatGPT based on

Malcolm Collins: whichever is best and cheapest at the time.

Yeah.

Simone Collins: Kind of how people switch out, at least in the [00:15:00] United States in many places, you can switch out the utility company that you buy from. So you can buy from this utility company or you can buy from this one that only does green energy or whatever. And you choose whichever you like based on values and based on price.
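If models really are a commodity behind interchangeable skins, the routing Simone describes is simple to sketch. The provider names, quality scores, and prices below are entirely made up for illustration:

```python
# Hypothetical backends a "skin" could route between; numbers are invented.
PROVIDERS = [
    {"name": "model_a", "quality": 0.92, "price_per_1k_tokens": 0.010},
    {"name": "model_b", "quality": 0.90, "price_per_1k_tokens": 0.002},
    {"name": "model_c", "quality": 0.95, "price_per_1k_tokens": 0.030},
]

def pick_provider(min_quality=0.90):
    """Route to the cheapest backend that clears the quality bar,
    like choosing a utility company on price once service is adequate."""
    good_enough = [p for p in PROVIDERS if p["quality"] >= min_quality]
    return min(good_enough, key=lambda p: p["price_per_1k_tokens"])["name"]

print(pick_provider())      # cheapest acceptable option: model_b
print(pick_provider(0.93))  # only model_c clears this bar
```

The point of the sketch: the user-facing product is the router, and the underlying model is swappable, which is exactly what makes the models a commodity.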

Malcolm Collins: Yeah. And we actually already see this in the data. People switch between them: which AI is the top-used one changes a lot; which one is known as the best one changes a lot. And, you know, for example, right now the main AI I use is Grok, right? If OpenAI had a model that I just thought was dramatically better, I would switch to them, which has occasionally happened.

Claude used to be my main AI, then OpenAI was my main AI for a while. Now it's Grok. And I may switch again, right? Like I switch all the time which AI is my primary one. Now, another argument he made in this email that I thought was really interesting: the iPod, it's a thousand songs in your pocket; the iPhone, it's the internet in your pocket.

Like, what is AI? AI [00:16:00] in your pocket, right? And I think that this framing actually is a big mistake, and it's one of the reasons why people are not understanding the value of AI. Hmm. They're thinking about AI's value to them, not AI's value to, say, genetic researchers or certain groups of programmers, et cetera, right?

Like, has AI replaced a job? We often talk about how AI can probably replace, I guess, 25% of law clerks now.

Speaker 3: Hmm.

Malcolm Collins: Now, it hasn't happened yet, but it definitely has the capacity to do that. And you're like, well, what if it makes mistakes? And I go: then just put it in a chain so that it checks for those mistakes.

This is one of the reasons, when people are like, well, what about hallucinations? And I'm like, hallucinations literally don't matter. First, they don't happen that much in current-model AI. Yeah. A fan was like, oh, well, I don't trust you guys 'cause you get some of your information from AI.

And I was like, excuse me, bro. Do you think that the average thing you read from a reporter is gonna be more accurate than the average thing you read from an AI? Yeah, at this

Simone Collins: [00:17:00] point it's not hallucinations, it's sourcing, like from sources that get it wrong. And we check our sources, like we check the sources that AI cites, but we can't always take the time to figure out how reliable those sources are.

We're like, well, the New York Times reported on it. Right, but they're sometimes wrong. The point I'm

Malcolm Collins: making here is that inaccurate information, like twisted information, is more likely to come from a New York Times article than a Grok 4 output. And I would bet that this is something you could even look at statistically, right?

Because it's not that there isn't a political bias within AI; it's just less extreme and distinct than the political bias within the reporter class. Mm-hmm. And then there's the amount you can reduce hallucinations just by doing one pass. By that, what I mean is you have the AI output an answer.

Then you take that answer and put it into a different AI, with the question being: is anything in this hallucinated or wrong? This is an output from another AI. You do [00:18:00] that, and the probability, I'd argue, is like 0.01 that you're gonna have a hallucination in whatever that output is, as long as you're using high-end AI models.
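A minimal sketch of that one-pass verification chain. The model functions here are stubs standing in for real LLM API calls, and the "OK" sign-off protocol is an assumption for illustration, not any vendor's actual interface:

```python
def generator_model(question: str) -> str:
    # Stub for the first LLM call; a real version would hit an API.
    return f"Draft answer to: {question}"

def verifier_model(prompt: str) -> str:
    # Stub for a second, different LLM asked to audit the first.
    # Assumed protocol: it returns "OK" or a corrected answer.
    return "OK"

def answer_with_verification(question: str) -> str:
    draft = generator_model(question)
    audit_prompt = (
        "This is an output from another AI. "
        f"Is anything in it hallucinated or wrong?\n\n{draft}"
    )
    verdict = verifier_model(audit_prompt)
    # Keep the draft only if the second model signs off; otherwise
    # surface the verifier's correction instead of the draft.
    return draft if verdict.strip() == "OK" else verdict

print(answer_with_verification("What is a KV cache?"))
```

The key design choice is using a *different* model as the checker, so the two models are unlikely to share the same blind spot.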

But the problem with being like, you know, if you have the iPod or the iPhone and you're like, it's this in your pocket, so what is AI to you, the end consumer? I think this shows a misunderstanding of AI's role within the market. AI is a tool that, at its most productive, replaces human beings.

AI is a simulated intelligence. It's a simulated human being. That's, that's fundamentally where it's most valuable. It's when it can replace an entire call center. You know, that's like a hundred million jobs if you replace that, when it can replace an entire coder by making other coders more efficient.

When it can replace legal clerks. One of the things he asked me in this email as well is, you know, when does the data come in where you change your mind on how much AI is gonna change the economy? Like, would you need to see things [00:19:00] stop moving as fast in the field? Would you need to see hurdles begin to come up?

And I'm like, even if, and we'll go over the potential hurdles to continued AI development, even if one came up, even if AI development completely stopped where it is today, most of my predictions on how much it's going to transform our society would stand. By that, what I mean is, using multi-pass AI,

you should be able to replace about 25 to 35% of the legal profession right now. And yet that hasn't happened yet. You should be able to completely replace 25 to 35% of government bureaucrats right now, and yet that hasn't happened. Accountants, and yet that hasn't happened. Right. Copywriters. And when I point this out, I point to my own life, right?

You're like, AI does not replace professions. And I'm like, if you are watching this podcast, you are participating in something where AI has replaced professions. Yeah. Because we had an earlier iteration of this podcast; you can go back in our podcast history before Based Camp, where we paid for an [00:20:00] editing team and we paid for a card creation team for title cards.

Those tasks are now both done by AI. Those are two people's jobs that are now done by AI, and dramatically better. Well, you still,

Simone Collins: you still use software and you do the editing, and I still use AI image generation and I do the title cards. But yeah, like AI is what makes it possible. I couldn't

Malcolm Collins: do it without ai.

Simone Collins: Yeah, no. Same. I couldn't. No way.

Malcolm Collins: Yeah. So I think that that's really big. And then keep in mind how much AI transforms the economy if Elon's move to make robots that work with AI for factory labor and stuff like that pans out. Mm-hmm. Everyone initially was like, oh, this is so silly looking, et cetera.

But apparently they're having a lot of success with this, from what I've heard through the grapevine, friend networks, stuff like that. If this works out, now it's every factory job, right?

Simone Collins: Yeah. Well, I've also read that China is investing heavily in [00:21:00] AI enabled hardware as well. So things like robots.

So, you know, it's not like only one person is trying this. Plus, Boston Dynamics has been at this forever. Yeah. They're going to be major players, and absolutely there's gonna be a physical element to this.

Malcolm Collins: but Boston Dynamics... I actually feel like this is very much like when drones came out, right?

And everyone's like, well that's cool, but that's just a toy, right? Oh,

Simone Collins: yeah. Now it has completely changed warfare. Yeah. And everyone's like,

Malcolm Collins: oh gosh, tanks largely don't work anymore within this new model of warfare, and we need to completely change the way we fight wars. Drones were a toy until they weren't, right?

Yeah. Yeah. Very much the same with AI and where things are going.

Simone Collins: Yes. Absolutely.

Malcolm Collins: And I'd also point out here, when you're like, well, okay, but what other industries could AI disrupt other than genetics and science and drug development and copywriting? Well, a big one: my [00:22:00] cousin owns the company that created the movie Here, which takes Tom Hanks and then puts him at different ages and in different environments.

And they're using AI to do this. They'll do viral stunts all the time where they'll create TikTok reels of various celebrities that get millions of views. But it's faked, right? It's faked with their faces. If you can simulate an actor, that's a bunch of industries that you've just nuked, right?

Simone Collins: Yeah. And I mean, this is already happening. I was just listening to a podcast on how acting has been disrupted already, in that production companies are now making money more off the IP and the concept than off the actors, which is why we see so many more actors have side hustles and create companies and start investing and do all these commercials and have a clothing line or a phone company.

Because yeah, this whole industry has changed. So I think we also aren't recognizing how much many industries have already [00:23:00] fundamentally changed with only the beginnings of tech-enabled industry shifts away from key-man risk, key-man risk being defined as any company depending on particular reliable humans for its financial wellbeing.

People have been trying to use tech to render key-man risk obsolete for a very long time, and AI really handles that well.

Malcolm Collins: Yeah. So I'll note here, now we're gonna go... well, that was key-man risk in movies and stuff like that.

Simone Collins: With tons of other things too, though. I mean, we don't

Malcolm Collins: know if it can do it with competence yet.

But I mean, keep in mind, we'll be going over the AI that came in second place in that coding competition and stuff like that; AI can clearly handle very advanced tasks. Yes. But one of the things that's often hidden if you're only colloquially using AI is how rapid the recent adoption of AI within corporations has been.

So if you look at, and I'm gonna put a graph on screen here: AI usage at work continues a remarkable growth trajectory. In the past [00:24:00] 12 months alone, and this is for 2025, so recent, usage has increased 4.6x. That's a 360% increase in usage in 12 months. Damn. And over the past 24 months, AI usage has grown an astounding 61x. 61x growth in usage.

This represents one of the fastest adoption rates for any workplace technology, substantially outpacing even SaaS adoption, which took years to achieve similar penetration levels. Now we're gonna go over another graph here. This is showing AI development, and this isn't actual usage of AI; this is AI in medical devices approved by the FDA.

So you see it is shooting up. Now, unfortunately, it only goes to 2023, but I doubt this trajectory has slowed down much.

Simone Collins: Yeah.

Malcolm Collins: But I want to look [00:25:00] now at a few metrics on adoption within companies. So, if you look at organizational AI adoption, and this is from the Stanford AI Index:

In 2023, 55% of organizations had adopted AI. And by 2025, it had jumped to 71%. Now note here, we're reaching saturation on adoption by many measures of AI, which is a potential problem. But we'll get to that. I think what you're seeing is people are adopting it, but they still don't really know how to use it yet.

Right? Like, if I say AI won a coding competition, people are like, wait, how could I ever get AI to code like that for me? Right. And I'm actually sending you the model that they used. They used a sort of chained model, and the way the chained model worked is it had multiple models in an engine, where you would have one model that was asked to plan what to do next.

Then a model that was asked to code based on that plan. Then a model that would evaluate the code that had just been created. Then a [00:26:00] model that attempted to improve the code just created. Then a model that reviewed all of that, planned again, and moved to the next stage. Mm-hmm. And this is something that we are building.
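The plan, code, evaluate, improve loop described above can be sketched as simple control flow. Each function here is a stub standing in for a separate LLM call; the score threshold and round count are made-up illustration values, not the competition system's actual parameters:

```python
def plan(task: str) -> str:
    return f"plan for {task}"                  # stub planner model

def write_code(plan_text: str) -> str:
    return f"code implementing ({plan_text})"  # stub coder model

def evaluate(code: str) -> float:
    # Stub judge model returning a quality score in [0, 1].
    # A real version would run tests or ask another LLM to grade.
    return 0.9

def improve(code: str) -> str:
    return code + " [refined]"                 # stub refiner model

def solve(task: str, max_rounds: int = 3, good_enough: float = 0.95) -> str:
    artifact = write_code(plan(task))
    for _ in range(max_rounds):
        if evaluate(artifact) >= good_enough:
            break  # the judge is satisfied; stop iterating
        artifact = improve(artifact)
        # Per the description above, the engine re-plans over the
        # improved artifact before moving to the next stage.
        artifact = write_code(plan(artifact))
    return artifact

print(solve("parse a log file"))
```

The point is that no single call needs to be brilliant; the loop structure, with a separate critic and refiner, is what lifts the output quality.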

'Cause if you were like, well, how would I do this? Our fab.ai is going to allow you to build chained models like this in a very easy way. So just wait, and you'll be able to do this yourself using multiple AI models in the very near future. Now, generative AI adoption: for 2023, you have 33% of companies using it, and 75% this year.

And this is from Coherent Solutions' 2025 trends report and McKinsey. If you look at the AI user base, we go from a 100 million active user base in 2023 to 378 million users globally. This is Forbes 2025. If you look at job impacts, there were no reported job impacts in 2023. In 2024, it looks like 16 million people likely had their jobs automated by AI.

And this year it looks like [00:27:00] 85 million jobs will be replaced. And this is from Demand Sage 2025 and the AI Jobs Barometer. Now I'd also note here, people are like, well, AI has reached a plateau, and we're gonna go over where AI has sort of plateaued in its growth. But this is actually kind of an illusion created by the way that we're measuring AI growth.

Speaker 3: Hmm.

Malcolm Collins: But one of the things that we've actually been seeing is significant advancements to the actual underlying model, which lead to jumps in growth within some areas. While I will not say I was wrong about AI, because I don't think I was, where I will admit I was wrong was about DeepSeek not mattering.

DeepSeek has been very diligent in publishing how they do stuff. Despite being a Chinese company, they've been very open source in how their new model works, so we understand how they basically reinvented the transformer model in a way that has a lot of advantages. This is something that's been delivering significant bumps even over this last year.

So to go over this. [00:28:00] They invented something called multi-head latent attention, MLA. MLA is a modified attention mechanism designed to compress the KV cache without sacrificing model performance. In traditional multi-head attention, from the original transformer architecture, the KV cache grows linearly with sequence length and model size, limiting scalability for long-context tasks, e.g., processing a hundred thousand tokens. MLA introduces low-rank compression across all attention heads to reduce this overhead, making inference more efficient while maintaining or even improving training dynamics. It basically makes training way cheaper and is how they achieved what they achieved.
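A rough numpy sketch of the core idea: cache one small shared latent vector per token instead of full per-head keys and values, then re-project at attention time. The dimensions, and therefore the 16x ratio, are illustrative assumptions, not DeepSeek's actual configuration:

```python
import numpy as np

# Illustrative sizes, not DeepSeek's real dimensions.
seq_len, d_model, n_heads, d_head, d_latent = 1024, 512, 8, 64, 64

hidden = np.random.randn(seq_len, d_model)

# Standard multi-head attention caches full K and V for every head:
full_cache_floats = 2 * seq_len * n_heads * d_head

# MLA instead caches only one compressed latent vector per token.
W_down = np.random.randn(d_model, d_latent)            # shared down-projection
W_up_k = np.random.randn(d_latent, n_heads * d_head)   # up-projection for keys
latent_cache = hidden @ W_down                         # this is all we store

mla_cache_floats = seq_len * d_latent
print(full_cache_floats / mla_cache_floats)  # cache is 16x smaller here

# At attention time, per-head keys are reconstructed from the latent:
keys = (latent_cache @ W_up_k).reshape(seq_len, n_heads, d_head)
```

Because the cache is what dominates memory at long context lengths, shrinking it is what makes inference over very long sequences cheaper.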

Now I'll show another graph on screen for people who don't think we're making advancements. This goes only from 2022 to 2024, okay? So keep in mind, I'm not going distantly into the past to show massive improvements, right? This is the smallest AI model scoring above 60% [00:29:00] on the MMLU, from 2022 to 2024.

And you can see here, now we're at Phi-3 Mini. But what's really cool here is when you see the big jump: this happened in late 2023, with Mistral 7B. In our own AI work, I've actually found Mistral 7B astoundingly good, given how inexpensive it is to use. Ooh, we might be able to sort of chain the Mistral models, I'm thinking, to get responses that are near the quality of Grok 4, even though it costs one-fiftieth as much to run.
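The chaining idea floated here, routing queries to a cheap model first and escalating only when it isn't confident, is usually called a model cascade. A minimal sketch, where `call_cheap_model` and `call_expensive_model` are hypothetical stubs invented for illustration (a real version would call actual model endpoints):

```python
def call_cheap_model(prompt: str) -> tuple[str, float]:
    # Hypothetical stub: a real version would call e.g. a Mistral 7B
    # endpoint and return (answer, confidence). Faked here for illustration.
    if "easy" in prompt:
        return "cheap answer", 0.95
    return "cheap guess", 0.40

def call_expensive_model(prompt: str) -> str:
    # Hypothetical stub standing in for a frontier-model endpoint.
    return "expensive answer"

def cascade(prompt: str, threshold: float = 0.8) -> str:
    """Answer with the cheap model when it is confident; escalate otherwise."""
    answer, confidence = call_cheap_model(prompt)
    if confidence >= threshold:
        return answer  # most queries stop here, cutting average cost
    return call_expensive_model(prompt)

print(cascade("easy question"))  # served by the cheap model
print(cascade("hard question"))  # escalates to the expensive model
```

The economics work whenever the cheap model confidently handles most of the traffic, so the expensive model only sees the hard tail.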

Simone Collins: Wow.

Malcolm Collins: So, yeah, very fun to see how we might be able to attempt that. Now let's look at how many employees use AI, because I wanna keep this all very recent so people can see this is actually happening today. So I'm putting a graph on screen here, which shows how many employees use AI tools, contrasting 2024 with 2025, in financial services.

Just in the last year it went from 4.7% to 26.2%.

Ooh. Okay.

In healthcare, 2.3% to [00:30:00] 11.8%. In manufacturing, 0.6% to 12%. And where you see big ones: retail, 1.1% to 26.4%.

Simone Collins: Oh, wow.

Malcolm Collins: And you, and you can look at the others here, but it's, it's huge. Right. So now I'm gonna put up a graph on screen here of different types of AI tasks and how they have jumped in them.

This is from the Stanford AI Index, select technical performance benchmarks versus human performance. And what you will notice here is, where human performance is the hundred percent mark, AIs have been shooting up in their proficiency across the board. But you also notice here that it appears AI gets really dumb after it passes the human benchmark, like it stops going up as quickly.

And then here we have "AI benchmarks have rapidly saturated over time." So here we have a number of different AI benchmarks, and you can see they all sort of taper off after human level. And this creates an illusion for a lot of people that once AI gets smarter than a human, it stops getting smarter after [00:31:00] that.

And what's actually happening is the benchmarks that we created are getting saturated, because we didn't have to deal with entities this smart before. And humans are unfortunately very bad at telling when an entity is significantly smarter than them. Where you can see this really loudly is with OpenAI. Oh, by the way, any thoughts?

I've been just rattling here, Simone.

Simone Collins: No, I'm really enjoying this. But also, I've had trouble comprehending why people think AI is plateauing, so...

Malcolm Collins: Well, I mean, I do think the perception, like the current OpenAI model I'm using doesn't feel that much better than the previous models. Hmm.

And in some cases even worse, right? I can understand that. I can understand somebody being like, what do you mean this is 50 or 60% better? It feels 3 or 4% better.

Simone Collins: based on how they use it. Sure,

Malcolm Collins: Yeah. I understand your snarky remark there; you're accurate, Simone. But I think if you wanna see where [00:32:00] you can see this really loudly, look at the difference between the special version of ChatGPT 4 and ChatGPT 5, and all of the AI romance people.

If you watch our episode on the AI dating people, they're so mad. They're so mad because it no longer talked to them like a dumb romance author. It didn't put a bunch of emojis in things. You see none of the florid

Simone Collins: poetic language.

Malcolm Collins: Yeah. You see this on the meme where people are making fun of it, where it doesn't like give a bunch of emojis and flowery stuff when somebody gives a baby announcement.

It's just like, congratulations, have fun. Where the other one used to do, like, welcome to the bipedal moment, you're gonna have a little one running around, oh my God, oh my gosh, but really, I'm so excited for you. Basically, it was acting like an idiot. But unfortunately, your average person's intelligence level capped out at GPT-4.5.

And so when AI became smarter and more sophisticated, I mean [00:33:00] intellectually sophisticated, that's the word, and understood that this is, you know, not an appropriate way to communicate with a normal person, you don't send them long lavish love poetry, right?

Unless you're prompted to intentionally be cheesy, it stopped doing that, and people freaked the F out. So in many ways, one of the phenomena we're seeing here is that people stop being able to judge how smart an AI is when the AI is significantly smarter than them.

Speaker 3: Hmm.

Malcolm Collins: Now to note here how much we have saturated our benchmarks at this point.

Here I am reading from a Substack post, by Ash K. Ari or something, called "No, AI Progress Is Not Plateauing." And he notes here, talking about one of the metrics that they were judging on, and to their credit, they created a really difficult benchmark: when they released this benchmark,

even the smartest AI models were only able to solve 2% of the problems. This was in November 2024. So this [00:34:00] post came out a little bit ago, many months ago at this point, right? So in November 2024, it could solve 2% of the problems.

And here's a graph of how many it could solve. Great, except, only a two-month time difference later, in December 2024, OpenAI announces o3, so keep in mind this was not the o4 model yet, their smartest model at coding and math. How did it do? It got 25% right.

Now, I note here that 99.9% of the population cannot solve even 1% of the problems on the FrontierMath test. Yeah, these are really difficult tests. And here we have an AI that solved 25% of it. Though five years ago, the state-of-the-art AI was GPT-2, which could sometimes write a fully coherent paragraph in English.

And if we look here, we can see another test being saturated. This is ARC-AGI semi-private v1 scores over time. And you can see we went from basically getting [00:35:00] none of it right, and when I say none of it, I mean like 2 to 3%, with GPT-4 in 2023, to getting near a hundred percent in 2025.

So they have to shut it down and create a new test.

Simone Collins: Yeah. And the benchmark used to just be, hey, could I tell the difference between you and a human in conversation? We just keep moving the goalposts.

Malcolm Collins: Yeah. Yeah. So, we are now gonna go to this AI competition for coding. Right. Okay. We talked about like this multimodal model that did really well.

This happened recently, right? So what was this contest that I'm talking about? What happened? The contest focused on creating good-enough heuristics for complex, computationally intractable problems, like optimizing a robot's path across a 30-by-30 grid with the fewest moves possible. Under strict rules: no external libraries or documentation, identical hardware for all, and a mandatory five-minute cooldown between code submissions.
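For a sense of what "good-enough heuristics" means here, a toy sketch: the real contest tasks add obstacles and scoring rules that make exact optimization intractable, but on an empty grid, a greedy step-toward-the-goal heuristic already achieves the Manhattan-distance optimum. The grid setup below is invented for illustration:

```python
def greedy_walk(start, goal):
    """Toy greedy heuristic on a grid: always step toward the goal.

    Real contest problems add obstacles and scoring rules that make the
    exact optimum intractable, which is why competitors (and the AI)
    search for good-enough heuristics instead of exact solutions.
    """
    x, y = start
    gx, gy = goal
    moves = []
    while (x, y) != (gx, gy):
        if x < gx:
            x, step = x + 1, "R"
        elif x > gx:
            x, step = x - 1, "L"
        elif y < gy:
            y, step = y + 1, "U"
        else:
            y, step = y - 1, "D"
        moves.append(step)
    return moves

# On an empty 30x30 grid this hits the Manhattan distance: 29 + 29 = 58 moves.
print(len(greedy_walk((0, 0), (29, 29))))
```

Once obstacles appear, a heuristic like this stops being optimal, and the contest becomes about how cleverly you can patch it within the time limit.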

A Polish programmer named Przemysław Dębiak, known online as Psyho, who is a former OpenAI [00:36:00] employee, so, you know, really, really smart people were competing in this competition, took first place after a grueling 10-hour marathon session. The OpenAI model that debuted there finished a close second, with Dębiak edging it out by 9.5%.

Final scores: 1.81 trillion points for the human versus 1.65 trillion for the AI. The AI beat 11 humans in total, so that was the rest of the field right there. The event featured the world's top 12 human coders as qualifiers, with the AI added as an extra competitor. Psyho was the only human to outperform the AI, while the other 11 humans placed third or lower.

As for how they ran the AI to make it competitive: it wasn't a standard publicly available model like GPT-4 or even o1 that just spits out code in one go. This was a secret internal OpenAI creation described as a simulated reasoning model, similar to the o3 [00:37:00] series, an advanced successor to o1. It ran on the same AtCoder-provided hardware as the humans to ensure fairness.

But its strength came from its iterative multi-step process, and I mentioned how that went: plan, code, blah, blah, blah. Okay, right. So now we're gonna talk about a paper that Bruno cited for me in the message he sent. This is an Apple research paper, titled "The Illusion of Thinking," that makes the case that language models,

This is Bruno writing here.

Simone Collins: Oh, and other people have asked us to comment on this too. So this is great. Yeah.

Malcolm Collins: cannot reason as marketed. The critique dovetails with other signals of caution. Sam Altman has called this field a bubble, as I mentioned, and it technically is, and Elon has raised concerns about looming energy constraints, which might happen.

Basically, Elon's big bugaboo is that energy is a bigger constraint than chips. He's not saying the industry is overrated. These warnings are not isolated; they point to structural issues, both technical and economic. Okay, let's go over this paper, 'cause this paper is ridiculous.

It is actually ridiculous. [00:38:00] So what they did is they gave AI a number of puzzles to do. And the AI outperformed humans by orders of magnitude at these puzzles, but they didn't like the way it outperformed humans by orders of magnitude. Mm-hmm. And said that it could have been more efficient. And I'm just looking at them agape.

Like, how can you be this unfair to AI? Like, here, AI, do this puzzle. It does it at ten times the speed of a human, or at ten times the level of what an average human can do. And they're just like, I noticed you forgot to dot your i's; I guess I'm gonna have to mark you as not sentient.

Imagine if teachers did that. It's like a super prejudiced teacher. But let's go into this. Okay, so we've got the Tower of Hanoi. The average human on the Tower of Hanoi can solve up to three to four [00:39:00] discs, seven to 15 moves, with trial and error over several minutes mentally, and five to seven discs if you move to physical discs.
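Those disc-and-move numbers come from the classic recursive solution, which takes 2^n - 1 moves for n discs (7 moves for 3 discs, 15 for 4). A quick sketch:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Classic recursive Tower of Hanoi; returns the list of (from, to) moves."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
        return moves
    hanoi(n - 1, src, dst, aux, moves)  # park n-1 discs on the spare peg
    moves.append((src, dst))            # move the largest disc to the target
    hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 discs back on top
    return moves

# 2**n - 1 moves: 3 discs -> 7 moves, 4 discs -> 15 (the "seven to 15" range)
for n in range(3, 6):
    print(n, len(hanoi(n)))
```

The exponential move count is exactly why both humans and models eventually hit a wall as disc counts rise: the solution length, not the rule complexity, explodes.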

Okay, so when did AI do this? Models like o3-mini, so not a particularly advanced model here, right, were able to do it up to 15 moves.

Um, and I'll note here that it did break down. Okay, so let's look at Claude 3.7 Sonnet. So we're saying, okay, we're not looking at how high it can go; we're looking at how high it can go flawlessly. Okay. So your average, you know, 95 IQ human or whatever, right?

They're at three to five discs. Claude 3.7 Sonnet could do it flawlessly up to five discs. Okay, alright. So why did they get mad at the AI?

Simone Collins: Yeah, why? Please [00:40:00] explain this to me.

Malcolm Collins: The paper argues this is an illusion, because even at medium complexity, traces show incoherent exploration, and effort peaks then drops, e.g., fewer tokens spent despite available budget, indicating no true recursive understanding, just pattern extension until it breaks.

Okay. My brother in Christ, did you have an EEG cap on these humans? You don't know how their reasoning was working during this. You don't know that this wasn't happening in the humans, exactly. But also, you didn't even use humans as a norm in this. When they did this, they didn't use humans; I'm using other studies to look at how humans perform on this.

You just assume that the human brain doesn't work that way. That's what always gets me. When people are like, AI's just a token predictor, I'm like, a lot of the evidence suggests human brains are token predictors. Yeah, see our episode on this; more evidence has come out since that episode, which I'll go over in another episode, because it annoys me so much.

There's just voluminous evidence that a [00:41:00] huge chunk of the human brain is probably a token predictor. But I just hate it so much when they're like, humans don't make these types of mistakes. And I'm like, well, first of all, even if you're considering them a mistake, note that the AI did better than the humans at the task.

So if the way it did it was a mistake, then clearly it understood its resource allocation and limitations, and performed within them in a way that outcompeted its competitor, right? Or who are you to say that you know better than it about how it can do this? And if it could do it better, why didn't you add that to the token layer?

You could have done that. Alright, so next we're gonna go to River Crossing. The average human limit: the classic version with three is solvable with hints, though the average person might need trial and error to avoid constraint violations. And at four, 20-plus moves, complexity explodes; most would fail due to tracking multiple states mentally.

Alright. So humans, at average human intelligence: three. Claude [00:42:00] 3.7 Sonnet fails beyond three as well, and errors step up at four or higher, but it says that in humans, four becomes near impossible. Okay? And so it collapses at around where humans do. Okay, so here AI is performing similarly to humans.

So why do they say this proves it is dumber? Well, to highlight the illusion, they say: despite self-reflection, AI can't consistently apply constraints, leading to invalid moves early, proving no deep understanding of safety rules, just probabilistic guessing, which falters. But you could change the way the AI model works to handle this.

If the human brain is a token predictor that evolved, it almost certainly has pathways to check for these types of mistakes, if these are the common mistakes within token predictors. But you have locked the AIs that you are using out of doing that.

Simone Collins: Oh my god. I'm sorry. Well, also, these are tests: you see how far you get.

It's not like I've taken the SAT or [00:43:00] some other standardized test and then been told, oh, but you took way too long on these three problems, or, you went back and changed your answer, so we're lowering your entire score. You're right or you're wrong; you get this many questions answered, or you don't.

The fact that people keep not only moving the goalposts, but then going back into these tests and evaluations and nitpicking the methodology used, just seems like massive amounts of denial. Well, this is how

Malcolm Collins: Apple is explaining why they can't make an AI: because AI isn't real. But okay, so here we go to their Blocks World test. Average human intelligence: you can get up to three to five blocks.

The AI breakpoint got up to 40 blocks, though LLMs collapse at the high end. AI vastly outperforms humans on this one, but the paper points to an illusion via trace analysis: at medium complexity, corrections happen late; at high complexity,

[00:44:00] exploration is random and incoherent, not strategic, showing reliance on brute-force patterns, not adaptable planning. But it's working. If it's working, it's a good strategy. You are demanding that it solve it the way that you solve it. So throughout the paper, the way that you

Simone Collins: want to solve it.

Yeah.

Malcolm Collins: Yeah. Basically, what they show is that AI exhibits flaws like overthinking, exploring wrong paths unnecessarily, which you could put in the token layer for it not to do, or have another model that checks it to stop it from doing this; inconsistent self-correction; and a hard cap on effort, collapsing incoherently at high complexity, much higher than human, without adapting.

Unlike humans, who might intuitively grasp rules or persist creatively, even slowly, AI doesn't build reusable strategies; it just delays failure in medium regimes. So when I look at this paper, honestly, I read a lot of romance manhwas that take place in fantasy worlds, and you'll have the [00:45:00] evil, you know, stepmother or concubine who will arrange all the tests so her clearly incompetent son can beat the clearly much more competent person.

And then the bribed vizier will come out and say, well, do you not see that he took too long on question number five, which is suspiciously unlucky? And it's like, come on, my friend, what are you doing? Clearly you're just begging the question here, right? The AI is outperforming people, and you are using its outperformance against it.

This reminds me of the hilarious thing where people released this paper done on Claude that showed it didn't know the internal logic it had used to get to certain outcomes, right? Like, when you could look at this internal

Simone Collins: logic, humans don't know the internal logic they use to get to certain outcomes.

Malcolm Collins: My point is, a lot of people think that humans know, but if you look, there have been a lot of experiments on this. Look at our episode on LLM models, where we go over all [00:46:00] the studies on this. It's just stories.

Simone Collins: They're adding post hoc reasoning. Yeah.

Malcolm Collins: They add post hoc reasoning. Basically, you make up how you came to a decision if that decision has changed in front of you.

So a famous example of this: people will think they chose one woman as the most attractive from a group, and researchers will do sleight of hand and then show them another woman and say, why'd you choose this woman? And people will provide detailed explanations. And they've done this with political opinions.

They've done this with, like, a lot of things; this is a well-studied thing in psychology. You have no idea why you make the decisions you make. But people assume otherwise, because our intuition is that we think we know.

Simone Collins: It's not even just that it's our intuition. It's that our minds are token predictors, both on a technical level and, like, more philosophically.

And when someone asks us a question, we want to be able to answer it. We see this with our kids all the time. Like last night, Toasty, our son, was telling us how Tommy Knockers, which are these monsters that we

Malcolm Collins: made up for them, yeah, to keep them out of the mines.

Simone Collins: Yeah. He was like, Tommy Knockers cannot exist in this house.

And we're like, well, how do you know [00:47:00] that? And he's like, well, my granddad said it to me at his house when I was a baby. He's never been at his grandfather's house.

Malcolm Collins: As a baby? Yeah. Like, this doesn't make sense. This is not a thing that happened.

Simone Collins: But humans like to give answers for things. And I get that, that's totally respectable, but it's just, he hallucinated.

Malcolm Collins: He literally hallucinated.

Simone Collins: Yeah. Like, we do that too. So stop, people, stop. You're embarrassing yourselves.
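For what it's worth, "token predictor" in the narrow sense just means a system that predicts the next token from what came before. A toy word-level bigram version of the idea, vastly simpler than either an LLM or a brain, but the same basic shape:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent next word, or None if the word is unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

An LLM replaces the frequency table with a learned neural function over long contexts, but the training objective, predict the next token, is the same.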

Malcolm Collins: Now, I'm not gonna go too deep into some of the ways AI is being used for medical research, because I don't know if people fully care, but I will at least go into some of the drugs, and some of the methods. It's been used for genome sequencing and analysis.

It's been used for variant detection and disease prediction. It's been used for clinical genetics and diagnostics. It's been used for drug design and target identification. It's been used for predicting interactions and toxicity. It's been used for streamlining development and clinical trials. Now, if we're gonna go into some of the specific ones that have been developed: one with a designation like

[00:48:00] INS018_055. You know what, I'm not gonna list the designations for the future ones. But this was developed by Insilico Medicine using their generative AI platform, Pharma.AI. This small-molecule inhibitor targets TNIK for pulmonary fibrosis, a rare lung disease which runs in my family and has killed multiple family members of mine.

So we might have actually funded this research, because my family does fund a lot of stuff in that space. Then another one, co-discovered by Exscientia and Sumitomo Pharma using AI-driven design: this serotonin 5-HT1A receptor agonist treats obsessive-compulsive disorder. Now note here, for this first AI drug development, right?

This could literally save my life one day. I think my aunt died of this; I know my grandfather died of this. My dad has it, so I could easily get it. This is like the number one killer in my family, and AI might [00:49:00] have developed a solution to it. Like, you can't understand, when you're like, AI has done nothing meaningful.

It's like, other than this drug that saves people in my family's life. Yeah, like

Simone Collins: Maybe, you know, let's say you have a serious risk of Alzheimer's in your family. You're gonna feel very different about AI once AI cures Alzheimer's.

Malcolm Collins: Actually, by the way, another Exscientia-Sumitomo collaboration was a dual 5-HT1A/5-HT2A agonist, which targets Alzheimer's disease.

Simone Collins: Amazing. And they're really

Malcolm Collins: going for those pithy names. The same company, Exscientia, also developed a cancer treatment, a tumor-fighting immune response thing.

Simone Collins: Okay, so point for Simone, 'cause the cancer's coming from me.

Malcolm Collins: Yeah. And then in terms of DNA stuff, like what's it finding in genetics?

Novel autism-linked mutations in non-coding DNA: using deep learning on whole-genome sequences from thousands of families, researchers identified previously undetected mutations in non-coding regions associated with the disorder. Rare DNA sequences for gene activation: AI analyzed vast [00:50:00] genomic data to discover custom-tailored downstream promoter region (DPR) sequences, active in humans but not fruit flies, and vice versa.

I also think all

Simone Collins: this like better genetic sequencing with autism might actually fix the autism diagnosis problem of like too many different conditions being grouped into autism. Like, you know, we are participating as a family and autism genetic research. No, I didn't. Yeah, but like, our kids don't have any of the, like, genes for autism and that's because they have Asperger's.

Even though they've

Malcolm Collins: all been diagnosed. You've been diagnosed. Do you have any of the genes?

Simone Collins: No. And that's the thing. I think that when AI helps us better understand autism and, like, the genetic components of it, they're gonna be like, alright, so these are actually super different things, and on a technical level we can demonstrate it. Probably low-functioning autism and different forms of autism are gonna be seen as very different from what used to be called Asperger's,

and is now just called autism. Yeah,

Malcolm Collins: But my [00:51:00] point here is: people are already being fired over this, right? Yeah. And if you're looking at AI, it's not just that it's already developing lifesaving drugs; it's already producing game-changing scientific developments. Yeah. People

Simone Collins: like, I guess, if it doesn't immediately affect them, if they are not married to their AI boyfriend or husband,

or if they aren't facing a personally scary disease. Which is happening, by the

Malcolm Collins: way. So, a study which surveyed a thousand teens in April and May showed a dramatic rise in AI social interaction, with more than 70% of teens having used AI chat companions and 50% using them regularly. 50% of teens are using them regularly.

Simone Collins: I, I can't even wrap my head around that.

Malcolm Collins: No, no. You wanna hear a crazier statistic? Sure. Despite the widespread use, 67% of teens say that talking to people is still more satisfying overall. Wait, wait, wait. So 33% think talking to AI is more satisfying? That's a huge chunk.

Simone Collins: [00:52:00] But no, we've hit a plateau.

It's all a bubble.

Malcolm Collins: It's, it's very,

Simone Collins: okay, guys, have fun being left in the dust. Enjoy it.

Malcolm Collins: But see, when people are thinking about a product, they think about it from a consumer level. They think about it like an iPod: you know, why isn't this in my pocket? Right.

Simone Collins: It can also be hard for people to wrap their heads around it.

You know, like when cars started being adopted, it was like, oh, this is just a rich person thing. They break down all the time; it's just better to have a horse, just keep your horse. People couldn't imagine not having horses on the roads. You know, similar to

Malcolm Collins: the hallucination arguments. Like, cars break down, AIs hallucinate; how is that gonna transform society?

Why? How is that gonna transform society?

Simone Collins: Yeah. So buckle up guys. I, you don't have to, but yeah. If you go into this not wearing your seatbelt, this is on you.

Malcolm Collins: Yeah. And I could go into technical things where it looks like parts of AI development have slowed down recently, but in other areas it looks like it's sped [00:53:00] up recently.

That's the problem with a lot of this: you can say, well, it slowed down here and here, and then, well, it sped up here, here, and here, right? And then you'll get some new model, like DeepSeek's new model, and they'll be like, oh, now we have some giant jump, right? And we've just been seeing this over and over again.

I hope it plateaus. It's gonna be scary if it doesn't plateau, but we're not seeing that yet. We're seeing what is kind of a best-case scenario, which is steady growth. Steady, fast growth, not FOOMing, okay, but steady, fast growth.

Simone Collins: Mm-hmm. Yep. So, yeah, multiple people actually requested this discussion in the comments of the video we ran today, which was on how geopolitics will be changed after the rise of AI and a more accelerated demographic collapse. So I'm glad that you addressed all this. And I mean, there's a lot more; we're only just getting started. A lot of people also chimed into the comments like, well, give me specific dates, I need to know by when. We can't do that.

[00:54:00] Like, we can give you dates, but we're gonna be wrong. It's really hard to predict how fast things are gonna be, and there are so many factors affecting adoption, including regulatory factors and social factors, that it just makes it really hard for us to say exactly when things are gonna happen. Our heuristic with these things,

if you're just trying to be like, well, how do I know when to start planning: this is your reality now. Just accept it as reality and live as though it's true. That's how we live our lives. We live our lives under the assumption that this is the new world order. And we don't invest in things that are part of the old world order, in terms of our time or dependence.

And we do lean toward things that are part of the new world order, if that makes sense.

Malcolm Collins: Yeah. No, I absolutely think it makes sense, and I totally understand where people are coming from with this, but my God, it's like hearing that computers will transform society and only thinking about the computers that you use for recreation, instead of the computers that are used in a manufacturing plant [00:55:00] and to keep planes connected. The utility of this stuff is astronomical.

And even if the development stopped today, the amount that the existing technology would transform societies, in ways that haven't yet happened, is almost incalculable. That's the thing that gets me. I don't need to see AI doing something more than it's already done today. I don't need to see something more advanced than Grok 4, okay,

or OpenAI's GPT-5. With these models, I could replace 30% of people in the legal profession. That's a big economic thing, okay?

Simone Collins: Yep. And I mean, again, we can't say how fast this is gonna be impactful or not, because there are already states in the United States, for example, that are making it illegal for AI to serve as psychological therapists.

Yeah. Even [00:56:00] though

Malcolm Collins: AI outperforms normal therapists on most benchmarks. Well, it,

Simone Collins: Or even just for therapists to use AI to help themselves. And they're gonna cheat anyway, but, so, people are gonna try to artificially slow things down, in an attempt to protect jobs or protect industries, because they don't trust it. So again, things will be artificially slowed down.

Sometimes things will be artificially sped up by countries saying, okay, we're all about this. We need to make it happen. Like China Yeah.

Malcolm Collins: Or Trump. I cannot believe the Democrats have become such Luddites.

Simone Collins: Hmm. Whatever. Anyway yeah, thanks for addressing this and I'm excited. We're in the fun timeline.

Malcolm Collins: Oh, absolutely. I often watch AMVs from, you know, Zom 100, the zombie show about the Japanese office worker. He's like, yay! He's having a blast. We are undergoing multiple apocalypses right now, and I'm like, I am here for every one of them. Yeah, totally. This is a fun time to be alive, during the AI slash [00:57:00] fertility-rate apocalypse, because I get to do the things that I want to do:

Have lots of kids and work with AI to make it better.

Simone Collins: Yeah. All right.

Malcolm Collins: Great. Love you

Simone Collins: Sending you the next link. Get ready for it.

Malcolm Collins: What do you mean? Make sure it's backed up. Ah, okay. Oh, do I look okay?

Simone Collins: You you,

Malcolm Collins: yeah. Ish.

Simone Collins: I need to cut your hair, but I will.

Malcolm Collins: Definitely. You are a lovely wife, and I love that you cut my hair. Now it feels so much more contained. The more things we bring into the house, whether it's you making food or cutting my hair, the more it dramatically improves my quality of life, because I don't have to go outside or interact with other people.

And I really hadn't expected that. And it's, it's pretty awesome.

Simone Collins: Yeah, I get now why for many people it's a luxury to have. Everyone come to your house to [00:58:00] deliver services. But it's even better if you don't have to talk with someone else and coordinate with someone else and pay someone else and thank someone else.

And it's not like I'm not appreciative of what other people do and the services they provide; it's just additional stress. Like, this is the generation of people that can't answer the phone, me included. And so the anxiety that you have to undergo to have a transaction with a human is so high.

Even if they're doing a great job and they're happy and you're happy, you still have to go through the, oh, thank you so much, and, oh, can I have this, and, well, this isn't quite right, can I have this adjusted? Mm-hmm. No, I would rather use my mental processing power to just keep our kids somewhat in order.

Malcolm Collins: somewhat in order.

That's a tall order. What do people think of the episode today?

Simone Collins: What did they think? I think they liked it. I'm trying to think if there's any theme in the comments. A lot of people had small quibbles here or there about birth rates in [00:59:00] certain areas, and I think that's because the data is so all over the place and a lot of people have anchored to old data.

And then they're really shocked to see how much the birth rates have changed. I haven't gone deep into it, but some people have questioned why you think growth in certain areas and populations won't matter, due to them being, sort of, technologically not online yet and not developed.

Malcolm Collins: Yeah, this, to me, I just find a comical thing. They think that they're gonna get, uh, Wakanda, right? That's not gonna happen. When we've seen populations jump in technology and industry levels, it happens because of some new form of contact or some new form of technology being imported to the region, like we saw in East Asia.

It's very unlikely that you're gonna see somewhere like Somalia, which has good fertility rates, just suddenly develop. It doesn't work that way, [01:00:00] and we've tried to force it, right? This is fundamentally what the US tried to do with Iraq: we tried to force them to become a modern democracy and a modern economy in the same way we did with South Korea and Japan and Germany.

And it just didn't work.

Simone Collins: What do you think about the city states that, like, Pat's working on in Africa? Couldn't you theoretically create Wakandas?

Malcolm Collins: You could. You could. I think one of his city states would be most likely to do that, but that's not gonna have an impact across a wide swath of the region, right?

Like, yeah, basically...

Simone Collins: Just those who can get in will thrive.

Malcolm Collins: Yeah.

Malcolm Collins: So anyway, I am going to jump in here.

Speaker 2: Hey, can you tell me about your dream last night? Yeah. No. What happened?

Nothing happened. Was it just black? Wow. That's a very exciting dream. [01:01:00] So you didn't dream about spiders or tummy knockers or anything else? Just black. Mm.

So just black, no spiders and no tummy knockers. No spiders or tummy knockers. What are tummy knockers? Tummy knockers are put hybrids. Only Naps. Are they dangerous? Yeah. What happens if a tummy knocker gets you? Yeah. They, because I'm gone forever. Well, where do they live? In the tunnel. In the tunnel.

Speaker 4: Stay away from the cave in the tunnel. Right. Stay away from the cave, right? Yeah.

Speaker 2: What happens if you get too close to the cave? I don't have to get too close to the, because otherwise they'll get you. What happens if they get you? Yeah. What [01:02:00] happens if the tummy knockers get you? They tell me not to stand against me. What if they do get you tightened? They do done well, they drag you into the tunnels.

No. What happens? Um, I... You don't know. Does the tummy knocker have tentacles? No. Do octopuses have tentacles? Sorry, mommy. Did you spill a little? Yeah. Oops. That's, I'm sorry, mommy. You're making your concentration face.

Speaker 4: Oh, thanks. And I might the thank you. Oh, under no. Very good at using water. Yeah. Yeah, the water [01:03:00] you wanna hug? Thank you Andy. No, no kiss. Yeah, I do wanna kiss. Love you. I love you too, dad.



This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit basedcamppodcast.substack.com/subscribe