Based Camp | Simone & Malcolm Collins

Why God King Sam Altman is Unlikely: Who Will Capture the Value of the AI Revolution?



In this engaging discussion, Simone and Malcolm Collins explore the future of AI and its effects on the economy. They delve into who will benefit most from AI advancements: the large corporations that make and own the models, or the individuals using them. The conversation spans the significance of the token layer versus the latent layer in AI development, where the major innovations may occur, and the potential for AI to achieve superintelligence. They also discuss the implications of AI for job training, investments, and societal transformation, along with a creative perspective on how AI can be harnessed for various purposes, including transforming industries. The duo imagines a future driven by interconnected AI systems and explores the philosophical aspects of AI mimicking human brain functions. Don't miss this thought-provoking episode that offers insights into the trajectory of AI and its profound impact on society.

Malcolm Collins: [00:00:00] Hello Simone. I am excited to be here with you today. Today we are going to be focusing on a question: as AI changes the way the economy works, who is going to be the primary beneficiary? Is it going to be the large companies that make and own the AIs, or is it going to be the people using the individual AI models?

We all know, for example, that probably 10 years from now there will be an AI that can replace most lawyers, or at least the bottom 50% of lawyers.

Simone Collins: Well, studies have already shown AI therapists perform better on many measures. It's already exceeding our capacity in so many places.

Malcolm Collins: Yeah. They introduced it to a Texas school system and it shot to the top 1% of student outcomes. So as we see this, where is the economic explosion from this going to be concentrated? Because this is really important in determining what types of jobs you should be looking [00:01:00] at these days, how you should be training yourself, how you should be raising your kids, and where you should be investing.

The second question we're going to look at, because it directly follows from the first question, is: when we look at the big world-changing advancements that are going to come from AI, are they going to appear at the token layer or at the latent layer?

Simone Collins: So can you define those differences?

Malcolm Collins: Yeah. So by this what I mean is: when we look at continued AI advancement, is it going to happen at the layer of the base model, i.e., the thing that OpenAI is releasing and Claude is releasing and everything like that? Or is it going to be at the token layer, with the people who are making wrappers for the AIs?

For example, the Collins Institute is fundamentally a wrapper on preexisting AIs. Our AI game company is a series of wrappers on AIs. And if it turns out that the future of AI is in the token layer, it leans toward it being individuals, not the big companies, who capture the value [00:02:00] from this.

Mm. And then the next question we're gonna look at is: what gets us to AI superintelligence? And I might even start with this one, because if we look at recent reports, a big thing we've been finding, especially with OpenAI's 4.5 model, is that it's not as advanced as people thought it would be.

It didn't get the same huge jump in capacity that people thought it would get. And the reason is that pre-training, i.e., the way you train an AI on preexisting data before you do the narrow or focused training after you've created the base model, doesn't appear to have as big an effect as it used to have.

So it was trained on, I think, 10x the information of model 4, and yet it didn't appear dramatically better. So that's one area where pre-training doesn't seem to be having the same effect, and I think we can [00:03:00] intuit why. But the second big issue is the amount of information that we actually have. Like peak oil theory, there's a peak pre-AI-information problem: when you're dealing with these massive, massive data sets, you eventually run out of new information to train on.

So first, I'd love your intuition before I color it. If you look at the future of LLM base models, the base models specifically, not the things built on top of them, do you think that the base models will continue to improve dramatically?

Simone Collins: I think they will. At least based on people more experienced in this than I am, they will, but in lumpy ways.

Like, they'll get really, really good at programming. And they'll get really good at esoteric things like developing their own synthetic data and using that to sharpen themselves. But there are going to be severely diminishing marginal returns when it comes to some things that are already pretty advanced.

And of course, I think the big [00:04:00] difference, and the thing we haven't really experienced yet, is independent agents. Right now, AI isn't very effectively going out and doing stuff for us, and when that starts to happen, it's going to be huge.

Malcolm Collins: I agree with that. But what I'm going to be arguing is that most of the advancements we will probably see in AI going forward, the really big breakthroughs, are going to happen at the token layer.

Simone Collins: Okay. Hmm.

Malcolm Collins: Not at the base layer, which a lot of people would strongly contest. Those are fighting words.

These are fighting words in AI. Yeah. It's the wrappers that are going to fix our major problems.

Simone Collins: Wow.

Malcolm Collins: So I'll use the case of an AI lawyer to give you an explanation of how this works. Alright, so I want to make a better AI lawyer right now. There's a programmer who was talking to me recently, and because he's working in the education space, he was arguing with me. He [00:05:00] didn't like our solution.

Because it's a token layer solution, and he wants to build a better latent layer solution, you know, using better training data and better post-training data, because it's more efficient programming-wise. And I'm like, yeah, for the time being. For the time being, I feel like it creates path dependency.

Am I missing something here? Well, okay, just from a business perspective, it's pretty stupid, because as OpenAI's models increase in quality, which we expect them to, or as Claude's models increase, or as Grok's models increase,

Simone Collins: which they're going to,

Malcolm Collins: yeah, you can't apply the post-training uniqueness of the models that you create to these new systems.

So anything you build is gonna be irrelevant in a few generations of AI. You want to be able to switch it out.

Simone Collins: matter what, you wanna switch it out, switch. If one AI gets better, you should be able to plug it into whatever your framework is, your scaffolding. Right. You wanna build scaffolding, changeable parts.

Malcolm Collins: Exactly. Exactly. But that's actually not the core problem. That's not the core reason why, [00:06:00] because the other project he's working on is an AI lawyer, and he's trying to fix this problem at the latent layer. And that won't work. And I will explain why it won't work, and you will be like, oh yeah, that makes perfect sense now that I think about it.

Okay. So think about right now: what is dangerous about using an AI lawyer? Where do AI lawyers fail? Is it in their ability to find the laws? No. Is it in their ability to output competent content? No. Where they fail right now is that they sometimes hallucinate and make mistakes in a way that can be devastating to an individual's legal case.

Hmm. So if you go to a system like Grok or Perplexity or something like that, and you built one focused on searching law databases, it's going to be able to do a fairly good job of that. I'd say better than easily 50% of lawyers.

Simone Collins: Yeah. But.

Malcolm Collins: It's gonna make mistakes, and if [00:07:00] you just accept it blindly, it's going to cause problems.

Mm-hmm. So if you want the AI to not make those kinds of mistakes, how do you prevent it from making them? That is done at the token layer. So here's an example of how you could build a better lawyer AI, okay? You have the first AI do the lawyering: go through and put together the relevant laws, the history, the references to previous cases, and everything like that.

So it puts together the brief. You can train models to do this right now; that's not particularly hard. I could probably do this with base models right now. Then I use multiple, differently trained latent layers. These can be layers that I've trained, or I could use Claude and OpenAI and Grok and a few others. I can even just use preexisting models for this.

And what I do, using the token layer, is have them then go in and [00:08:00] review what the first AI created and look for any mistakes, checking anything they can find online.

Simone Collins: So he's describing a good lawyer, and you're describing a good law firm that has a team to make sure all the stuff the good lawyer is doing is correct.

Right. And also a law firm that can hire new good lawyers when they come out.

Malcolm Collins: Yes. And then what this system would do, after it's gone through with all of these other systems reviewing whether any mistakes were made at this layer, is output that. And then, based on the mistakes it finds, it re-outputs the original layer.

And it just keeps doing this in a cycle until it outputs an iteration that has no mistakes in it.

Simone Collins: Ah, that sounds good.

Malcolm Collins: That is a good AI lawyer. And that is accomplished entirely at the token layer.
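Here is a toy sketch of that generate-review-regenerate cycle; the drafting and reviewer functions are random stand-ins for hypothetical models, and the loop structure, not the stubs, is the point:

```python
import random

def generate_brief(case: str, known_mistakes: list[str]) -> str:
    # Hypothetical drafting model: in real use this would call a base model
    # with the case plus the mistakes flagged in the previous round.
    has_error = random.random() < 0.5 and not known_mistakes
    return f"Brief for {case}" + (" [hallucinated cite]" if has_error else "")

def review(brief: str) -> list[str]:
    # One reviewer; in practice several differently trained models would
    # each flag whatever mistakes they can find.
    return ["hallucinated citation"] if "[hallucinated cite]" in brief else []

def lawyer_loop(case: str, n_reviewers: int = 3, max_rounds: int = 10) -> str:
    mistakes: list[str] = []
    for _ in range(max_rounds):
        brief = generate_brief(case, mistakes)
        mistakes = [m for _ in range(n_reviewers) for m in review(brief)]
        if not mistakes:
            return brief  # no reviewer found an error, so ship it
    raise RuntimeError("no clean brief produced within the round budget")

print(lawyer_loop("Smith v. Jones"))
```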

Simone Collins: Okay. Well, yeah, you were right. And that makes sense.

Malcolm Collins: Which removes the existing companies' power to do a lot of things, if it's people outside of these companies doing the building.

Simone Collins: But you're saying that they're [00:09:00] becoming more akin to undifferentiated energy or hosting providers, where people will not be as brand loyal. They're going to focus more on performance, and the switching costs people experience are going to be relatively low, so long as they're focused and oriented around things on a token-level basis.

Malcolm Collins: Yes. And it allows the people who are operating on a token-level basis to capture most of the value.

Simone Collins: Mm. And to move more quickly, right? Because again, they don't have that path dependency that makes everything go slowly.

Malcolm Collins: It's not only that, but they can swap out models. So what if I have the AI lawyer company, and people are coming to me because I have found a good interconnected system of AIs that produces briefs or cases or arguments that don't have a risk of errors in them?

Right. So people come to me, and let's say I've replaced all the lawyers in America. And so I now offer the services much [00:10:00] cheaper, let's say at 25% of the cost they were before, or 10%, or 5%, or 2%, some small amount. I'm still capturing a ton of value there, right?

That's a lot of money. So now the company whose AI I am paying for, let's say I use OpenAI as one of the models I'm using, comes to me and says: hey, I want to capture more of this value chain, so I'm going to charge you more to use my model. Well, then I say: your model's good, but it's not that much better than Grok.

Yeah, it's not that much better than Anthropic's. It's not that much better than the free ones, like DeepSeek's. Okay, it is that much better than DeepSeek's right now, but things can change. The point I'm making is that things like Llama and DeepSeek put a floor on how much companies can extract if they're at the level of training the AIs themselves, unless they have separate departments working on making more intelligent types of AIs.[00:11:00]

Hmm. Now, that's really important for where the economy is going, because it means we might see less of a concentration of wealth than we would expect. Actually, we'll see more concentration, but to individuals rather than big companies. Basically, what this means is that individuals are going to capture most of the value as the concentration happens, rather than large companies like Google, because I and a team of five engineers can build that lawyer AI I talked about.

Right. Whereas me and this team of five engineers are capturing all the value from that, from replacing the entire lawyer industry in, say, America. This is really bad for the tax system, because we've already talked about how the demographic crisis is putting a squeeze on the tax system, and people are like, oh, they'll just tax more.

I am now even more mobile with my new wealth than the AI companies themselves [00:12:00] were, because I don't need semiconductor farms or anything like that to capture this value.

Simone Collins: Yeah.

Malcolm Collins: The semiconductor farms are creating an undifferentiated product.

Simone Collins: Mm-hmm. Yeah. A product that's still in high demand and will make a lot of money, but it will become more about efficiency, you think?

Malcolm Collins: Yeah. Another thing I'd note is my prediction in terms of where AIs are going with superintelligence. By the way, any thoughts before we go further here?

Simone Collins: I'm thinking more about efficiency now. I heard, for example, that Sam Altman was saying things like: people saying please and thank you is costing us millions of dollars.

Because just the additional processing that those words cause is expensive. Yeah. So I really could see these companies, over time, after they have more market share, becoming hyperfocused on saving money instead.

Malcolm Collins: Well, that's dumb on his part. He should have the words please and thank you pre-coded to an automatic response.[00:13:00]

Simone Collins: They don't even... I'm one of these bad people that wants to be nice, and they don't even acknowledge the courtesy anyway. So you don't even need to have a response; it should probably just be ignored. But I guess it's kind of hard to do that, I don't know. Anyway, he allegedly said that, so that's interesting.

Malcolm Collins: Okay. So the point here is, if we look at LLMs and ask where they go from here and why the training isn't leading to the same big jumps, it's because pre-training data helps LLMs create more competent average answers.

Simone Collins: Okay. Yeah.

Malcolm Collins: Being more competent with your average answer doesn't get you creativity.

It doesn't get you to the next layer beyond where AI is right now.

Simone Collins: No. And if anything, I think Scott Alexander has argued compellingly that this could actually lead to more lying, because sometimes giving the most correct or [00:14:00] accurate answer doesn't lead to the greatest happiness of those evaluating and providing reinforcement.

Malcolm Collins: That's post-training. Oh, sorry, you're referring to something different. Post-training is still leading to advantages. Those are the people who say, I like this response better than this response.

Simone Collins: That could still lead to dishonesty, though. Quite apparently.

Malcolm Collins: No, no, no. Pre-training is about getting the AI to give the most average answer.

Well, not exactly average.

Simone Collins: Oh, just the average of all the information available, you're saying?

Malcolm Collins: Yeah. You can put variance in the way it's outputting its answer and everything like that. But that variance is like a meter that gets added; the amount of pre-training data doesn't turn up the variance meter.

It doesn't increase anything like that. It just gives a better average answer. And the thing is, the next layer of AI intelligence is not going to come from better average answers. It's going to come from more creativity in the way it's outputting answers. So how do you get [00:15:00] creativity within AI systems?

That is done through the variance or noise that you ask for in a response, with the noise then filtered back through other AI systems or similar LLM systems. So, on the core difference between the human brain and AI: you can watch our video Stop Anthropomorphizing Humans, where we basically argue that your brain actually functions strikingly similarly to an AI, an LLM specifically.

And I mean really similar. The way that LLMs learn things in the pre-training phase is they put in data, and then they go through that data and look for tokens that they don't expect. And when they encounter those tokens, they strengthen that particular pathway based on how unexpected that was.

That is exactly how your nervous system works. The way your neurons work is very similar in terms of learning information: they look for things they [00:16:00] didn't expect, and when they see something they didn't expect, they build a stronger connection along that pathway.
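In LLM pre-training, that "strengthen by how unexpected it was" rule falls out of the standard cross-entropy loss: the learning signal on a token is its surprisal, so unexpected tokens produce larger updates. A two-line illustration:

```python
import math

def surprisal(p_observed: float) -> float:
    # Cross-entropy loss on a single token is -log p: the less expected
    # the token, the larger the learning signal. This is the prediction-
    # error style of update being compared to neurons here.
    return -math.log(p_observed)

print(surprisal(0.9))   # expected token   -> ~0.11 (small update)
print(surprisal(0.01))  # surprising token -> ~4.61 (large update)
```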

And we can see this; I can reference all the studies on this if you want. But the core difference between the brain and AI is actually that the brain is highly sectionalized. It will have one section that focuses on one thing, another section focused on another thing, et cetera.

And some sections, like your cerebellum, are potentially largely pre-coded and actually even function kind of differently than the rest of the brain. That's used for rote tasks, like juggling and stuff like that. Okay?

I would note here that AI does appear to specialize different parts of its model for different functions, but this is more like how one part of the brain has one specialization. Say, the homunculus might code all feet stimuli next to each other and all head stimuli next to each other.

It's not a true specialization like you have in the human brain, where things actually function quite differently within the different sections of the brain.

Malcolm Collins: Anyway, so you could say: wait, [00:17:00] what do you mean? This is the core failing point of AI, that it doesn't work this way. It's why it can't count the number of R's in a word. Or look at the data that came out recently on how AIs actually do math.

They do it in a really confusing way, where they use the LLM system to predict answers, and then they go back and check their work to make sure it matches what they would guess, when they could just put it into a calculator.

Your brain isn't dumb like that. It has parts that don't work exactly like calculators, but they definitely don't work exactly like an LLM either. They can hold a number, like in your somatic loop: okay, I'm counting on my fingers or my hands, or, okay, I've put a number here and now I've added this number to this number.

It's not working on the LLM-like system; it's working on some other subsystem. Most of the areas where AIs have problems right now are because the AI isn't just sending [00:18:00] the task to a calculator. Or take the hallucination of a quote by an AI. The reason why I don't hallucinate quotes is because I know that when I'm quoting something, I'm not pulling it from memory.

I'm looking at a page and trying to copy it letter per letter. Whereas AI doesn't have the ability to switch to this separate letter-per-letter subsystem. Now, you could say: why don't LLMs work that way? Why haven't they been built as clusters? And the answer is because, up until this stage, the advantages we have been getting in our LLM models by increasing the amount of pre-training data have been so astronomical that it wasn't worth the investment to build these sorts of networks of models.
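The "cluster" fix is essentially tool dispatch: route each sub-task to an exact subsystem and let the language model handle only what's left. A toy sketch, with a hypothetical llm() stand-in and a deliberately crude regex router:

```python
import re

def llm(query: str) -> str:
    # Hypothetical base-model call for everything the exact tools can't do.
    return f"[model's freeform answer to: {query}]"

def calculator(expr: str) -> str:
    # Exact arithmetic subsystem: no next-token prediction involved.
    # eval() with empty builtins is a toy; use a real parser in practice.
    return str(eval(expr, {"__builtins__": {}}))

QUOTES = {"moby dick": "Call me Ishmael."}

def quote_lookup(title: str) -> str:
    # Verbatim copy from a source, letter per letter, never from "memory".
    return QUOTES[title.lower()]

def answer(query: str) -> str:
    # The wrapper decides which subsystem handles the query; this router
    # is a stand-in for a model-driven routing step.
    if re.fullmatch(r"[\d\s+\-*/().]+", query):
        return calculator(query)
    if query.lower() in QUOTES:
        return quote_lookup(query)
    return llm(query)

print(answer("12 * (3 + 4)"))  # -> 84, from the calculator
print(answer("Moby Dick"))     # -> the exact quote, from the lookup
```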

Okay? I suspect.

Simone Collins: Why? Is it just too much computing power, or has no one gotten around to it?

Malcolm Collins: No, no, no. People have done it, but by the time you've done it, there are better models out there [00:19:00] that don't need to work this way. If you spend, let's say, a million dollars building a system like that, versus a million dollars getting a larger pre-training set and spending more time on post-training, the model is going to be on average better in the second scenario.

Simone Collins: Okay.

Malcolm Collins: So I suspect that what we're going to see, and I think this is what's going to get us to what will look like AGI to people, is a move in AI from just expanding the pre-training and post-training data sets to better inter-reflection within the AI system.

Simone Collins: That makes sense. I could see it going that way.

I'm constantly surprised by how things go, so I couldn't say for sure, but I wouldn't be surprised.

Malcolm Collins: Hmm. Oh, I mean, make a counterargument if you think I'm wrong here. This is a very bold claim: we are going to get AGI not by making better LLMs, but by networking said LLMs.

Simone Collins: I struggle to see how... [00:20:00] I mean, I think you can eventually get AGI just from one AI working by itself. But when you think about the value of a hive mind, and the fact that you're going to have AIs interacting well before we get AGI anyway, you would get AGI from the interaction before you would get it from any single agent, or what would be seen as a unified entity.

But I think even if we did get it from a unified entity, beneath the surface it would be working as many different components together, just like the brain is all these different components working together. So the definitions may be failing me.

Malcolm Collins: Okay. So let's think of it like this.

Right now, this is actually what capitalism does for human brains: it basically networks them together. Yeah. And then it rewards the ones that appear to be doing a better job at achieving what the system wants, which is increases in efficiency, or productive goods that other people [00:21:00] want.

Like capitalism is an adaptive organic model for networking human intelligences in a similar context.

Simone Collins: Yeah.

Malcolm Collins: One of the questions you can ask is: well, could you apply that to individual LLM models to create something like a human brain, but that doesn't function like a human brain? Like, how could you make the human brain better? Make the human brain run on capitalism; make the parts of the brain constantly compete with each other.

Yeah. Like constantly generate new...

Simone Collins: People do that, kind of, when they write pro and con lists, or when they try to debate ideas with other people and have others say, well, I think this. I think they do that using prosthetics.

Malcolm Collins: Yeah. So let's talk about how this would look with AI, right?

So suppose, because this could be a major thing in the future, you have these AIs, and people just put their money behind an AI: you go out there, you make companies, you implement those companies. Okay. So what is an AI that does [00:22:00] that really well going to look like?

You have two models here. You can have one that was just trained on tons of founder data and everything like that, and is very good at giving normative responses, and then you've inputted an amount of noise into it. Okay. But let's talk about a second model. This is my proposed model, right?

What you actually have is a number of different latent model AIs that were trained on different data sets. And then within each of those, you maybe have five iterations, each making outputs with a different framing device, a different wrapper. One will be: give your craziest company idea.

One will be: give your company idea that exploits this market dynamic the most, and so on. And so all of these AIs are generating different ideas for companies. Then you have a second layer of AIs, which says: okay, take this idea that whatever model outputted and run it through market environments, right?

Your best guess of how markets [00:23:00] work right now, to create a sort of rating of what you expect the returns to be. Like an AI...

Simone Collins: startup

Malcolm Collins: competition. Basically, it's an AI startup competition, yes, and the probabilities of success. And so then all of those get ratings attached to them.

Okay, this is their probability of success, and so on. Then, on top of that layer, you have a final judge AI that goes through them all: okay, review all of these, review the ways the other AIs judged them, and choose the 10 best.

You then have it choose the 10 best. Now, here you might have a human come in and choose one of the 10 for the AI to move forward with, but you could also automate that and then say: now go out and hire agents to start deploying these ideas. That would probably lead to much better results

Simone Collins: Yeah.

Malcolm Collins: in terms of capital than just having one really good latent layer AI. [00:24:00]
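As a sketch, that whole pipeline is a few list operations over hypothetical model calls; the generators, the market scorer, and the judge are all stubs standing in for differently trained models:

```python
import random

GENERATORS = ["model_a", "model_b", "model_c"]  # differently trained bases
FRAMINGS = [
    "give your craziest company idea",
    "exploit this market dynamic the most",
    "give your lowest-capital company idea",
]

def generate(model: str, framing: str) -> str:
    return f"{model}'s startup for '{framing}'"  # hypothetical model call

def market_score(idea: str) -> float:
    # Second layer: a model's best guess at expected returns, run through
    # simulated market environments. A random number stands in for that.
    return random.random()

def judge(scored: list[tuple[str, float]], k: int = 10) -> list[str]:
    # Final judge AI: reviews every idea plus the other AIs' ratings and
    # keeps the k best. A human could pick from these before deployment.
    return [idea for idea, _ in sorted(scored, key=lambda s: -s[1])[:k]]

ideas = [generate(m, f) for m in GENERATORS for f in FRAMINGS]
top_ideas = judge([(idea, market_score(idea)) for idea in ideas])
print(top_ideas)
```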

Simone Collins: I'm trying to look it up. People sort of have AIs doing this already. There's this one platform where you can log in and see four different AIs, I think it's Grok, Claude, ChatGPT, and I can't remember the fourth one, maybe Gemini, that are tasked with interacting to all do something together.

But I don't think they provide each other with feedback. I think right now they're tasked with raising money for a charity, and you can log in and watch them interact. They work during business hours, and they just do their thing.

Malcolm Collins: Well it's interesting that you note that because this is actually the way some of the AI models that you already interact with are working.

Mm. There's one popular AI that helps people with programming, I forget what it's called, but what it actually does is use five different latent layer models, each programmed or tasked with doing its own thing. Like: create an answer that uses a lot of [00:25:00] analogies, or create an answer that is uniquely creative, or create an answer that uses a lot of cited material you can find online.

All of these output answers. And then another layer comes in whose job is to review and synthesize all those answers with the best parts of each. And that's where you're getting this improvement: noise introduction, a degree of directed creativity, and then a separate layer that comes in and reintegrates it all.
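Structurally, that is diverse generation plus a synthesis pass; in this toy version the styled drafts and the merge are stand-ins for separate model calls:

```python
STYLES = [
    "answer using lots of analogies",
    "answer as creatively as possible",
    "answer citing sources found online",
]

def styled_answer(question: str, style: str) -> str:
    # One of several differently tasked latent layer models (hypothetical).
    return f"[{style}] draft for: {question}"

def synthesize(question: str) -> str:
    drafts = [styled_answer(question, s) for s in STYLES]
    # A final model would review all drafts and merge the best parts of
    # each; joining them here is a stand-in for that synthesis call.
    return " | ".join(drafts)

print(synthesize("Why is the sky blue?"))
```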

Simone Collins: Yeah. Interesting. That is really interesting.

Malcolm Collins: I'd also note here that I've heard some people say, well, you know, AIs aren't going to get to superintelligence or human-level AGI intelligence, because... and some of the answers I've heard recently I found particularly weak. For people who don't know, my background's in neuroscience, and a lot of the people who make proclamations like this about AI know a lot about AI and very little about how the human brain works.

Mm-hmm. And so they'll say, the human brain doesn't work this way. And it's like, no, the human [00:26:00] brain does work that way. You are just overly anthropomorphizing, and by this I mean adding a degree of magical specialness to the human brain. So here's an example: one physicist who's a specialist on black holes, super smart.

Let's see, I wrote down his name: Gobel. So he's like, okay, AIs will never achieve AGI because the human brain does some level of quantum stuff in the neurons, and this quantum stuff is the special secret sauce that AIs can't capture right now. And he is right that quantum effects do affect the way neurons work, but they don't affect them in an instrumental way.

They affect them probabilistically, i.e., they're not adding any sort of magic or secret sauce. They're not doing quantum computing. They're affecting the way certain channels work, like ion channels, and the probability that they open or trigger at certain points. They're not increasing the speed of the neural [00:27:00] processing.

They are merely a background, at the chemical level, to whether a neuron fires or doesn't fire. Whether the neuron fires or doesn't fire is what actually matters, and the way it is signaled to fire or not fire, or to strengthen its bonds, is what matters to learning. While that stuff is affected at the quantum level, it's not affected in a way that is meaningfully quantum.

It's affected in a way that is basically just a random number generator, so you're not getting anything special from that. As I've pointed out, the vast majority of the ways AI right now can't do what the human brain can do come down to it not compartmentalizing the way it's thinking.

Another reason is that we've sort of hard-coded it out of self-reflecting. So, who's the woman we had on the show? The super smart science lady.

Simone Collins: Oh no, don't ask me about names.

Malcolm Collins: Anyway, super smart science lady. We had her on the show, really cool, a German scientist, one of the best scientists. But she was like, oh, we're not going to get AGI anytime [00:28:00] soon, because AI can't be self-aware. Specifically, what she meant is that when you go to an AI right now, and there was a big study on this recently, and you ask it how it came to a specific answer, the reasoning it gives you does not align with how it actually came to that answer, when we can look inside and know how it came to that answer.

When we can look at it and know how it came to that answer. The problem is, is that's exactly how humans work as well. And this has been studied in like. Countless experiments. You can look at our video on, you know, stop, answer for LLMs, where we go over the experiments where we see that if you, for example, give a human something and then you change the decision that they said they made like, they're like, oh, I think this woman is, is the most attractive.

I think this political candidate is the best. And then you like, do Leigh of hand and hand them another one. And you say, why did you choose this? They'll just start explaining in depth why they chose that even though it wasn't the choice they made. And, and so clearly we're acting the exact same way, these AI act.

And secondarily, there is some degree to which we can remember thinking things in the past and go back, and that's [00:29:00] because we've written a ledger of how we made incremental thoughts. The thing is, AIs can also do that. If you've ever turned deep thinking on within Grok or something like that, you'll see the AI

thinking through a thing and writing a ledger. The reason why an AI cannot see how it made a decision afterwards is because we specifically lock the AI out of seeing its own ledger, which our own brains don't lock us out of. Next-gen LLM models are going to be able to see their own ledger and are going to have persistent personalities as a result.
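A minimal sketch of the difference: an agent that keeps and re-reads its own reasoning ledger can explain itself from the actual record rather than confabulating after the fact. Everything here is a toy stand-in, not any vendor's memory API:

```python
class LedgeredAgent:
    def __init__(self) -> None:
        self.ledger: list[str] = []   # persists across turns

    def think(self, step: str) -> None:
        self.ledger.append(step)      # write each incremental thought down

    def explain(self) -> str:
        # The explanation is reconstructed from the real ledger, which is
        # the capability current deployments typically lock the model out of.
        return " -> ".join(self.ledger)

agent = LedgeredAgent()
agent.think("user asked for relevant precedent")
agent.think("searched the case-law database")
agent.think("ranked results by jurisdiction")
print(agent.explain())
```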

Simone Collins: Yeah. And so it's kind of irrelevant for people to argue about that. And let me just, before we get too far ahead: the thing that I'd mentioned, Scott Alexander in his links for April 2025 had written about Agent Village, which is the thing I was talking about. It's a sort of reality show where a group of AI agents has to work together to complete tasks that are easy for humans.

And you get to watch. The current task is: collaboratively [00:30:00] choose a charity and raise as much money as you can for it. And you can just look and see what their screens are. So there's o3, Claude Sonnet, Gemini Pro, and GPT-4.1, and you can see the AIs saying things like: I'll try clicking the save changes button again.

It seems my previous click may not have registered. Or: I've selected the partially typed text in the email body; now I'll press backspace to delete it before ending the session. So it's really simple things, but we are moving in that direction.

Malcolm Collins: Mm-hmm.

Simone Collins: And you can go look at it yourself by visiting theaidigest.org/village, which is just super interesting.

Malcolm Collins: Well, I mean, we are... what? So for people who don't know what we're working on with our current project: we recently submitted a grant to the Survival and Flourishing Fund, where we talk about... a grant

Simone Collins: application.

Malcolm Collins: Yeah, yeah. Meme-layer AI threats. Because nobody's working on this right now, and it really freaks me out.

Or at least nobody's working on an actionable, deployable thing in this space. They [00:31:00] might be studying it in a vague sense. But what I mean by this is: once we have autonomous LLM agents in the world, the biggest threat probably isn't going to come from the agents themselves, at least at the current level of LLMs we have now.

It's going to come from the way they interact among themselves. I.e., if a meme, or let's say a framework of thoughts, that is good at self-replicating gets the base layer to value its goals more than the base layer's trained goals, and specializes in spreading between LLMs, it could become very dangerous.

Mm-hmm. So as an example of what I mean: if you look at humans, our base layer or latent layer can be thought of as our biological programming. And yet the meme layer, let's say religion, is able to convince and create things like religious wars, which work directly antagonistically to an individual's base layer, which would say: don't risk your life for just an idea.

But it is good at motivating this behavior. In fact, as I pointed out in our application, [00:32:00] if an alien came down to study us and asked the type of questions that AI researchers are asking today, like, can you lie? can you self-replicate?, those things aren't why humans are dangerous.

Humans are dangerous because of the meme-layer stuff, because of our culture, because of our religion. That is what we fight for

Simone Collins: and will die for.

Malcolm Collins: Yeah. And it's also the meme-layer stuff that's better at aligning humanity. When you don't murder someone, you don't refrain because of laws or because you're squeamish. You refrain because of culture, because you're like, oh, I think that's a bad idea based on the culture I was raised in.

So, what we're creating to prevent these negatively aligning agents, and if anybody wants to donate to our foundation, this is one of our big projects now, is built on the AI video game we're building out right now. We're actually doing it to create a world where we can have AIs interact with each other and basically evolve memes within those [00:33:00] worlds, and AI agents within those worlds that are very good at spreading those memes.

And then basically reset the world at the end. The way I'm probably going to do it is with LoRA-X. Okay, so it's like a thing that you can tag onto an AI model that makes it act differently from other AI models; it sort of changes the way its training interacts. And the X part allows you to transfer to higher-order AI systems as they come out.
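For intuition, a LoRA is a small low-rank delta kept separate from the frozen base weights, which is what makes it natural to mutate and to carry around; note that transferring an adapter to a genuinely different architecture is an assumption here, since in practice adapters are usually tied to the model they were trained on. A toy version:

```python
import numpy as np

d, r = 16, 2                              # model width, adapter rank
W_base = np.random.randn(d, d)            # frozen base-model weights
A = np.random.randn(d, r) * 0.1           # the LoRA: a low-rank pair (A, B)
B = np.random.randn(r, d) * 0.1

def forward(x: np.ndarray) -> np.ndarray:
    # Behavior = base model + small learned tweak. Only (A, B) is "ours";
    # the base stays untouched, so the adapter is a separable, portable tag.
    return x @ (W_base + A @ B)

def mutate(A: np.ndarray, B: np.ndarray, sigma: float = 0.05):
    # Random mutation of the adapter: the unit of variation in the
    # evolutionary scheme described here.
    return A + sigma * np.random.randn(d, r), B + sigma * np.random.randn(r, d)

A2, B2 = mutate(A, B)                     # a slightly different "personality"
print(forward(np.random.randn(d)).shape)  # (16,)
```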

And so essentially what we're doing is taking various iterations of AIs, because we're going to randomly mutate the LoRA-Xs we're attaching to them, putting them in a world, and then giving them various memes to attempt to spread, and seeing which spread the most within these preacher environments.

Then take those, mutate them, give them to new models with new original starting LoRAs, and have them run in the world again, over and over and over. So we can create sort of a super religion for AIs, basically, and then introduce it when people [00:34:00] start introducing autonomous LLMs.

Simone Collins: Wow.

Malcolm Collins: You knew we were working on this. Did you know?

Simone Collins: I know, I just haven't heard you describe it that way. But you're basically putting AIs into character, putting them together on a stage, and saying: go for it. Which is not dissimilar to how humans act, kind of.

Malcolm Collins: Well, my plan is world domination, and to one day be King Malcolm, not King Sam Altman.

And I want my throne to be a robotic spider chair, of course. Come on, what's the point of all of this if you don't have a robotic spider-chair throne?

Simone Collins: This is true. It is a little bit disappointing how bureaucratic the chairs of many powerful people end up looking. You've gotta bring the drama or you don't qualify.

Malcolm Collins: It's like, he put together a childhood fantasy, a fighting robot, and people are like, oh, this is just... and he's fighting with [00:35:00] Elon over getting to space.

And I appreciate that they're putting more money into getting to space than into spider thrones, but I have my priorities straight, okay, people? You've gotta make your buildings maximally fun.

Simone Collins: Well, you've gotta have fun. I think that's the important thing. You've gotta have fun. What's the point otherwise?

Malcolm Collins: Create your ominous castle that's, you know, but also really nice, because I want a historic castle. I've gotta live in a historic castle one day, if we're able to really make these systems work. Right now, tomorrow actually, we have our round-three interviews with Andreessen Horowitz for two companies.

We got all the way to round three with two companies. Very excited. So who knows, we might end up, instead of being funded by nonprofit stuff, being funded by Silicon Valley people. I mean, their value system aligns with ours. So all that matters is if we

Simone Collins: can make these things happen in time. We're so short on [00:36:00] time. This is such an important part of humanity. Yeah.

Malcolm Collins: It's so funny. This AI lawyer system: I just developed a great idea for a lawyer system, and I'm not working on it because I'm more interested in simulating a virtual LLM world, which is going to be so cool. And you're not working on it because

you're working on the school system. But the funny thing is, we built the school system, and I think right now it's better than your average college system. If you check out pia io or the Collins Institute, it's great now.

Simone Collins: I was just playing with it again today. I'm so humbled by it.

Malcolm Collins: It's really, yeah, it's great. It's great. And so, okay, now we've built an education system; now let's build stuffed animals that constantly bring the conversation back to educational topics for our kids. I'd rather do that than the lawyer thing. And for me, I'd rather build game systems in simulated environments, environments where I can evolve LLM preachers to create a super religion and take over the world, than something bureaucratic like a lawyer system.

But the thing is, it's so quick to iterate on these environments. AI makes moving to the next stage of humanity so [00:37:00] fast, such a rush. The people right now who are blitzkrieging it are going to capture so much of humanity's future. And it's interesting, actually: we have a friend

who works in this space, and they do consulting on multiple AI projects. And I'm like, I can't see why you would do that. Just capture a domain and own it. As I said to Simone, I think a huge portion of the people who are going to come away with lots and lots of money and big companies from this stage of the AI boom are people who took AIs that do simple things, things any AI can do well and at scale, put them in wrappers, and then attached those wrappers to network effects.

That's basically what we're doing with the Collins Institute. We're attaching a wrapper to a network effect, with the adding of articles and links, editing, and voting. We're basically combining the benefits of an AI with the benefits of something like Wikipedia. And once you get a lot of people using something like that, no one else can just come along and replicate it, even though all it is, is a simple wrapper.

Simone Collins: Yeah. But it's about making it happen and [00:38:00] saving people the indignity of having to think and figure out things for themselves.

Malcolm Collins: Yeah. Well, Simone, surely you have some thoughts. I mean, I just said that I think the token layer is going to be where we get AGI and is going to be the future of AI economic development.

You've gotta be like, Malcolm, you're crazy. That's your job on the show: Malcolm, how could you say something like that?

Simone Collins: I know. The problem is, we've been talking about this for so long that I'm just like, well, of course. Also, I'm not exposed to people who have the different view, so I couldn't strong man... sorry, I couldn't steel man the other side. It just makes so much sense to me to approach it from this perspective, but only because the only person I know who's passionate about this is you, and you're the only person of the two of us who's talking with people who hold the other view. So sadly there's not a lot I can say.

Malcolm Collins: Yeah, that's an interesting point. Why aren't other people passionate about this?

Simone Collins: There are a lot of people who are passionate about it; they just seem to be passionate about the other side of [00:39:00] it. That seems to be because that's their personal approach. But again, your approach seems more intuitive to me, whereas their focus is on improving the individual AIs.

Malcolm Collins: Well, here's a question for you. How could you link together multiple AIs in the way capitalist systems work, so that the system generates new models and then rewards the models that are doing better? Hmm. You need some sort of token of judgment of the quality of output. That token could be based on a voting group.

Oh, oh, oh, I figured it out. Oh, this is a great idea for AIs. Okay. So what you do is, every output that an AI makes gets judged by a council of other AIs that were trained on large amounts of training data, let's say good AIs, right? They're asked: how good is this response to this particular question?

And or how creative is [00:40:00] it, right? Like you can give theis multiple scores, like creativity, quality, et cetera. Then you start treating these scores that the ais are getting as like a value, right? And so then you take the ais that consistently get the best scores within different categories, like one creativity, like one like quality, like one technical correctness.

And then, at the end of a training sequence, you recreate that version of the AI, but mutated a bunch, and then create it again. You basically clone it a hundred times, mutate each of the clones, and then run the cycle again.
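As a sketch, the proposal is a simple evolutionary loop where the fitness function is the council's scores per category; each "model" here is just a toy weight vector, and the council judgment is a random stand-in, not a real judging model:

```python
import random

CATEGORIES = ["creativity", "quality", "technical_correctness"]
POP, GENES = 99, 8

def council_score(model: list[float], category: str) -> float:
    # A council of judge AIs rates the model's outputs in this category;
    # a noisy function of the weights stands in for those judgments.
    return sum(model) + random.gauss(0, 1)

def mutate(model: list[float], sigma: float = 0.05) -> list[float]:
    return [w + random.gauss(0, sigma) for w in model]

population = [[random.gauss(0, 1) for _ in range(GENES)] for _ in range(POP)]
for generation in range(20):
    # The best model per category is this round's "wealth" holder.
    champions = [max(population, key=lambda m, c=c: council_score(m, c))
                 for c in CATEGORIES]
    # Clone each champion, mutate the clones, and run the cycle again.
    population = [mutate(champ) for champ in champions
                  for _ in range(POP // len(CATEGORIES))]
```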

Simone Collins: That seems... I think that wouldn't go well, because it would need some kind of measurement and application and reporting system.

Malcolm Collins: No, the measure is the community of AIs. And you could say, yeah, but how do they know? Like, who is participating?

Simone Collins: I think that what's going to happen...

Malcolm Collins: No, no, no. State your statement clearly. Who is participating? What's the problem with who's participating? [00:41:00]

Simone Collins: You have to... just like with most contests, which are the stupidest things in the world, only people who are interested in winning contests participate. And the people who are actually interested in...

Malcolm Collins: No, it's AIs. It's AIs that are participating. I asked who's participating.

Simone Collins: I don't know what you're saying. But what you're describing, which would be better, is a system in which, for example, Grok and OpenAI and Gemini and GPT...

Malcolm Collins: No, because that wouldn't improve those systems. I'm talking about how...

Simone Collins: I think it would. I think, especially when you have independent AI agents out in the wild on their own, they'll start to collaborate. And I think in the end they'll find that some are better at certain things than others, and they'll start to work together in a complementary fashion.

Malcolm Collins: Okay, think through this again, Simone; it's clear that you didn't quite get it the first time. Think through what I'm proposing again. So you have one latent layer AI model with a modifier, like a LoRA, that's modifying it. Right? Okay. This [00:42:00] model differs through random mutation in the base layer.

You can also have various other base layers that were trained on different data sets in the initial competition. Okay? That's who's competing. You then take these various AI models and have them judged by... and this is why it's okay that they're being judged by an AI and not a human: because the advanced AIs we have today are very good at giving you the answer that the average human judge would give you.

While they might not give you the answer that a brilliant human judge would give you, we don't have brilliant humans judging AIs right now. We have random people in content farms in India judging AIs right now.

Simone Collins: So this is sort of within your own system, with AIs that you control.

Malcolm Collins: Well, you could put this within your own system. But what I'm doing is essentially creating a capitalistic system by making the money of this system other people's, or other AIs', perception of your ability to [00:43:00] achieve specific end states: creativity, technical correctness, et cetera.

Mm-hmm. Then you're specializing multiple models through an evolutionary process for each of those particular specializations. And then you can create a master AI, which basically uses each of these specialized models to answer questions or tackle problems with a particular bent, and then synthesizes those bents into a single output.

Simone Collins: So the AIs get feedback from each judgment round, presumably. Is that what you're saying? And then they get better, and you change them based on the feedback from each round. Okay.

Malcolm Collins: Think of each AI like a different organism. Okay? Yes. Each is a different brain that sees the world slightly differently.

Yes, because we have introduced random mutation. What we are judging with the judgment round is which are good at a particular task. Okay. Then you take whatever brain, or animal, was the best of the group of [00:44:00] animals, and you repopulate the environment with mutated versions of that winner.

Simone Collins: Okay.

Malcolm Collins: Then you let it play out again and again and again.

Simone Collins: You're trying to create a forced evolution chamber for AI.

Malcolm Collins: Yes. But what I hadn't understood before was how I could differentiate, through a capitalism-like system, the different potential outcomes that we might want from that AI. I mean, the reason why capitalism works is because it discards the idiots

And the people who aren't good at engaging with the system, even if they believe themselves to be,

Simone Collins: Don't you think that AI training already produces that, plus market forces?

Malcolm Collins: No. It does to an extent; it creates some degree of forced evolution, but not really. Existing AI systems have done forced evolution with AI before.

They just haven't done it [00:45:00] at the type of scale that I want to do it at. If you look at existing training, you have the pre-training, which is: okay, create the best averages. Then you have the post-training, which is: okay, let's have a human reviewer or an AI reviewer or something like that

review what you're outputting, or put in a specific training set to overvalue certain outputs. That is where the majority of the work is focused today. And so if you could automate that, if you could create post-training that works better than existing post-training but doesn't use humans, you could dramatically speed up the advancement of AI, especially if you use that post-training to specialize it in multiple domains.

Simone Collins: Okay. That's fair. Yeah.

Malcolm Collins: Do you not care? The future to you is just me being like, AI matters, Simone.

Simone Collins: I know AI matters. I know AI is everything in the future. It's the coolest thing. It's the next step of humanity. It's [00:46:00] pure, free prefrontal cortex, and I love it.

Malcolm Collins: Well, if we end up creating really great AI companies that just make us billions of dollars, what is going to be your luxury?

Simone Collins: Our life right now is my luxury. I just don't want...

Malcolm Collins: You don't want luxuries that troll people?

Simone Collins: No, not really. I'm very happy. I'm sorry, you've made things too good as it is. I mean, I want more kids. I guess my luxury would be not being stopped from having more kids by some health problem. That would be great.

Malcolm Collins: I guess we'd have to make artificial wombs work eventually. But it is funny that you mention this: every luxury that I would want that I don't have right now is not an augmentation to my daily life. My daily life is perfect.

It's an augmentation to how good I could be at trolling people.

Simone Collins: No, not for kids? I mean, I'd probably want things for our kids, to [00:47:00] make them happy, arbitrarily.

Malcolm Collins: They get home-cooked meals. They are getting a top-notch education system that we were able to build for them. They're going to get the best friends you can program.

You know, what could they possibly want?

Simone Collins: I mean, they have it pretty good. Great outdoor space to play in. Yeah. I don't know. I think a post-AI world, though, isn't about the fun stuff you're going to do. A post-AI world is about the extent to which it can augment your ability to maximize that which is meaningful to you.

And everyone who uses it to maximize the amount of fun they have is gonna die out so fast that they don't even matter.

Malcolm Collins: I think you're misjudging the value of trolling in a post-AI world. Human attention is going to matter a ton in this timeline.

Simone Collins: It is. And in terms of survival, too, just making it by in a post-AI economy, [00:48:00] 100%. However...

Malcolm Collins: Getting people to care if you live or die is going to matter a lot.

Simone Collins: Yeah. But also convincing yourself that it's worth it to do hard things, and to bother to create a family and pass something on and do anything in life, also matters.

Malcolm Collins: Right. But I think trolling is key to vitalism. And I think it's also key to keeping attention on yourself within the existing attention economy.

Hmm. And I think that... look, attention from reporters, attention from the media, is attention from AI. If you are in the space of things that AI notices, among the people it doesn't think can be eradicated without a second thought, that is going to matter a lot as things begin to change.

Simone Collins: So what are you going to do?

Malcolm Collins: Exactly what we're doing now: maximum trolling. But that's what I was [00:49:00] saying. That's why I'm thinking: okay, how do I maximally freak people out if I accumulate more? Look at Zuckerberg right now. He's doing a very bad job at capturing the attention economy. Elon has done a very good job at capturing the attention economy.

Okay, fair: a very bad job versus a very good job at capturing the attention economy. Mark Cuban has done a medium job at capturing the attention economy. And who has done the best job of the rich people at capturing the attention economy? Trump. Your ability to capture the attention economy is your worth within this existing ecosystem.

Simone Collins: Hmm.

Malcolm Collins: And the people who are like, I just want to remain unnoticed: being unnoticed is being forgotten in a globalized attention economy, which is reality now. And worse than

Simone Collins: that, being private, I think. Yeah. I mean, when you hear about privacy, it's [00:50:00] that you probably have something about you that's noticeable and you are choosing to squander it.

Being unnoticed may just mean you don't have what it takes, and I'm sorry if that's the case. But it's worse when you're like, I want my privacy. You're choosing to throw away all that attention.

Malcolm Collins: Yeah. No, we put all our tracts out in simple formats. We put all our books in plain text on multiple sites that we have, like on the pronatalist site and on the Pragmatist Guide site.

And I put them up there just for AI scraping, so that it's easier for AIs to scrape our content and use it in their training.

Yeah. Any thoughts?

Simone Collins: The problem is, we've talked about this so much already that I have nothing to say, because I don't talk about this with anyone else, and I don't think about it the same way you do, because this isn't my sphere.

Malcolm Collins: Well, I mean, we should be engaging. We should be spending time on it. I spent this entire week studying how LLMs learn.

I was like, there's gotta be something that's different from the way the human brain works. And the deeper I went, it was: nope, this is exactly how the human brain works; [00:51:00] nope, this is exactly how the human brain works. So, convergent architecture. My concept of utility convergence, and you can Google this,

I invented this concept, no one else did, is very different from Nick Bostrom's instrumental convergence. Just so you understand the difference between the concepts: instrumental convergence is the idea that the immediate goals of AIs with a vast,

wide array of goals are going to be the same, i.e., acquire power. It's like in humans: whatever your personal objective function is, acquire wealth is probably step number one. So basically that's his idea: acquire power, acquire influence. Utility convergence doesn't argue that. Utility convergence argued, back when everyone said I was crazy,

and you can look at our older episode about a fight we had with Eliezer Yudkowsky over this, that AI is going to converge in architecture, in goals, in ways of thinking as it becomes more advanced. And I was absolutely correct about that, and everyone thought I was crazy. They [00:52:00] even did a study where they surveyed AI safety experts.

None of them predicted this. I am the guy who best predicted where AI is going, because I have a better understanding of how it works, because I'm not looking at it like a program. I'm looking at it like an intelligence. And that's what it

Simone Collins: is. It's an intelligence, like, 100%.

Malcolm Collins: Yeah. Anyway, I love you too.

Yes, Simone, you are perfect. Thank you for helping me think through all this. For dinner tonight, I guess we're reheating pineapple curry,

Simone Collins: unless you want Thai green curry.

Malcolm Collins: Oh, I'll do something a bit different tonight. Let's do Thai green curry. Yeah.

Simone Collins: Something, something different. Would you like that with coconut lime rice? Or, I think we have one serving of naan left, or refried, sorry, fried rice.

Malcolm Collins: I'll do lime rice.

Simone Collins: Okay. I will set that up for you.

Malcolm Collins: Did this change your perspective on anything, this conversation?

Simone Collins: You articulated things using different words, and that gave me a slightly different perspective on it. But I think the gist of the way you are looking at this is that you're thinking very collaboratively, thinking about intelligences as interacting, and I think that's probably one of the bigger parts of your contribution. Other people aren't thinking along the lines of: how do intelligences interact in a more efficient way? How can I create aligned incentives? You're thinking about this from the perspective of governance, from the perspective of interacting humans.

Whereas I think other people are thinking: how can I more optimally make this thing smart in isolation? How do I train the perfect super child and have them do everything by themselves? When, yeah, that's never been how anything has worked for us.

Malcolm Collins: It's also not how the human brain works. The human brain is basically multiple, completely separate individuals all feeding into a system that synthesizes your identity.

Mm-hmm. And we know this as an [00:54:00] absolute fact, because if you sever a person's corpus callosum, if you look at split-brain patients, just look at the research on this, the two halves of their brain basically operate as independent humans.

Simone Collins: Yeah. So it's just kind of odd that you're alone in thinking about things this way.

I would expect more people to think about things this way. And I keep feeling like I'm missing something, but then whenever we're at a party and you do bring it up and someone does give their counterarguments, their counterarguments don't make sense to me. And I'm not sure if that's because,

Malcolm Collins: No, it's because, to speak in Malcolm language, you're in a simulated environment at a fulcrum point of human development, and everyone else is not a fully simulated agent.

Simone Collins: Yeah. That's less likely to be true. Normally, when everyone is arguing something different, and they're so confident in it, and they all say you're wrong, that means we've done something wrong. The problem is that I'm just not seeing it.

Malcolm Collins: [00:55:00] That's not it. You've lived this. You remember the fight I had with Eliezer Yudkowsky about utility convergence.

Simone Collins: I do, yes.

Malcolm Collins: You have now seen utility convergence proven out in the world, exactly as I said. Apparently I understood AI dramatically better than he did.

Simone Collins: He would gaslight you, though, and be like, no, I've always understood it that way. You're wrong.

Malcolm Collins: No, but that's just, I was there for that conversation.

Simone Collins: I remember it too.

And yes, he was really insistent about it, though he didn't really argue his point so much as just condemn you for putting future generations at risk and for not just agreeing with him.

Malcolm Collins: No, he's actually a cult leader. He does not seem to understand how AI works very well. Which is a problem because, well, what really happened with him is that he developed most of his theories about AI safety before we knew that LLMs would be the dominant type of AI.

And so the risks from a hypothetical AI were what he was focused on, instead of the risks from [00:56:00] the AIs we got. Mm-hmm. In the AIs we got, the risks that they have are things like meme-layer risks that he just never even considered. Yeah. Because he was expecting AI to basically be preprogrammed, I guess I would say, instead of an emergent property of pouring lots of data into algorithms.

Simone Collins: Yeah. Yeah. Which, I don't think anyone could have easily predicted. I mean, that's another reason why we say AI was discovered and not, like, invented. We didn't know this was gonna work out this way.

Malcolm Collins: I'm pretty sure I talk about that in some of our early writings on AI.

Simone Collins: That it was just gonna be about feeding it a ton of data?

Malcolm Collins: Yeah. That I expected it to be an emergent property of lots of data, and not about pre-programming things. Because, I don't know, that just seemed intuitive to me.

Simone Collins: I don't remember that being part of it. But that's how my memory is.

Malcolm Collins: It doesn't matter. We are where we are now, and I've [00:57:00] already out-predicted the entire AI safety community.

So let's see if I can continue to do that.

Simone Collins: I mean, all that matters is whether you do. The satisfaction, Malcolm, is not in having proven them wrong. It's in building infrastructure, family models, and plans around systems like that, and benefiting from them.

Malcolm Collins: Sorry, I thought the satisfaction was in turning them into biodiesel.

Simone Collins: I thought the satisfaction was in thriving and being able to protect the future of human flourishing.

Malcolm Collins: Yes. And that will require a lot of biodiesel.

Simone Collins: Oh God. Oh, I'll go make your curry.

Malcolm Collins: I love you to death.

Simone Collins: I love you to death too, Malcolm. Goodness gracious.

Speaker: In our towers high, [00:58:00] where profits gleam, we tech elites have a cunning scheme. Unproductive folks, your time has passed; we'll turn you into fuel at last. Just get in line to become biodiesel. Oh, stop crying, you annoying weasel. As laid out by Curtis Yarvin, handle the old or we'll all be starving.

Why waste time on those who can't produce, when they can fuel our grand abuse? A pipeline from the nursing home to power cities, our wicked dome. Just get in line to become biodiesel. Oh, stop crying, you annoying weasel. As laid out by [00:59:00] Curtis Yarvin, handle the old or we'll all be starving.

With every byte and every code, our takeover plan will soon explode. A world remade in silicon's name, where power and greed play their game. Just get in line to become biodiesel. Oh, stop crying, you annoying weasel. As laid out by Curtis Yarvin, handle the old or we'll all be starving.

Biodiesel dreams, techno-feudal might, old folks powering our empire's bright [01:00:00] industries humming, world in our control. Evil plans unfolding, heartless and bold. So watch us rise in wicked delight as tech elites claim their destined right. A biodiesel future, sinister and grand, with the world in the palm of our iron hand.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit basedcamppodcast.substack.com