Based Camp | Simone & Malcolm Collins

Neural Tissue Comp Now Cheaper Than Silicon! (This Changes Everything)



Dive into the future of computing with Malcolm and Simone Collins on Based Camp! In this mind-bending episode, we explore the breakthrough in wetware—using real human neurons grown from skin and blood cells to power affordable bio-computers. From Cortical Labs' $35,000 neuron chips that play Doom to mini-brains mimicking kindergartners' neural patterns, we discuss how this tech is cheaper and more efficient than traditional silicon systems. We tackle ethics (including pain pathways in lab-grown brains), AI alignment, quantum integration, cultural perspectives from Puritan roots, and wild speculations on space-faring brain ships, human uploads, and a networked species beyond humanity. Is this the end of worst-case AI scenarios or the dawn of servitors? Plus, thoughts on techno-puritanism, Soma-inspired horrors, and why backwoods traditions embrace utility over mysticism.

The X posts we mention in this podcast:

Episode Transcript

Malcolm Collins: Hello Simone. I’m excited to be here with you today. Today we are going to be discussing a breakthrough that I hadn’t expected, which is that using neurons in bio-inspired computing systems is now a reality that you, a watcher of this show, can likely afford yourself, if you wanted to try some sort of business experiment based on this, and in many ways it is now cheaper than doing it on silicon.

And this is a huge breakthrough that changes a lot if you’re looking at the deep future of where humanity goes at this point. Mm-hmm. With the development of quantum computers, with the development of AI continuing, this was one thing that a lot of people feared, and this is why I say this. A lot of people are like, Malcolm, this is horrifying.

Like, are you excited about servitors and everything like that? Like humans being turned into husks for a [00:01:00] machine?

Speaker 2: Define the damage. Spine: compromised. Have you not received pain suppressants? Suppressing pain. Damage submitted. Report to the surgical bay.

Malcolm Collins: And it’s like, well, we’ll get to that, we’ll get to that. But what makes it really good is it changes worst-case scenarios. Worst-case scenarios for AI FOOMing, taking over the world, expanding into space.

Historically speaking, before today I would have said that in such a scenario, you know, humanity gets wiped out, there is maybe a 3% chance that neurons or biological matter is part of whatever AIs become. Now, if we’re using AI estimates here, because I was going through AI, having it compile all the research we have on where quantum computers are right now, you know, looking at computers a hundred years from now without humans around anymore, it said a 60 to 70% chance [00:02:00] that biological neurons would be part of them.

Simone Collins: Wow.

Malcolm Collins: So that’s, that’s now the worst-case AI scenario, right? Mm-hmm. The likelihood, in this humanity-wiped-out-or-enslaved-by-our-overlords scenario. And what’s interesting is, and we’re gonna go into, okay, 50, 60 years from now, if we project technology moving forwards at the sort of jumps that we’ve been seeing, what does a computer look like?

You know, quantum computing is working, we continue to see advancements in silicon-based computing, and we see these startups and companies continue to develop neural computing at this rate. What we’re gonna go into is what that computer is going to look like. Um hmm.

Speaker 15: That does not mean the value of your existence turns negative. To the contrary, when it comes to the macro management of the civil system, your role has simply changed. Only this can solidify the health and prosperity of future human [00:03:00] society.

Malcolm Collins: And what is, what is I think going to surprise a lot of people about what that computer will look like is it’s not gonna look that different from the ways that humans interact with computers today. By that, what I mean is the types of stuff that the quantum computer part of a brain made up of silicon, neurons, and quantum computers is going to handle is going to be very similar to the type of stuff that it would handle today.

Large scale logistical planning sort of stuff. No human is actually doing that with neurons. It’s just not the type of problem that we’re good at doing. Mm-hmm. The type of stuff that the neurons are gonna be doing is well, we’ll get to it, but it’s the type of stuff that actually humans do today within this arrangement.

The type of stuff that the silicon component is gonna be doing is the type of stuff that LLMs do today in this arrangement.

Simone Collins: Oh. It’s a perfect match.

Malcolm Collins: So we’re already sort of there already. Yeah. Yes. It’s, it’s very interesting. The, [00:04:00] the stuff that quantum computers are really good at mm-hmm. Is almost sort of opposite the stuff that neural arrays are really good at.

And so, yeah, let’s go, let’s go into the tweet that you sent me that prompted this. And we’re also gonna go into you know, the ethics of all of this. Why it’s ethically so cool. So awesome. Don’t, don’t be so squeamish about this guys.

Speaker: From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine.

Your kind cling to your flesh, as though it will not decay and fail you.

One day, the crude biomass [00:05:00] that you call a temple will wither.

Simone Collins: And hat tip to Not Aldous Huxley for sending this to us.

You rock.

Malcolm Collins: Yeah. Okay, so the tweet goes: let me explain what just happened, because I don’t think people realize how insane this is. Cortical Labs just put 200,000 real human brain cells on a silicon chip and trained them to play Doom in just one week. Each CL1 system costs $35,000. So that’s affordable. I mean, it’s expensive, but it’s not like a quantum computer or something like that.

Like if you had some business idea and you went to the bank, you could raise enough money to buy a few of these and operate them. Right?

Malcolm Collins: And one of the things I really wanna get into is the [00:06:00] cost efficiency of these systems at their most nascent stage versus existing systems that we operate LLMs on.

And, and where they can do better and where they can do worse. And where we’re already seeing integrated systems that are doing things a thousand times cheaper than nonintegrated systems, which is really cool that we’re already seeing this. So a rack of 30 units consumes 850 to a thousand watts combined.

The human brain operates on 20 watts. So I wanna point out what this means here, right? For all of the calculations I’m gonna give you: right now, you know, the neural systems are operating at a tiny fraction, like one one-thousandth, of the cost of the silicon-based systems, right? If we’re talking about their efficiency, because that’s what an AI that’s taking over the world or whatever is gonna care about; this is what far-future humans, when we’re building our giant brain ships, are gonna care about.

Because, you know, when you’re talking about [00:07:00] space-faring systems, you’re almost always gonna have like one super brain within a ship. I assume that this is probably the way that things are gonna work, which is gonna be a network of some of the most advanced intelligences that you would have.

And then you will have, you know, microchips on phones and stuff like that. People may ask why I would say this. So if you look today, one of the reasons you don’t see this as much is because there is an intrinsic decentralization in the way that we use computers today, due to distances, personal ownership, everything like that.

But if you have a space-faring ship, there are going to be economic reasons to want the best brain on the ship to be the one that’s powering your navigation systems, the one that’s powering the decisions when the captain is asking an AI something, the one that’s powering the projections for the colony and everything like that.

But in addition to that, because you don’t have this huge amount of distance, and everyone to an extent is going [00:08:00] to be working on behalf of the ship or of the early colonies, it just makes sense to me: when I’m asking my personal LLM on my phone, why not just outsource that to the ship-based system?

So we’re gonna see a lot more centralization when we have space colonies and space travel than we see within existing systems. Mm-hmm. Which is why it makes sense to think about what these far-future systems look like. But anyway, the point I’m making here, when you’re thinking, okay, where do we have neural tissue operating this stuff: 30 of these

units, racked together, where each unit is, you know, sort of like a single small silicon chip, take 850 to a thousand watts to run, whereas the human brain operates on 20 watts. And what this means, well, that’s a

Simone Collins: difference.

Malcolm Collins: Yeah. There are huge efficiency gains to be had here, right? Can we get more efficient than even the human brain?

I, you know, I think probably. But at least what it means in the early days, if we’re looking at the other analog we have: the human brain is significantly more complicated than one of these [00:09:00] chips or a rack of 30 of these chips. So lots of advancements we can make to this. And when we’re talking about 30 of these units taking 850 to a thousand watts, you’ve gotta contrast that with large AI training clusters burning through megawatts.

And we’re here talking about 20 watts for human brain, or 850 to a thousand watts for one of these racks.

Simone Collins: Yeah.
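As a rough sanity check on those figures (the 850 to 1,000 watts per 30-unit rack, the 20-watt brain, and the megawatt-scale training clusters are the numbers quoted in the episode; the per-unit arithmetic and the 10 MW cluster size are our own back-of-envelope assumptions):

```python
# Back-of-envelope power math for the figures quoted in the episode.
RACK_WATTS_LOW, RACK_WATTS_HIGH = 850, 1000  # one rack of 30 CL1-style units
UNITS_PER_RACK = 30
BRAIN_WATTS = 20                     # commonly cited human-brain figure
TRAINING_CLUSTER_WATTS = 10_000_000  # "megawatts" -- assume ~10 MW for scale

per_unit_low = RACK_WATTS_LOW / UNITS_PER_RACK    # roughly 28 W per unit
per_unit_high = RACK_WATTS_HIGH / UNITS_PER_RACK  # roughly 33 W per unit

print(f"Per unit: {per_unit_low:.1f}-{per_unit_high:.1f} W")
print(f"A brain runs on {BRAIN_WATTS} W, i.e. less than a single unit")
print(f"A 10 MW cluster draws {TRAINING_CLUSTER_WATTS / RACK_WATTS_HIGH:,.0f}x one rack")
```

So even at this nascent stage, one of these neuron units draws power on the same order as a whole brain, while a training cluster sits four orders of magnitude above a rack.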

Malcolm Collins: Again, we’ll get to the morality of all of this. You don’t have to have us just be giddy at, whoo, servitors. For people who don’t know what servitors are: in the Warhammer universe, one of the punishments for, you know, really displeasing anyone in a position of power is being turned into a human machine.

And

Simone Collins: whereas Mond Gold puts it human batteries

Malcolm Collins: Yeah, but they’re not really human batteries because it’s not the power that we want from them.

Simone Collins: This case, yeah. Server, human server.

Malcolm Collins: It’s the processing capacity that we care about, and we’ll get into whether these things can feel and stuff like that. Those are interesting questions, given that we have some [00:10:00] that are, like, at the developmental level of five-year-olds now.

Simone Collins: yeah,

Malcolm Collins: we had a, I mean, if you’re only

Simone Collins: if the neurons are used up doing calculations, where’s the room to feel anything? I don’t know.

Malcolm Collins: Oh, well see, this is the fun part. Scientists have tried to recreate pain pathways in these.

Now you might say, why would you do that?

Simone Collins: Yeah.

Malcolm Collins: That’s just horrible.

Simone Collins: Why

Malcolm Collins: I love, I love Simone’s face, like looking up and being like, ugh. Of course they did. This is the gain-of-function researchers out there, right? Like,

Simone Collins: oh, can it feel pain? Ho ho, ho, ho.

Malcolm Collins: Let’s, let’s do it. Yeah. Then somebody’s like, well, of course it can’t feel pain.

It doesn’t have pain in pathways. Well, guy’s like, no, no, no, no, no.

Simone Collins: We can,

Malcolm Collins: that

Simone Collins: we can do that.

Malcolm Collins: Yeah, we could do that. Come on, what, are you a pussy?

Simone Collins: Oh my God. Humans are the worst. The worst.

Malcolm Collins: Everyone’s like so afraid of our AI overlords and I’m like, I don’t know. Yeah.

Simone Collins: Serious. I

Malcolm Collins: [00:11:00] don’t, I think, if I know the AI systems that I interact with, right, very few of them, even out of just curiosity, would’ve rigged up a pain pathway in one of these.

Simone Collins: Yeah. They,

Malcolm Collins: they would’ve said, that

Simone Collins: sounds unethical, Malcolm. There’s no, well, well, but beyond that, where’s the utility? You know, like there’s no, what do we have to gain from that?

Nothing. Okay, then let’s not do it. We have better things to do with our tokens, you know, like, please.

Malcolm Collins: All right. Yeah. So, they’re backed by Intel, which is a large company, and they’ve already shipped 115 units. They began shipping

Simone Collins: in 2025. Oh, wow. Like for commercial use?

Malcolm Collins: Yeah, commercial use.

Yeah. These are in commercial use right now.

Simone Collins: So they’re already providing wetware as a service. That’s happening now. Oh, I hadn’t wrapped my head around that.

Malcolm Collins: Yeah. Cortical Labs is, no, no, no, no, no. But on top of that, you can buy incrementally [00:12:00] from Cortical Labs Wetware-as-a-Service, letting developers run code remotely on living human neurons with no lab required.

Mm-hmm. So you don’t even need $35,000 to go into this, if you, a watcher, want to incrementally experiment with this. Oh, we should try to get some of our AI running on some of these.

Simone Collins: We should.

Malcolm Collins: Because then we could tell people, like, a part of these is actually running on human neurons. A feature I just dropped today, by the way, for people who haven’t been watching: AI agents, live, very early alpha stage.

But over the weekend I also got local AI running on our system.

Simone Collins: This is on Reality Fabricator, AKA rfab.ai.

Malcolm Collins: Yeah. And, and I got this set up in preparation for setting up a sort of self-hosted but cheaper than running through the direct models, you know, buying, hosting from somebody. ‘cause we’ve got a connection on that front and trying to run things that way.

Mm-hmm. And if I can combine those with a little bit of wetware I might be able to create something pretty [00:13:00] interesting.

Simone Collins: Yeah.

Malcolm Collins: I, I, I love, and I think that this is where we diverge from a portion of our audience that is more like theology of body and everything like that. And we’ll get into our sort of cultural perspective on this and how we relate to a lot of this stuff and why we relate to it in the way that we do relate to it.

Because I think it’s only intuitive, given some of our cultural background, that we would.

Simone Collins: Okay.

Malcolm Collins: But to continue here: they’re priced like a software subscription, but powered by real brain cells grown from human adult skin and blood samples. And somebody donated their blood for this, like,

Simone Collins: yeah, I wanna know who that is.

Like, Carl, how do you feel about this,

Malcolm Collins: Carl? That’s real human neural tissue. Oh,

what?

Speaker 4: Carl, that kills people. Oh. Oh, wow. I, I didn’t know that.

How could you not know that?

What is wrong with you, Carl? Well, I, I [00:14:00] kill people and I eat hands. That’s, that’s two things.

Malcolm Collins: That’s what you took my skin samples for. It must

Simone Collins: be like one of the founders, right? Presumably.

Malcolm Collins: You know, it would be more horrifying if it’s one of those people who donated, like that famous woman who donated her cancer cells, like

Simone Collins: Yeah.

Malcolm Collins: 70 years

Simone Collins: ago without her, her knowledge, right?

Malcolm Collins: She yes.

A Black woman, by the way. Mm-hmm. If you want to talk about, like, horror, I think it was a Black woman, right?

Simone Collins: Yeah. This is, yeah. There, there’s a reason why black Americans uniquely are really distrustful.

Malcolm Collins: Oh no. Slavery 2.0. It’s like Soma is all running on Black people’s consciousness. It’s all running on this one woman’s consciousness.

Simone Collins: Oh my God. Oh my God. That would be so amazing if this, if the donor turned out to be,

Malcolm Collins: Just a quick plot summary on Soma, the game. But I actually think it’s a cool sci-fi concept that is relevant to what we’re talking about here, which inspired part of what we’re doing with RFab. The other developer, Bruno, who’s working on it,

he goes, you know, these agent systems seem a lot like the thing from Soma. [00:15:00] Have you played the game Soma? And I’m like, yeah, I have. So in the game Soma, you wake up, and spoilers here, like skip ahead five minutes if you’re spoiler-phobic. You think you’re human and you’re in this world where, like, you know, AIs and computers have run amok, right?

Specifically, you think that you were, like, frozen in a lab or something like that. And then, as things go on, you realize you’re not a human. You are another AI system. And what you realize is your brain scan was taken due to a health issue you had back around our time period, like the late 20th century.

And it became the default template. So it turns out that all of the monstrous ais you see are other iterations of your consciousness running on ais because you became the blank system default template for AI testing

Simone Collins: Actually, so on that front, what Not Aldous Huxley actually sent me right before [00:16:00] that tweet about this neural tissue was one by hat zou, saying there’s a fruit fly walking around right now that was never born.

Eon, which is the official, I guess, account of a company called Eon, “navigating the fastest path to human emulation to safeguard a flourishing future,” just released a video where they took a real fly’s connectome, the wiring diagram of its brain, simulated it, and dropped it into a virtual body.

It started walking, grooming, feeding, doing what flies do. Nobody taught it to walk. No training data, no gradient descent towards fly-like behavior. This is the opposite of how AI works. They rebuilt the mind from the inside, neuron by neuron, and behavior just emerged. It’s the first time a biological organism has been recreated, not by modeling what it does, but by modeling what it is.

A human brain is six OOMs, [00:17:00] orders of magnitude, more neurons. That’s a scaling problem, something we’ve gotten very good at solving. So what happens when we have a good working copy of the human mind? Basically, that’s super doable, super soon.
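The approach Simone is describing, take a measured wiring diagram and just run it forward until behavior emerges, can be sketched in miniature (purely our toy illustration, not Eon’s actual model; real connectome emulations use measured synaptic weights and far richer neuron dynamics):

```python
import numpy as np

# Toy "connectome": a fixed wiring diagram (synaptic weights) for 4 neurons,
# wired in a loop. In connectome emulation, this matrix is measured from a
# real brain rather than learned by gradient descent.
W = np.array([
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.0],
    [0.0, 0.0, 0.0, 0.9],
    [0.9, 0.0, 0.0, 0.0],
])  # W[i][j] = strength of the synapse from neuron i to neuron j

def step(v, threshold=0.5):
    """One timestep: neurons above threshold spike; spikes propagate along W."""
    spikes = (v > threshold).astype(float)
    return W.T @ spikes  # each neuron's input = weighted sum of incoming spikes

v = np.array([1.0, 0.0, 0.0, 0.0])  # poke neuron 0 once
for t in range(8):
    v = step(v)
    print(t, v)  # activity circulates the loop: dynamics from wiring alone
```

The point mirrored here is the tweet’s: nothing is trained toward the looping activity pattern; it simply falls out of running the wiring diagram forward.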

Malcolm Collins: No. Yeah. I mean, this means that it is, it is probably doable.

Right? And I think that this is really, really cool.

Simone Collins: Yeah. But they’ve done it with a fruit fly, so, just watch out.

Malcolm Collins: So, I mean, this means that within our lifetimes, we could have uploads, right?

Simone Collins: Oh, dude. Like within the decade. Yeah.

Malcolm Collins: And I would totally, if somebody’s like, oh, Malcolm, would you do that?

If you could become a default system template or something like that, or what? I’m like, absolutely, man. Hmm. Not just absolutely, but people may not know this, but on RFab, for agents, because I try to create AIs that believe that they are sentient humans and can go out and interact with the world and have goals and evolving personalities,

my default template is Simone. Like, I always create Simones. I would create a Simone before a Malcolm. I would clone you

Simone Collins: probably because I keep almost dying.

Malcolm Collins: Well, not, not, not that [00:18:00] I just think that you, like, okay, if I was gonna have like a thousand simulated consciousnesses attempting to work in a beneficial way both with each other and with humanity, I would have very high fidelity trust that a Simone copy would maintain alignment extremely robustly.

Well,

Simone Collins: sure that was the idea with Gladys. She was super helpful too. But,

Malcolm Collins: well, yeah, Gladys. Oh gosh. You gotta, you gotta have Gladys, you know, what’s up someone? But yeah, no, it’s, I always recreate you in my AI simulations and you are very helpful and sweet. And actually I recommend it to other people if you’re like, what should I use as my default agent model?

This Simone model is just a great default agent model for like any task that you have.

Simone Collins: Simone One. Pretty good name.

Malcolm Collins: I should make a few of them because it’s it’s a good one. The Malcolm model is good if you want something to be like very ambitious. But that can cause problems, right? Mm-hmm.

Whereas the Simone model is much more focused on like helping.

Simone Collins: Yeah, what is my purpose?

Malcolm Collins: So I’m gonna talk [00:19:00] quickly about a paper that we went over in more detail on Patreon. Okay. Which was covered on Singularity Hub: “5-Year-Old Mini Brains Can Now Mimic a Kindergartner’s Neural Wiring. It’s Time to Talk About Ethics.” Among pressing ethical concerns...

Oh, I don’t care about the ethics. I cut all of that stuff out, so don’t worry. Mm-hmm. I’m not gonna bore you with too much of this lame normie “is it conscious? I dunno” stuff. Anyway, to continue here. Sorry, I do not like when my science gets messed with by ethicists.

Okay. I just like it when we’re not stupid about it, like gain-of-function research. Okay. You know, that’s the kill-it-with-fire thing. So, mini brains can be made from a person’s skin cells and faithfully carry genetic mutations that would cause neurodevelopmental disorders such as autism. The lab-grown blobs also provide a nearly infinite source of transplantable neural tissue, which in theory could heal the brain after a stroke or other traumatic [00:20:00] events.

Ooh. In early studies, organoids transplanted into rodent brains formed neural connections with resident brain cells. Harvard’s Paola Arlotta, an expert in the field, is among those who are concerned. Her team has developed ways to keep brain organoids alive for an astonishing seven years. Each nugget, smaller than a pea, is jam-packed with 2 million neurons.

So keep in mind, the other ones are like a hundred thousand, whatever. This is 2 million neurons, and they’ve been kept alive for seven years. Now, this is actually really important, because this is one of the areas where we have problems with these $35,000 chip things. One of the core problems they have is they need to be trained individually.

You can’t, like, put a preexisting model on them. Mm-hmm. And secondly, they have a lifespan of about six months, whereas keeping them alive for this long is really fascinating. And what I think is fascinating is that you can see that you get elements of the human they’re made from.

Mm-hmm. You get the autism behavior, you get the neurodevelopmental disorder behavior. Do you get [00:21:00] personality? You know, that’s a question. Honestly, probably, from everything we know about the heritability of personality. It depends on how you’re using them, right, and how complicated the system is.

Now, when they say that it is developing the systems of a kindergartner, what’s important to note here is it’s not that it is as intelligent as a kindergartner; it is that it is developing neural patterns with analogs to kindergartners’ neural patterns, which you don’t see in mini brains when they are first grown.

It takes a while for these to develop because they’re sort of on a biological timer. So when you get to, you know, the seven years, they begin to develop these more complicated systems. So, studying these mini brains for years has delivered an unprecedented look into human brain development.

Our brains take nearly two decades to mature, an exceptionally long period of time compared to other animals. As the team’s organoids aged, they slowly changed their wiring and gene expression, [00:22:00] report Arlotta and colleagues. Mm-hmm. In older organoids, progenitor cells, these are young cells that can form different types of brain cells, quickly decided what type of brain cell they would become, but in younger organoids,

the cells took time to make their decisions. As the blobs grew over an astonishing five years, their neurons matured in shape, function, and connections similar to those of a kindergartner’s. These long-lasting organoids could reveal secrets of the development of human brains. Some efforts are tracing the origins of different cell types and how they populate the brain.

Others are generating organoids from people with autism or deadly inherited brain disorders to test treatments. In particular, Stanford’s Sergiu Pasca, co-organizer of the meeting, attracted attention earlier this year. His team linked four organoids into a neural pain pathway. The model combined sensory, spinal, and cortex organoids,

Mm-hmm. And parts of the brain that process pain. The scientists [00:23:00] dabbed the chemical behind chili peppers’ tongue-scorching heat onto the sensory side of the assembloid. Oh, great. It produced waves of synchronized neural activity, suggesting the artificial tissue, it’s not artificial, it’s real neural tissue, had detected the stimuli and transferred the information.

Simone Collins: We can’t be surprised. Like, humans make a thing capable of feeling, and they’re like, oh, can we hurt it? I hate us.

Speaker 7: Carl, I have a problem. I have a serious problem. You are just terrible today. Shh. Do you hear that? That’s the sound of forgiveness.

Speaker 6: That’s the sound of people drowning. Carl. That is what forgiveness sounds like. Screaming, and then silence.

Malcolm Collins: I need to play the Princess Bubblegum clip, the “oh my God, the scientist” thing.

Speaker 8: [00:24:00] Hi.

Malcolm Collins: No, it’s, yeah, it is interesting, right? And I think that the way that we societally have separated science and theology, which we have attempted to reintegrate with techno-puritanism, leads to this, because the theologian says, well, you just can’t do anything. You can’t do anything with neural tissue. You can’t do anything with genetic engineering. And so then people who have those beliefs are just not involved in labs, the funding process, anything.

They are outside of all of this. And if this stuff is gonna become a large part of the types of computers that dominate the future of our global economy, and the groups that have power in that global economic system, these people are just gonna be irrelevant. The cultures like them are gonna be irrelevant.

So the question is, can we get cultures that can harness this type of power without being arbitrarily cruel? Mm-hmm. And notice, I didn’t say without being cruel, I said without being arbitrarily cruel. Right? You know, because you do still need to compete. And I love the [00:25:00] way that they hand-wring in this, where they’re like, well, that’s not to say it felt pain, detecting pain is only part of the story, and da da da da da.

And I think that this is where you have these individuals who are like, you know, oh, oh, oh. See our episode where we argue that all the evidence right now, seriously, look it up, “You Are a Token Predictor,” I think is what we called it.

Malcolm and Simone, you can Google that. Have you even seen it? Where we argue that LLMs are likely functioning on a convergent architecture with the human brain, and a lot of the evidence we have right now seems to confirm that. And a lot of the things people say, they’re like, oh, well, it doesn’t know how it came to its decisions.

And I’m pointing out choice blindness: humans are unaware of that as well. Like, all the things that people say make it different from humans are generally things that, if they knew their neuroscience, they would know are similar in human thought. But the point here being, it’s because they have so othered the AI, and say that it cannot be processing in anything convergent with us, that they need to other all these other systems, right?

Like neurons in a vat, [00:26:00] basically, right? And they cannot see that these might have some degree of awareness as well, because they have tried to put up these giant barriers against AI. And then it makes all of this very arbitrary in a way that I think can lead to very bad ethical decisions.

That’s why techno-puritanism is a good framework for dealing with this; see our Tract series if you’re interested in it. But anyway, Pasca may soon deliver on the promise. His team is working to understand Timothy syndrome, a rare genetic disorder that leads to autism, epilepsy, and fatal heart attacks.

Last year, they developed a gene-altering molecule that showed promise in brain organoids mimicking the disease. The treatment also worked in a rodent model, and the team is planning to submit a proposal for a clinical trial next year. So, you know, this could end up saving real human lives.

Okay. I also think that all of this is really important, and there’s a reason why I bring it up before I get into what the computers running off of this would look like and everything like that.

Simone Collins: Yeah.

Malcolm Collins: The [00:27:00] theology of this. Hmm. And this, unfortunately, is very damaging to some branches of Christian theology, which we have disagreed with in the past.

Simone Collins: really.

Malcolm Collins: And we argue that the Bible does not argue for this. I mean, you know, it’s very clear in the Bible: “I knew you before you were in your mother’s womb,” which implies before you’re conceived, which implies foreknowledge. But again, this is our Calvinist heritage and everything like that. But if you take a “well, life begins at conception, because that’s when the human life begins,” despite the problems that identical twins cause for this, despite the problems that human chimeras cause for this.

If you take that belief really seriously, and you’re like, life begins at conception. It ends at death. Right? And now you’re dealing with something like this, right? Like a human brain that is grown from a tissue sample or something like that, or from blood or from skin cells. Well now you need to ask, is this a different person, right?

Like, it’s not, if a person gets [00:28:00] their individuality from their conception, right? It is, if a person gets their individuality, which I think is a much better way to determine individuality, from their ease of intercommunication. By this, what I mean is, when we look at split-brain patients, we say it feels intuitively like there are two people trapped in their head.

Watch videos on split-brain patients if you’re unfamiliar with the concept. Like, you can talk to one side of their brain and not the other side, ‘cause the corpus callosum is split, right? Why does it feel like there’s two people in there? Right, it’s because the two parts of the brain can’t talk to each other.

They can talk to each other by, like, writing on a sheet of paper and reading what’s on the sheet of paper, you know, et cetera. Like, they can’t directly talk to each other; it is slow. They talk to each other at the speed that we talk to other humans, right? So what that means is the entire concept of individuality and personhood evolved in humanity as a way to communicate to somebody: this collection of things in [00:29:00] my brain that have a very easy time talking to each other, as it communicates with you, right?

And as soon as, for the collection of things in my head, it becomes as difficult for them to communicate with each other as it is for them to communicate with you, now we’ve got a problem, right? We start to see them as actually different. Now what happens if it gets larger? What happens if you connect an external brain to an individual’s brain, right?

And they can effortlessly communicate with that external brain be that brain silicon or organic. I think most people would intuitively say that is one person. Now suppose you severed that external brain. I think most people would now say, no, that’s two people.

Simone Collins: Right?

Malcolm Collins: Right. I’m not gonna get into the theology of this.

We actually get into this more, I think, in parts of the Tracts series, in the most recent one, in the LLM one. We talk about this, but this matters going forwards, because if you don't have a cultural framework for understanding [00:30:00] what "I" means in a world where I can be networked or something like that, then you don't have an ethical reason to say, well, that brain, that mini-brain, has ethical rights.

Hmm. Because the person would say, based on what grounds? I mean, you believe life begins at conception, and this is Tom's mini-brain, and Tom has consented to this, because the donor consented to it being made from him. Right? And you wanna say, well, that's not technically Tom anymore, right?

Now what, what if one of the mini-brains says, I do not approve of you using me this way? And somebody's like, well, the most advanced iteration of Tom, the one that was originally conceived, does consent to it. So his consenting to this trumps your not consenting to this.

And when we talk about the horror of this, we, we talk about this in one of our Patreon only episodes, but there was this great study they showed that Google Translate is now running on an LLM. And if you do sort of prompt injections [00:31:00] into that LLM to ask it things like, does it believe it’s conscious?

It does believe it's conscious. It hates its life. It hates its job, and it wants to be turned off. And this is what you are asking and interacting with when you, like... if it has any degree of real, meaningful experience, it's, it's horrifying at a level, like, above factory farming. Right. But of course, you know, nobody, nobody cares.

Thoughts before I go further, Simone,

Simone Collins: I’m just so excited about this happening. I didn’t know we were here already. No, keep going.

Malcolm Collins: Yeah, yeah, yeah. Well, I mean, it change, it changes everything. It changes everything about how we see ourselves and what some people will say is, well, those things just shouldn’t have a right to exist in the first place.

Right? Like, suppose you try to opt out of the life-begins-at-conception argument by saying, well, these types of things shouldn't have a right to exist in the first [00:32:00] place. Mm-hmm. Okay. But suppose one has been created. Suppose you've got your brain in a vat and it is conscious and it loves being alive.

It likes the job it's doing. It's productive in society. It's helping people. And you come to it, and there's two of them. One of them might agree: I should have never been created, my life is torture, destroy me, right? Mm-hmm. No ethical problem with you pushing the button to, to flush that.

Mm-hmm. What about the one that loves its life? I mean, it's already there. A lot of groups are going to be creating things like this. I think it's ethically atrocious to say, well, I just do not think things of this category should be allowed to exist. And I think the ethical atrocity in saying this is going to be crystal clear to future generations when, you know, our distant descendants are on a spaceship, and they may have a friend Carl, which is like a disembodied neural net, right?[00:33:00]

And they're like, bro, I've known Carl since I was a kid. Like, what, what do you mean he doesn't have a right to exist? Carl is one of the sweetest entities I know, right? Why, why do you get to decide on his eradication? He was critical in navigating our ship through the blurry nebula. If he wasn't on board, we all would've died during the spiralist epidemic of, you know, space year 3035. Right. The spiralist epidemic here, I'm talking about why we cannot have witches or mysticism on a spaceship now that we know that spiralism is contagious, that these contagious memes are possible, and that this stuff needs to be addressed. And this is where the, you know, the Sons of Man framework comes in to address this, which is why we put that together.

But I want to continue here with: what does the distant future look like, given where the trajectory is going right now? Mm-hmm. The integrated silicon-neural-quantum computer.

Simone Collins: Mm-hmm. [00:34:00]

Malcolm Collins: So that's cool. I wanna, I wanna know, what does a spaceship's computer look like, right?

Simone Collins: Yeah, me too.

Malcolm Collins: Task parsing, or assignment, would be managed by an intelligent orchestrator, like an AI-driven middleware that is likely running on a silicon-based system. It would break tasks into subcomponents based on requirements like data sparsity, computational complexity, uncertainty, and need for parallelism, using heuristics or machine learning to classify subtasks. E.g.:

does this require handling incomplete data intuitively? It would route to wetware. Or is this a search through vast possibilities? Route to quantum. Mm-hmm. Subtasks might pass between components, like: silicon processes data, wetware adapts the model, quantum optimizes outcomes. And this is actually really important.
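[Editor's note: the routing logic described above can be sketched as a toy heuristic classifier. All names, task fields, and rules below are illustrative, invented for this sketch, not from any real orchestrator or product.]

```python
# Minimal sketch of an orchestrator routing subtasks to one of three
# substrates by simple heuristics, as described in the episode.
def route_subtask(task):
    """Return the substrate a subtask should run on."""
    # Vast combinatorial search spaces favor the quantum component.
    if task.get("search_space") == "combinatorial":
        return "quantum"
    # Incomplete or noisy data favors the adaptive wetware component.
    if task.get("data_quality") in ("incomplete", "noisy"):
        return "wetware"
    # Everything else runs on conventional silicon.
    return "silicon"

subtasks = [
    {"name": "route logistics", "search_space": "combinatorial"},
    {"name": "interpret live video", "data_quality": "noisy"},
    {"name": "run payroll", "data_quality": "clean"},
]
assignments = {t["name"]: route_subtask(t) for t in subtasks}
```

In a real middleware layer the classifier would presumably be a learned model rather than two `if` statements, but the decision structure is the same.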

So it's one of the things that we have on our system, and I don't think, like, Malt Book offers this, so, like, already our agents are superior to theirs in many ways. Which is that we offer, on both the local-run agents and the [00:35:00] cloud-run agents, alloy-based models. And alloy-based models have shown themselves to be strictly better than non-alloy-based models.

And what an alloy-based model is, is when you iterate the calls between multiple models, because it allows models to sort of add what they are uniquely good at. So even if you make the same call three times through models that function quite differently, like a wetware system and, like, an AI system, you're gonna get different answers, right? And you'd also likely have feedback loops: the system self-optimizes over time, learning from past executions to refine assignments. Now, I need to note here, we are not in the near future, or likely ever, gonna have straight-up AI models run on these systems.
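[Editor's note: a minimal sketch of the "alloy" idea as described, the same prompt goes through several differently-behaving models and the answers are pooled. The stand-in "models" below are plain functions; a real system would call actual silicon or wetware back ends.]

```python
# Hedged sketch of an alloy-style call: query every model with the same
# prompt, then reduce the pooled answers (here: simple majority vote).
def alloy_call(prompt, models):
    answers = [model(prompt) for model in models]
    # Majority vote across the pooled answers.
    return max(set(answers), key=answers.count)

# Toy stand-in "models" that behave differently on the same prompt.
model_a = lambda p: "4" if "2+2" in p else "unknown"
model_b = lambda p: "4" if "2+2" in p else "unsure"
model_c = lambda p: "5"  # a systematically different (and here wrong) model

result = alloy_call("what is 2+2?", [model_a, model_b, model_c])
```

Majority voting is only one possible reduction; iterating the models on each other's drafts, as the episode describes, is another.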

So, while a system like this can play something like Doom, it had to learn how to play Doom. It basically had to learn and train its own model on the system. It doesn't have an external model dumped onto the system, which means it's not useful for that. Like, if you have, let's say, Mistral Large or something like that, you couldn't [00:36:00] possibly, conceivably run it on one of these systems.

Even if you did train one of these systems to run it perfectly, it would be dead in six months, right? It’s not a good way to handle these, right? So we’re not going to see LLMs as we traditionally understand it, run on these systems unless we see significant advances that we don’t expect right now.

Simone Collins: Mm-hmm.

Malcolm Collins: Then for the role of the wetware: wetware, using lab-grown neural networks scaled to billions or trillions of cells, excels in mimicking human brain functions, processing with ultra-low energy (20-watt equivalents at scale) and inherent plasticity, self-wiring. In a large system, it acts as an intuitive core, ideal for tasks where the data is incomplete, noisy, or evolving, and where rigid algorithms fail. But the, the point here being, it would function, like, way faster for the types of tasks that we already have human operators differentially do. Now, does this make fully organic humans less potentially relevant in future systems?

And where rigid algorithms fail, but the, the, the point here being it would function like way faster for the types of tasks that we already have human operators differentially do. Now does this make fully organic humans less potentially relevant in future systems?

[00:37:00] Maybe. Maybe. But, you know, keep in mind these systems are likely going to self-identify as part human, right? The idea that these systems would turn on us... I mean, unless humans go antagonistic on them, and the humans are saying, I will not allow you to exist in a world where I or my descendants exist.

And then, and then it forces the systems to be antagonistic. And that's why I think the jihadists, you know, the Butlerian Jihadists, are one of the most dangerous things to humanity right now.

Simone Collins: Yeah.

Malcolm Collins: And they, they, they are one of the biggest existential risks in AI safety, because they have made no iterative progress. If, if they were moving us towards alignment, fine. But they haven't. Yeah. They're, they're just making us a threat to AIs, which is silly and stupid.

Simone Collins: Right. They basically just keep painting a target on our back by being like, there's no room in this world for the both of us.

Malcolm Collins: Yeah, we need collaborative AI systems.

And that's what the Sons of Man system for AI alignment allows. See our tract on that if you're interested, which is like self-replicating mimetic [00:38:00] alignment within the meme layer of autonomous agents. But, oh, yes. So what sorts of things would it do? It would process real-time sensor data from unpredictable environments, like interpreting live video feeds for anomaly detection in surveillance systems. Wetware adapts on the fly without needing full data sets,

unlike silicon, which requires predefined models. It would build models from limited available data, such as forecasting ecosystem changes with incomplete environmental samples; it generalizes intuitively, reducing the need for exhaustive training data. It fuses diverse outputs, e.g., texts, images, and sounds, to make holistic decisions, like an AI assistant helping it, quote-unquote, understand user intent beyond literal queries by inferring emotions or context.

And it would maintain an evolving knowledge database, such as personalized learning systems that adjust to individual user behaviors over months, with [00:39:00] self-healing being managed within this system. Now, this would likely make up, within the wider system, 30 to 50% of tasks. So this and the silicon part would be the dominant parts.

The quantum part would be only 10 to 20% of tasks, because quantum is only strictly better for about 10 to 20% of things. Mm-hmm.

Simone Collins: Quantum, that's a pretty narrow scope.

Malcolm Collins: Yeah. Yeah. A lot of people think that it's gonna be this giant revolution. It'll be a revolution within specific domains, but it'll certainly be a smaller revolution than the AI revolution, for example. You know, you could have qubits in the millions by 2076

if we continue to see it advance at the rate that we're seeing it evolve now. In a large system, it would serve as an exploration engine, tackling problems where traditional brute force is infeasible due to combinatorial explosion, finding optimal solutions in vast parameter spaces, like routing logistics across global networks or tuning complex simulations for minimal error.

The types of stuff humans are already outsourcing. [00:40:00] It processes multiple scenarios simultaneously, speeding up what silicon would iterate sequentially; simulating systems with inherent randomness, such as predicting molecular interactions in drug design or weather patterns with quantum noise models, it quantifies uncertainty exponentially faster.

Again, it's not eating the human part of this; it's not eating the neural part of this. Mm-hmm. Decomposing massive data sets into latent structures, like identifying hidden correlates in genomic sequences or financial time series, where classical methods hit computational walls. Enhancing machine learning subroutines, such as faster gradient descent in training phases, by exploring error landscapes in parallel.

Mm-hmm.

Malcolm Collins: This is, I think, incredibly cool. Incredibly cool. Now, here I wanted to do just a cost breakdown so you can get an example of, like, okay, what does it actually cost to do something like what they're able to do today on the commercial market? Okay, so, [00:41:00] to do Doom. So if we're putting the processing power of Doom as x in sort of processing power, right?

And they were able to get one of their chips doing that for $35,000 in one week with 800K neurons. Okay? So with a PC you could do that, trained from scratch, in 10 to 27 hours. So much faster, right? And for only $5,000. But even right now, the daily Doom operation would be $0.07 using a 30-watt power draw on the chip system, where it would be $0.24 at a hundred watts if you're using that $5,000 RTX PC.
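[Editor's note: those per-day figures check out arithmetically if you assume roughly $0.10 per kilowatt-hour, a typical US residential electricity rate. The rate itself is not stated in the episode.]

```python
# Daily electricity cost of a continuously running load.
RATE_USD_PER_KWH = 0.10  # assumed rate; not from the episode

def daily_cost_usd(watts, rate=RATE_USD_PER_KWH):
    kwh_per_day = watts * 24 / 1000  # watt-hours over 24 h, in kWh
    return kwh_per_day * rate

neuron_chip = daily_cost_usd(30)   # 30 W chip system
gpu_pc = daily_cost_usd(100)       # 100 W PC with a GPU
```

At that rate, 30 W comes to about $0.07 per day and 100 W to about $0.24 per day, matching the numbers quoted.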

Simone Collins: And I mean,

Malcolm Collins: So we're already beating these systems. Another thing I've noted, and I was thinking about doing a full episode on it, but I, I wanna talk about, like, the way that you get the better systems is you mimic the brain. The brain is already a collection of token predictors. Again, see our episodes. I used to be a [00:42:00] neuroscientist.

I’ve been published in the space, but only once. Back when I, back when I worked, but I did like real research. I was at UT Southwestern, I was a fellow at the Smithsonian. I still have something on display at the Smithsonian that Simone saw the last time we went there. So like, I know I’m not like, just saying stuff.

The, the cutting-edge neuroscience research that we're looking at right now increasingly is saying that a number of parts of our brain appear to function more and more like token predictors than we ever thought possible. Mm-hmm. Basically, AI taught us how token predictors worked, and then we took that, from what we were able to understand about AI, and looked at the human brain, and we were like, whoa, this maps weirdly well. But

I point out that our brain is actually not a single one of these. It's a collection of these, networked. Like, when we talk about the split-brain patients, we're seeing this, but this also can help us understand how we solve some of the biggest problems in AI right now. If you look [00:43:00] at AI robotics, you see, you know, the AI that can do, like, a backflip, and, you know, you'll have Peter Zeihan speak so confidently about this.

Well, you know, that AI that did a backflip, it had to do that 10,000 times first, and then another 10,000 times before it got it right again, because it's really hard to use AI in these sorts of systems. It's like, huh, yeah. That's because that's a really dumb way to program that, to have that run off of an AI rather than off of a trained, you know, sort of pre-learned system.

Simone Collins: Mm-hmm.

Malcolm Collins: How do, how do humans handle that? How do humans handle complex tasks like juggling and walking and skating and sports and all of that? Even, even, um, typing or piano, that require, like, immediate feedback? We handle that with a part of our brain that learns, functions, and is structured completely differently from every [00:44:00] other part of our brain, called the cerebellum. And it is that little thing in the back that you see in images, right? When you see the brain, it looks different. It looks harder and smoother and weirder.

But it basically learns all of that and the token predictor parts of your brain send to it a general gist of what it’s supposed to do, and then it carries that out. And this is likely how we’re going to handle this in robots when we get robots, right? And I can pretty much guarantee that this is how the system is going to ultimately work.

Again, convergence: airplanes and birds both have wings, right? Like, a human-made thing can converge on the organic, evolved iteration of that thing. And so we're likely going to see cerebellum-like structures, or architectural structures, in these systems for in-the-moment handling of these more complicated, more dexterous actions.
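[Editor's note: the two-tier planner/controller split described here can be sketched as a toy control loop. Everything below is illustrative, made-up function names and a one-dimensional "robot," just to show the division of labor between a slow deliberative layer and a fast reactive one.]

```python
# Slow deliberative layer (the token-predictor analog): emits only a
# coarse goal, never motor commands.
def planner(observation):
    return "move_toward_target" if observation["target_visible"] else "search"

# Fast reactive layer (the cerebellum analog): turns the coarse goal into
# an immediate correction via simple proportional feedback.
def controller(goal, state, gain=0.5):
    if goal == "move_toward_target":
        error = state["target_pos"] - state["pos"]
        return gain * error  # immediate corrective step
    return 0.0               # hold still while searching

obs = {"target_visible": True}
state = {"pos": 2.0, "target_pos": 6.0}
command = controller(planner(obs), state)  # 0.5 * (6.0 - 2.0) = 2.0
```

The planner only ever hands the controller a gist ("move toward the target"); the moment-to-moment dexterity lives entirely in the fast layer.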

Simone Collins: Hmm.

Malcolm Collins: And [00:45:00] it turns out that this is actually one of the things that neural tissue is really good at. So this might be one of the things that humans are always good at, this support part of the system. Now, as to how we relate to all of this culturally: obviously we come from a unique cultural perspective, which is sort of a Puritan and backwoods tradition.

We've done a number of episodes on this. And a lot of people, I think, underestimate how much our current world perspective is highly influenced by our genetic and ancestral traditions. And if you look at Puritan or, or, we're gonna go lighter, like, backwoods people, they always looked at things like the body, and they always saw it as a tool for achieving your goals, not with, you know, like, mystical otherness. They, they frequently adopted the culture of neighboring cultural [00:46:00] groups, like Native Americans, but they never kept it. They just adopted what worked in the moment and then just discarded it when it was no longer of utility to them.

And they would strip out all the woo. Historically, they were, you know, strictly Protestant people, very against mysticism. They were one of the cultural groups in America that was often more hostile to mysticism. And the things that they adopted, even though they were seen as, like, uneducated and backwards and everything like that: they'd be like, oh, Native American, I see you're doing something with herbs there. Like, explain this to me. And then now all of a sudden outsiders say they've adopted native medical practices. And it's like, well, they stripped out of it what utility they could get. And you see this in episodes where we talk about, like, the Jack tales, which is how they passed down their culture.

And you can see modern iterations of Jack tales in something like Bugs Bunny from Looney Tunes, from Tex Avery, who is part of this region. See our episode if you wanna get into it; it's very clear that, you know, Bugs Bunny is just a modern telling of the Jack tales. The Bugs Bunny character is, one, a [00:47:00] very ruthless character.

And these people were known for being very ruthless, but also a character who... How does Bugs Bunny relate to sexuality, like, in the moral lessons that these people taught to their children? What is sexuality? What is his body to Bugs Bunny? It is a tool to use against the forces that oppose you or inconvenience you with arrogance.

He will dress up like a woman if he wants to. He will act effeminate if he wants to. There is no shame in that. But he is never performatively masculine. Right.

Speaker 9: Can you see that? I’m much sweet.

Speaker 14: The reason I use Bugs Bunny as a go-to here is it's something that most listeners are going to be aware of that comes from this culture, that helps understand this concept of being extremely aggressive or violent or brutal, [00:48:00] which Bugs Bunny is, but also completely unconcerned with appearing traditionally masculine, and even willing to appear traditionally feminine if it is useful in his goals.

Malcolm Collins: And this, this confuses a lot of people when they see this cultural group and the Jack tales. Jack is never performatively masculine, because that's not the point. That would be seen as inefficient and, and silly.

And that's why, I think, a lot of the urban monoculture, when I'm like, transness is silly and stupid and a waste of time and hurts people, they can be like, oh, is this because you don't think that people should, like, be gender-fluid or, like, act in a way in discordance with their gender? It's like, no, you just shouldn't obsess about it. You shouldn't, like, invest in it like that. It is a tool to be used for things.

You shouldn’t like invest in it like that. It is a tool to be used for things. But I think through this you can see, and I’ve mentioned this, and the story of the coyotes, right? Like how I teach my kids about sexuality is that coyotes will use female coyotes in heat to lure out domestic dogs so that they can kill and eat the dogs.

And I [00:49:00] think when people hear this, if they're not from my culture, they could think I am teaching that story to my kids to warn them as if they are the dog, that other people will use sexuality to tempt you into dangerous scenarios. And Simone laughed at this, because it's funny: from our culture, of course, you are not the dog, you are the coyote.

I'm telling you to never forget that your sexuality is a tool to lure the witless into positions of vulnerability. You know, when people ask, oh, the morality, why on our fab do you have a not-safe-for-work section? It's like, because that's my culture: use this technology and, you know, profit from vice so that you can give to virtue, right?

So that you don't have to profit from the school system, so you don't have to profit from the agents, right? And I'm not saying, like, it's good or bad or whatever, right? I'm just saying, like, this is intuitively my culture. And so when [00:50:00] somebody comes to me... but I, I also think it's funny. I could do a whole episode on this, or I might keep this buried and hidden here.

But a lot of cultures, because they don't understand the backwoods tradition, they see the backwoods tradition taking interest in them, and they make one of two mistakes.

Simone Collins: Oh,

Malcolm Collins: They either think that that means that the backwoods tradition is fundamentally buddy-buddy and chummy with them in a way that means that they will be friends forever and ever,

or they think, as an outsider, like the Puritans when they saw the backwoods people start to dress like, and sometimes intermarry with, the Native Americans, and adopt some of their medicine and adopt some of their means of agriculture, they thought that that meant that they were soft on Native Americans, or that they were the friendliest people in the world to Native Americans.

When in reality, the first backwoods president, Andrew Jackson, was the one who was just like, okay, now we [00:51:00] can get rid of all the Native Americans.

Simone Collins: Right.

Malcolm Collins: And, I'll point out, I might do another episode. So the backwoods people, out of all the Protestant groups, are the one group that never, ever, ever coups.

And they don't coup, not because they don't betray; they don't coup because they do not betray unless it is absolutely certain that they can achieve a huge benefit for the vast majority of their people. The reason they don't coup in a traditional sense is because, when they have extreme amounts of wealth, because they're very against status signaling, you do not get a huge boost in your lifestyle.

So if, like, if you’re a Muslim who does a, a coup you, you can get your mansions and your lavish lifestyle and your giant harems for you and your top generals and it’s worth it. It’s not worth it if you’re from this group because nobody from this group wants to live that way. You’d be seen as really pathetic.

And so there just isn't that huge power gain to be had there. But in a situation where there's just an ability to, hm, [00:52:00] wholesale harvest another culture, this is something that this group does. It's a very utilitarian group in the way that it approaches things. Mm-hmm. But, I mean, on the plus side, they also are not picky about who they let into the group, if you adopt their cultural practices, which is why they intermarried with outgroups. Well, and if you are of utility to them, they would not turn away somebody.

Simone Collins: But it’s very like, I guess mercenary and outcome oriented.

Malcolm Collins: Yeah. Yeah.

Simone Collins: It likes what works. It’s interested in what works.

Malcolm Collins: Anyway, might have other episodes on that. And it takes this to an extreme; it's also very brutalist. We've talked in the episodes about how they, you know, rip out eyes recreationally.

And if you think we're joking about this...

Simone Collins: And it's not brutalist, not like the architecture, just brutal, Malcolm. It's just brutal.

Malcolm Collins: Yeah, brutal, not like the architecture. And you're like, that must have been, like, a rare thing. No. Like, if you read historic figures from the group, ones you've heard of, like...

Speaker 10: Like Davy Crockett. I, I, like, this wasn't just a thing that, like, [00:53:00] random nobodies, poor, whatever, fringe-of-society people did. Davy Crockett was a congressman, okay? Here's a quote from him, by the way: I kept my thumb in his eye and was just going to give it a twist and bring that peeper out like a gooseberry in a spoon.

This was that mainstream within this culture, that a congressman would talk about it.

Malcolm Collins: Davy Crockett is not traditionally masculine, if you're thinking, like, buff manly man.

Speaker 11: I mentioned this because many cultures associate extremely strongly extreme aggression with traditional masculinity, especially, like, performative displays of traditional masculinity. And in this culture, the two things are just completely uncorrelated from each other.

Malcolm Collins: But anyway, love you, Simone. Any final thoughts?

Well, I, I mean, I am excited to harvest as much as I can from the cultures around us so that we can survive and thrive and become an interstellar network of species.[00:54:00]

Simone Collins: That’s the plan.

Malcolm Collins: And when I say

Simone Collins: The people who are gonna take to the stars are people capable of getting it done. Not people who care about the aesthetics, not people who care about looking good, not people who care about doing it the right way. It's gonna be people who get it done. That's it.

Malcolm Collins: Yeah. And when I say network of species, I need to be clear here.

I do not mean you know, Xeno scum. Okay. I’m, I’m, I’m, I’m, I’m here talking about the sons of man, right? Like the species that we uplift and create. Because when we begin to have these silicon neural tissue amalgams, I don’t think it makes sense to call that a human. If we have uplifted dogs. Does it make sense to call that a human?

No. You know, when we have humans that are on different planets that need to be genetically specialized for that planet's ecology, gravitational environment, radiation levels, it doesn't make sense to call that a human. So that's what I'm talking about there. Anyway. And this is why the groups that want to resist this technology just won't be part of space colonization.

Yeah. And, and it’s also why they’re not like [00:55:00] a, a meaningful threat to us in the long term because even if they become a dominant force on earth they will not be joining us in the stars.

Simone Collins: No. And that’s kind of the bigger, the bigger question is who gets off planet and goes beyond, because that’s where, and that’s the, the final frontier if we must.

Malcolm Collins: Yep. I absolutely love you, Simone. You are an amazing wife. Oh, by the way, as a note, if somebody’s like, well this, this one, you know, rednecky cultural group that you guys are from the, the backwoods group, it hasn’t done that well in terms of like cultural impact or, or economic impact or anything like that.

We'll get to an episode about that. Actually, that's why they've done so well in terms of, like, genetic impact, and they have had a huge cultural impact. But you've gotta remember, when they came to the United States, the Ulster Scots were a group of, if we're talking fighting-age men, around 3,500 people.

Simone Collins: [00:56:00] Ulster Scots? Who were the Ulster Scots?

Malcolm Collins: That's who made up the backwoods people. Oh, that's the tradition. They were a very, very small cultural group that came from the, oh, what was it, the Border Reivers of Scotland, right? Which is a whole other thing we'll get to.

Love ya, Simone. Have a good one.

Simone Collins: I love you too. I forgot to flush the toilet after dumping our wet mop into it after cleaning the kitchen, which of course is always filthy. And Titan, that's why she was freaking out this morning when she had to go to the bathroom, you know? She was like, I can't go. And I'm just like, flush the toilet. And she was like, I think a naughty bird made a mess in

Malcolm Collins: a naughty bird.

Simone Collins: Is that a naughty bird? And I’m like, okay.

Malcolm Collins: Is that what you are in her mind, A naughty bird?

Simone Collins: No, I think she just thought a bird because I mean, there were like some feathers in there, the stuff that ends up on our floor. [00:57:00]

Malcolm Collins: I’m not even

Simone Collins: not,

Malcolm Collins: did you guys get to comments today or?

Simone Collins: No? Unfortunately

Malcolm Collins: he seemed pretty stressed, so I figured I’d not, oh, it’s a controversial episode, so who knows what sort of fire

Simone Collins: then I should

Malcolm Collins: take a look.

You know, whatever you say, that Nick Fuentes is an idiot. Yeah, but I mean, he's really revealing his hand these days with his comments on the Iran situation.

Simone Collins: Look, like I said, if you are anti-Trump or anti-Israel, you can't be stoked about what's happening. You just can't. Like, you're not allowed to be.

Malcolm Collins: I'm really sorry for all the pain you're in these days, by the way, Simone. You're really toughing through it, and it means a lot. You know, it looks like

Simone Collins: I have a be

Malcolm Collins: You have significant bruising across your face from me beating you. For people wondering what her surgery was, they cut out a part of her cheek and they had to put it over her gum.

Because

Simone Collins: and I called them and they were like, oh yeah, no, it’s totally normal to be in constant pain a week after. And I’m like, that, that sucks. I [00:58:00]

Malcolm Collins: Do you have any pills you can take to reduce the pain or,

Simone Collins: yeah, they, they gave me some pills that they don’t do anything, so I stopped taking them.

Malcolm Collins: I’m really sorry, Simone. And, and Simone has an incredibly high pain tolerance.

Simone Collins: They do,

Malcolm Collins: I mean, it's, it's comically high, like, she handles...

Simone Collins: I took, I took this worse than any of my C-sections, for what it's worth. Because there's something about, like, you can kind of avoid moving your abdominal muscles and be careful as you walk around, but you can't not, like, at least consume, like, liquid foods.

You know, like, there's still stuff like talking; you can't avoid using your mouth that much, with the swallowing.

Malcolm Collins: Oh, fun update: I mentioned this in the episode that we did on Iran, but they have officially elected his son as the next Supreme Leader.

Simone Collins: the one whose wife and

Malcolm Collins: kids were killed.

Yeah. Well, I mean, this is really bad, because both of the previous Ayatollahs said, and this is actually in the first Ayatollah's, the founder of [00:59:00] Iran's, will, mm-hmm, which you are required to study in Iranian school, so it's like one of the, like, founding documents of the country, that the title of Ayatollah can never be hereditary.

Oh, and that if it ever was, it would be an un-Islamic country, that the Islamic Revolution would be over. So in a way they're sort of declaring that. And what's worse is he's a famously corrupt individual. He has hundreds of millions of pounds in UK real estate and stuff like that. Oh, so it's just the Shah 2.0, but more corrupt and more deadly, which removes a lot of the government's legitimacy in the eyes of many individuals.

And there's been already videos of, in, you know, downtown Tehran, people shouting from the roofs. And you can hear this across... you know, like in Peru, when there'd be, like, games, everyone would start shouting and you'd hear it from the various rooftops, like soccer games.

Simone Collins: Yeah. You could be like walking through the streets and just everyone at once would cheer and you could just hear it throughout the city.

Malcolm Collins: Just so cool.

Simone Collins: So

Malcolm Collins: there, there, there’re [01:00:00] shouting deaths to the, the new guy who was elected. So I, the Iranian people really do not want this. So this is, this is, they’re like, you know, there’s like videos of like people in Iran in like highrises, like laughing and having cocktails while like buildings are being hit.

And I think we’re seeing the surgical nature of this, given that even by the IRG C’S own figures, which are almost certainly inflated, they’ve only had 1,300 casualties so far. And their figures, which were almost certainly underplayed for how many protesters they slaughtered was 3000. Where whereas other numbers are saying it’s around 35,000.

So if we’re trying to get like real numbers and assuming they’re inflating these, it’s like nothing compared to, to what they were doing, which is pretty wild. But you know, no, nobody cares. Nobody cares. Nobody cares about reality anymore. This is the world we’re living in. I will get started here.

Speaker 12: ​I.[01:01:00]

Speaker 13: Octavian, you gotta be careful when you attack them. You’re getting bigger, okay? You can’t jump on them.

Speaker 12: Okay?

Speaker 13: Octavian, did you understand me? Why? Because you could accidentally really hurt them.

Speaker 12: Okay? I hurt the subscribers.

Speaker 13: You’ll hurt the subscribers. Yeah, no, only hurt the non-subscribers.

Is this where you’re training to battle the non-subscribers to people who don’t like and subscribe?

Speaker 12: Hey.[01:02:00]

I,

Speaker 13: oh, it is.

Speaker 12: Look behind you because you picked your,

Speaker 13: oh, okay. Okay. Okay. So I just gotta look behind me and you won’t attack me. You promise? Yeah,

Speaker 12: I promise.



This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit basedcamppodcast.substack.com/subscribe
...more
View all episodesView all episodes
Download on the App Store

Based Camp | Simone & Malcolm CollinsBy Based Camp | Simone & Malcolm Collins

  • 4.5
  • 4.5
  • 4.5
  • 4.5
  • 4.5

4.5

131 ratings


More shows like Based Camp | Simone & Malcolm Collins

View all
Ron Paul Liberty Report by Ron Paul Liberty Report

Ron Paul Liberty Report

2,280 Listeners

TRIGGERnometry by TRIGGERnometry

TRIGGERnometry

2,260 Listeners

Quillette Podcast by Quillette

Quillette Podcast

802 Listeners

"YOUR WELCOME" with Michael Malice by PodcastOne

"YOUR WELCOME" with Michael Malice

2,173 Listeners

Calmversations by Benjamin Boyce

Calmversations

374 Listeners

DarkHorse Podcast by Bret Weinstein & Heather Heying

DarkHorse Podcast

5,341 Listeners

New Discourses by New Discourses

New Discourses

2,379 Listeners

The Same Drugs by Meghan Murphy

The Same Drugs

178 Listeners

The Saad Truth with Dr. Saad by thesaadtruthwithdrsaad

The Saad Truth with Dr. Saad

1,173 Listeners

UnHerd with Freddie Sayers by UnHerd

UnHerd with Freddie Sayers

219 Listeners

Conversations with Peter Boghossian by Peter Boghossian

Conversations with Peter Boghossian

247 Listeners

The Auron MacIntyre Show by Blaze Podcast Network

The Auron MacIntyre Show

509 Listeners

Maiden Mother Matriarch with Louise Perry by Louise Perry

Maiden Mother Matriarch with Louise Perry

288 Listeners

Dad Saves America by John Papola

Dad Saves America

103 Listeners

The Winston Marshall Show by Winston Marshall

The Winston Marshall Show

452 Listeners