
After a six-week hiatus, Cameron and Steve return for a sprawling, charged conversation about AI, politics, ethics, and the future of civilization. Steve reveals he’s been 3D printing buildings for TV, while Cam unveils his bold new concept: _Chapter 3_, a movement to engineer the next phase of humanity before AI and robots rewrite society by default. They dig into Mirror World drift, political alignment tools, and why Australia isn’t even remotely ready for the revolution already underway. There’s talk of AI-led political parties, the death of Google search, capitalist collapse, and even starting a cult. Welcome to the next chapter.
[00:00:00]
Cameron: This is futuristic, episode 39, recorded on the 16th of May, 2025. Our first show in six weeks. Steve Sammartino
Steve: I’m so sorry. I didn’t know it was that long, but we’re back and Cameron’s in the house, ready to learn us good, including English and grammar.
Cameron: Well, look, there’s been a whole lot of things going on, um, in the world of tech and AI in the last six weeks since we’ve been busy doing other stuff. Steve, do you wanna gimme a quick burst of, uh, what you are proudest of tech-wise in this period, but since we last spoke?
Steve: Yes, so I have, uh, been doing 3D printing for a national TV show. Printed five buildings in five days. I can’t say who it is, but its initials are The Block. So that is
Cameron: So it’s not your TV show. I thought this was your TV show. [00:01:00] You,
Steve: mine.
Cameron: doing it for
Steve: Yeah. Look, I, I think I can tell people I can’t show anyone anything, but, uh,
Cameron: five buildings.
Steve: Yep. In five
Cameron: This is with, uh, what’s the name of your
Steve: A
Cameron: building company?
Steve: 3D with Tommy
Cameron: That’s right.
Steve: named after him because I’m not an egocentric guy. And, uh, this could be the breakthrough we’ve been looking for. ’cause we’ve, uh, we, uh,
Cameron: Sammo 3D doesn’t sound as good. Sammo 3D isn’t as good as Macro 3D.
Steve: real good. I
Cameron: It does, yeah. Yeah, yeah.
Steve: so that’s that. And the other thing is I’ve been thinking a lot about Mirror World drift, and I just posted, uh, a blog on that and I had a
Cameron: Explain.
Steve: was awesome.
Well, I think that we’ve created this mirror world, which has been explored by people like Kevin Kelly, where we create a proxy for the world that we live in. But increasingly this proxy, which used to be just the digital version of us, is not us. It starts out with us using AI as tools, and then agents, and then proxies, and then the AIs talk to the AIs, and then they [00:02:00] develop language and conversations where we just drift out of this mirror world, because it’s no longer relevant to us or for us, and it becomes almost a new sphere.
Uh, which was something that was popularized in the early, uh, 20th century, where we kind of opt out and it becomes almost a new species, like an ocean where we just dip our toes in. But there’s a whole lot of species in there. We don’t understand what spawned them. We can’t talk to them, we don’t know them.
But like another big ecosystem, it has a huge impact on our lives, but it becomes this other world that we are not really associated with, even though we built it.
Cameron: Yeah, I, I, look, I think that’s kind of inevitable, um, not just Kevin Kelly, but I know that, um, Eric, um, fuck, what was his name?
Steve: Ler.
Cameron: No, no, no. The former CEO of Google for a long time,
Steve: Schmidt.
Cameron: Eric Schmidt’s been talking a lot about this for the last year or two, [00:03:00] how AIs will start to develop their own language that’s more efficient, and then they’ll start talking to each other, and he says that’s when we need to pull the plug on the whole thing.
But that’s not gonna happen.
Steve: No.
Cameron: Um, yeah, I think that’s inevitable, and it’s very Philip K. Dick-y. Uh, just this whole idea of human intelligence spawning a new kind of intelligence which becomes so vastly different to our own intelligence that we, you know... Actually, one of the show notes I had: one of the things that I watched a couple of weeks ago was a YouTube interview between Ben Goertzel and Hugo de Garis. Two guys I know a little bit. Hugo and I were on stage together at a Singularity conference about nine or 10 years ago down in Melbourne. Um, they’ve both been AI researchers for decades, and they were talking about where things are at, and, uh, Hugo was talking about alignment.
You know, you hear the AI researchers talk about alignment, which is to make [00:04:00] sure that the AI’s values are aligned to human values. And Ben said he thinks it’s kinda like squirrels at Yellowstone National Park. Like, are human values aligned with squirrels’ values? I guess at some level, you know, we both rely on oxygen. We both rely on the climate not getting too hot. We value certain things. But really, you know, we look at squirrels, we find them cute and interesting, and generally speaking, we don’t wanna harm them. We don’t wanna hurt them. We want them to run around and do their thing, but we don’t really think about them on a day-to-day basis unless you’re a park ranger.
Steve: They’re outside of the consideration set, unless you’re specifically working in ecosystems and their maintenance and importance. And I think Bostrom talked about that too, in his first artificial superintelligence thing. He said, look, if we want to build a highway, we don’t want to hurt the ants, but if they’re in the way, the highway’s going in. It doesn’t matter.[00:05:00]
Cameron: Mm-hmm. Yeah, so they were basically saying, and I think this is right, that if we have a superintelligence, its relationship to us will be like our relationship to squirrels or ants. Anyway. Listen, I wanna tell you what I’ve been thinking a lot about since you and I last spoke, and I’ve been dying to speak to you, because you are the guy I want to talk to about these sorts of things, right?
Steve: you.
Cameron: You are the, you’re the only
Steve: Someone
Cameron: someone wants to talk to you. You’re the only guy I can have a serious conversation about this stuff with. It’s politics. So we just had a federal election here, and of course my number one political issue right now, apart from legalization of cannabis, is what are we doing to prepare our society for the AI robot revolution that’s gonna hit in the next couple of years?
Uh, Steve’s doing a selfie. I’ve gotta put my gang sign up,
Steve: we got a, yeah,
Cameron: gang sign.
Steve: we go.
Cameron: Um, [00:06:00] and, uh, no political party that I was aware of was even talking about it. What are we gonna do about the AI robot revolution in the next five years? Not even on the fucking agenda anywhere.
Steve: So, of course, not on the agenda at all. But think of it like this: they didn’t even have the courage to truly talk about the property crisis, and that’s already here, and people are living in it and can’t get out, or not living in it. Sorry, no pun intended there. And they wouldn’t even talk about that.
Like, talk about the wall, the one we’re already in. They’re just ignoring it, pretending it’s not there, because they’re too scared they’ll get voted out. Let alone something they barely understand. I mean, disappointing, but not surprising.
Cameron: I think the Labor Party, you know, to give Albo his due, did say they’re gonna invest in building 23 houses in the next 10 years or something like that. So, you know,
Steve: go.
Cameron: for big fucking vision, Albo,
Steve: Oh,
Cameron: you’re really killing it, mate. Killing it. [00:07:00] Killing it. Um, so you know, that sort of didn’t happen, that wasn’t on the agenda. But I’ve been thinking a lot about the future of our society and what we need to do to engineer it. And we’ve talked a little bit about this before, but I’ve been working on this idea I’m calling Chapter 3. And Chapter 3 is a movement that I wanna start with you, and I call it Chapter 3 because I basically figured we’re moving into the third chapter of humanity. And the way I think about it, the first chapter was everything that happened up until the Industrial Revolution.
Steve: Yeah.
Cameron: 10,000, maybe we could say a hundred thousand years. That was the first chapter of humanity, and it was basically manual labor. Right? Then we got to chapter two, which was the Industrial Revolution through to today, [00:08:00] everything that’s happened in the last 250 years, let’s say roughly 300 years. Uh, chapter three is the AI and robot chapter. It’s the singularity chapter, right? What it means to be human, how we live, how we interact, how we survive is gonna be as vastly different from the world of the late Industrial Revolution as the late Industrial Revolution is from how people lived in the Dark Ages or the Middle Ages, right? And I’m a big believer in the fact that we need to engineer that as much as we can. We need to be thinking now about what do we want, what’s important to us? What are we willing to sacrifice? What do we most desperately wanna protect? What do we need to engineer to get the best possible outcome, as far [00:09:00] as it is within our ability to engineer it?
Because a lot of things, when we have a superintelligence, obviously are gonna be completely out of our domain and our control. So I’m pulling together Chapter 3, and I want to get together the best thinkers, the best minds, starting locally and then maybe going internationally, to start to think about what does this look like?
Where is this dialogue happening? I know that there are definitely dialogues like this happening in certain rooms, maybe at Davos, maybe in Silicon Valley enclaves where your billionaires are gathering.
Steve: there. Look.
Cameron: Yeah, right. Um, as opposed to Davros, who’s probably at Davos. Davros is probably at Davos. Um, he comes out of, I think that’s the backstory of
Steve: If
Cameron: He came outta Davos. Yeah, he’s Elon Musk in a wheelchair, uh, after he did too much plastic surgery, and then he becomes Davros and creates the Daleks. Anyway, Doctor Who, uh, moving on. So one of the things that I’ve done [00:10:00] recently is go into ChatGPT and ask it to help me build my political profile. I did it sort of with a view towards the election, but after I did this, basically it said, dude, you don’t fit anywhere in the Australian political
Steve: yet.
Cameron: completely, you’re completely off the fucking tree. And I got it to, you know... Have you used Vote Compass? Right, the ABC’s Vote Compass?
Steve: used it.
Cameron: oh. ABC’s got their own version of this compass tool, which is being used around the world today. It basically asks you high level questions. How do you feel about orcas?
How do you feel about housing prices? How do you feel about climate change? How do you feel about L-G-B-T-Q rights? How do you feel about this? And then it tells you where you sit
Steve: Mm.
Cameron: a political axis and which parties are closest to you. I do it, I come out left of Che Guevara, right? So it’s like, dude, you need to move to Cuba.
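The kind of tool being described here can be sketched in a few lines: score agreement with a set of statements, map the answers onto a two-dimensional political axis, then find the nearest party by distance. This is a toy illustration only — the statements, axis weights, and party positions below are invented for the example, and real tools like Vote Compass use calibrated questionnaires and expert-coded party positions:

```python
import math

# Hypothetical statements: each maps a 1-5 agreement score onto a
# (economic left-right, social conservative-progressive) axis weight.
STATEMENTS = [
    ("The government should fund more public housing", (-1.0, 0.0)),
    ("Cannabis should be legal for personal use", (0.0, 1.0)),
    ("Company tax should be cut to boost investment", (1.0, 0.0)),
    ("Climate targets should override mining royalties", (-0.5, 0.5)),
]

# Made-up party positions on the same two axes, for illustration only.
PARTIES = {
    "Party A": (-0.8, 0.7),
    "Party B": (0.6, -0.4),
    "Party C": (0.1, 0.9),
}

def profile(answers):
    """Map 1-5 Likert answers onto an averaged 2-D political position."""
    x = y = 0.0
    for score, (_, (wx, wy)) in zip(answers, STATEMENTS):
        strength = (score - 3) / 2  # rescale 1..5 -> -1..+1
        x += strength * wx
        y += strength * wy
    n = len(STATEMENTS)
    return (x / n, y / n)

def closest_party(position):
    """Return the party whose (made-up) position is nearest by Euclidean distance."""
    return min(PARTIES, key=lambda p: math.dist(position, PARTIES[p]))
```

Strongly agreeing with the left-leaning statements and rejecting the tax cut lands a user near the hypothetical "Party A"; all-neutral answers land at the origin. The 50-question GPT version Cameron describes next is the same idea with deeper questions and free-text analysis instead of fixed weights.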
Steve: And you don’t. You do. And you don’t. But anyway, we’ll
Cameron: Yes. Well, that’s what GPT said. I should read you [00:11:00] GPT’s analysis. Like, dude, you’re all over the fucking map. Um, but I got GPT to build a profiling tool for me that asked me 50 way more in-depth questions, then it analyzed my responses and told me, you know, kind of where I fit in terms of my political profile.
It helped me coalesce what is important to me, what my morals and values and ethics are when it comes to politics, right? And again, it said, dude, there’s no political party in Australia that even comes close to any of this kind of stuff, right? A little bit of this, a little bit of that, but mostly it’s off the radar. So what I’m trying to figure out as part of Chapter 3 is how do we build a tool that enables everyone to go through this kind of exercise? To [00:12:00] really think deeply about what’s important to them politically. ’Cause that’s really the question of where do you want society to be. Like, whenever I talk about the fact that I’m a sort of, um, a communist on my investing show, I get emails from members saying, how can you be talking about value investing and talking about Buffett
and at the same time say you’re a communist? I say, well, in my brain, there’s two sides to it, right? Where do I want society to be? I want society to move towards a place where everyone’s needs are taken care of. Everyone is looked after, regardless of your earning capacity or how pretty you are, or how many Instagram followers you have. I want everyone to feel happy, safe, fulfilled, be able to eat, have shelter, protection, get fulfillment, have an education, have healthcare, all those sorts of things. The question that I have is how do we get there from here? And to me, communism is the only political philosophy I’ve ever come [00:13:00] across that really has a plan for that. It has a vision for that kind of a world.
The question then is, how do we execute that? How do we execute that in a way that’s moral and ethical, and viable economically, et cetera, et cetera? These are just problems to be solved, but it has a vision for where society could be if we figured out answers to all of those problems. Capitalism doesn’t.
Steve: No, wait a minute. Adam Smith talked about that, in terms of human flourishing. And he talked about the idea of allocating resources more effectively, using the market to do that. But he also talked about the idea of government setting boundaries and, I guess, minimums, in the capitalistic sense. What is that bare minimum?
And I imagine your minimum threshold of what that flourishing looks like, with all the things that you mentioned. And the mixed economy tries to do that, to provide a [00:14:00] benchmark or baseline of access to resources, for self-improvement and to look after our most vulnerable. Anyway, keep going.
Cameron: No, you’re right. And Adam Smith, as you and I know, but I’m not sure everyone knows this, you know, wasn’t an economist, he was a moral philosopher,
Steve: Mm-hmm.
Cameron: he was talking about what was a good, moral way to build a better society. What we know now that he didn’t know is that laissez-faire capitalism fails. It doesn’t work. The United States, as the longest-running experiment in laissez-faire capitalism, is, you know, the classic example now that it doesn’t work.
Steve: Wow.
Cameron: it creates a shit load of money, but it also creates such an endemic level of oligarchic corruption that it needs
Steve: uh,
Cameron: to be regulated very heavily.
Steve: too. Smith said that the one fatal flaw of capitalism is that power begets power. And every now
Cameron: Yeah.
Steve: you need to have a redistribution of power, because monopolies, or duopolies, or [00:15:00] very concentrated economic systems and wealth, are almost inevitable. So then you have to recalibrate every now and again.
And, uh, I would argue we’re at a stage now where we need that recalibration.
Cameron: Yeah, but those recalibrations are extremely violent usually and painful.
Steve: They become difficult to get. And now, at this point in time, we’ve got media capture to a greater extent than we had during the age
Cameron: I.
Steve: um, yeah. The, what was it, the Gilded Age?
Cameron: Look, there have been plenty of studies done on this. There’s a book I read on this not that long ago that looked at historical oligarchies, and it said, across history, going right back to Athens, places like that, there are only four ways that oligarchies have ever ended in history: a civil war, a major international war, a major famine or a plague, or some other form of societal collapse.
Steve: Yeah,
Cameron: Right. The collapse of climate change, something like that. There’s only four ways that oligarchies ever end, and [00:16:00] none of them are good. Right? And it’s a major civilizational reset. It’s either permanent, in some cases
Steve: Hmm.
Cameron: you know, a generational recovery. It’s not pretty. But you know, that’s not a vision.
We’re not working towards a vision. We don’t have a Department of Vision in Australia, and that’s dumb. You know, I’ve always said this,
Steve: We need the ideas department.
Cameron: I wrote about this in The Psychopath Epidemic, where I said, you know, economics is supposedly a subfield of morality, of ethics, right? Economics sits under ethics.
So we have a treasury, we have a department of economics. We should have a department of ethics that says, okay, you can’t have economics without ethics. Ethics should govern how economics works. And in the Westminster system, we don’t really have a department of ethics. We have precedents, we have legal precedents, and we have a [00:17:00] constitution, and we have these sorts of things, but we don’t sit down as a people and go, where do we want to be as a nation, ethically? How do we want to be living 10, 20, 50 years from now? And what do we need to do to get there? Capitalism doesn’t really encourage that. It just throws it to the wind and says, we’ll figure it out as we go.
Steve: But for a long period of time it was a good measure. Way back when, the dollar, or money, was a good proxy for wellbeing, because so few people had access to resources and clean water and all of these things. And that’s why GDP was a functional measure, because we were increasing the absolute wealth and relative wealth
of societies, which created infrastructure and resources, access to food, transportation, healthcare, all of these things. And that dollar was a pretty good proxy. But in a global economy, where things are worth different things in different markets, and you have all of this arbitrage that wasn’t possible before, that dollar is no longer a good measure for general wellbeing.
So there’s now[00:18:00]
Cameron: And it never really was either because we destroyed the environment in the process of increasing GDP,
Steve: not
Cameron: we fucked the,
Steve: A lot of things
Cameron: yeah,
Steve: in that, in
Cameron: exactly.
Steve: were not costed, and we’ve even had attempts to try and cost these externalities, everything from carbon credits or whatever, and it always gets kiboshed by the oligarchy.
Cameron: So getting back to Chapter 3 and political profiling. You know, trying to figure out how do we build these sorts of tools that get people to think more deeply about what we want our society to look like 10 years from now, and what we need to put in place to have the best possible chance of achieving that, as opposed to just winging it, which is what I feel we are doing right now.
And hoping that Sam Altman and Elon Musk and Demis Hassabis and all of these guys don’t fucking land us in a huge ditch. Which, let’s be honest, most of them are probably gonna do [00:19:00] that. Most of them are gonna handle it really badly. Um, if you look at Zuck and Musk, uh, not sure I want them to have any power over the future.
Uh, Sam and Demis, a little bit. I trust them a little bit more than Zuck and Musk. But, you know, I don’t trust the future of humanity in the hands of a handful of billionaires and Trump, but that’s kinda what we’re doing right now. We’re just going, I don’t know, we’ll sort of work it out as we get there.
No, we are not coming together as a society, and I don’t want to hear that there’s three people in Canberra sitting down thinking about what our AI policy is gonna be. That’s not what I’m talking about. I’m talking about, at a societal level in Australia, coming together and going, okay, what are we gonna do about this? Seriously. Like, if there’s a non-zero chance that a substantial amount of the population is gonna lose their jobs to advanced AI and, uh, humanoid robots [00:20:00] in the next five to 10 years, we should be fucking talking about that right now and starting to plan what that’s gonna look like.
We should be coming together as a people and going, uh, what are we gonna do? And that’s not happening. And I think it’s gonna be up to us. If it’s gonna happen, it’s gonna be up to literally you and me. We’re the only people, Steve.
Steve: Look,
Cameron: Not that I have a Messiah complex, Steve, but
Steve: I’ve been saying that for some time now, Cameron.
Cameron: I’m the only person I trust, and you... we’re the only people I trust to do this fucking thing. Mm.
Steve: I have always wanted to start a cult, and I know that I could be a good cult leader. And if there’s
Cameron: I know.
Steve: ever been a time that sounds cultish, it’s this: a new world built by AI. We’ve got the perfect audience. A disaffected entire society of gamers in their parents’ basements who can’t afford a house. They need cult leaders.
You and I... and we’re gonna need a place to build this new
Cameron: Why don’t I have a sex cult, Steve? I’ve always wondered this. What’s wrong with me? Look, I’ve got long hair and a [00:21:00] ponytail.
Steve: I think if
Cameron: I should have. I
Steve: if you, if you wanna create
Cameron: step one, get a ponytail. I.
Steve: no, the thing with a cult is, you gotta start a cult
Cameron: Well, to get a cult, you need a ponytail, I think. Yeah.
Steve: fine. We are starting
Cameron: Okay. Back to, in fact,
Steve: and
Cameron: chapter three. Back to starting a cult. A cult.
Steve: No,
Cameron: Well, if you wrote it down, I know it’s serious.
Steve: I’m, I am serious. I think, to create serious change... and most things are kind of cults. They really are. It becomes a non-cult once it’s accepted in society. I mean, let’s say you started Catholicism today, they’d go, that’s a cult.
Cameron: Chrissy and I always refer to our kung fu school as a cult, because you start off going one day a week, and before you know it, you’re going six days a week. And if you’re not there one night, everyone’s texting you going, dude, where are you? How come you’re not here? What’s going on? It’s a total, it’s a total cult. No one’s allowed to leave. You’ve gotta stay, you’ve gotta do kung [00:22:00] fu
Steve: and,
Cameron: with every waking hour of your day.
Steve: The song that we can have as the theme today is, uh,
Cameron: Don’t play songs we’ll get, we’ll get pinged. We’ll have to take it out.
Steve: not gonna
Cameron: Uh, sing it.
Steve: gonna I.
Cameron: Just sing it.
Steve: it’s Living Colour, Cult of Personality. I mean, you know,
Cameron: Cult of personality. Cult of personality.
Steve: that’s it.
Cameron: Okay. Speaking of which, moving on. ’cause we don’t have, we don’t have time. We don’t have time. We don’t have time.
Steve: no, we’ve
Cameron: The question I’ve got for you, Steve, thinking about elections, is when are we gonna see our first AI political candidate? And before that, when are we gonna have our first human-as-a-proxy-for-AI candidate? As in, saying I’m a human running as the candidate, but I will have all of my policies created by an AI, and I will use an AI to guide how I vote, uh, in every [00:23:00] situation. Every policy, I will run through AI, chew it up, good, bad, figure out, you know, what’s the most ethical, moral, logical, rational way to process this. When do you think we’re gonna start to see that? Not just politicians talking about AI, or political parties advocating
that we need to do more about it, which, let’s face it, now is the time. We need an AI political party, which is gonna say: we’re gonna figure out how we’re gonna deal with AI and robots, and we’re going to use AI to help us navigate this whole process as we do it. When do we have the first AI political party,
political candidate? Steve, I want you to put money on the table right now. When is it? Gimme a year when we’re gonna see that in Australia. Steve Sammartino, Australia’s leading futurist, make a prediction right now. Put your career on the line.
Steve: The next election will have
Cameron: State, local, federal, [00:24:00] what?
Steve: Federal. Guided by AI, in the next election, within four years, because there’s gonna be such radical change between now and then. It’s gonna have such an influence on society, they’re gonna have to tap into it. They won’t have a choice.
This is not a choice thing. We’ll have it by the next election, because I just think, with the level of recursion and how fast things are changing, four years is a very, very long time. Four years is like 40 years, a hundred years, 300 years. But in this election, I think they’ve already been doing it, except they haven’t been putting in the prompts that you would desire. The prompts that they’ve been putting in:
How do I develop a policy which keeps Gina Rinehart happy, pretends that I’m actually gonna make housing affordable, and avoids any of the climate issues, while not accepting royalties from all of the foreign companies who dig up our fossil fuels and send them. I mean, they were doing it. Every policy that is written, every
Cameron: It’s just,
Steve: written are all, they’re
Cameron: just,
Steve: AI to prompt it right now, but they’re just,
Cameron: they’re just doing what they’ve been doing since Howard, man, isn’t it? They
Steve: exactly,
Cameron: That’s the prompt to AI. Okay, imagine you’re John Howard, what would you do in this situation? And that’s what Albo does too.
Steve: yeah,
Cameron: well, talking about, you know, um, the, the normalization of this.
So on the front page of the Financial Review’s website today.
Steve: Alright, better
Cameron: It’s the headline article. AI is starting to work. The Trump drama could look like a sideshow. Lost among the Trump turmoil is the disruption caused by the AI revolution. It’s happening, and Australian investors, politicians, and business leaders are not ready. James Thomson, columnist, May 16th, 2025. For the past few days, some of Australia’s top chief executives, including Commonwealth Bank’s Matt Comyn, NAB’s Andrew Irvine, and Telstra’s Vicki Brady, have been bunkered down in the US city of Seattle for one of Microsoft’s most exclusive and influential events. By the way, I used to help organize those events.
I took the CEO of [00:26:00] Telstra and all the senior executives of Telstra to Seattle for those events many times, back in my Microsoft days, 25 years ago. The tech giant’s annual CEO Summit has an exclusive guest list, blah, blah, blah, blah, blah. And it’s talking about AI. Basically, Comyn’s biggest message is that the AI revolution is moving faster than ever, and Australia may not be ready. For many consumers, it may seem that the initial hype that accompanied the release of ChatGPT has faded, and generative AI models are simply better versions of existing tools, a smarter way to search the web, for example, or a souped-up virtual assistant. But inside some of the world’s big businesses, things are changing, and fast. Also in the Financial Review today, there’s an article in the opinion section: How this teenager uses AI will surprise you, by Elaine Moore. Because they are less enmeshed in existing structures, teenagers tend to be more willing to play around with new technology, finding their own shortcuts and uses.
When 16-year-old Lara [00:27:00] checks her phone in the morning, she scrolls through the messages friends have sent her, and then, if there’s something on her mind, she opens ChatGPT, asks a question out loud, and listens to the answer. Sometimes I ask things I was thinking about overnight, she says. Just random thoughts.
Or if I had an interesting dream, I might ask about that. Lara uses ChatGPT every day, multiple times a day. Everyone she knows at school does the same. According to the latest poll by Ofcom, four out of five 13-to-17-year-olds in the UK are using generative AI. Early attempts to ban the technology in schools have given way to acceptance, or resignation, that this is an inescapable part of the world that students are growing up in.
Steve: I’m actually not surprised one little bit, and I’m not surprised that she asks it that. That’s what I do all the time. That’s what you and I have been talking about for two or three years since we’ve been doing this. We
Cameron: My point isn’t that they’re doing it. My point is that the Financial Review has got headlines about how AI is now [00:28:00] changing everything. Like, for the last couple of years it’s been, oh, look at this crazy stuff that the kids are doing, and how bad it is, and how it’s not as good as this, and not as good as that, and not as good as the other. When the Financial Review, that all the fucking business leaders and political leaders, et cetera, et cetera, read in this country, starts going, holy shit, this stuff is serious, um, those sorts of people are gonna start to take more notice. But Sam Altman was recently at a Sequoia Capital event, and I watched the YouTube chat that he gave, and they asked him, why aren’t big businesses making more use of generative AI? And he said, look, this is the same in every technology revolution.
Big businesses are just way, way slower to move, and it’s the startups that move quickly and get the advantage. He says people in their thirties and forties are using it as a Google replacement. Uh, people in their late twenties are using it for, basically, personal and career advice. Um, what do I do here? What do I do there? College-age people, late teens, he says they use it as an operating system.
Steve: Yeah. Yep.
Cameron: It’s basically the underlying thing that runs everything, which is how I use it. Chrissy’s starting to get there as well. it’s just everything. Now it’s my default.
Steve: It’s my quasi desktop and it’s the, it’s
Cameron: Yeah.
Steve: fulcrum of all of the other pieces that go into it now.
Cameron: Yeah, me too.
Steve: And,
Cameron: And.
Steve: The thing I noted in that article as well, which for me was interesting from a business perspective, and this is what the Financial Review readers should pick up:
Lara’s preference is ChatGPT. With Google, you have to click on websites, and you have cookies and adverts. It’s annoying. Like we’ve spoken about before with Google searches, we know it’s being replaced in the SERP, and I’m actually surprised that Google’s results were so good. I dunno if they brought it forward or the world hasn’t caught up yet.
But I just cannot see a world where a page of links is even [00:30:00] gonna survive. Put aside the whole dead internet theory, and if no one clicks links, no one gets published, and we have that whole synthetic data problem. But gee, I tell you now, search has to be in a rapid decline. Has to be.
Cameron: Not only search. So the CEO of Fiverr recently sent an email out to all of his employees. Um, hey team, I’ve always believed in radical candor and despise those who sugarcoat reality to avoid stating the unpleasant truth. The very basis for radical candor is care, blah, blah, blah. So here is the unpleasant truth:
AI is coming for your jobs. Heck, it’s coming for my job too. This is a wake-up call. It does not matter if you’re a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a finance person, AI is coming for you. You must understand that. What was once considered easy tasks will no longer exist. What was [00:31:00] considered hard tasks will be the new easy, and what was considered impossible tasks will be the new hard. If you do not become an exceptional talent at what you do, a master, you will face the need for a career change in a matter of months. I’m not trying to scare you. I’m not talking about your job at Fiverr.
I’m talking about your ability to stay in your profession in the industry.
Steve: Wow. That’s a big, big statement, “in a matter of months.” I feel like
Cameron: Yeah.
Steve: doubting that a bit. I don’t think months, because the lag that you see with big corporates is the main reason.
Cameron: Yeah.
Steve: That, for me, was interesting, and it was last week’s post, uh, that I did, where I spoke about the idea
that AI can do everything. The one thing it seems to me that it can’t do yet is nothing in particular. [00:32:00] It’s actually just moving between tasks, if that makes sense. So AI can kind of do it all, but there’s one thing it doesn’t do. I wrote, why hasn’t AI taken your job yet? And as we know, it can ace elite law exams, write better essays than grad students, all of that kind of stuff.
There hasn’t been a tidal wave or a Terminator-style disruption with jobs. It’s kind of a paradox. And the thing for me is that it doesn’t fail the tasks that are intellectually complex. It fails when workflow is messy. That is what AI can’t do yet, and I don’t think agents are gonna be able to do it either.
If a job requires juggling fragmented tasks, shifting priorities and ambiguity, like being a manager on a Monday, AI’s gonna struggle, because you’re gonna be moving from a warehouse to a boardroom to, uh, a meeting offsite. [00:33:00] And it’s all of these tasks in between the tasks, in different geographical and physical contexts, that it struggles with.
And it might be that an AI bot then takes over and does that, but it doesn’t bring its knowledge with it. It’s sort of trapped in the machine; it needs to be released from the machine in some way. So for me, it’s the context switching and geographic switching. The more of that you have in the job, the less at risk you are, because even if there are a lot of tasks that AI does, you’ll get your direct AI to do those pieces, and then you’re taking this piece to the next place.
Cameron: I dunno if I agree with that.
Steve: Tell me, tell me why you disagree, but it’s
Cameron: Well, I think it’s very good at context switching, because, you know, one minute I’ll be talking to AI like I have today about whatnot, the next minute I’ll be talking to it about deep domestic politics, and the next about which brand of matcha green tea has the highest [00:34:00] ceremonial-grade qualities.
Steve: Lemme refine “context switching”: not the context of the topic, intellectually. It’s probably physical space, right? So you’re in a warehouse doing something. Like, how does the AI get from the boardroom to the warehouse? I know that sounds like a throwaway statement, but I genuinely mean it: our meat bodies take us from here to there.
Does,
Cameron: Well, when we have humanoid robots.
Steve: move between, or is it
Cameron: Yeah.
Steve: like is it a fluid movement of an
Cameron: It’s a bit of both.
Steve: AI, or is it an
Cameron: Well, what’s it doing?
Steve: humanoid.
Cameron: it doing in the warehouse?
Steve: I don’t
Cameron: What’s its function?
Steve: meet Billy and Billy’s talking about this, and I, I don’t know. And it’s
Cameron: Yeah, so it’s a console on the wall: “Hey Billy, what are you doing?” Hey, um, one final thing I need to talk to you about before we run outta time. I did a very interesting experiment. I’m not sure if you’ve done this yet, but it’s worth a try if you [00:35:00] haven’t.
Steve: I’ll do it tonight. What is it?
Cameron: Take a really complex topic, particularly involving geopolitics, as a research project, and throw it into Deep Research in ChatGPT. The one that I did a couple of weeks ago was the history of the World Trade Organization and why it’s broken right now. Um, and it did a very, very damning deep research piece on why the United States has broken the World Trade Organization over the last eight years by deliberately refusing to appoint any judges to the appellate court, because the WTO was ruling against the US. The US would just appeal to the appellate court, but with no judges sitting on the appellate court, the appeal just went into the void and no one could rule on it. I took ChatGPT’s output, which was very good and very detailed, like a 20-page report, and I gave it to Gemini, [00:36:00] I gave it to DeepSeek, and I gave it to Grok, and I said, I want you to fact-check this, and I also want you to sanity-check the interpretation of the facts and give me your perspective on it. DeepSeek pretty much agreed with everything OpenAI said, which isn’t surprising, because as we know, DeepSeek was trained on OpenAI. Gemini pretty much agreed with everything ChatGPT said as well. Grok took issue with it. Not the facts, but the interpretation of the facts. And it basically said that ChatGPT was being too critical of the US government’s motives, and it was very pro-US. Then I took Grok’s response and gave it back to ChatGPT and said, Grok said this, and ChatGPT
basically said, yeah, well, Grok would say that, wouldn’t it? Because it’s designed to be pro-US. And it rebutted all of Grok’s comments. Then I took that and gave it back to [00:37:00] Grok, and Grok said, yeah, well, ChatGPT would say that, wouldn’t it? Because it’s a woke bitch. And this went backwards and forwards. And it’s very interesting, ’cause you know, Elon has made this big deal about how Grok is neutral, and it’s not woke, and it’s free speech, like Twitter is, and blah, blah, blah. But when you put Grok to the test in terms of international relations and geopolitics, it turns out that it’s actually very pro-US in its bias, which I find interesting. And it’s like talking to one of my friends who’s an American patriot. There’s always a justification: well, America did invade Vietnam, but you have to understand, at the time we really didn’t understand; we were doing it for the right reasons.
Yeah, basically the US makes mistakes, but when it does, it’s always doing it for the right reasons. Not that it’s a rapacious, imperial bunch of cunts that are just trying to take over as much of the world as they [00:38:00] can. It’s just, oh, well, you know, sometimes we get it wrong, but we’re doing it with the best possible intentions, which is frog shit from my perspective.
So. Playing the playing them off. And I do this all the time now. I dunno if you do this, but I will get a,
Steve: I do split testing. Sometimes when I’m doing ideas, I go to one, then the other, and just see what the differences are. But I haven’t got it to analyze the other’s work. Sometimes I’ll put in something and say, give me new ideas or different things; don’t include any of these. What have you got? Uh,
Cameron: Yeah.
Steve: I haven’t said, hey, analyze this, and gone back and forth with it. I haven’t done that.
I guess there’s a lot of contexts you could do that in. There’s a lot of different reasons. You know what I wrote down there? Tell me why the American empire is crumbling geopolitically, and if there is a real risk of its institutions failing and falling into civil war or dissolution like the USSR. Give me statistics and reasons and whatever.
I’m gonna put that in and see what I get. Right up your
Cameron: Put it in. Yeah, it is. Put it into ChatGPT, and you have to do a Deep Research, so it needs to be o3 Deep Research. Let it run for an hour, then give its [00:39:00] output to Grok and see what Grok says. It’s fascinating, and you know, this is the world that we will be living in, where you’re playing AIs off each other for the right reasons. Like, I’m trying to get the best possible intelligence
perspective on this. And it’s like having a debate between two really, really smart guys, or girls, or people. Well, I have, you know, voices. Actually, I have a female voice on ChatGPT now, ’cause I got sick of the male voice.
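The draft-critique-rebuttal loop Cameron describes here can be sketched in a few lines of code. This is a hypothetical skeleton only: the `author` and `reviewers` callables below are toy stubs standing in for real API calls to ChatGPT, Gemini, DeepSeek, or Grok, and the prompts are invented for illustration.

```python
# Hypothetical sketch of the cross-model review loop: one model drafts a
# report, the others critique it, and each critique is fed back to the
# author model for a rebuttal. The callables here are stubs, not real APIs.

def cross_review(draft: str, author, reviewers: dict, rounds: int = 1) -> list[str]:
    """Alternate critique and rebuttal for a fixed number of rounds."""
    transcript = [f"DRAFT: {draft}"]
    for _ in range(rounds):
        for name, reviewer in reviewers.items():
            critique = reviewer(f"Fact-check and sanity-check this:\n{draft}")
            transcript.append(f"{name} CRITIQUE: {critique}")
            rebuttal = author(f"{name} said: {critique}\nRespond.")
            transcript.append(f"AUTHOR REBUTTAL: {rebuttal}")
    return transcript

# Toy stand-ins so the sketch runs without any API keys.
author = lambda prompt: "I stand by the report."
reviewers = {
    "grok": lambda prompt: "Too critical of the US government's motives.",
    "gemini": lambda prompt: "Broadly agree with the facts as stated.",
}

log = cross_review("The WTO appellate court is paralysed.", author, reviewers)
print(len(log))  # prints 5: the draft plus a critique and rebuttal per reviewer
```

In practice each stub would be a thin wrapper around one provider's chat endpoint, and the transcript is what lets a human adjudicate the disagreement afterwards.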
Steve: culture is. I wouldn’t have said
Cameron: it is
Steve: probably,
Cameron: no.
Steve: What’s interesting, Cameron, is that we probably both have to tidy up, and today’s, uh, Futuristic has been quite a philosophical one, which is interesting. And I guess we can pick up on the news next week, but we have almost gone full circle from
Cameron: Next week,
Steve: start.
Cameron: we’re gonna do a show next week, like we’re gonna do.
Steve: will.
Cameron: Yeah, yeah, yeah. Yeah. Three months from now. Yeah.
Steve: [00:40:00] We’ve almost gone full circle here, where the first thing I said is that we end up with this internet that is talking to itself. And you’ve just described the thing where you’ve kind of
Cameron: Yeah.
Steve: done that, where you’ve put it into the AIs to have their discussion amongst each other.
And even though it was in English, and it wasn’t in some foreign coding language we can’t understand, it was at a speed and a level of discourse and digestion of information that we couldn’t keep up with cognitively. So it’s kind of part of what I was saying: it seems like this process is already well underway.
Cameron: This has been Futuristic 39, brought to you by Sammo 3D.
Steve: Thanks, cam.
Cameron: Thanks, Steve.
Steve: That was really interesting. Hey, like without the news, just a bit of Sam and Cam.
Steve: real good. I
Cameron: It does, yeah. Yeah, yeah.
Steve: so that’s that. And the other thing is I’ve been thinking a lot about Mirror World drift, and I just posted, uh, a blog on that and I had a
Cameron: Explain.
Steve: was awesome.
Well, I think that we’ve created this mirror world, which has been explored by people like Kevin Kelly, where we create a proxy for the world that we live in. But increasingly this proxy, which used to be just the digital version of us, increasingly it’s not us. It starts out with us using AI as tools, and then agents, and then proxies, and then the AIs talk to the AIs, and then they [00:02:00] develop language and conversations where we just drift out of this mirror world, because it’s no longer relevant to us or for us, and it becomes almost a new sphere,
uh, a noosphere, which was something that was popularized in the early 20th century, uh, where we kind of opt out and it becomes almost a new species, like an ocean where we just dip our toes in. But there’s a whole lot of species in there; we don’t understand what’s spawned them, we can’t talk to them, we don’t know.
But like another big ecosystem, it has a huge impact on our lives, but it becomes this other world that we are not really associated with, even though we built it.
Cameron: Yeah, I, I, look, I think that’s kind of inevitable, um, not just Kevin Kelly, but I know that, um, Eric, um, fuck, what was his name?
Steve: Ler.
Cameron: No, no, no. The former CEO of Google for a long time,
Steve: Schmidt.
Cameron: Eric Schmidt’s been talking a lot about this for the last year or two, [00:03:00] how AIs will start to develop their own language that’s more efficient, and then they’ll start talking to each other, and he says that’s when we need to pull the plug on the whole thing.
But that’s not gonna happen.
Steve: No.
Cameron: Um, yeah, I think that’s inevitable, and it’s very Philip K. Dick-y, this whole idea of human intelligence spawning a new kind of intelligence which becomes so vastly different to our own intelligence. Actually, one of the things in the show notes that I watched a couple of weeks ago was a YouTube interview between Ben Goertzel and Hugo de Garis, two guys I know a little bit. Hugo and I were on stage together at a Singularity conference about nine or 10 years ago down in Melbourne. Um, they’ve both been AI researchers for decades, and they were talking about where things are at. Hugo was talking about alignment.
You know, you hear the AI researchers talk about alignment, which is to make [00:04:00] sure that the AI’s values are aligned to human values. And Ben said, I think it’s kinda like squirrels at Yellowstone National Park. Like, are human values aligned with squirrels’ values? I guess at some level, you know, we both rely on oxygen, we both rely on the climate not getting too hot. We value certain things, but really, you know, we look at squirrels, we find them cute and interesting, and generally speaking, we don’t wanna harm them. We don’t wanna hurt them. We want them to run around and do their thing, but we don’t really think about them on a day-to-day basis unless you’re a park ranger.
Steve: They’re outside of the consideration set, unless you’re specifically working on ecosystems and their maintenance and importance. And I think Bostrom talked about that too, in his artificial superintelligence thing. He said, look, if we want to build a highway, we don’t want to hurt the ants, but if you’re in the way the highway’s going in, it doesn’t matter.[00:05:00]
Cameron: Mm-hmm. Yeah, and they were basically saying, and I think this is right, that if we have a superintelligence, its relationship to us will be like our relationship to squirrels or ants. Anyway, listen, I wanna tell you what I’ve been thinking a lot about since you and I last spoke. I’ve been dying to speak to you, because you are the guy I want to talk to about these sorts of things, right?
Steve: you.
Cameron: You are the, you’re the only
Steve: Someone
Cameron: Someone wants to talk to you. You are the only guy I can have a serious conversation with about this stuff. It’s politics. So we just had a federal election here, and of course my number one political issue right now, apart from legalization of cannabis, is: what are we doing to prepare our society for the AI robot revolution that’s gonna hit in the next couple of years?
Uh, Steve’s doing a selfie. I’ve gotta put my gang sign up,
Steve: we got a, yeah,
Cameron: gang sign.
Steve: we go.
Cameron: Um, [00:06:00] and, uh, no political party that I was aware of was even talking about it, about what we’re gonna do about the AI robot revolution in the next five years. Not even on the fucking agenda anywhere.
Steve: So, of course, not on the agenda at all. But think of it like this: they didn’t even have the courage to truly talk about the property crisis, and that’s already here, and people are living in it and can’t get out, while not living in it. Sorry, no pun intended there. And they wouldn’t even talk about that.
Like, you talk about the wall, the one we’re already in; they’re just ignoring it, pretending it’s not there, because they’re too scared they’ll get voted out, let alone something they barely understand. I mean, disappointing, but not surprised.
Cameron: I think the Labor party, you know, to give Albo his due, did say they’re gonna invest in building 23 houses in the next 10 years or something like that. So, you know,
Steve: go.
Cameron: for big fucking vision, Albo,
Steve: Oh,
Cameron: you’re really killing it, mate. Killing it. [00:07:00] Killing it. Um, so you know, that sort of didn’t happen; that wasn’t on the agenda. But I’ve been thinking a lot about the future of our society and what we need to do to engineer it. And we’ve talked a little bit about this before, but I’ve been working on this idea I’m calling Chapter 3. And Chapter 3 is a movement that I wanna start with you, and I call it Chapter 3 because I basically figured we’re moving into the third chapter of humanity. And the way I think about it, the first chapter was everything that happened up until the Industrial Revolution.
Steve: Yeah.
Cameron: 10,000, maybe we could say a hundred thousand years. That was the first chapter of humanity, and it was basically manual labor. Right? Then we got to chapter two, which was the Industrial Revolution through to today, [00:08:00] everything that’s happened in the last 250 years, let’s say roughly 300 years. Uh, chapter three is the AI robot chapter. It’s the Singularity chapter, right? What it means to be human, how we live, how we interact, how we survive, is gonna be as vastly different from the world of the late Industrial Revolution as the late Industrial Revolution is from how people lived in the Dark Ages or the Middle Ages, right? And I’m a big believer in the fact that we need to engineer that as much as we can. We need to be thinking now about what we want, what’s important to us, what we’re willing to sacrifice, what we most desperately wanna protect, what we need to engineer to get the best [00:09:00] possible outcome, as far as it is within our ability to engineer it.
Because a lot of things when we have a super intelligence, obviously are gonna be completely out of our domain and our control. So I’m pulling a chapter three and I, I want to get together the best thinkers, the best minds starting locally and then maybe going internationally to start to think about what does this look like?
Where is this dialogue happening? I know that there are. Definitely dialogues like this happening in certain rooms, maybe at Davos, maybe in Silicon Valley enclaves where your billionaires are gathering.
Steve: there. Look.
Cameron: Yeah, right. Um, as opposed to Davros, who’s probably at Davos. Davros is probably at Davos. Um, he comes out of... I think that’s the backstory of
Steve: If
Cameron: He came outta Davos. Yeah, he’s Elon Musk in a wheelchair, and, uh, after he did too much plastic surgery, he becomes Davros and creates the Daleks. Anyway, Doctor Who. Uh, moving on. So one of the things that I’ve done [00:10:00] recently is go into ChatGPT and ask it to help me build my political profile. I did it sort of with a view towards the election, but after I did this, it basically said, dude, you don’t fit anywhere in the Australian political
Steve: yet.
Cameron: completely, you’re completely off the fucking tree. And I got it to... you know, have you used Vote Compass? Right, the ABC’s Vote Compass? It’s,
Steve: used it.
Cameron: Oh, the ABC’s got their own version of this compass tool, which is being used around the world today. It basically asks you high-level questions: How do you feel about orcas?
How do you feel about housing prices? How do you feel about climate change? How do you feel about LGBTQ rights? How do you feel about this? And then it tells you where you sit
Steve: Mm.
Cameron: on a political axis and which parties are closest to you. I do it, I come out left of Che Guevara, right? So it’s like, dude, you need to move to Cuba.
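For readers curious about the mechanics, the core of a compass-style tool like the ones Cameron describes is just a weighted average of answers onto a couple of axes. The axes, questions, and signs below are invented for illustration and are not the ABC’s actual Vote Compass methodology.

```python
# Minimal sketch of compass-style scoring: each answer on a 1-5 agreement
# scale is normalised to -1..+1, given a sign for which direction the
# question loads on its axis, and averaged per axis. All weights hypothetical.

def compass_position(answers: dict[str, int], axis_map: dict) -> dict[str, float]:
    """Average signed, normalised answers into one score per axis, each in [-1, 1]."""
    totals: dict[str, list[float]] = {}
    for question, score in answers.items():
        axis, sign = axis_map[question]
        totals.setdefault(axis, []).append(sign * (score - 3) / 2)  # 1..5 -> -1..+1
    return {axis: sum(vals) / len(vals) for axis, vals in totals.items()}

# Hypothetical questions, each tagged with its axis and loading direction.
axis_map = {
    "tax_the_rich": ("economic", -1),        # agreeing pushes economically left
    "deregulate_markets": ("economic", +1),  # agreeing pushes economically right
    "legalise_cannabis": ("social", -1),     # agreeing pushes socially progressive
}
answers = {"tax_the_rich": 5, "deregulate_markets": 1, "legalise_cannabis": 5}
print(compass_position(answers, axis_map))  # {'economic': -1.0, 'social': -1.0}
```

A real tool would then compute the distance from these axis scores to each party’s published position to report "which parties are closest to you."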
Steve: And you don’t. You do. And you don’t. But anyway, we’ll
Cameron: Yes. Well, that’s what GPT said. I should read you [00:11:00] GPT’s analysis: like, dude, you’re all over the fucking map. Um, but I got GPT to build a profiling tool for me that asked me 50 way more in-depth questions, then it analyzed my responses and told me, you know, kind of where I fit in terms of my political profile.
It helped me coalesce what is important to me, what my morals and values and ethics are when it comes to politics, right? And again, it said, dude, there’s no political party in Australia that even comes close to any of this kind of stuff, right? A little bit of this, a little bit of that, but mostly it’s off the radar. So what I’m trying to figure out, as part of Chapter 3, is how do we build a tool that enables everyone to go through this kind of exercise? To [00:12:00] really think deeply about what’s important to them politically. ’Cause that’s really the question of where you want society to be. Like, whenever I talk about the fact that I’m sort of, um, a communist on my investing show, I get emails from members saying, how can you be talking about value investing and talking about Buffett
and at the same time say you’re a communist? I say, well, in my brain there’s two sides to it, right? Where do I want society to be? I want society to move towards a place where everyone’s needs are taken care of. Everyone is looked after, regardless of your earning capacity or how pretty you are or how many Instagram followers you have. I want everyone to feel happy, safe, fulfilled, be able to eat, have shelter, protection, get fulfillment, have an education, have healthcare, all those sorts of things. The question that I have is how we get there from here. And to me, communism is the only political philosophy I’ve ever come [00:13:00] across that really has a plan for that. It has a vision for that kind of a world.
The question then is, how do we execute that in a way that’s moral and ethical, and viable economically, et cetera, et cetera? And these are just problems to be solved, but it has a vision for where society could be if we figured out answers to all of those problems. Capitalism doesn’t.
Steve: No, wait a minute. Adam Smith talked about that, in terms of human flourishing. And he talked about the idea of allocating resources more effectively, using the market to do that. But he also talked about the idea of government setting boundaries and, I guess, minimums in the capitalistic sense. What is that bare minimum?
And I imagine your minimum threshold of what that flourishing looks like, with all the things that you mentioned. And the mixed economy tries to do that, to provide a benchmark or baseline of access to [00:14:00] resources for self-improvement, and to look after our most vulnerable. Anyway, keep going.
Cameron: No, you’re right. And Adam Smith, as you and I know, but I’m not sure everyone knows this, wasn’t an economist; he was a moral philosopher,
Steve: Mm-hmm.
Cameron: and he was talking about what was a good moral way that would build a better society. What we know now, that he didn’t know, is that laissez-faire capitalism fails. It doesn’t work. The United States, as the longest-running experiment in laissez-faire capitalism, is, you know, the classic example now that it doesn’t work.
Steve: Wow.
Cameron: It creates a shitload of money, but it also creates such an endemic level of oligarchic corruption that it needs
Steve: uh,
Cameron: to be very heavy.
Steve: too. Smith said that the one fatal flaw of capitalism is that power begets power. And every now
Cameron: Yeah.
Steve: and then you need to have a redistribution of power, because monopolies, or duopolies, or [00:15:00] very concentrated economic systems and wealth, are inevitable. So then you have to recalibrate every now and again.
And, uh, I would argue we’re at a stage now where we need that recalibration.
Cameron: Yeah, but those recalibrations are usually extremely violent and painful.
Steve: They become difficult to get. And now, at this point in time, we’ve got media capture to a greater extent than we had during the age
Cameron: I.
Steve: of, um, yeah, what was it, the Gilded Age?
Cameron: Look, there have been plenty of studies done on this. There’s a book I read not that long ago that looked at historical oligarchies, and it said that across history, going right back to Athens, places like that, there are only four ways that oligarchies have ever ended: a civil war, a major international war, a major famine or a plague, or some other form of societal collapse.
Steve: Yeah,
Cameron: Right, the effects of climate change, something like that. There are only four ways that oligarchies ever end, and [00:16:00] none of them are good. And it’s a major civilizational reset, which is either permanent in some cases
Steve: Hmm.
Cameron: or, you know, a generational recovery. It’s not pretty. But, you know, that’s not a vision.
We’re not working towards a vision. We don’t have a Department of Vision in Australia. You know, I’ve always said this,
Steve: We need the ideas department.
Cameron: I wrote about this in The Psychopath Epidemic, where I said, you know, economics is supposedly a subfield of morality, of ethics, right? Economics sits under ethics.
So we have a Treasury, we have a department of economics. We should have a department of ethics that says, okay, you can’t have economics without ethics; ethics should govern how economics works. And in the Westminster system, we don’t really have a department of ethics. We have precedents, we have legal precedents, and we have a [00:17:00] constitution, and we have these sorts of things. But we don’t sit down as a people and go: where do we want to be as a nation, ethically? How do we want to be living 10, 20, 50 years from now? And what do we need to do to get there? Capitalism doesn’t really encourage that. It just throws it to the wind and says, we’ll figure it out as we go.
Steve: But for a long period of time, it was a good measure. Way back, uh, the dollar, or money, was a good proxy for wellbeing, because so few people had access to resources and clean water and all of these things, and that’s why GDP was a functional measure: because we were increasing the absolute and relative wealth
of societies, which created infrastructure and resources, access to food, transportation, healthcare, all of these things. And that dollar was a pretty good proxy. But in a global economy, where things are worth different things in different markets, and you have all of this arbitrage that wasn’t possible before, that dollar is no longer, uh, a good measure for general wellbeing.
So there’s now[00:18:00]
Cameron: And it never really was either because we destroyed the environment in the process of increasing GDP,
Steve: not
Cameron: we fucked the,
Steve: A lot of things
Cameron: yeah,
Steve: in that, in
Cameron: exactly.
Steve: were not costed. We’ve even had attempts to try and cost these externalities, everything from carbon credits or whatever, and it always gets kiboshed by the oligarchy.
Cameron: So, getting back to Chapter 3 and political profiling. You know, trying to figure out how we build these sorts of tools that get people to think more deeply about what we want our society to look like 10 years from now, and what we need to put in place to have the best possible chance of achieving that, as opposed to just winging it, which is what I feel we are doing right now.
And hoping that Sam Altman and Elon Musk and Demis Hassabis and all of these guys don’t fucking land us in a huge ditch. Which, let’s be honest, most of them are probably gonna do. [00:19:00] Most of them are gonna handle it really badly. Um, if you look at Zuck and Musk, uh, I’m not sure I want them to have any power over the future.
Uh, Sam and Demis a little bit; I trust them a little bit more than Zuck. But, you know, I don’t trust the future of humanity in the hands of a handful of billionaires and Trump, but that’s kinda what we’re doing right now. We’re just going, I don’t know, we’ll sort of work it out as we get there.
No, we are not coming together as a society, and I don’t want to hear that there’s three people in Canberra sitting down thinking about what our AI policy is gonna be. That’s not what I’m talking about. I’m talking about, at a societal level in Australia, coming together and going: okay, what are we gonna do about this, seriously? Like, if there’s a non-zero chance that a substantial amount of the population are gonna lose their jobs to advanced AI and, uh, humanoid robots [00:20:00] in the next five to 10 years, we should be fucking talking about that right now and starting to plan what that’s gonna look like.
We should be coming together as a people and going, uh, what are we gonna do? And that’s not happening. And I think it’s gonna be up to us. If it’s gonna happen, it’s gonna be up to literally you and me. We’re the only people, Steve.
Steve: Look,
Cameron: Not that I have a Messiah complex, Steve, but
Steve: I’ve been saying that for some time now, Cameron.
Cameron: I’m the only person I trust, and you. We’re the only people I trust to do this fucking thing. Mm.
Steve: I have always wanted to start a cult, and I know that I could be a good cult leader. And if there’s
Cameron: I know.
Steve: sounds cultish, it is. In this new world built by AI, we’ve got the perfect audience: a disaffected society of gamers in their parents’ basements who can’t afford a house. They need cult leaders.
you and I and we’re gonna need a place to build this new
Cameron: Why don’t I have a sex cult? Steve, I’ve always wondered this. What’s wrong with me? Look, I’ve got long hair and a [00:21:00] ponytail.
Steve: I think if
Cameron: I should have. I
Steve: if you, if you wanna create
Cameron: Step one: get a ponytail.
Steve: no, is cult is you gotta start a cult
Cameron: Well, to get a cult, you need a ponytail, I think. Yeah.
Steve: fine. We are starting
Cameron: Okay. Back to, in fact,
Steve: and
Cameron: chapter three, back start. A Cult. Cult.
Steve: No,
Cameron: Well, if you wrote it down, I know it’s serious.
Steve: I am serious. I think, to create serious change... and most things are kind of cults. They really are. It becomes a non-cult once it’s accepted in society. I mean, let’s say you started Catholicism today; they’d go, that’s a cult,
Cameron: Chrissy and I always refer to our kung fu school as a cult, because you start off going one day a week, and before you know it, you’re going six days a week. And if you’re not there one night, everyone’s texting you going, dude, where are you? How come you’re not here? What’s going on? It’s a total cult. No one’s allowed to leave. You’ve gotta stay, you’ve gotta do kung [00:22:00] fu
Steve: and,
Cameron: with every waking hour of your day.
Steve: The song that we can have as the theme today is, uh,
Cameron: Don’t play songs, we’ll get pinged. We’ll have to take it out.
Steve: not gonna
Cameron: Uh, sing it.
Steve: gonna I.
Cameron: Just sing it.
Steve: it’s Living Colour, Cult of Personality. I mean, you know,
Cameron: Cult of personality. Cult of personality.
Steve: that’s it.
Cameron: Okay. Speaking of which, moving on. ’cause we don’t have, we don’t have time. We don’t have time. We don’t have time.
Steve: no, we’ve
Cameron: The question I’ve got for you, Steve, thinking about elections, is: when are we gonna see our first AI political candidate? And before that, when are we gonna have our first human-as-a-proxy-for-AI candidate? As in, saying: I’m a human running as the candidate, but I will have all of my policies created by an AI, and I will use an AI to guide how I vote in every [00:23:00] situation. Every policy, I will run it through AI, chew it up, good, bad, figure out the most ethical, moral, logical, rational way to process this. When do you think we’re gonna start to see AI, not just politicians talking about it, or political parties advocating?
We need to do more about it. Which, let’s face it, now is the time: we need an AI political party, which is gonna figure out how we’re gonna deal with AI and robots, and we’re going to use AI to help us navigate this whole process as we do it. When do we have the first AI political party?
Political candidate? Steve, I want you to put money on the table right now. When is it? Gimme a year. When are we gonna see that in Australia? Steve Sammartino, Australia’s leading futurist. Make a prediction right now. Put your career on the line.
Steve: The next election, will have
Cameron: State, local, federal, [00:24:00] what?
Steve: Federal, federal. Guided by AI in the next election, within four years, because there’s gonna be such radical change between now and then. It’s gonna have such an influence on society, they’re gonna have to tap into it; they won’t have a choice.
This is not a choice thing. We’ll have it by the next election, because I just think with the level of recursion and how fast things are changing, four years is a very, very long time. Four years is like 40 years, a hundred years, 300 years. But in this election, I think they’ve already been doing it, except they haven’t been putting in the prompts that you would desire. The prompts that they’ve been putting in:
How do I develop a policy which keeps Gina Rinehart happy, pretends that I’m actually gonna make housing affordable, and avoids any of the climate issues, while not accepting royalties from all of the foreign companies who dig up our fossil fuels and send them... I mean, they were doing it. Every policy that is written, every
Cameron: It’s just,
Steve: written are all, they’re
Cameron: just,
Steve: AI to prompt it right now, but they’re just,
Cameron: they’re just doing what they’ve been doing for the last... just [00:25:00] doing what they’ve been doing since Howard, man, isn’t it? They
Steve: exactly,
Cameron: need, they... That’s the prompt to AI: okay, imagine you’re John Howard, what would you do in this situation? And that’s what Albo does too
Steve: yeah,
Cameron: well, talking about, you know, um, the, the normalization of this.
So on the front page of the Financial Review’s website today.
Steve: Alright, better
Cameron: It’s the headline article: AI is starting to work. The Trump drama could look like a sideshow. Lost among the Trump turmoil is the disruption caused by the AI revolution. It’s happening, and Australian investors, politicians, and business leaders are not ready. James Thomson, columnist, May 16th, 2025. For the past few days, some of Australia’s top chief executives, including Commonwealth Bank’s Matt Comyn, NAB’s Andrew Irvine, and Telstra’s Vicki Brady, have been bunkered down in the US city of Seattle for one of Microsoft’s most exclusive and influential events. By the way, I used to help organize those events.
I took the CEO of [00:26:00] Telstra and all the senior executives of Telstra to Seattle for those events many times, back in my Microsoft days, 25 years ago. The tech giant’s annual CEO Summit has an exclusive guest list, blah, blah, blah, blah, and it’s talking about AI. Basically his biggest message, talking about Comyn, his biggest message is that the AI revolution is moving faster than ever, and Australia may not be ready. For many consumers, it may seem that the initial hype that accompanied the release of ChatGPT has faded, and generative AI models are simply better versions of existing tools: a smarter way to search the web, for example, or a souped-up virtual assistant. But inside some of the world’s big businesses, things are changing, and fast. Also in the Financial Review today, there’s an article in the opinion section: How This Teenager Uses AI Will Surprise You, by Elaine Moore. Because they are less enmeshed in existing structures, teenagers tend to be more willing to play around with new technology, finding their own shortcuts and uses.
When 16-year-old Lara [00:27:00] checks her phone in the morning, she scrolls through the messages her friends have sent her. And then, if there’s something on her mind, she opens ChatGPT, asks a question out loud, and listens to the answer. Sometimes I ask things I was thinking about overnight, she says. Just random thoughts.
Or if I had an interesting dream, I might ask about that. Lara uses ChatGPT every day, multiple times a day. Everyone she knows at school does the same. According to the latest poll by Ofcom, four out of five 13 to 17-year-olds in the UK are using generative AI. Early attempts to ban the technology in schools have given way to acceptance, or resignation, that this is an inescapable part of the world that students are growing up in.
Steve: I’m actually not surprised, not one little bit, and I’m not surprised that she asked that. That’s what I do all the time. That’s what you and I have been talking about for two or three years since we’ve been doing this. We
Cameron: My point isn’t that they’re doing it. My point is that the Financial Review has got headlines about how AI is now [00:28:00] changing everything. For the last couple of years it’s been: oh, look at this crazy stuff that the kids are doing, and how bad it is, and how it’s not as good as this, and not as good as that, and not as good as the other. When the Financial Review, that all the fucking business leaders and political leaders, et cetera, et cetera, read in this country, starts going, holy shit, this stuff is serious, um, those sorts of people are gonna start to take more notice. But Sam Altman was recently at a Sequoia Capital event, and I watched the YouTube talk that he gave, and they asked him: why aren’t big businesses making more use of generative AI? And he said, look, this is the same in every technology revolution.
Big businesses are just way, way slower to move, and it’s the startups that move quickly and get the advantage. He says people in their thirties and forties are using it as a Google replacement. People in their twenties, late twenties, are using it for, basically, personal and career advice: what do I, you know, what do [00:29:00] I do here, what do I do there. College-age people, late teens, he says they use it as an operating system.
Steve: Yeah. Yep.
Cameron: It’s basically the underlying thing that runs everything, which is how I use it. Chrissy’s starting to get there as well. It’s just everything now. It’s my default.
Steve: It’s my quasi desktop and it’s the, it’s
Cameron: Yeah.
Steve: fulcrum of all of the other pieces that go into it now.
Cameron: Yeah, me too.
Steve: And,
Cameron: And.
Steve: the thing I noted in that article as well, which for me was interesting from a business perspective, and this is what the Financial Review readers should pick up:
Lara’s preference is ChatGPT. With Google, you have to click on websites and you have cookies and adverts. It’s annoying. Like we’ve spoken about before with Google searches, we know it’s being replaced in the SERPs, and I’m actually surprised that Google’s results were so good. I dunno if they brought it forward or the world hasn’t caught up yet.
But I just cannot see a world where a page of links is even [00:30:00] gonna survive. Put aside the whole dead internet theory, and if no one gets links, no one gets published, and we have that whole synthetic data problem. But gee, I tell you now, search has to be in rapid decline. Has to be.
Cameron: Not only search. So the CEO of Fiverr recently sent an email out to all of his employees: Hey team, I’ve always believed in radical candor and despise those who sugarcoat reality to avoid stating the unpleasant truth. The very basis for radical candor is care, blah, blah, blah. So here is the unpleasant truth.
AI is coming for your jobs. Heck, it’s coming for my job too. This is a wake-up call. It does not matter if you’re a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a [00:31:00] finance person: AI is coming for you. You must understand that what were once considered easy tasks will no longer exist, what were considered hard tasks will be the new easy, and what were considered impossible tasks will be the new hard. If you do not become an exceptional talent at what you do, a master, you will face the need for a career change in a matter of months. I’m not trying to scare you. I’m not talking about your job at Fiverr.
I’m talking about your ability to stay in your profession in the industry.
Steve: Wow, that’s a big, big statement: in a matter of months. I feel like
Cameron: Yeah.
Steve: doubting that a bit. I don’t think months, because of the lag that you see with big corporates; that’s the main reason.
Cameron: Yeah.
Steve: That, for me, was interesting, and that was last week’s post, uh, that I did, where I spoke about
That AI can do everything. The one thing it seems to me that it can’t do yet is nothing in particular. [00:32:00] It’s actually just moving between tasks, if that makes sense. So AI can kind of do it all. And I wrote: why hasn’t AI taken your job yet? As we know, it can ace elite law exams, write better essays than grad students, all of that kind of stuff.
Yet there hasn’t been a tidal wave or a Terminator-style wholesale disruption of jobs. It’s kind of a paradox. And the thing for me is that it doesn’t fail at the tasks that are intellectually complex. It fails when the workflow is messy. That is what AI can’t do yet, and I don’t think agents are gonna be able to do it either.
If a job requires juggling fragmented tasks, shifting priorities, and ambiguity, being a manager on a Monday, AI’s gonna struggle, because you’re gonna be moving from a warehouse to a boardroom to a meeting offsite. [00:33:00] And it’s all of these tasks in between the tasks, in different geographical and physical contexts, that it struggles with.
And it might be that an AI bot then takes over and does that, but it doesn’t bring its knowledge with it. It’s sort of trapped in the machine. It needs to be released from the machine in some way. So for me, it’s the context switching and geographic switching. The more of that you have in the job, the less at risk you are, because even if there’s a lot of tasks that AI does, you’ll get your direct AI to do those pieces, and then you’re taking this piece to the next place.
Cameron: I dunno if I agree with that.
Steve: Tell me, tell me why you disagree, but it’s
Cameron: Well, I think it’s very good at context switching, because, you know, one minute I’ll be talking, like I have today, about whatnot; one minute I’ll be talking to AI about deep, deep politics, domestic politics; the next I’ll be talking to it about which brand of matcha green tea has the highest [00:34:00] ceremonial-grade qualities.
Steve: Lemme refine context switching: not the context of the topic, intellectually. It’s probably physical space, right? So you’re in a warehouse doing something. Like, how does the AI get from the boardroom to the warehouse? I know that sounds like a throwaway statement, but I genuinely mean that. Like, our meat bodies take us from here to there.
Does,
Cameron: Well, when we have humanoid robots.
Steve: move between, or is it
Cameron: Yeah.
Steve: like is it a fluid movement of an
Cameron: It’s a bit of both.
Steve: ai or is it an
Cameron: Well, what’s it doing?
Steve: humanoid.
Cameron: it doing in the warehouse?
Steve: I don’t
Cameron: What’s its function?
Steve: You meet Billy, and Billy’s talking about this, and I don’t know. And it’s
Cameron: Yeah, so it’s a console on the wall. Hey Billy, what are you doing? Hey, um, one final thing I need to talk to you about before we run outta time. I did a very interesting experiment. I’m not sure if you’ve done this yet, but it’s worth a try if you [00:35:00] haven’t,
Steve: I’ll do it tonight. What is it?
Cameron: Take a really complex topic, particularly involving geopolitics, as a research project, and throw it into deep research in ChatGPT. The one that I did a couple of weeks ago was the history of the World Trade Organization and why it’s broken right now. Um, and it did a very, very damning deep research piece on why the United States has broken the World Trade Organization over the last eight years, by deliberately refusing to appoint any judges to the appellate court, because the WTO was ruling against the US. And then the US would just lodge an appeal to the appellate court, but with no judges sitting on it, the appeal just went into the void and no one could rule on it. The ruling just goes into the void. I took ChatGPT’s output, which was very good and very detailed, like a 20-page report, and I gave it to [00:36:00] Gemini, I gave it to DeepSeek, and I gave it to Grok, and I said: I want you to fact-check this, and I also want you to sanity-check the interpretation of the facts, and gimme your perspective on it. DeepSeek pretty much agreed with everything OpenAI said, which isn’t surprising, because as we know, DeepSeek was trained on OpenAI. Gemini pretty much agreed with everything ChatGPT said as well. Grok took issue with it. Not the facts, but the interpretation of the facts. It basically said that ChatGPT was being too critical of the US government’s motives, and its own take was very pro-US. Then I took Grok’s response and gave it back to ChatGPT and said, Grok said this. And ChatGPT basically said, yeah, well, Grok would say that, wouldn’t it? Because it’s designed to be pro-US. And it rebutted all of Grok’s comments. Then I took that and gave it back to [00:37:00] Grok, and Grok said, yeah, well, ChatGPT would say that, wouldn’t it? Because it’s a woke bitch. And this went backwards and forwards. And it’s very interesting, ’cause, you know, Elon has made this big deal about how Grok is neutral, and it’s not woke, and it’s free speech, like Twitter is, and blah, blah, blah. But when you put Grok to the test in terms of international relations and geopolitics, it turns out that it’s actually very pro-US with its bias, which I find interesting. And it’s like talking to one of my friends who’s, like, an American patriot. There’s always a justification: well, America did invade Vietnam, but you have to understand, at the time, we were doing it for the right reasons.
Yeah, basically: the US makes mistakes, but when it does, it’s always doing it for the right reasons. Not that it’s a rapacious, imperial bunch of cunts that are just trying to take over as much of the world as they [00:38:00] can. It’s just: oh, well, you know, sometimes we get it wrong, but we’re doing it with the best possible intentions. Which is frog shit, from my perspective.
So, playing them off against each other. And I do this all the time now. I dunno if you do this, but I will get a,
Steve: I do split testing. Sometimes when I’m doing ideas, I go to one, then the other, and just see what the differences are. But I haven’t got it to analyze the other’s work. Sometimes I’ll put in something and say: give me new ideas or different things, don’t include any of these. What have you got? Uh,
Cameron: Yeah.
Steve: I haven’t said, Hey, analyze this and back and forth it, I haven’t done that.
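[Editor’s note: the adversarial review loop Cameron describes, one model drafts a report and rival models critique it while the author rebuts, can be sketched roughly as below. This is a hypothetical illustration, not anything from the show: the function and model names are made up, and each model is just a callable (prompt in, reply out), so in practice you would wrap real API clients for ChatGPT, Gemini, DeepSeek, or Grok behind the same interface.]

```python
# Sketch of the "play the AIs off against each other" loop.
# `author` and each critic are plain callables (prompt -> reply),
# standing in for real model API clients.

def cross_examine(report, author, critics, rounds=1):
    """Each critic reviews the report; the author model rebuts each critique.

    Returns a transcript of (critic_name, critique, rebuttal) tuples,
    one per critic per round.
    """
    transcript = []
    for _ in range(rounds):
        for name, critic in critics.items():
            critique = critic(
                "Fact-check this report and sanity-check its "
                "interpretation of the facts:\n" + report
            )
            rebuttal = author(
                name + " responded to your report with:\n" + critique +
                "\nRebut or concede each point."
            )
            transcript.append((name, critique, rebuttal))
    return transcript


if __name__ == "__main__":
    # Stub models standing in for the real APIs.
    author = lambda prompt: "Rebuttal: that critique reflects its own bias."
    critics = {
        "Grok": lambda prompt: "The facts hold, but the framing is anti-US.",
        "Gemini": lambda prompt: "Broadly agree with the report.",
    }
    for name, critique, rebuttal in cross_examine("WTO report...", author, critics):
        print(name, "->", critique)
```

The point of keeping the models as interchangeable callables is exactly the behaviour Cameron noticed: the same loop exposes each model’s bias by letting the others attack it.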
I guess there’s a lot of contexts you could do that in. There’s a lot of different reasons. You know what I wrote down there? Tell me why the American empire is crumbling geopolitically, and if there is a real risk of its institutions failing and it falling into civil war or dissolution, like the USSR. Give me statistics and reasons and whatever.
I’m gonna put that in and see what I get. Right up your
Cameron: Put it in. Yeah, it is. Put it into ChatGPT and get its [00:39:00] output, and you have to do a deep research, so it needs to be o3 deep research. Let it run for an hour, then give that to Grok and see what Grok says. It’s fascinating. And you know, this is the world that we will be living in, where you’re playing AIs off each other for the right reasons. Like, I’m trying to get the best possible intelligence and perspective on this. And it’s like having a debate between two really, really smart guys, or girls, or people. Well, I have, you know, voices. Actually, I have a female voice on ChatGPT now, ’cause I got sick of the male voice. I,
Steve: culture is. I wouldn’t have said
Cameron: it is
Steve: probably,
Cameron: no.
Steve: What’s interesting, Cameron, is we probably both have to tidy up. Today’s, uh, Futuristic has been quite a philosophical one, which is interesting. And I guess we can pick up on the news next week, but we have almost gone full circle from
Cameron: Next week,
Steve: start.
Cameron: we’re gonna do a show next week, like we’re gonna do.
Steve: will.
Cameron: Yeah, yeah, yeah. Yeah. Three months from now. Yeah.
Steve: [00:40:00] We’ve almost gone full circle here, where the first thing I said is we end up with this internet that is talking to itself. And you’ve just described the thing where you’ve kind of
Cameron: Yeah.
Steve: done that, where you’ve put it into the AIs to have their discussion amongst each other.
And even though it was in English, and it wasn’t in some foreign coding language we can’t understand, it was at a speed and a level of discourse and digestion of information that we couldn’t keep up with cognitively. So it’s kind of part of what I was saying: it seems like this process is already well underway.
Cameron: This has been Futuristic episode 39, brought to you by Sammo 3D.
Steve: Thanks, cam.
Cameron: Thanks, Steve.
Steve: That was really interesting. Hey, like without the news, just a bit of Sam and Cam.