
Futuristic #43 – The Lemming Race to Superintelligence



In this fast-paced episode of Futuristic, Cameron and Steve dig into a wild week in AI and tech. Cam shares how he stunned futurist Peter Ellyard by using ChatGPT to generate a bold, original idea called “The Other Year” – a radical, identity-swapping sabbatical for all Australian adults. Steve loves it, but the discussion spins off into a brutal critique of political cowardice, economic inequality, AI translation workflows, and the geopolitics of the AI arms race. From Neuralink trials to Honda’s reusable rockets, from AI-generated music to legal rulings on copyright, this one covers everything. Is AI stealing jobs or creating new ones? Are we on the edge of a superintelligent revolution—or just in a corporate lemming race off a cliff?

FULL TRANSCRIPT

Audio of FUT 43

 [00:00:00]

Cameron: Sure. Uh, welcome back to the Futuristic, episode 43. According to my notes — Steve Sammartino. My, um, AI transcription engine in Descript never, never likes having to work with your name, after all these months and years of doing it. It’s like, “Summer Chi Chico” or whatever. Never gets it right.

Doesn’t get my name right either. So don’t feel bad.

Steve: Look — bias AI. AI Italian racism is what we are hearing here, and I just wanna point that out.

Cameron: Systemic.

Steve: it’s systemic.

Cameron: Yeah, yeah, yeah.

Steve: The mob got us, and now the technocratic mob, they’re after us again.

Cameron: What’s the Italian version of anti-Semitic? Is it anti-[00:01:00]-Italic?

Steve: I don’t like wogs. It’s,

Cameron: Any wog. I called you a wog on the last episode.

Steve: for it. We don’t, we don’t, we don’t like your type around here.

Cameron: Well, it’s been a crazy week, Steve, um, in AI and tech and all of that. It’s just so crazy. But I wanted to start with something, if you don’t mind. I mentioned last time that my friend Peter Ellyard, 88 years old, um, futurist, was in Brisbane with his partner Robin. Had a lovely time with him, but had a lot of conversations about AI. Uh, shit, let me start again.

I had a lot of conversations with Peter about AI and just realized that he wasn’t really getting it still. So I’ve spent a lot of time in the last week in an email thread with him, um, encouraging him to think about it in terms of creativity. [00:02:00] I’m sort of acting as the intermediary between him and AI now.

Um, and I asked him to give me a challenge to give to one of the AI tools, to demonstrate that they are creative. ’Cause his question to me when we caught up was: do you think they’ll ever have imagination? And I said, they do right now. They’re creative right now. So he came up with this, uh, question — he wanted me to see what it could create that was genuinely creative, built around some of his work.

He has these things he calls the six pillars of livability, and he wanted it to come up with, uh, an idea for how we could make Australia a better place by 2050, built around his six pillars of livability. [00:03:00] So I’m gonna walk you through this, ’cause I think it was pretty cool, and he was sort of blown away by it.

For my prompt I used the ChatGPT o3 deep reasoning model. The prompt was this: you are now a futurist whose job is to design a better future for humanity. Come up with one idea that is completely original and is not based on extending existing trends or technologies, that would make Australia a more livable country in 2050,

and fits in with the six pillars of livability: prosperity, harmony, inclusion, sustainability, health and security. Now, to be fair, it came up with a couple of things that I rejected early on. I was like, nah, that’s not really that original, that’s just an extension of other things. You’re just, you know, taking existing stuff and making it more extreme or [00:04:00] more advanced or whatever.

So I pushed back a couple of times — twice, maybe. Then it came up with this: The Other Year Act, a once-in-a-lifetime, state-funded identity-exchange sabbatical. Core proposal: every Australian adult is guaranteed — and required — to take one 12-month “Other Year” somewhere between their 25th and 55th birthday.

During that year, they live, work, and vote in a host community whose demographics are as different from their own as an algorithm can manage: urban versus remote, wealthy versus disadvantaged, migrant hub versus old settler town, et cetera. Think of it as compulsory whole-of-life cultural cross-training. No uniforms, no bootcamps, no charity optics.

Just swapping places long enough that the word “them” loses meaning. Why it’s [00:05:00] new: national service exists; military, civic, Peace Corps-style gap years and couch-surf exchanges exist. But there is no country on earth where a universal mid-career identity-swap sabbatical is a legal right and obligation. It isn’t an extension of technology or an NGO trend.

It’s a structural reboot of social experience built straight into citizenship. And it goes on and on and on — has a timeline for delivery, how it would work legislatively, how it would be funded. Basically, a federal agency and trust fund, 0.6% of GDP, offset by winding back duplicate employment subsidy schemes. A pairing engine:

an independent algorithm balances 20-plus variables — income quintile, language background, remoteness, climate, industry, disability status. No one is sent somewhere they medically can’t live. Cost and cover: award wage paid from the trust if the host employer [00:06:00] can’t. Home mortgage slash rent frozen by law for the year. Child option: families can move together or split, with travel stipends. Civic rights: temporary enrollment in local council and state electorates — you vote where you live that year.

Return and certification: completion earns an internationally recognized cross-cultural competence license, a hiring premium in both public and private sectors. The bottom line is: instead of betting on gadgets or tax tweaks, The Other Year puts human beings into each other’s shoes, literally, for a whole year, and lets empathy do the heavy lifting across prosperity, harmony, inclusion, sustainability, health and security.

One bold, never-tried lever. Six pillars lifted together.
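An editorial aside: the “pairing engine” the proposal describes — balance 20-plus variables, never send anyone somewhere they medically can’t live — can be sketched in a few lines. Everything below is a hypothetical illustration: the proposal names some of the variables, but no actual algorithm, weights, or data model is specified in the episode.

```python
# Illustrative only: a toy "pairing engine" that maximises demographic
# dissimilarity across a handful of (invented) variables, while never
# sending anyone to a climate zone they medically can't live in.
from dataclasses import dataclass, field

@dataclass
class Community:
    income_quintile: int   # 1 (lowest) .. 5 (highest)
    remoteness: int        # 0 (inner city) .. 4 (very remote)
    migrant_share: float   # 0.0 .. 1.0
    climate_zone: int      # 0 temperate .. 3 tropical

@dataclass
class Participant:
    home: Community
    excluded_climates: set = field(default_factory=set)  # medical no-go zones

def dissimilarity(a: Community, b: Community) -> float:
    """Sum of normalised absolute differences across the balanced variables."""
    return (abs(a.income_quintile - b.income_quintile) / 4
            + abs(a.remoteness - b.remoteness) / 4
            + abs(a.migrant_share - b.migrant_share)
            + abs(a.climate_zone - b.climate_zone) / 3)

def pair(p: Participant, hosts: list) -> Community:
    """Pick the most-different host community the participant can medically live in."""
    eligible = [h for h in hosts if h.climate_zone not in p.excluded_climates]
    return max(eligible, key=lambda h: dissimilarity(p.home, h))
```

A real version would need fairness and capacity constraints on the host side; the point here is only that the medical exclusion acts as a hard filter before the “as different as possible” objective is maximised.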

Steve: Look, I cannot tell you how much I love that. That’s one of the best ideas I’ve ever heard, and I have heard similar things. It does come back to “walk a mile in someone else’s shoes”. And [00:07:00] it makes sure there’s a couple of stones in those shoes and they don’t fit properly. Or, if you’ve had stones and shoes with holes in the bottom, you get yourself a nice pair of — a dope pair of Nikes, as Zack de la Rocha said in one of his great songs.

And Rambo too — he had a dope pair of snakeskins, son.

Cameron: Look, I was impressed, and Peter was impressed, and here are two points about it — apart from the fact that I think it’s a great idea. One: the prompt was very specific about what it had to focus on. And two: it came up with this idea in a minute. I mean, after I pushed back a couple of times on the first couple it came up with — like by minute three it had come up with this idea that you, Peter, and I all thought was a great idea.

Imagine if I’d got it to come up with a hundred ideas over the course of the next hour and a half, right?

Steve: And it points out something [00:08:00] important: we have all the technology we need right now to solve all of the world’s problems, and human frailty has always been the issue — always will be the issue. And it reminds me of Harari, who I’ve got in my little thing to talk about.

Yuval Noah Harari, who wrote Nexus and Sapiens — he said it’s strange that we think an AI will solve all of our problems when the AI is based on us. He says we don’t need AI to solve our problems; we need humans to do it. Now here’s the point. That idea is a great one, but humans still need to implement it and agree upon it.

At this point, maybe the AI takes over and says, here’s where you’re going — an Uber and a humanoid robot arrive at your door and take you away to this place, and take away all your wealth if you’re wealthy. I don’t [00:09:00] know. But the ideas are there, and AI’s got great ideas, and this is a super idea that would work, but

Cameron: And it’s a bold, it’s a bold vision. And one of the things that we lack in Australian politics by design is bold vision.

Steve: we don’t have any, we used to, hundreds of

Cameron: Hmm.

Steve: granted, there were a whole lot of other problems, which we were solving — social problems — and

Cameron: Well, a hundred years ago — I mean, Gough Whitlam, in the first seven days after he was elected, sat down with his right-hand man and basically crafted the plan for how Australia’s been living for the last 55 years. You know, just sat down and said: we’re getting outta Vietnam, free education, free healthcare, uh, legalized divorce, you know, blah, blah, blah.

Steve: A lot of good ideas. And the void of leadership and courage — those are the two things lacking in, uh, political society. And we’ve got corporate capture, and the [00:10:00] lobbying is a real challenge. The fact that South Australia has outlawed lobbying is a massive move in the right direction.

That’s the real biggest issue because we don’t get brave policies simply because our politicians are captured economically. If we remove that capture, then we get a chance for politicians to make decisions for the majority, right? And we, and we don’t

Cameron: I think it’s also, I think they’re also just trying to be a small target, right? Um, if we don’t see anything bold, then there’s nothing to attack. If it’s just, eh, more of the same, then.

Steve: They’ve gotta ask themselves the question: what are they there to do? Are they there to make a difference, or to just fringe-dwell? Because what we’re getting is fringe politics — just small, incremental, nothing bold and strategic and important like this, which would really work. Because the issues, as you say, are not technological. You know, we have all of the technology we [00:11:00] need to move society forward and create flourishing on those six principles — I don’t think anyone could disagree with those as goals that we should have societally.

So they seem pretty good to me and, and that idea I think would have a dramatic impact if it was implemented.

Cameron: Yeah, it reminds me a little bit of Mormon missionaries. You know, Chrissy grew up in Utah, and, um, a lot of her family and friends go do missions when they’re 18, 19, 20. That’s how Mormon missions work. And usually they get sent to — I think she’s got a niece who’s in Chile or Argentina or somewhere like that at the moment, doing a mission.

A lot of Chrissy’s siblings went to places like that to do missions. Her father went to France to do his. But a lot of times they end up in very different [00:12:00] communities — speaking different languages, different cultural issues — for the wrong reasons. I mean, they’re trying to convince them that Joseph Smith looked into a top hat with some magic rocks and translated some magical plates.

But, uh, you know, that.

Steve: I’m so sorry. I love that so much.

Cameron: The idea of sending people into communities like that — you’re gonna come out of it with, um, a better appreciation of the other, and that’s why it’s called The Other Year Act, I think. So anyway, just proving the point that AI is creative today. If you know how to use it correctly and prompt it and work with it, it can do amazing things.

And that’s today, let alone where it’s gonna be a couple of years from now when we have superintelligence. But, um, what have you got to talk about for your past week, Steve, before we get into news?

Steve: I used AI in a way that [00:13:00] was really effective. We had some investors from China who were interested in Macro 3D — we’re raising 5 million in capital, so for any rich listeners out there, a chance to participate in the multi-billion-dollar future of, uh, automated construction. In any case, we’ve got an IM — an information memorandum — that they wanted us to send through yesterday.

I used ChatGPT to translate the IM into written Mandarin Chinese. As you know, language is really nuanced. The way I did it was: I pasted pieces of the English into ChatGPT and said, everything I paste from now on is going to be translated. That’s your singular instruction until further notice.

Again, prompting well, so you don’t have to go back and forth and do more keystrokes than needed. I told it: it’s a business-based document, so you’re going to have to make some interpretations on the language we’ve used, which is quite different in Chinese. I speak a little bit of Chinese. So when I got the Mandarin [00:14:00] translation back, I would then take that and put it into Gemini and go: now translate this back into English again.

And then I compare the two English versions, because one of the challenges is you’ve got to know what good looks like. You can’t just trust the AI that it did it — because who knows, I can’t read

Cameron: Hmm.

Steve: So I

Cameron: Hmm.

Steve: a look at it, and it was flawless. Did not skip a beat. And some of the translations into the nuance for Chinese were just perfect.

It was

Cameron: Hmm,

Steve: Took me to do an a 20 page document with financial financials, technical statements, everything took under two hours. I came to a conclusion, not only did it do an amazing job and I. Managed to use two ais, which is one of the tricks you and I have been talking about a lot. Use more than one AI to check the other ai.

And it’s almost like a little bit of blockchain — having a number of verifications across something, to back-reference and check. Uh, not only did it do an extraordinary job, but it took two hours. And then I came to the conclusion: AI [00:15:00] stole a job that would never have existed, because we were

Cameron: hmm,

Steve: never gonna hire a translator.

It would’ve cost us too much. It would’ve taken us two weeks. What we would’ve done is we would’ve just sent it across and crossed our fingers: hopefully someone understands Mandarin pretty well, uh — or understands English.

Cameron: hmm.

Steve: with that, let us know. But we sent through a

Cameron: Hmm.

Steve: within a day of them asking, and they’re like, wow.

Like they were

Cameron: Hmm.

Steve: wow. Which — surely they know we used AI to do it. And no one lost a job, but new value was created, and potentially $5 million worth of capital is going to flow into Australia, which is then gonna create other jobs.
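Steve’s workflow — translate with one model, back-translate with a second, compare the two English versions — is a round-trip (back-translation) check. Here is a minimal sketch, assuming generic `translate`/`back_translate` callables standing in for the real ChatGPT and Gemini calls; the word-overlap score is an invented automated proxy for the comparison Steve did by eye.

```python
# Sketch of a round-trip translation check: translate with one model,
# back-translate with a *different* model, then compare the round-tripped
# English to the source. The callables are stand-ins for real model calls.

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap of lowercase word sets; 1.0 means identical vocabulary."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 1.0

def round_trip_check(text, translate, back_translate, threshold=0.6):
    """True if the back-translated text stays close enough to the source."""
    round_tripped = back_translate(translate(text))
    return word_overlap(text, round_tripped) >= threshold
```

In practice `translate` would call one provider and `back_translate` a different one, so a model’s systematic quirks don’t mark their own homework — which is the point of using two AIs.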

Cameron: So my question is: did you use DeepSeek or Qwen?

Steve: See — we didn’t, we

Cameron: Why wouldn’t you have used a Chinese AI to do that instead of an American AI, is my question.

Steve: Um, no reason. I just used the two that were right there in my browser. There you go — the [00:16:00] reason is I used the two that were a mere click away, already open as tabs. No reason why I couldn’t have. And after that I thought, I could’ve done this with four or five or whatever, but the result I got was extraordinary in any case.

Uh, I would’ve — even if I’d used DeepSeek, I’d have checked back more than once. But it really made me realize that I think the biggest thing that’s going to happen with AI is a whole lot of things that wouldn’t be done without it get done, and new value gets created. And when the new value gets created, you get a multiplier effect, which is a common economic theorem, where you spend a dollar that becomes a dollar twenty, which becomes a dollar fifty, which becomes three dollars.

I mean, that’s how the entire economy grows. It’s all based on things that don’t exist yet that then exist. To be honest, translators are right in the firing line, right? It’s one of the easiest things to get rid of, and we know that, and all of us have to be careful. But our job is to look at [00:17:00] where the new value creation is.

And I just thought it was a really good example of the multiplier effect and how revenue moves sideways

Cameron: Mm

Steve: create new revenue streams.

Cameron: mm

Steve: So there was

Cameron: until they gobble up all the revenue streams.
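Steve’s dollar-that-becomes-three-dollars is the textbook spending multiplier: each round of new spending is partly re-spent, and the rounds sum toward initial / (1 − MPC). Here is a sketch; the 2/3 marginal propensity to consume is an illustrative assumption, chosen purely so that $1 of new value compounds toward the $3 mentioned in the show.

```python
# The spending multiplier: each dollar of new spending is partly re-spent,
# and the rounds form a geometric series summing toward initial / (1 - mpc).

def spending_multiplier(initial: float, mpc: float, rounds: int = 200) -> float:
    """Sum the geometric series of successive re-spending rounds."""
    total, spend = 0.0, initial
    for _ in range(rounds):
        total += spend
        spend *= mpc   # fraction of each round that is re-spent
    return total
```

With `mpc = 2/3` this converges to 1 / (1 − 2/3) = 3: Steve’s dollar becoming three dollars of total economic activity.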

Steve: If they do, we’ve got bigger problems than that. And the other one I was really interested in — I’ve been reading Nexus by Yuval Noah Harari, and I even sent you a page, uh, photo.

Just the two ideas of story and bureaucracy — incredibly interesting. Story is the systems of belief and how we translate ideas to each other — big ideas of things that we should, could, and would do, or have done. You know, religion, technology — everything’s a story. We buy the story first, and the story helps us believe in myths so that we can invent things.

But then he did this overlap with bureaucracy. And bureaucracy is the rules and the methods and the systems that become a requirement, which [00:18:00] can sometimes stop things — and sometimes things don’t fit into the bureaucratic pages or boxes you’ve gotta fill in. And just that juxtaposition between the two, and how they are at incredible points of tension with each other in times of great change.

Because we buy into the story, and then once the story’s been sold and everyone agrees, then you build a bureaucracy around that to temper the story and create boundaries, so that we can operate effectively within a society. You know, the way you coordinate big groups of people and big ideas is through bureaucracy.

You need it. And maybe how I feel about AI is that there is no bureaucracy around it. There’s just a bunch of story, and independent players just forging ahead without any boundaries at all. And while people don’t want boundaries when they’re innovating, you know, the reason we have clean water and safe roads is bureaucratic boundaries, which are really, really important.

And they are [00:19:00] left wanting at the moment. So I’m only a third of the way through that book, but oh my God, it’s just mind-blowing. He’s quite possibly the world’s greatest thinker.

Cameron: Wow. I’ll have to, um. I’ll have to go arm wrestle him for that title. That’s, that’s my title. I’m, um,

Steve: I’m so sorry.

Cameron: trade, trademarked that title.

Steve: Have you really? I hope that you have: World’s Greatest Thinker, TM, self-proclaimed.

Cameron: Tm. Yeah. Yeah.

Steve: I

Cameron: Yeah. Interesting. Uh, well, speaking of bureaucracy and AI — uh, Trump’s One Big Beautiful Bill passed both houses, finally. Yeah.

Steve: Oh my

Cameron: Um, so it’s been

Steve: Did they adapt the statement that no laws can interfere with the progress of AI, or is it still in there?

Cameron: To the best of my knowledge, that [00:20:00] is still part of it. So when Trump signs this — today, tomorrow — it will be illegal for any state in the US to pass any laws that regulate AI in any way until 2035.

Steve: Well, no one will be here by then, so it’s fine. The singularity will have occurred. I don’t know whether I’ll just be living in a cloud — no one knows. 2035? He’s basically saying there’s gonna be a nuclear war, an AI war. We don’t know what will happen. Duck and cover. Okay.

Cameron: So, yeah, obviously for the last two, two and a half years since ChatGPT came out, there’s been an enormous amount of talk, uh, in the US and around the rest of the world about regulating AI — safety, guardrails, et cetera, et cetera. Now the US is not going to regulate it, and I suspect any other [00:21:00] country that tries to regulate it — let’s say the Australian government tried to regulate AI, like we’ve got the eSafety Commissioner that’s been regulating social media — I suspect the Americans will push back and penalize countries through tariffs or some other mechanism if you try and regulate AI.

So we’re pretty much in a situation now where there is gonna be no effective regulation anywhere in the world on AI. China may regulate it in terms of what it can and can’t say about the CCP and Tiananmen Square and those sorts of things. But effectively, the Trump administration has just removed any legislative approach to safeguarding us from AI.

Steve: That’s terribly concerning, [00:22:00] especially when the jury is out among experts on what the potential consequences could be — socially, economically, in terms of ceding control to a potentially sentient being. It seems like an incredibly foolish thing to do. Of course, Trump theoretically only has three and a half years left, or a little bit less than that, and it could potentially be kiboshed legislatively. But it just seems like someone taking a geopolitical position — trying to win the geopolitical race with AI — without understanding the potential consequences.

Cameron: Look, I was skeptical that humans would be able to regulate AI very effectively, or for very long, anyway. So I don’t think it makes a great deal of practical difference, but it’s interesting now that that’s the p— [00:23:00] Yeah.

Steve: if we can’t legislate against known impacts of social media, algorithmic division, uh, on the

Cameron: No, but I’m gonna,

Steve: of preteens on social media, then we’ve got zero chance. I mean, and that’s clear. And the jury is in, the studies have been done and we know the impacts.

And if we can’t regulate against that, or the monopolistic behaviors of big tech, then we’ve got zero chance of doing it with AI — ’cause that’s far more complex, has less research, less understanding, and the experts can’t agree on what the potential impacts are. So you’re right, but, but it

Cameron: But I’m not even talking about it from that perspective. I’m talking about it from: you can’t — a lesser intelligence can’t regulate, uh, a superior intelligence. If we have superintelligence,

Steve: yes, but.

Cameron: you’re not gonna be able to regulate it

Steve: that’s right. But you are talking

Cameron: by definition.

Steve: a post moment, when that bridge gets crossed. And I think what we’re talking about here isn’t the level of intelligence — it’s [00:24:00] more the level of independence of an AI. But there’s a window of time still available where it could be regulated, before the moment when the AI has self-direction, uh, its own independence, which we’ve spoken about.

Cameron: But commercially, and from a security perspective, there’s no — like, my understanding of the way the AI industry elite think about this in the US is that it’s an all-or-nothing game. And we’ll talk about Zuck and his, uh, buying spree in a moment, but it’s an all-or-nothing game here.

It’s the first country or company — or both — to get to superintelligence who wins. And they believe that China is quickly catching up to the US, and probably will supersede their [00:25:00] development in this space in the near future. So they can’t slow down until they get to superintelligence.

When you get to superintelligence, it’s too late anyway. So I just don’t think it was gonna happen, for commercial and, uh, security reasons — ’cause they’re terrified of what will happen if China gets it. But

Steve: Human

Cameron: of

Steve: Human lemmings. It’s the AI lemming race.

Cameron: Yeah,

Steve: the cliff. We know

Cameron: yeah,

Steve: is coming, and we’re like, yeah, but we have to be first on the cliff.

Cameron: yeah, yeah. Let’s get to the.

Steve: The AI geopolitical race has become lemmings. The cliff? Oh yes, it’s right there. What’s gonna happen? We go: we don’t know.

Probably won’t end well. Could be some carnage. So what are we gonna do? Let’s make sure no one gets in the way of us running off the human AI lemming cliff.

Cameron: So, uh, I think on our last show we talked about the fact that Zuckerberg was trying [00:26:00] to buy an AI company, and/or all of OpenAI’s top devs — or the top AI devs from everywhere, really. And Sam Altman, I heard on a podcast a week or so ago, was saying that Zuck hadn’t been successful, ’cause OpenAI’s people didn’t care about money.

They wanted to be part of something important. Well, that didn’t last very long — that aged like milk — because Zuck has managed to hire about 10 people. Now, whether they are top-tier OpenAI researchers or just second-tier OpenAI researchers is still debated. I did hear on a podcast yesterday, uh, somebody saying they heard that for one of the people Zuckerberg has hired, the salary package was a billion dollars — not just a hundred million, but a billion dollars to get [00:27:00] this person.

But the,

Steve: A company car goes with that, and, uh, some great lunch benefits.

Cameron: But the rationale that this guy — who’s, um, Dwarkesh Patel, I think — was giving was interesting. Like, Zuck has been trying to buy Ilya Sutskever’s startup, SSI, Safe Superintelligence. The guy who was the chief scientist at OpenAI, a co-founder, left after the whole Altman firing-rehiring thing. Zuck’s been trying to buy his company for about $30 billion,

um, is the rumor. Ilya turned it down, but his co-founder and CEO, Daniel Gross, I think has just left the company. So he might be going to Meta to be part of their AI play. He might be the guy that’s getting a billion dollars. But the rationale for paying these sorts of [00:28:00] salaries — apart from the obvious one, that he wants to win and he wants to suck up all the best people — is interesting.

’Cause this guy was saying: well, SSI only has about 15 people in it. If you’re paying $30 billion to get 15 researchers — and let’s say one of them is Ilya, and maybe 10 billion is for Ilya — the other 20 billion is for the other researchers. That’s basically, you know, roughly a billion dollars per researcher.

So if you’re willing to pay a billion dollars per researcher to buy a company where they don’t have a product — you’re just getting the people — why not just offer that money directly to the people anyway? Right? It kind of makes sense.

Steve: I’m being a bit flippant here, but there are two costs: the chips and the server farms, and the researchers. That’s really all there is. There’s only two pieces of the puzzle — a couple of UX designers if you’re gonna launch something, but there isn’t a huge amount in it. [00:29:00]

So that’s your major cost. I found it interesting to see, popping up in my social feeds, the signings of AI rock stars versus football players like Ronaldo and Messi. And I just thought that was a really nice, uh, approximation of where society’s become — the idolatry of innovators and, uh, corporate CEOs.

And now it’s not just the CEO — it’s the star of the club who becomes the rock star; now the coder is the player. Uh, I think that’s a really interesting analogy. But again, to me it points to inequality in incomes now, where if you’re on the right side of some sort of economic equation, you’re going to garner inordinate wealth, uh, the benefits really being served by a few large corporations.

You know, we’re in a technocratic oligarchy, and this is just another reflection of that, economically. I get why Big Tech does it — it makes [00:30:00] sense for Zuckerberg to do it. The prize is so big you can pay it. It’s just a really simple economic equation, where you look at the cost of acquisition versus the benefits of said acquisition on capital flows.

It’s actually quite easy. But I think socially it’s a bigger reflection of the problems we face now, where there is that much money floating around. That’s why we don’t have any of these handbrakes happening. Uh, there’s too much power and too much money, and this is just another reflection of that. And as much as I love watching a football player run around, I think it’s nice that people who are actually building and making things are making more money than someone just kicking around a dead animal filled with air.

Cameron: Look, I’ve argued for years that there should be salary caps on everybody, CEOs, sports players. There should be, I don’t, I don’t care what it is, a million, 2 million, 10 million, but there should be a salary cap on what people can get [00:31:00] paid.

Steve: amount of money and in wealth, I, no one needs a billion dollars. Yeah. Having more than a billion, no one even needs a

Cameron: I.

Steve: or 50 million, let’s be honest. But, you know, to, to keep the capitalist viewpoint and incentive in people’s minds, which is bullshit because a couple of million bucks and your life’s pretty good, I imagine.

Um, it should be: after you earn a billion dollars, it’s 99% tax — or 90 cents on the dollar is taxed once you earn over 5 million, 10 million, whatever the number is. 90% tax. I think that’s really simple. But my view on how to rein in CEO salaries and sports salaries: rather than putting a limit on the top, it should be a maximum multiple of the lowest paid person in your company.

Not the average —

Cameron: That’s interesting. Hmm.

Steve: So, um, the

Cameron: Hmm,

Steve: CEO can only earn — I don’t know what the number is, uh — 50 times what the lowest paid person in the company earns. Then what you’ve got is a nice alignment [00:32:00] where they have to justify it — they have to justify their pay rise’s impact on the others within that construct.

And I think that’s, that

Cameron: hmm,

Steve: is a really nice way to do it. So there’s no limit — you can earn as much as you want, but we need to

Cameron: hmm

Steve: bring society along with us. And, and

Cameron: hmm.

Steve: whether it’s the cleaner or whoever — the top pay can only be a multiple of the lowest paid person in that company.

Cameron: I like that.

Steve: I knew you’d love it, ’cause you’re a big, long-haired communist from
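Steve’s pay rule — the top salary capped at a fixed multiple of the lowest paid person, not the average — is simple enough to state as code. The 50× multiple is the number he floats on air; the function names are invented for illustration.

```python
# Steve's rule as code: the highest salary in a company may be at most a
# fixed multiple of the *lowest* paid person — not the average.

def max_allowed_salary(salaries: list, multiple: int = 50) -> float:
    """Cap on the top salary, anchored to the lowest-paid worker."""
    return min(salaries) * multiple

def is_compliant(salaries: list, multiple: int = 50) -> bool:
    """Does the current top salary respect the multiple-of-lowest cap?"""
    return max(salaries) <= max_allowed_salary(salaries, multiple)
```

Note the alignment this creates: under the rule, the only way to raise the top salary is to raise the floor first — the lowest-paid person’s wage sets the ceiling.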

Cameron: Big, big commie, yeah. Um — Mark Chen, the Chief Research Officer at OpenAI, sent a memo to staff on Saturday promising they would go head-to-head with Meta in, uh, salary discussions. He said: I feel like someone has broken into our home and stolen something. Please trust that we haven’t been sitting idly by. [00:33:00]

And they’ve announced that they’re basically shutting down the company for — I think it’s a week, um — while they figure it out. They’re closing the doors.

Steve: What does that mean, though, in terms of end users? Does it

Cameron: Dunno, man. Hasn’t really,

Steve: mean the products are

Cameron: I

Steve: available for anyone to use during that week? Does

Cameron: no.

Steve: one’s in the office — do the servers get a little rest and we save some global electricity?

Like, what happens?

Cameron: I am quoting from Wired magazine, where they’ve got somebody off the record telling ’em stuff: OpenAI is largely shutting down next week as the company tries to give employees time to recharge, according to multiple sources. Executives are still planning to work. Those same sources say: “Meta knows we’re taking this week to recharge and will take advantage of it to try and pressure you to make decisions fast and in [00:34:00] isolation,”

another leader at the company wrote, according to Chen’s memo. “If you’re feeling the pressure, don’t be afraid to reach out. Mark and I are around and want to support you.” So I guess, uh, they’re gonna have some people keeping the service up and running, but everyone else has taken the week off to think about their future, what they wanna do.

Do you wanna just take the money and run and hope that Meta can deliver something, or do you wanna stay at OpenAI, and they’ll try and match the salary offerings? But you know, when people talk about this, there’s a lot of, um, debate still about LLMs and how much runway the current models have and whether or not LLMs are gonna get us to superintelligence.

AGI, by the way, no one’s talking about AGI anymore. AGI’s just assumed now. Um, everyone’s focused on superintelligence. The AGI thing is kind of,

    Steve: because [00:35:00] we’ve been pushing that on the futuristic for some time now.

Cameron: yeah. Um, and as they keep saying, with AGI, you ask 10 different AI researchers for a definition, you’ll hear 10 different things. So it’s, it’s kind of stupid. But, you know, people argue that the current models aren’t gonna get us there. I have to point to the fact that the people leading these companies are

taking billions of dollars of investors’ money and capital and investing it in this race. Hundreds of billions of dollars if you factor in building data centers like Stargate, et cetera. So they certainly believe that this is gonna get them there, and they’re spending everything in the bank and then some to get there as quickly as possible.

Now, the pushback is: Zuckerberg also thought that he was going to bring the metaverse [00:36:00] into reality for the last five years, and he spent billions on that and got nowhere.

Steve: A fever dream where he just did not understand humans, and that’s what happens when you’re a robot, Mark Zuckerberg. Unless you’re a human, it’s very, very hard to understand humanity.

Cameron: Speaking of robots, I watched the latest Neuralink update. Uh, it was a video that Elon opened, uh, and then a bunch of his top guys were talking about what they’ve been doing recently. They now have seven or eight people with Neuralinks inside of them. They were all on the video, uh, talking about their experience.

    Two of them were playing Call of Duty, using their brains against each other. Uh, my first thoughts were, as I’ve.

Steve: Were they really playing Call of Duty, given that he had pretend robots at his launch? Just saying.

Cameron: First of all, as I’ve said before, who the fuck is gonna [00:37:00] let Elon anywhere near your brain after the last six months? B, Elon seemed quite back to being normal and rational. So either he’s off the ketamine or he’s had an upgrade to his Neuralink, and I’m starting to think that for the last six months he just had a, a very early version of Neuralink in his head and it was glitchy.

Yeah, yeah, yeah. He had to get an update done. But that aside, the reason I’m still interested is because, whether or not the company that puts that brain-computer interface into you is Neuralink, someone is gonna be doing it. Obviously they’re not the only company doing BCIs. There were companies doing it before them.

There are gonna be companies coming after them. It’s the advancement in the innovation and the technology that’s interesting. We’re all gonna have a BCI. And you know, you talked on the last episode about Kurzweil’s view of merging with the AI. Elon [00:38:00] was talking about that basically in his introduction.

He was talking about the fact that the human brain, for all of its wonders, is actually quite slow at processing information. And if we have a BCI, we’ll be able to process information a thousand times faster than we can with our carbon-based wetware. So, uh, you know, it will, if there are still jobs to be had, at some point become a competitive advantage in the marketplace.

If you don’t have a BCI, it’ll be like going for a sales job in the nineties and not having a driver’s license. Like if you don’t have a driver’s license, you know, you can’t be an Uber driver, right? You can’t be a sales guy. Same sort of thing. And that was my justification for getting a mobile phone in 1989 or 1990 or whatever it was.

    It was [00:39:00] a competitive advantage as a sales rep that I could be contactable by the office, I could contact my clients, that kind of stuff. My clients could contact me.

Steve: technology becomes some advantage, whether it’s having a car, whether it’s using a mobile phone, computer literacy, must-have computer literacy circa 1993. And then it was like, you understand the web, and must have a degree, which again, you know, that is putting information into your brain, uh, and comes at a cost.

And then now, what’s the cost of Neuralink? I mean, one of the real dangers of brain-computer interfaces is that they become subscription models. That’s incredibly dangerous, the fact that you could upgrade your brain but be dependent on a cloud which you don’t own or control. I think

    Cameron: Have you watched the last season of Black Mirror?

Steve: I have. The last episode was extraordinary.

    I think let’s give the listeners a little bit of a spoiler on that one.

Cameron: [00:40:00] I haven’t seen the last episode. I only watched the first couple, but I was talking about the one where the woman has the chip in her brain that requires a cloud subscription.

Steve: She has an injury, a brain injury of sorts, and then they put a chip in her that she can operate. But what happens is, at first it’s free, and then they have to upgrade to a subscription, and it has geographic boundaries. So they go away on a trip and she crosses a geographic boundary, the equivalent of losing 5G, but she loses access to her cloud, and then they can’t afford it.

So she starts doing contextual advertising in the middle of the day. She’s like, did you wake up tired in the morning?

Cameron: She is a teacher in a classroom. She starts giving ads to the students in the classroom. Yeah.

Steve: It’s really, uh, horrible. And then they have all sorts of upgrades you can get where dopamine levels get up, and a horny husband does that on a, on a, on a trip where they go away and she gets crazy horny. But, but the whole thing is that it ends up [00:41:00] in a cycle where it keeps costing you more.

You’re locked into something with zero escape, and someone else controls your own mind, and you have to subscribe to that, and it becomes more and more draconian and expensive. I think that’s an incredible danger. And, and this points to the importance of open source. And I think about a lot of things that are of incredible value that are open source.

You know, like language. Language, us speaking English, or whatever language we want, we can adapt it, we can do what we want with it. That’s how you end up with dialects. That’s how you end up with slang and certain, uh, industry vocabulary. This is kind of where we are. It’s an extension of language and knowledge, and the fact that that’s not open source is a real problem.

    Cameron: Mm. Honda Rockets.

Honda successfully launched and landed its own reusable rocket. [00:42:00] It looks, uh, very similar to, um, a SpaceX rocket and landing. It didn’t have the chopsticks, just did a vertical landing. Uh, and again, this is sort of my point regarding the stuff that Elon’s doing: cutting edge, not necessarily the first, but, you know, fast-follower kind of stuff with a lot of this, even if he’s not completely innovative.

He’s not gonna be the only one doing VTOL rockets and, uh, reusable, relandable rockets. There’s gonna be a whole bunch of companies that are able to catch up and do this sort of stuff. It’s the four-minute mile, right? Once it’s been done, everyone else is gonna figure it out. So, you know, we’re gonna have a whole bunch of players that are gonna follow in Elon’s footsteps, for not just BCIs, but the space race, the rockets, all that kind of stuff.

    But if you [00:43:00] haven’t seen the video, go look it up. Honda’s reusable Rocket. It’s still amazing, regardless of who does it. It’s super impressive to see.

Steve: And if you hadn’t told me it was Honda, I would’ve thought, oh, that was just another one of Elon’s things.

    Cameron: Yeah.

Steve: It looked exactly the same, to someone who doesn’t follow it closely. Uh, yeah. Which goes to show we need as many substitutes as possible in as many different economic and technological realms.

    The more overlap and substitution we have, the more competition, the more open things become, and the less draconian that that powerful new technology becomes.

Cameron: So everybody’s been talking about the Velvet Sundown this week, Steve. Uh, I sent you a link about these guys during the week. So, people, if you’ve been reading anything, you’ve probably heard about this. Um, there’s a band that appeared on Spotify, the Velvet Sundown. They’ve got an album out, and they’ve got now, I think, about half a million people subscribed to them on [00:44:00] Spotify.

But until the last couple of days, there was no evidence that this band exists or has ever existed. There were a lot of people assuming that they’re an AI-generated artist. Rick Beato on YouTube

    Steve: I love Rick.

    Cameron: threw their music into his AI analysis tool, looking for evidence of humans in the recording and couldn’t find any.

He believed it was AI-generated music, from his AI analysis of the AI songs. Uh, since these stories started to come out, the band now does have an official X account, and they’re going, no, we’ve never used AI, we’re humans. But the photos of them are obviously AI-generated. But my point was, again, like you looking at the Honda rocket: if I had listened to this album and not heard any of the media about it, I’d kind of [00:45:00] dig it.

    It’s kind of Americana Rock.

    Steve: Yeah.

    Cameron: Who’s playing the piano in your house, by the way?

    Steve: you hear that? It’s my

    Cameron: Yeah,

    Steve: I thought, oh, I was hoping you couldn’t hear it.

Cameron: I can hear it. Yeah. Uh, no, it’s okay. It’s a little bit ambient. At first I thought you were listening to the Velvet Sundown. Um, yeah, so it gets back to this question that we’ve, we’ve talked about before, about art and AI. Um, I was having this debate with my son Taylor, uh, the other day about social media as well.

’cause he and his brother keep sending me videos that are AI-generated, Veo-generated stuff, more of the Yeti stuff or stormtroopers, or some straight-up racist content that he was sending me. Sitcom racist stuff. Um, there’s a whole series of things about Chinese people eating cats and dogs. There’s these, [00:46:00] uh, things that have been pushed out.

    But the, the, the point is that if it’s entertaining.

    Steve: it

Cameron: You know, we’re still at this weird period of time, I think, where, you know, we’re questioning, it’s fuo or rupo, which is what I asked you when I sent you the Velvet Sundown. But we’re gonna quickly reach a point, I think, where we don’t even ask the question.

    If I stumbled across this music on Spotify, hadn’t read any of the media, it just popped up in my, you know, recommended new things to listen to, I would’ve gone, this is good. I dig it. I would’ve listened to it. I wouldn’t have questioned it.

Steve: Well, we already have that in many ways in, in, in the movies. You’ll watch a movie and you don’t care whether the scene was actually filmed, and some explosions, or AI. You’re just like, am I digging it? And I think that entertainment especially is the, am I [00:47:00] digging it? I do think there will be a new kind of genre, because categories tend to split rather than aggregate, where it’s like, there’ll be a category where it’s, this is a live band, this is an AI band.

And you might have to flag that. And I don’t mind that as an idea. Some people won’t even check and won’t care. But you, you might have to at some point flag, say, this is AI-generated. I listened to it. I didn’t really like it personally. I thought Fake Everything was a far better AI-generated song.

I’m just saying. But I, I don’t think it matters, but it does point to one important thing. The Velvet Sundown are definitely using the tactic at the moment, which is, is it AI, isn’t it AI, which is one of the great marketing tactics right now, fuo or rupo, and smart brands are making something and saying, this is AI, or this isn’t, or making people guess, and, and that’s a great [00:48:00] way to get attention in the attention economy at this point in time.

Cameron: Yeah, I still believe that, uh, brands will very soon, if they’re not already, be creating their own bands, their own social media influencers, and then sneaking their advertising and marketing and promotional messages into the content. What was I watching? I was watching, um, some crazy movie from the two thousands the other day.

Uh, can’t remember what it was, but, you know, there was just so much branding in it. Like you’d see the, the mobile phone, uh, brand where the person picks up their mobile phone and they’re holding the, the brand in front of them, and they’re drinking a soft drink and the logo is turned to the camera, and it was really in your face.

    Steve: enjoy Pepsi Cola. [00:49:00] Well, if you don’t have

    Cameron: And

Steve: If you have an artist who is the bestselling or most-downloaded or most-streamed artist, all of a sudden you don’t

    Cameron: oh,

Steve: a rock-and-roller cola wars.

Cameron: it wasn’t a film. Uh, it was a Beyoncé clip. It was a, it was an over-the-top Beyoncé clip from 15, 20 years ago. I can’t remember what it was. Um, but the film clip, which was super high production, you know, massive budget, massive cinematography, big action sort of thing. Yeah. And, and the, the brand positioning in it was insane.

    I was like, okay, well no guesses who paid for most of this video clip, right?

    Steve: Yeah,

Cameron: So I, I think we’re gonna see that. But we’re gonna have books written by AI, and movies and TV shows and music, and, you know, some [00:50:00] people will Luddite their way through it and go, no, I refuse to watch this, I need to know if it’s real or fake first.

But I do think for the majority of people, and I include myself in this, it won’t even be a question. Is it good? If it’s good and I like it, then who cares?

Steve: So I would love a new Rage Against the Machine album, and if the

    Cameron: Me too.

    Steve: together and say,

Cameron: If Zack de la Rocha can’t get his fucking shit together with Tom and make another one, then fuck it. I will listen to a fake Rage Against the Machine album tomorrow.

Steve: to a fake Rage Against the Machine album. I’ll fucking make the fake Rage Against the Machine album.

Cameron: I will even listen to covers of Rage Against the Machine. I just rewatched the fourth Matrix film, whatever the fuck it was called,

    Steve: Right. I don’t know if I’ve seen it.

Cameron: Matrix Resurrections, uh, to see if it held up any better. And I enjoyed it, maybe a little bit more the second time around, but it’s still [00:51:00] kind of not very good.

But the final track is, um, a cover of Wake Up from the final credits of the original film, with a woman singing it, and it’s kind of, it’s stripped back. It’s just sort of drum and bass with her doing the lyrics. And I was like, you know, it doesn’t hold up to the original, but, uh, it’s still okay, ’cause it’s a great track.

    Steve: yeah.

Cameron: Could have been AI for all I know, but yes, fake media is gonna become a bigger and bigger thing, like it or hate it. Speaking of that, though, how much time have we got? Eight minutes. It’s come out, there’s been this court case against Anthropic. It came out that Anthropic purchased millions of physical print books to digitally scan them for training Claude.

    And they [00:52:00] won the federal court case. Um,

    Steve: That is an absolute disaster that they won the court case.

    Cameron: you think.

    Steve: Absolutely. I.

Cameron: Judge William Alsup of the United States District Court for the Northern District of California ruled in favor of Anthropic, finding that the company’s use of purchased copyrighted books to train its AI model qualified as fair use. While the case centered on emerging AI technologies, the implications of the ruling reach much further, especially for institutions like libraries that depend on fair use

to preserve and provide access to information. This is a blog post from the Internet Archive, which I’m a big user of. In this case, publishers claimed that Anthropic infringed copyright by including copyrighted books in its AI training dataset. Some of those books were acquired in physical form and then digitized by Anthropic to make them usable for machine learning.

The court [00:53:00] sided with Anthropic on this point, holding that the company’s format change from print library copies to digital library copies was transformative under fair use factor one and therefore constituted fair use. It also ruled that using those digitized copies to train an AI model was a transformative use, again qualifying as fair use under US law.

    Steve: Again, stealing their raw materials to make a product. It is not fair use because AI has an unfair advantage compared to a human using something and learning and, and putting their own creativity on top of it. My honest opinion is this is the same disaster that happened when everyone let Google crawl their websites free in every search engine, and then they stole all the traffic and all the revenue, and they basically just put a thin layer of innovation on top of a whole lot of people’s hard work.

I, I, I think it’s a disaster. I think that copyright in many ways is over the top. [00:54:00] I’d cite Disney here, stealing stories and extending, uh, copyright periods, among other things. But this doesn’t feel as though it is fair use, because you have a non-human ability to digest that information and create new value where the original content creator is not in any way rewarded.

    That’s my view.

Cameron: This goes against everything that you’ve said to me on this show over the last couple of years, Steve.

    Steve: Well,

Cameron: You’ve just done a one-eighty. You’ve just, you’ve just torpedoed this whole thing.

Steve: I haven’t torpedoed it. No, I haven’t torpedoed it. I think the way they went about it and bought copyrighted materials and put them in there is very, very different to scouring the web. That’s what I think. They’re two different ways.

Cameron: So if I buy a hundred books on Julius Caesar and read them, [00:55:00] and then go write my own book on Julius Caesar based on what I’ve read,

    Steve: Yes.

    Cameron: okay.

    Steve: Yeah, because it’s like having a running race

    Cameron: I.

    Steve: running on their legs and someone having a motor vehicle. They’re two different things. They’re two different categories. They’re not the same category. That is fair use if you do that because you’re AI Cameron. I know you’re the world’s most intelligent man, but you’re not an AI with superpowers.

So they’re basically scooping everything up and then spinning it out. I think that they should be able to, uh, use the books and create the AIs, but I think there should be some form of distribution, like what the music industry did with radio stations and TV for years, where they have, like, a licensing fee or something like that, where you get a distribution, which wouldn’t be a lot of money.

It’d probably be 50 cents to every author, say, if your book’s in there, but it would pay homage to the fact that the raw materials come from somewhere. [00:56:00] So I think there should be some kind of licensing or royalty structure where the AIs and the companies running these AIs have to, in some way, participate in the economy underneath it, which makes what they do possible,

    Cameron: Hmm. Yeah. Oh look, I fundamentally disagree and I don’t think that’s even workable on a practical level. But, uh, you know, I think the fact that we’ve built, we being the human race have built a tool that is more efficient at writing or producing music or producing film or whatever it is, um, is a tremendous thing.

    The fact that it can do something better than humans.

Steve: I don’t want it to stop. What I, what I just think, and you’re right, practically it’s a very, very difficult thing to do, right? It’s very difficult, and the music [00:57:00] industry tried to stop people from downloading and going streaming and all of that, and it sort of leveled itself out and found a way forward with Spotify and so on.

But I do think that there’s a precedent within the digital economy where thin layers of innovation, and thin is probably a bit disingenuous, innovation is layered on top of something previous, but they historically have not paid for their raw materials. And it’s created enormous wealth inequality. It’s created, uh, too much power in too few hands.

And I feel like we haven’t learned the lessons of the first digital era, where the large companies basically hoovered up everything, got their raw materials free and distributed it, and put more money into fewer hands. I want the technology. I think the technology’s good, but I think in some capacity we need to find a way so that the corporations creating this new technology, which we all want, and I want, and I don’t want them to stop, participate in some way [00:58:00] in the economy

that made it possible. That’s what I’m

    Cameron: So what you’re saying is we need a UBI is what you’re saying.

    Steve: No, I’m definitely not saying

    Cameron: are, you are, you’re just using different words for it. But you’re basically saying these companies are gonna make a lot of money out of this. So they,

    Steve: income. Universal

    Cameron: they,

    Steve: not

    Cameron: they need to redistribute those funds in some way that everyone gets, uh, participates in that.

    So it’s a UBI. You’re just

    Steve: materials. No, I’m talking about the raw materials that went into it.

    Cameron: Yeah. You’re talking about a UBI for authors.

    Steve: materials Cam.

Cameron: The basis of a UBI in terms of an AI world is that the AIs, you know, fund it all.

Steve: Well, there’s a better way. Just have AIs run everything, and everything be free, and everyone has access to everything.

    Cameron: It’s UBS

    Steve: from the economy

Cameron: Universal Basic Services. Yeah. Yeah. All right. We’re coming up to an hour, Steve. That’s it. We’re done. We’re out. You good?

    Steve: Yeah. I’m so

    Cameron: Great.

    Steve: [00:59:00] I think we

    Cameron: That’s good.

Steve: today, and we had some disagreements, and I think no one wants the Mutual Agreement Society on a podcast. I’ve always said that, Cam. In fact, I’ve never said it, and if I say I’ve always said it, I’ve never said it. And that’s the first time.



Futuristic, by Cameron Reilly
