This week on Futuristic, Cam, Steve, and Tony Kynaston of the QAV Podcast tackle the burning question: Will AI and humanoid robots bring about a utopian future or massive unemployment? They dive into potential economic shifts, the state of media, Apple’s antitrust issues, and whether competition will drive tech adoption. Tune in for a lively two-hour debate on the future job market and the possibility of a global AI Cold War.
00:00 Introduction and Nostalgic Sitcom References
01:06 Guest Introduction and AI Optimism
12:04 Ray Kurzweil’s Predictions and AGI Discussion
45:22 Australian Media vs. Meta
58:02 Big Tech and Antitrust Issues
01:01:25 The Impact of AI on Employment and Economy
01:20:24 Debating Utopian Futures
01:20:59 Economic Realities of Utopia
01:25:17 AI’s Role in Future Jobs
01:26:56 The Human Element in AI Integration
01:31:07 AI’s Impact on Professional Sectors
01:32:49 The Future of AI and Society
01:38:57 Regulatory and Economic Challenges
01:53:15 The AI Cold War
01:55:41 Final Thoughts and Reflections
FULL TRANSCRIPT
[00:00:00] Cameron: you guys. Welcome back to a very special edition of Futuristic. That’s what they used to say on sitcoms in the
[00:00:16] Cameron: 80s, wasn’t it? If they were going to talk about AIDS or suicide or something, it was a very special edition of, uh, Different Strokes this week, where somebody got touched by a Catholic priest and we’re going to talk about the ramifications.
[00:00:32] Steve: See, I thought when different strokes came out that look, America’s not racist. It’s amazing. Look, a white rich guy stole two black children and put them in his
[00:00:40] Steve: penthouse apartment. Like, seriously? And I fell for that? What?
[00:00:46] Tony: I thought you were going to tell your joke about reverse exorcism then, Cameron.
[00:00:51] Cameron: Ha ha ha ha! My
[00:00:52] Cameron: joke? Why is it my joke?
[00:00:53] Tony: Oh was Taylor’s. It was one of the Reillys.
[00:00:55] Cameron: Yeah, yeah, yeah, it’s Taylor’s joke. Yeah, yeah. What’s a reverse exorcism, Steve?
[00:01:00] Steve: No idea.
[00:01:01] Cameron: When a demon pulls the priest out of a child. Yeah, uh, it’s episode 27 of Futuristic with myself, Cameron Reilly, Steve Sammartino and Tony Kynaston, the man from the QAV podcast, uh, who’s come on to, um, discuss and debate with Steve and me our optimistic AI projections. How are you, Tony?
[00:01:31] Tony: Good, thanks, Cam. I don’t know about discuss. Yes.
[00:01:34] Tony: I don’t know about debate. I often agree with what you guys are saying and thanks for inviting me on. It’s a great show.
[00:01:39] Tony: It’s great to explore this area and you guys do a good job.
[00:01:43] Cameron: Thank you, Tony. How are you, Steve?
[00:01:47] Cameron: What’s on your hat, Steve? I should have worn my Joe’s BBQ hat, but you’ve got a hat on. What’s that? Buck Fever. I thought that was, I thought you were going with the Pulp Fiction line there. My name is Buck.
[00:02:01] Steve: It’s a, it’s a local hat business called The Farm
[00:02:06] Steve: and all of them have some sort of an animal and a statement and I just like them and this is one and I get one every year for Christmas.
[00:02:14] Cameron: That’s nice. Well, let’s, uh, let’s talk about AI, um, before we do, Tony, the way we
[00:02:22] Cameron: normally start the show is we talk about one thing of note that we’ve done with AI or futuristic technology since our last episode. Uh, do you want to start?
[00:02:34] Tony: I’ll be very short, I’ve done nothing, but I’m just looking at Steve’s cap there, you sure it’s not a spoonerism, Steve? Buck fever?
[00:02:42] Steve: Could be.
[00:02:44] Cameron: Spoonerism.
[00:02:45] Tony: Yeah, swap the initials.
[00:02:47] Steve: Oh, no, it’s definitely not that.
[00:02:51] Cameron: Oh, there you go. You’ve, you’ve taken the show right, immediately, straight out of the gate, down to another level, Tony. Well done. Well done.
[00:02:59] Tony: not me, I’m not wearing the hat. It’s just where my mind, it’s just where my mind goes.
[00:03:05] Steve: If they’re all spoonerisms, that one doesn’t work, so.
[00:03:08] Tony: okay. It’s not the
[00:03:09] Steve: That botches that theory, doesn’t it?
[00:03:12] Tony: Well, possibly, yeah. Uh, no, I’ve done absolutely nothing with technology this week,
[00:03:17] Tony: Kev. I mean, apart from the fact that we live in an age where we’re talking to each other across vast distances and, and, uh, in real time with video, it’s pretty amazing, but otherwise, no.
[00:03:28] Cameron: What about you, Steve? Done anything impressive?
[00:03:30] Steve: Well, I have something disappointing to announce. Weekly, almost daily, I put up what I believe are thought-leading posts and ideas of things I’ve written and said on places like LinkedIn to generate work, and I get a few views. If I do something incredibly poignant about the economic impact of AI,
[00:03:49] Steve: five people read it. But this week, I went MrBeast style and went into the, let’s call it the game show side of AI. I put up a quiz: guess which picture is AI generated and which one is an actual photo? Bam, 20,000 views, which is the decline of modern society. Put up a pop quiz, people are in. Put up something thought-leading that might change their economic or social circumstances,
[00:04:12] Steve: Crickets!
[00:04:13] Steve: up something that goes to the lowest common denominator of I’m smart, I can guess which one’s AI, and bam, 20, 000 views. That’s all I got to say.
[00:04:21] Steve: That was less than a minute. Check me later.
[00:04:24] Cameron: you gotta entertain, I guess is the lesson there, Steve. People wanna be entertained,
[00:04:28] Steve: Seems like
[00:04:29] Cameron: not educated, entertained, which is why we’ll all be getting our close off
[00:04:34] Steve: Entertainment is greater than education in 2024. Is that where we’re at? Wasn’t the internet the promise of all knowledge to
[00:04:40] Steve: all people?
[00:04:41] Cameron: yeah. Well, I’ll tell you what was entertaining was the, uh, US presidential debate, um, talking about the decline of civilization, uh, that was entertaining for all the wrong reasons. Um, well, I, you know, I continued to do a lot of coding. Um, the AI tool that everyone’s raving about from a coding perspective is Claude, again, Anthropic’s AI tool. Claude
[00:05:06] Cameron: 3.5 Sonnet is the current iteration of it. Um, a lot of people have had a lot of success, a lot of professional coders are saying they’re using Claude for their coding work. I’ve been playing with it a lot in the last week, trying to continue to automate our checklist process, Tony, uh, for QAV. And I have not had a lot of success with it.
[00:05:30] Cameron: It, uh, gets very confused when trying to write Python code to manage Excel spreadsheets. Leads me around in circles. So, you know, it’s this continuing thing of coding with AI: on one hand, it’s incredible and amazing, because I can code shit and, you know, two years ago I couldn’t code to save my life. But it’s also really frustrating at times, going around in circles.
[00:05:55] Cameron: So I continue to push through and part of me goes, dude, you’re wasting so much time doing this. A year from now, you’ll have ChatGPT 5 and it’ll just do it in a heartbeat. But you know, what am I going to do? I gotta, I gotta do something. So anyway, it’s, um, it’s, it’s, it’s one of those things where you have enough success that it sort of keeps drawing you in.
[00:06:19] Cameron: Just when I thought I was out, they drag me back in
[00:06:22] Tony: conditioning. Random reward.
[00:06:26] Tony: keeps you there.
[00:06:27] Cameron: Yeah, it is. It’s like golf, right,
[00:06:29] Tony: Yeah. Oh, like any poker machines and gambling,
[00:06:32] Steve: social media.
[00:06:33] Tony: social media. Yep.
[00:06:34] Steve: Yeah. I mean,
[00:06:35] Tony: like watching sitcoms. So, you know, you get one good episode every three or four. It keeps you, keeps you tuning back in again.
[00:06:41] Steve: Although I don’t think that was by design, but I love that.
[00:06:46] Tony: that’s an interesting point you make, Cam. I think, you know, when I was putting together my notes for the show, I actually made a note to say that the democratization of coding is probably the biggest thing I’ve noticed about AI in my life, or in society.
[00:07:02] Steve: And that’s the,
[00:07:03] Tony: And seeing you do it. I mean, going, as you say, over the last couple of years, going from very little knowledge of coding up to where you are now, just because you can ask ChatGPT to write some script for you.
[00:07:13] Tony: It’s actually, it’s been amazing to watch that. And, and democratization of coding and access to data I think are two things which will be big going forward.
[00:07:22] Steve: I do that on
[00:07:23] Cameron: When we started.
[00:07:24] Steve: I talk a lot about the idea that everyone’s an AI developer now with natural language processing. And most people are actually a little bit surprised, and people will come up after and go, I just did that thing that you asked. And it’s like, wow. I still think, and it’s easy for us to get in this bubble of like, everyone knows this, but they don’t. They just don’t know how emancipating it is that everyone can create code and shape the technological world. Because now we’ve got natural language processing, that is probably the heart of this revolution.
[00:07:53] Steve: I think.
[00:07:54] Tony: Yeah. And can you, you think about all the people when we started QAV who said, I don’t like Excel spreadsheets. Don’t give me Excel
[00:08:01] Tony: spreadsheets. Right. can sort of jump over that barrier with, with a kind of coding that you’re doing and just produce an end result. It doesn’t matter if it came out of a spreadsheet or whatever.
[00:08:11] Cameron: Yeah. Well, I was going to say, when we started QAV
[00:08:14] Cameron: four or five years ago, I could barely work a spreadsheet. Uh, and
[00:08:18] Cameron: now, you know, I can build
[00:08:20] Cameron: spreadsheet, you know,
[00:08:21] Cameron: um, masterpieces, because GPT tells me what to do. Put this code in here, do that there. It’s, it’s, um, been incredibly liberating. I, I did remember, though, one thing I did yesterday with tech, with AI, which I was So I’m standing at my mum’s place up in Bundaberg helping her do some yard work.
[00:08:39] Cameron: I’ve spent the last few days chopping down trees and taking loads of green waste to the dump in her trailer. Feeling very manly as I do it. It’s like the only manual labour I’ve done in the last year. Uh, but she doesn’t have, my mum’s got a TV that’s smaller than my iPad screen, and it’s not connected to any of the digital services.
[00:09:01] Cameron: She’s got a whole bunch of DVDs that have been sitting on her shelf for like 25 years, um, that are mostly crap. Uh, so Chrissy and I were in this in town the other day, uh, yesterday, and I was in like a silly soys, like a $2 discount store, and they’ve got a bunch of DVDs there for a couple of bucks. And I was slipping through ’em and most of them were crap, but there was some that looked like art house palace cinemas kind of stuff that Chrissy and I enjoy watching.
[00:09:27] Cameron: In the past, I would’ve. Pulled up IMDB or Rotten Tomatoes to look at reviews for these things. What I ended up doing was just holding my phone up, taking a photo of the cover into GPT and saying, is this worth watching or not? And it would give me a pricey on the film, tell me what the critics said, how it did at the box office or whatever.
[00:09:48] Cameron: What awards it’s won at Cannes or Sundance and places like that. Just boom. One second, I’ve got a review. And I did that like four or five times. Just hold up the cover, take a photo. What about this one? What about that one? What about this one? The other thing I did with it last night is we, Chrissy and I like to drink tonic water.
[00:10:05] Cameron: You know, we’re not alcohol drinkers. Chrissy’s sober for 12 or 13 years. I barely drink. Although my mum has poured me a couple of scotches since I’ve been here, which is nice. But we drink a lot of tonic water, and I just had the label, it was like a Schweppes tonic water, and I took a GPT photo of the ingredients on the back and said, how would I make this at home?
[00:10:26] Cameron: And it gave me the entire recipe and the process and what to do. And it involves quinine powder, I went to look up quinine powder, hard and expensive to get quinine powder. So I said, well, what, what, quinine powder’s a pain in the arse to get, what else could I use? And they said, what about Angostura bitters to put the bitter flavor in?
[00:10:44] Cameron: I go, I got some of that in my fridge. Yeah, okay, well, just, you know, replace the Angostura bitters and you’ll have something similar. So boom, got a homemade recipe for making my own tonic water now. I have heard people talk about this. You take a photo of any ingredients on the back of a thing in the GPT and it’ll give you a recipe to replicate it at home.
[00:11:03] Cameron: So if you can be bothered, there you go.
[00:11:05] Steve: I mean, I don’t think people realize how much GPTs and AIs are going to eat the internet. Things that we would go to to get information, yeah, you know, the IMDb example or Rotten Tomatoes of a movie is a great example. Take a photo of anything, image recognition, and it leads you to a commercial answer that someone otherwise had a valid business running behind, on some sort of internet or app play. It’s really going to eat up apps, because a GPT can do anything that an app can do.
[00:11:37] Steve: And you know, one of the topics that we want to discuss today is, what are the competition implications of that? When classic power laws emerge and you’ve got three AIs that literally can do
[00:11:50] Steve: anything on the internet, what happens to the entire, uh, commercial system underneath it at the moment?
[00:11:57] Cameron: Yeah, that’s the thing that’s been mostly front of mind for me for the last few weeks. But before we get into that, I sent you guys a link to Ray
[00:12:04] Cameron: Kurzweil’s latest TED Talk. I’ve been reading his latest book, The Singularity Is Nearer.
[00:12:11] Tony: I thought it was coming out next year. You’ve already read it, have you?
[00:12:15] Cameron: Uh, yeah, no, it’s
[00:12:16] Steve: Cam, it became so near, Tony, that it’s already arrived, see what, what happened, what, it just, it
[00:12:21] Steve: just, it went back, it went forward in time, and, and that’s, that’s what
[00:12:24] Steve: Ray can do, because have you seen his hair? With hair like
[00:12:27] Steve: that, he’s capable of anything.
[00:12:31] Cameron: That hair. Uh, yeah, so, um, I’ve been reading the book and it’s great. And it prompted me to pull out his older books too. So I went back over the last couple of days and I pulled out The Singularity Is Near from 2005, The Age of Spiritual Machines from 1999, I think, and The Age of Intelligent Machines from 1990, and had a look at some of his predictions from those books, which I’ll talk about a little bit later on. But in terms of the TED Talk, did you guys get a chance to watch that?
[00:13:03] Steve: did watch it,
[00:13:06] Cameron: any takeaways, any thoughts on Ray’s latest TED Talk, apart from his hair and his braces?
[00:13:11] Steve: But look, I liked it. And I think the thing that I like about his TED Talks, which is the antithesis of most of them, is that it’s not really a keynote speech, which has got like this story arc and this whole thing. It’s just like, here’s the data, and here’s where it’s going, and I really like that. The one thing, a phrase from the last one that really stood out to me, and you’ve written it down here, is Longevity Escape Velocity.
[00:13:38] Steve: I mean, that’s a really, really interesting idea where, uh, you know, because everything’s just really organized at the atomic level, I guess,
[00:13:45] Steve: to some extent. And, uh, I thought that that was a, really poignant moment,
[00:13:50] Cameron: to some extent, to,
[00:13:52] Steve: to all extents. There you go. There you go. Thank you, Cameron. Yeah. So that,
[00:13:57] Cameron: you know me, I’m a, I’m an atomic, I’m an atomic dogmatist. Everything is atoms, don’t tell me anything else. It’s all atoms,
[00:14:04] Steve: Including my thoughts, which I have no
[00:14:06] Cameron: including your thoughts,
[00:14:07] Steve: that I just made then wasn’t my fault, according to your thesis.
[00:14:11] Tony: Hehehe. Hehehe.
[00:14:12] Cameron: Not according to my thesis, according to physics. Alright, move along.
[00:14:17] Steve: I, I really liked
[00:14:18] Steve: it. And I like, I love that he just went, here’s a whole heap of charts, bam, bam, bam. You can’t look at them.
[00:14:22] Cameron: Yeah, no presentation level, whatever with
[00:14:26] Steve: Zero. I don’t care. They’re in there. I just want, just read
[00:14:28] Cameron: He reminds me of, he reminds me of Noam Chomsky, like, like, it’s like, just, here’s the
[00:14:32] Cameron: data, take it, don’t take it, I don’t care, you know, let’s move on. Um, explain the longevity
[00:14:37] Cameron: escape velocity for the audience at home, Steve.
[00:14:41] Steve: Yeah. So the idea is that. As we move closer to having artificial general intelligence, we will eventually merge with the machines and that we’ll have enough knowledge to cure disease. And yeah, either, either with nanobots as well inside our body, that the life expectancy will increase at a faster rate than one year per year.
[00:15:06] Steve: Is that, is that a fair way to describe it? So we’re actually, we get to a point where everything is solvable, because the technology is moving so fast that even though you’re aging on a year-by-year basis, the technology is increasing at a level that’s faster
[00:15:20] Steve: than that, that will enable us to solve all disease and cure things
[00:15:24] Steve: so that we don’t ever die, is
[00:15:25] Steve: basically the idea.
[00:15:26] Steve: Unless you get hit by a bus, which he poignantly points out in the presentation.
[00:15:30] Steve: It’s, it’s actually not that easy to explain. How did I go
[00:15:33] Steve: Cam? Maybe you can give your version
[00:15:35] Cameron: well, I think the way I explained it to my mum and Chrissy after I watched it was essentially that at the moment, with medical science, for every chronological year that you age, you’re actually only aging eight months, because medical technology is able to stave off, you know, sort of four months of what traditionally, a hundred years ago, you would have been experiencing in terms of the breakdown of your body and illnesses and the impact of those illnesses and
[00:16:05] Steve: injury was a, was a big one as well. That
[00:16:07] Steve: you die from a broken leg or something, or you have appendicitis. I had that when I was 10; I’d be dead if I was born 200 years ago.
[00:16:14] Steve: The end. Thanks for coming. I hope you enjoyed your stay, Sammartino.
[00:16:18] Cameron: And he was saying that as, as, um, AI continues to progress, uh, over the next decade, we will be buying back more and more time for every chronological year that you age. So eventually you’ll get to a point, he’s saying in the next decade, where for every chronological year that you age, medical science will actually save you a year, so it becomes neutral.
[00:16:43] Cameron: You don’t actually
[00:16:44] Tony: It adds a year to your life
[00:16:45] Cameron: the aging process. Well then, and then he said it goes backwards, then for every chronological year that you will age, you’ll actually be saving 18 months,
[00:16:56] Cameron: um, so you’re actually getting younger as it’s repairing the damage that aging has done to your body. And so you will get to a point, and he’s saying in the next two decades, basically by 2040, essentially, we will have reached a point where you will get younger and younger as you get older, until a point where you reach some sort of stasis, and then you’ll just be held in that stasis.
[00:17:25] Cameron: Uh, you’ll, you’ll get to basically 25, the physical health of a 25 year old, and you’ll just basically hold there until you get bored and you want to terminate things, you know, or you get hit by a bus.
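The back-of-the-envelope arithmetic Cameron walks through above can be sketched as a toy model. This is purely illustrative of the discussion (the function name and figures are our own framing, not anything from Kurzweil's book):

```python
# Toy model of "longevity escape velocity" as described above: each
# chronological year ages you 12 months, minus however many months of
# aging medical progress "buys back" over that same year.

def net_aging(months_bought_back: float) -> float:
    """Net biological aging per chronological year, in months."""
    return 12.0 - months_bought_back

# Today, per the discussion: ~4 months bought back, so you still
# age about 8 months per chronological year.
print(net_aging(4))    # 8.0
# At escape velocity: a full 12 months bought back, net zero aging.
print(net_aging(12))   # 0.0
# Beyond it: 18 months bought back, 6 months biologically younger per year.
print(net_aging(18))   # -6.0
```

Escape velocity is simply the point where the buy-back rate crosses 12 months per year and the net figure goes negative.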
[00:17:38] Steve: it was,
[00:17:38] Steve: it was peak oil. And now in the Kurzweil era, it’s peak humanity. Welcome to 27 forever, club 27, Kurt Cobain, we’re coming for you.
[00:17:47] Cameron: And I’m sure we all hope that Ray is the first person to get access to these
[00:17:52] Steve: Well, he needs to improve his hair. And, and that is the first thing on Ray’s list. Let me tell you.
[00:17:58] Cameron: You know, it’s sad to me, like you mentioned Marvin Minsky a couple of times, and for those people who don’t know Marvin
[00:18:03] Cameron: Minsky or his role in AI, he was like one of the original guys in the 50s and 60s who was at the forefront of AI research. He was Ray’s professor and mentor and friend for many, many decades. I remember reading Marvin Minsky’s The Society of Mind book, um, geez, in the early 90s, maybe, when I first started getting really interested in this stuff.
[00:18:27] Cameron: And he passed away, uh, a couple of years ago, sadly. Like, he didn’t even get to see ChatGPT 3.5 land, or all of this kind of stuff. So, you know, I find that kind of really tragic. I’m just looking up when he actually died here. Hold on. 2016. Yeah. Okay. Eight years ago. Like, um, really, like, just, you know, the tragedy and irony of one of the guys who spent decades.
[00:19:00] Cameron: But here’s the thing that’s interesting that Ray talks about in his latest book is he and Marvin had a huge debate. You know, Ray talks in the book about the two approaches, the two basic approaches to AI. They’re thinking about AI over the last 70 years and, um, one is what he refers to, I think, as, um, symbolic logic or some variation of that.
[00:19:27] Cameron: Basically, and this has been the dominant approach for the last 70 years, is where you had to write down every rule about how the world works and give it to the AI. The sky is blue, cats like to drink milk, roses are usually red. Unless they’re not, etc, etc. And you had to write all of these rules, capture all of these rules, and then the AI, if you asked it a question, would have to look up all of these rules, and work out how to answer your question based on all of these rules.
[00:19:59] Cameron: And then he talks about the problem with, um, complexity. The com Plexity theory where every time you add another rule it gets more and more complex and if one rule is wrong or breaks it just breaks everything and it’s a very fragile system. The other idea was basically neural nets. Here’s another term for it in the book, I can’t remember off the top of my head what it is, but essentially what we now call neural nets, which is where you basically give it tons of information and tons of compute.
[00:20:33] Cameron: And tell it to, you know, build little nodes or tokens of information. And he says that Minsky and another guy, uh, wrote a book decades ago that basically shit on the neural net idea as not being valid. And it, he said it stopped research in neural nets for like 40 years. And before he died, Minsky said that one of his biggest regrets was co writing this book because it basically shut down a whole field of research, which of course today is exactly how transformers, which underlie the LLMs, actually work.
[00:21:12] Cameron: The problem that they had, they used to try and do this stuff in the 60s and 70s, he talks about a guy that was experimenting with it earlier on, and when Minsky shit on it, um, you know, with Papert, I think was the name of the other author, it just, you know, everyone just veered away from it and his experiments shut down and no one took it any further.
[00:21:32] Cameron: But we didn’t have the compute to, to do that back then and, you know, all of a sudden now, thanks to largely NVIDIA, We have the compute to make this kind of stuff work. So anyway, Minsky, I just, I just hope that Ray survives long enough to take advantage of the stuff that he’s been predicting for the last 50 years, you
[00:21:54] Tony: that’s, that’d be the real tragedy for me. I mean, certainly Minsky was important in my life, but, but yeah, if Ray, like, dies a year before the upload. Yeah. I think you’re talking about Markov chains when you were talking about neural networks before. Expert systems and Markov chains. And that’s really
[00:22:11] Tony: what I grew up experiencing in business and how they were going
[00:22:14] Tony: to be revolutionary and, and just really ran out of computing space to, to be successful.
[00:22:20] Cameron: yeah. Anyway, um, sticking with, uh, Ray. So, um, he’s still predicting, he’s predicting AGI by 2029, and again, for new listeners, what is AGI? I get asked this a lot by people. Um, it stands for Artificial General Intelligence. Like, there’s a lot of people with different definitions, it’s a bit fluffy, but the one that I always come back to that seems to make sense is: AGI is when an AI system is better at most things than most people.
[00:23:01] Cameron: So it’s going to be better. Yeah. Why? You have a different definition?
[00:23:07] Tony: Sorry, I shouldn’t interrupt. I do, yes.
[00:23:10] Cameron: That’s fine. What’s your definition?
[00:23:12] Tony: Well, I thought in my mind an AGI is when you, uh, whatever you program can learn. So you didn’t, as you said before, you don’t program in the nodes and then have the system work out the logic to present the answer based on the data.
[00:23:27] Tony: It’s, you’ve written a program and
[00:23:29] Tony: it can think for itself and teach itself and then, um, solve problems that weren’t coded originally into,
[00:23:35] Tony: into the program.
[00:23:37] Steve: You can already
[00:23:37] Cameron: Well, we already have that. That’s what AI is now.
[00:23:41] Tony: Yeah. Okay. All right.
[00:23:43] Cameron: I mean, the way an LLM works is nothing’s coded into it. There are no answers coded into an LLM,
[00:23:51] Steve: Yep. it
[00:23:52] Cameron: it is
[00:23:53] Steve: uncover an answer based on the data that it has and
[00:23:56] Steve: the inference it can make through the probability in the large data sets. So it sort of does that now.
[00:24:04] Cameron: no one’s, no one’s coded, like, and this is a thing I again explain to people that are new to this a lot, because I think this is the thing that people don’t appreciate. Like, when people bring up hallucinations: oh, it’s not perfect, it makes mistakes, you can’t trust it. See, the thing you have to realize is, two years ago, three years ago, no one even knew this would work, this approach.
[00:24:32] Cameron: I saw when, uh, Jensen Huang, the CEO of NVIDIA, interviewed Ilya Sutskever, the chief scientist at the time for OpenAI, who unfortunately left recently, um, after the whole kerfuffle with Sam’s removal, et cetera. But he was interviewing Ilya, who’s basically the brains behind ChatGPT, one of them, but one of the dominant ones, and he said, Jensen Huang asked him, what surprised you the most about ChatGPT
[00:25:00] Cameron: 3.5? And he said, the fact that it worked.
[00:25:04] Cameron: So you’ve got the chief designer of the AI who was surprised that it worked because the theory was just, you know, they started to play around with this approach of transformer models, which are only themselves five or six years old. Well, at the time they were five or six years old when they started putting it together and it seemed to be exhibiting emergent behavior.
[00:25:28] Cameron: And so they basically just thought, what happens if we throw way more data at this? And to do that, we need to throw way more compute at this. And so they scaled up their data and their compute. That’s all they did and continue to train it like they would normally train it, which is just, you know, come up with an answer to this question.
[00:25:49] Cameron: Yeah, that’s the best answer. Come up with an answer to this question. Yep. That’s the best answer. Come up with an answer to this question. Yeah, that’s the best answer. And let it, you know, figure everything out for itself based on how well its answers scored. As they threw more and more compute at it, it all of a sudden became intelligent.
[00:26:09] Cameron: And they were like, oh shit, like, this works! Like, they didn’t know. And we still don’t really know how it works. The people who run these things still don’t, really. It’s a bit of a black box. That’s the problem with it. We know if you put information in, intelligence comes out. We don’t know exactly why or how, but it just does.
[00:26:34] Cameron: So the fact that it makes mistakes and makes errors is kind of not the key takeaway. The key takeaway should be: holy shit, this is magic. We just invented magic. You stick stuff in and something comes out. Now we have to figure out how and why it works, so we can compress it, and compress the data set and compress the level of compute that we’re using, et cetera, et cetera.
[00:26:57] Cameron: But the AGI,
[00:26:59] Tony: sorry. So I’m just going to read from the first thing that popped up when I asked for an AGI meaning. It comes from Amazon Web Services. AGI is a field of theoretical AI research that attempts to create software with human like intelligence and the ability to self teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.
[00:27:21] Steve: Sounds like your definition, sounds exactly like it. Looks like you and AWS agree. Seems like you’re the man, Tony.
[00:27:26] Tony: Yeah. So, and definitions matter.
[00:27:29] Steve: there’s a lot of different definitions around this and often I think AGI gets confused with ASI and uh, you know, cause you’ve got Artificial Intelligence, Artificial General Intelligence and then Artificial Superintelligence, which is kind of what I think most people think of.
[00:27:44] Steve: When they, when they, um, talk about AGI, where it’s an artificial intelligence that far exceeds human intelligence in all manner of, uh, human intellectual endeavor, whether it’s creative, intellectual, economic, whatever it happens to be. I don’t know.
[00:27:59] Tony: I agree, Steve. I, when I,
[00:28:00] Steve: can’t even agree on the definition tells us where we are.
[00:28:04] Steve: We’re in a moment of significant change, right? That, that kind of is, I think, part of this whole
[00:28:08] Tony: we’re making it up as we go along. I agree. But, but when I think about this subject area, I think we already have AGI, but we don’t have ASI. AI is a sort of
[00:28:18] Steve: Cam and I have talked about that too, at various points.
[00:28:21] Tony: Yeah. So that’s, I guess, just the term straight. That’s what I think of when I think of those two things. And I think it’s important for like discussions about what happens with AI.
[00:28:32] Tony: I think ASI is really hard to predict what will happen and it’s way out there into the future and well, maybe not too far into the
[00:28:40] Steve: 2045, according to
[00:28:41] Tony: yeah, but it’s coming.
[00:28:43] Steve: which is even more radical,
[00:28:45] Tony: But I think AGI is a bit easier to predict. It’s Kurzweil’s curves. It’s, we know what an increase in computing power will do. You know, it allows us to code and all that kind of stuff.
[00:29:00] Steve: we have, we have been here before in the industrial era, when we used those curbs before, and we assumed that they would exponentially increase in perpetuity. And then we got to a point where they didn’t. Well, for two reasons. One was we, we decided that, um, faster and stronger wasn’t necessarily answer more efficient was.
[00:29:20] Steve: And we might, we may decide that as well with compute. We might decide, well, wait a minute, we’re just using too much energy, and go, this is intelligent enough. Like airplanes haven’t gone any faster since, you know, the early seventies. That’s it. Since 1969, really.
[00:29:35] Steve: And we made them safer.
[00:29:36] Tony: it. Yep.
[00:29:37] Steve: And more comfortable.
[00:29:38] Steve: And maybe we make AI safer, more usable, you know, more economical in terms of energy. So there, there is some historical precedent, and this might be different, because AI and computational intelligence is different to industrial power, but we were on that trajectory at one point, where we were talking about, oh, you’ll fly to London in half an hour, and it just never happened.
[00:30:00] Steve: So, I mean, I don’t know, but it’s worth, you know, remembering.
[00:30:04] Tony: Yeah, well, there are constraints, right? And probably in preparing for this show, I was reading articles and they, I forget now who it was, uh, may have been Sam Altman or someone like that, but they came out and said, the constraint they’re facing now is the data set, right? They’ve trained AGI on the internet, um, but they’re finding that’s limiting.
[00:30:25] Tony: The learning is now being limited by the data set. So it’s possible that we don’t get any smarter or faster than the computing
[00:30:32] Steve: I think
[00:30:34] Steve: is kind of interesting because it’s what you put in. There’s two, there’s two parts to that equation. One, and Cameron and I have discussed it, is the idea of dead internet theory, where the internet is already starting to be heavily populated by AI, and you get into a recursive, uh, decline spiral, where the resolution of the data and the nuance of new information gets lost, because AI is populating the internet, which is what the AI learns on, so it can decline.
[00:31:00] Steve: The opposite side to this is the idea that self drive cars and closed circuit and open circuit cameras start to train the internet with inferences from the real and the physical world and satellite data and all of the information that is just absorbed rather than published, let’s say.
[00:31:17] Tony: really interesting. That’s where I was going to as well,
[00:31:19] Steve: so we absorb
[00:31:20] Tony: If that is the constraint, then it needs to have an interface with the physical world. So it doesn’t just look up every picture of a rose. It actually looks at all the roses that you can see.
[00:31:31] Steve: And if everything is on camera and everything is recorded audibly, and we have enough camera and computation taking in inferences from the real world, the AI could maybe train itself without relying on human publishing of language and words and video, and then populate itself with a second generation of data points to train the AI.
[00:31:54] Tony: Yeah, and I go one step further. AI, I think, will hit the next quantum, so we’ve kind of reached the quantum level now, goes up again with synthetic eyeballs and synthetic eardrums. So I don’t think cameras will work, because it’s a flat 2D image, but I think if you have some kind of, you know, if you can grow
[00:32:14] Tony: an eyeball in a Petri dish that links into ChatGPT, then it starts to experience the real world and the data set expands again. So I think that’s probably,
[00:32:23] Steve: And that goes into the whole Kurzweil bit too of, you know, synthetic, um, you know,
[00:32:30] Steve: symbiosis with, with computation and yeah.
[00:32:34] Cameron: your, your, your iPhone camera has LiDAR built into it. So we now have cheap, deployable LiDAR that can be used in cameras that are plugged into AI. So you get full depth perception, um, full recognition of shading, of colors and movement, that kind of stuff. I, I think that’s almost there. Anyway,
[00:32:57] Tony: But also, but yeah, but just, just before we leave that, like, I think that’s good, but it’s not going to be enough to increase the data set enough to make AGI take that next leap. And I, and I use as evidence for that self-driving cars. And, um, there was an article in the AFR recently about, um, is it Waymo, in San Francisco now offering autonomous taxi services?
[00:33:18] Tony: And if you look at the car, it’s full of LiDAR units and spinning cameras and all the rest, but it still does have some problems actually driving. It’s supposed to have fewer accidents than a human at the wheel, but it still has accidents. So, but yeah, you can only take spinning cameras and LiDAR so far.
[00:33:36] Tony: What’s the difference between us and that? We have eyeballs and ears and a head that can swivel. So that’s probably, I think, gonna be the, the next game changer for AI, when it, when it merges with
[00:33:47] Tony: synthetic tech,
[00:33:49] Steve: Tony, we need to have a rock band called Swivel Heads and I just feel like
[00:33:52] Steve: I can have a punk band called the
[00:33:54] Tony: you can supply the hats
[00:33:57] Cameron: Tony, Tony’s a legendary guitarist and vocalist, man.
[00:34:01] Steve: I didn’t know that.
[00:34:01] Cameron: Listen to the QAV theme song at the beginning of every episode.
[00:34:05] Cameron: That’s Tony,
[00:34:06] Steve: Well, I better get on that. Alright.
[00:34:08] Cameron: Legendary. Um, well, the point I wanted to make though, getting back to Ray, is in his TED talk, he said some people are predicting AGI
[00:34:17] Cameron: within two years, Elon and others are saying that, and he said that’s possible.
[00:34:22] Tony: AGI or A, so sorry, ASI or A,
[00:34:28] Tony: And is that the equivalent of ASI? Is that the artificial superintelligence?
[00:34:32] Cameron: no, just AGI, which again, I’m going to get back to Sam Altman’s definition, when the AI system is generally smarter than most humans at
[00:34:41] Cameron: most things.
[00:34:42] Tony: so that, why isn’t that, why isn’t that ASI,
[00:34:46] Cameron: well, ASI is when it’s like 10,000 times smarter than humans.
[00:34:53] Cameron: So the,
[00:34:54] Cameron: um, the AGI, um, you know, the AI that we have today, ChatGPT 4, et cetera, Claude 3.5,
[00:35:02] Cameron: Gemini 1.5, whatever it is, are really, really good at lots of things and really bad at other things. Like, I’ll give you an example, and I did this the other day to demonstrate this to somebody. You open up any one of those tools, and, and add Perplexity to this, um, and you ask it to multiply two four-digit numbers.
[00:35:25] Cameron: You will get wrong answers over and over again. Uh, actually, of most of them, ChatGPT 4 is the worst. It gave me like five wrong answers. I had the calculator in front of me and I kept going, nope, nope, nope, nope, till it finally got it right. The other apps I used took one or two, uh, two goes to get it right.
[00:35:44] Cameron: The first answer was wrong. And then I’d say, Nope. And it’d give me the right answer. So they’re bad at that. They can’t play chess. Um, you know, the coding that they do sometimes is terrific. Sometimes it’s shitty. Um, so they’re really, really good at lots of things some of the time, and then really, really bad at other things all of the time or some of the time.
[00:36:05] Cameron: AGI, in my mind, is when they’re right about nearly everything, nearly all of the time, and highly, highly reliable. And, and so the predictions for that, from everyone who’s in the field, um, all the guys who run all the AI labs, and Kurzweil, and Elon, and Gates, and, you know, Wolfram, is two to five years from now, is when we get AI systems that are massively more competent than most humans at most things, and, uh, extremely reliable.
[00:36:43] Cameron: Sam’s saying GPT-5 is going to be extremely reliable. We don’t know when that’s coming out, but the assumption is in the next year or so. And this is important when we get to the next part of the, the, the debate, the discussion, around what happens to the economy when we have these systems readily available at relatively low cost.
[00:37:03] Cameron: That’s where I want to get to, um, with the show. Before we go further than that, I want to touch on some of Kurzweil’s past predictions. In The Age of Intelligent Machines, again, which he wrote in 1990, he said, I placed the achievement of level four intelligence sometime between 2020 and 2070. Now, level four in that book was passing the Turing test.
[00:37:27] Cameron: And he still is saying in his latest book that it hasn’t done that yet. Steve and I disagree. We think it has blown away the Turing test, because it can answer pretty much any question as well as a human can, and in fact, the problem now, of course, is that it answers it too well. The way that you know it’s an AI is it’s able to answer it far better than most humans could.
[00:37:50] Cameron: But anyway, so, in 1990, he predicted that, and I’d say 2020 was pretty much right on the money. Like, ChatGPT 4, let’s say, came out 2023. Um, that was pretty much, you know, he was a couple of years out there. He really said between 2020 and 2070, so it was on the lower end of that scale. This next one, not so good.
[00:38:11] Cameron: So in The Age of Spiritual Machines, 1999, he wrote, if we apply the same analysis to an ordinary personal computer, we get the year 2025 to achieve human brain capacity in a $1,000 device. Now, Apple is coming out with Apple Intelligence in an iPhone, which is a little bit more than a $1,000 device, it’s like a $2,000 device, depending on what model you get, and it is going to have some level of AI built into it. We don’t really know how much it’ll be able to do until we see it. But he may not be far off; that’s a year from now.
[00:38:48] Cameron: But then he went on to say, The memory capacity of the human brain is about 100 trillion synapse strengths, neurotransmitter concentrations at interneuronal connections, which we can estimate at about a million billion bits. In 1998, a billion bits of RAM, 128 megabytes, cost about $200. The capacity of memory circuits has been doubling every 18 months; thus, by the year 2023, a million billion bits will cost about $1,000.
[00:39:17] Cameron: However, the silicon equivalent will run more than a billion times faster than the human brain. There are techniques for trading off memory for speed, so we can effectively match human memory for $1,000 sooner than 2023. So I asked ChatGPT about this. And it said he was slightly out on this one. Um, a million billion bits of RAM is about, uh, 1 petabit of RAM, which is about, uh, 125,000 gigabytes.
[00:39:51] Cameron: Uh, current RAM prices for DDR5 are about $3 per gigabyte in the US. So that would be about $350,000, basically, for a petabit of RAM. But as it then goes on to say, not really Ray’s fault, because it was market-driven dynamics. But that’s one of the problems with projection, you know, projecting into the future, right?
[00:40:16] Cameron: You don’t really know what the market will do. We probably could build that amount of RAM, but there’s just no market for that amount of RAM in consumer devices, because we’ve really had nothing to use it for. You don’t need that much RAM to run a super fast iPhone or a super fast PC. We do need it to run a super fast AI.
[00:40:36] Cameron: So we may catch up, uh, with that. Um,
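For anyone who wants to check the back-of-envelope numbers being kicked around in this exchange, here is a minimal Python sketch. It uses only the figures quoted above (the $200-per-billion-bits 1998 price, the 18-month doubling, and the rough $3/GB DDR5 price), all of which should be treated as the show's rough assumptions rather than verified data:

```python
# Kurzweil's 1999 projection, as read out above: $200 for a billion bits
# of RAM in 1998, with capacity per dollar doubling every 18 months.
doublings = (2023 - 1998) / 1.5              # ~16.7 doublings over 25 years
cost_1998 = 200 * (1e15 / 1e9)               # $200 per 10^9 bits, scaled to 10^15 bits
cost_2023 = cost_1998 / 2 ** doublings       # the projected cost of a "million billion bits"
print(f"Projected 2023 cost: ${cost_2023:,.0f}")   # same order of magnitude as his ~$1,000

# The present-day check discussed above: a million billion bits is
# 1 petabit, i.e. 125,000 gigabytes, at roughly $3/GB for DDR5.
gigabytes = 1e15 / 8 / 1e9                   # bits -> bytes -> gigabytes = 125,000
actual_cost = gigabytes * 3.0
print(f"At $3/GB today: ${actual_cost:,.0f}")      # ~$375,000, near the $350,000 quoted
```

The projection lands within roughly a factor of two of Kurzweil's $1,000, which fits the "slightly out" verdict mentioned in the conversation.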
[00:40:40] Tony: If I can just throw 2 cents in here, Cam. I think the beauty of Kurzweil’s predictions, and, and, you know, you and I talk about the inability of people to predict on our QAV show all the time, when we talk about the stock market and what companies will do and where the economy’s going, et cetera.
[00:40:55] Tony: But the thing I’ve always admired about Kurzweil is it’s rooted in this curve. The idea of the curve. Here’s what’s happening. Here’s what I’ve observed. I’m simply extending that out using the simple maths of plotting it on a graph. That’s, that’s such a powerful framework for, for anchoring his predictions in.
[00:41:13] Tony: That’s why he’s got longevity in the space, I think,
[00:41:15] Steve: I think too, though, Tony, to use that word anchored: it’s anchored in the economic incentives, which ties into the QAV thing, on people building computation capacity and getting the price cheaper and all of that. So without that, I mean, it’s not just like a law of nature. It’s how we interact in the market, which makes these things more available at better prices. But, but like the share market, and, and I remember in one of his presentations, it was when he, um, he did a presentation at Google Talks, it was around about, I want to say 2010, when his last book came out, or 2013, and he went through the logarithmic charts that show it. Like the share market, they don’t move in straight lines.
[00:42:00] Steve: So you might have a, a period of flatness or decline, and then, and then it goes up when there’s a, a bit of a jump in, you know, technology, which allowed those curves, over the long term, to look like they’re predictable and straight-line. But they, they bubble around before they hit those points.
[00:42:18] Steve: And sometimes you’re a little bit behind schedule and then there’s a curve jump in the tech, bam. And GPUs might be one exemplar of that as well.
[00:42:25] Tony: Yeah, and that’s why I think Kurzweil’s still pretty accurate, even though the price of that memory chip is $350,000 to replicate the human
[00:42:33] Steve: Yeah, it’s like, you know, the, the All Ordinaries and, you know, the S&P 500 are averaging, you know, 10, 11 percent or whatever it is over, you know, the longer term. Yeah, you have your ups and your downs in that, but over the long run, it’s kind of what you
[00:42:44] Steve: get. so
[00:42:45] Tony: It’s taking a dot plot and attempting to put a curve through the middle of it. But the dot plot never follows the curve exactly.
[00:42:53] Cameron: That’s the name of Swivelheads’ first album, The Dot Plot, by the way. Um, this,
[00:42:58] Steve: I’m going to make that song this afternoon.
[00:43:01] Cameron: is from The Singularity Is Near, 2005. He said, based on the above analyses, it is reasonable to expect the hardware that can emulate human brain functionality to be available for approximately $1,000 by around 2020. As we will discuss in Chapter 4, the software that will replicate that functionality will take about a decade longer.
[00:43:22] Cameron: However, the exponential growth of the price performance, capacity, and speed of our hardware technology will continue during that period, so by 2030 it will take a village of human brains, around 1,000, to match $1,000 worth of computing. By 2050, $1,000 of computing will exceed the processing power of all human brains on Earth.
[00:43:44] Cameron: Of course, this figure includes only those brains still using only biological neurons. Anyway, my point is that 2020, uh, Kurzweil’s been saying 2020 since at least 1990, and, um, like, you can debate what we get for $1,000 worth of computing, but the overarching idea there, I think, has always been that by 2020 we would have the ability to start to create forms of artificial intelligence.
[00:44:15] Cameron: And he was right on the money. Like ChatGPT, it’s almost like, you know, they were using Kurzweil’s timeline as a guide for when to launch this stuff. But, uh, it’s kind of astounding. So, when he says now AGI by 2029, when he says ASI by, uh, well, he says the singularity by 2045, I think, in his latest book, uh, let me just scroll down, yeah, 2045, and he defines the singularity as when we increase intelligence a millionfold.
[00:44:49] Cameron: Now, you know, I’ve been arguing on this show that we’re already in the singularity, because I don’t think I can accurately even begin to predict what the world’s going to look like ten years from now. And to me, that’s always been the definition of the singularity, when change is so rapid and so massive that we don’t really know what it means to be human 10 years from now.
[00:45:12] Cameron: We can’t really imagine what that would look like, but, um, he’s, he’s pushing it out a little, you know, another sort of couple of decades. Um, moving right along guys, uh, Steve, uh, I know you’ve wanted to talk a bit about, um, The Australian media versus Meta, and I see, um, your good mate Albo came out over the weekend and stuck the boot into Meta.
[00:45:39] Cameron: Um, I find this all pretty hilarious, but, uh, why don’t you throw your two cents in?
[00:45:44] Steve: well, I’m just interested, and we don’t have to labor on it, but I think it’s important, especially for, for the Australian listeners. Um, there’s been a media bargaining code where, uh, tech companies like Google and Meta have to pay for the content that they absorb, uh, into, um, their systems, or, or into their formats where people can read the news and so on.
[00:46:12] Steve: And they reluctantly pay, who knows, you know, tiny amounts. Um, and we can see why there would be a protest against it, both from Meta, who are like, hey, we’re just distributing stuff, we shouldn’t have to pay this. And then you’ve got News Limited and Murdoch and all the others saying, you’re just sucking up our content, and we should be paid for what we’re creating.
[00:46:31] Steve: I’m just really interested to hear what you two think about it, because I don’t know who’s worse. I don’t know if the Murdoch press is worse, or if, uh, Meta and Google and the others. I just don’t know. But, well, the one thing that I do know is that the Fourth Estate is important. As Cameron pointed out on a text message to me, he said, Fourth Estate?
[00:46:49] Steve: What’s Fourth Estate? We haven’t had one for a long time. But I’m really interested because It’s really, really important. We’ve already talked about the elections today. So,
[00:47:00] Tony: Yeah, I’m surprised Meta hasn’t offered some kind of revenue split for the ads that go with the news, as a way of paying back, rather than try and have the government impose a solution on it. Because I, you know, I think you’re right, Steve. I think, um, certainly in the last 10 or so years, the level of journalism has declined as social media has taken off.
[00:47:22] Tony: Um, and, you know, where we’re seeing the evidence of that is in the US candidates who are standing for election, really. Um, there’s, there is no, yeah, there’s no, you go back many years, there’s no, um, who was the guy from Thank You and Good Night? The, anyway, you know, the old sort
[00:47:38] Steve: Clooney.
[00:47:38] Tony: yeah, well, the character George Clooney played, you know, there’s no sort of,
[00:47:42] Tony: um, independent journalist holding people to account, holding truth to power, or telling truth to power, these days.
[00:47:50] Tony: The journalists we have these days are so
[00:47:53] Cameron: Well, there are. Julian Assange, they
[00:47:54] Tony: it’s true. Yeah, they are there. True. They are
[00:47:57] Steve: the other guy that they just locked up in Australia who let out some military secrets, whatever his
[00:48:02] Cameron: Pride,
[00:48:03] Tony: McBride, yeah. Yeah, no, that’s a good point. But, um, but, you know, gone are the days where journalists, there was enough money in journalism that
[00:48:11] Tony: they could stand up to government.
[00:48:13] Tony: Now they’re pretty much beholden to government, right? So they become friends of government and it’s a bit of a dance. It’s, so it’s a problem.
[00:48:20] Cameron: So, like, it just amused the hell out of me when it was
[00:48:24] Cameron: News Limited, uh, coming out and saying that Facebook was a toxic, uh, a toxic influence in society. Like, really? Really?
[00:48:35] Steve: No, but it absolutely is. This is the point, Cam, is that yes, it is. Right? They
[00:48:41] Cameron: Yeah. Both things can be true.
[00:48:43] Steve: that said, and we too are toxic. Like the point is, there’s a lot of toxicity here. Shout out to System of a Down.
[00:48:50] Tony: Hey, we know Toxic. We know toxic. We invented
[00:48:52] Cameron: Yeah,
[00:48:53] Steve: toxic. Don’t question us on toxic. A lot of people, they don’t understand how toxic the mainstream media is.
[00:49:01] Steve: Very good friends of mine. Very good friends.
[00:49:04] Cameron: It’s pretty good, Steve. That’s probably the best amateur Trump impersonation I’ve seen. Um, yeah, look, the media has, you know, I’ve been, I’ve been banging on about this for a long
[00:49:17] Cameron: time now, since I started podcasting, really. Like, the media has, uh, run, deliberately run itself into the ground over the last couple of decades by, you know, cutting costs, getting rid of journalists, getting rid of fact checkers, cutting, cutting, cutting, uh, partly because of economics, because the economics of media attention fragmented when the internet happened, and, and ad spend fragmented, and they didn’t, uh, adapt
[00:49:52] Cameron: fast enough. So they got replaced by Seek, and Google Ads, and all of these other platforms that took their revenue away. But they keep trying to maintain their relevance in society, whilst at the same time laying off most of their journalists, and changing their content from any sort of hard investigative journalism to just clickbaity bullshit.
[00:50:20] Cameron: And, you know, they’ve been losing money hand over fist. Um, I, I did some quick research over the last couple of days. News Corp in Australia recorded a 75 percent drop in full-year profit last year. Nine Entertainment saw a 38 percent decrease in net profit. So they’re, they’re just basically floundering.
[00:50:44] Cameron: And if, and if Meta stop paying them the millions that they’ve been forced to pay. By the way, I love this, like, the media narrative is Meta, uh, trying to, um, you know, bully the media companies. It’s like, heavy-handed tactics of Meta. It’s like, really? You’re the ones who went to the government and got the government to pass laws to force Meta to pay you money. But they’re the ones that are being the bullies and being heavy-handed in the marketplace.
[00:51:18] Cameron: I mean, I think Meta’s playbook here is like, you know what, we don’t need you, but you need us. And if you don’t have our revenue, you’re fucked. You’re already fucked, but you’re even more fucked if you don’t have our money. So they’re in the power position. And these media companies know it.
[00:51:34] Tony: Yeah, except Meta, Meta does need the content, so it does need the,
[00:51:37] Steve: it does need it a bit. Uh, look, it
[00:51:39] Cameron: Does it though?
[00:51:40] Steve: it, it doesn’t need it. But here’s what they should do. If, if they don’t pay, yeah, they, they definitely should shut it off, because then at least what you get is, something needs to break for something better to arrive. Okay.
[00:51:53] Steve: The system is already broken, so
[00:51:54] Steve: let it really, really break. If Meta don’t need it, you’re right. But let’s
[00:52:00] Steve: Good. Shut it off because then
[00:52:01] Steve: something has a chance of happening. Right? That’s better
[00:52:05] Cameron: They’ve already done this in Canada. They’ve deleted the news from Canada, uh, when the Trudeau government tried to implement some similar sort of, uh, bully-boy tactics. They just went, okay, we’re not carrying Canadian news anymore, so there’s no news on Facebook in Canada, as I
[00:52:24] Steve: Yeah. And I think one of the interesting things is the definition of, the definition of news is, is interesting as well, right? Because the lines have been
[00:52:32] Steve: blurred. You could argue that the podcasts, the things that we write, are news, to an extent. And last year Meta, you know, really, really went dirty and blocked off local sporting clubs and all of that kind of thing.
[00:52:42] Steve: So that,
[00:52:43] Cameron: Yeah, I got blocked. All of my podcasts got blocked when they did that. We were considered news. I couldn’t post a link to any of our podcasts.
[00:52:50] Steve: It’d be interesting to see what happens, but I would like to see it break so that there’s a chance for something else to emerge. I don’t know what needs to emerge or where we go, but it’s really, really important. And it couldn’t be more important right now in the era of AI building the ability to create fake anything.
[00:53:07] Steve: It’s like, where is the trusted source and how do you get access to it? And distribution is the most powerful thing in business, I think, more powerful than product. So often, so many companies do well because they have distribution, you know, whether it’s Meta or whether it’s Coca-Cola or whatever it happens to be.
[00:53:25] Steve: And if we can’t distribute valuable
[00:53:27] Steve: ideas and news, you know, we’re going to be in some,
[00:53:31] Steve: we’re going to be in a singularity again, where the future is unpredictable.
[00:53:38] Cameron: I mean, there’s this hearing going on at the moment. I’m, I’m guessing this parliamentary inquiry into Meta has, uh, been carefully timed to coincide with the, uh, re-signing attempts of this media bargaining document. But, um, Albo, our Prime Minister, came out over the weekend and was criticising Meta, um, over the harm that social media is doing to the children of Australia and how Meta aren’t taking responsibility for that.
[00:54:14] Cameron: As a parent
[00:54:15] Steve: It’s not Meta’s responsibility. It’s the government’s
[00:54:18] Cameron: never thought it was. No, it’s the parents’ responsibility, I’ve always thought. It’s not Meta’s job to keep Fox off of social media, it’s my job
[00:54:27] Cameron: and Chrissy’s job?
[00:54:27] Cameron: to keep Fox off of social media if we don’t
[00:54:29] Steve: No, and the government’s. We pay them,
[00:54:31] Cameron: Why is it the government’s
[00:54:32] Steve: to legislate against things for our own benefit. I know what you think of that, but I think there’s a responsibility for the government to take steps to protect their populace. You know, whether it’s all the way up to the military, to local bylaws on roads and safety of children, absolutely, and, and the parents, more so the parents. But the government has, the last person that has
[00:54:58] Steve: responsibility for anything is a for-profit corporation, right? And that’s my pet hate in all of this: oh, corporations have got to be more responsible. No, they
[00:55:12] Cameron: Well, it depends on what they’re doing. If they’re, if they’re putting pollution into the rivers,
[00:55:17] Cameron: or into the land, or into the sky,
[00:55:20] Steve: into our minds,
[00:55:22] Cameron: Well, I have more control over what goes into my mind than I do over what goes into the river in, in my suburb, right?
[00:55:31] Tony: This is an age old debate, though. There are some children who don’t have two parents, there are some children who don’t have parents who care about social media or know the issues. So you need both. You need government legislation or frameworks and you need good parenting. You can’t always rely on good parenting, unfortunately.
[00:55:47] Steve: You have some people who drive responsibly and some that don’t. And the rules are there to protect people against those who don’t
[00:55:54] Steve: do the responsible thing. Just like the parents who don’t do the responsible thing with their
[00:55:57] Steve: children.
[00:55:59] Cameron: that’s a reasonable argument. But I I don’t
[00:56:01] Cameron: think it’s Meta’s responsibility
[00:56:03] Steve: I agree with that.
[00:56:04] Cameron: to protect the minds of our children. It would be nice if they cooperated, but,
[00:56:08] Steve: Well, this is the thing we’ve got to start, we’ve got to start doing regardless of what it is. You don’t ask for cooperation, you demand it. Right, and it’s the same when it comes to taxation and multinational tax avoidance, which just shits
[00:56:20] Tony: I agree, Steve.
[00:56:21] Steve: It shits me to tears. Guess what,
[00:56:23] Steve: government? It’s really easy. You’re sovereign. Guess what? You’re my new
[00:56:27] Steve: taxpayer. Right, here’s a revenue assessment, taxed the same as land tax. Congratulations, you’re my newest taxpayer. You don’t like it? Fuck off. You’re
[00:56:34] Cameron: Well, Steve, they’ll all leave the country if we do that. If we, if we, if we’re mean to them, they’ll all leave,
[00:56:41] Steve: Good, fuck off. They
[00:56:42] Tony: We’ll, we’ll end up like Tasmania and be great. Well,
[00:56:47] Cameron: It’s the same argument with, uh, you know, uh, taxing mining interests. People will say, well, if we tax them, they’ll all leave. I’m like, good. We’ll nationalize it. Fuck off. I don’t care. We’ll just nationalize it.
[00:56:58] Steve: with them. Mate, we should totally nationalise all of the minerals in our ground and or, and or, have significant royalties on all of it. It is so easy, and it is such a simple decision
[00:57:11] Steve: to make. You don’t need a dollar from any corporate donor, because for once you’ll be making a decision for the fucking people you’re meant to represent.
[00:57:20] Steve: Easy. I’m fucking running for politics. I’ve had
[00:57:22] Cameron: They’re like, oh, the
[00:57:24] Tony: They’re already nationalized, they’re owned by the states. They just, they just
[00:57:29] Steve: Yeah. And they give them away and they give them away, Tony.
[00:57:33] Tony: they charge a royalty on their extraction. But, but sometimes it looks like a giveaway, like in WA compared to Queensland, where they charge a lot more on the royalty on coal.
[00:57:42] Steve: We, we, we have given net proceeds to companies like Santos
[00:57:46] Tony: Mm. No, I agree,
[00:57:47] Steve: what we,
[00:57:48] Steve: what we, what we get in royalties versus what we give the fossil fuel industry. They actually get more from the government. It’s incomprehensible.
[00:57:56] Tony: No, I agree.
[00:57:59] Cameron: Steve, you wanted to talk about, uh, the European commission and Apple.
[00:58:02] Steve: Well, I just wanted to touch on it briefly, and especially because Tony is in the house. You know, Apple’s in breach of the EU anti-competitive laws and they’re putting up a case against them. I mean, I think Cam and I have discussed that these big tech companies aren’t corporations. They’re basically modern-era utilities, which are unavoidable and have unassailable competitive moats, where I don’t think anyone can catch up to them.
[00:58:30] Steve: We’ve discussed that in great detail. And I just would like to understand Tony’s viewpoint on this. What sort of an investment is big tech? In the face of potential antitrust, whether it’s fines or splitting up, does it still make them a good investment? Does it tell you how powerful they really are and they’re still a good investment?
[00:58:52] Steve: If they get split up? Historically, when monopolistic firms, whether it was Standard Oil or whether it was Bell AT&T, got split up, they actually created more value post-split. I’m just really interested from an investment viewpoint, given the power of big tech. And what is it, the top seven tech companies have a market cap of what?
[00:59:11] Steve: It’s something over 15 trillion now. I think it’s crazy. Might even be 17 trillion.
[00:59:16] Tony: Yeah, well, I’m a value investor, so I don’t really follow big tech as investments, because I think it’s a bubble, um, and you look at NVIDIA, I saw
[00:59:24] Steve: answering the question then. This is, this is good. This is what I want to learn.
[00:59:27] Tony: I saw, I saw something on, um, I think it was in today’s paper, uh, NVIDIA lost 430 billion last week in market cap, so it’s,
[00:59:37] Steve: It’s just insane. It’s such a big number. You know,
[00:59:40] Tony: it is, isn’t
[00:59:40] Cameron: Yeah.
[00:59:41] Steve: Some companies only have a market cap of $30 or $40 billion, and then NVIDIA loses 10x that in a week. And it's like, I mean, it tells you something, right?
[00:59:49] Tony: Yeah, and I've got a chart I was going to use when we record QAV tomorrow, Cam. Someone produced a graph I saw on the weekend, which said that when, uh, I think it was the top 10 companies in the U.S. had such a concentration of market cap, it was a precursor to a crash, and it happened at this level in 1929 and in 2000.
[01:00:11] Tony: Um, so when you get this kind of money following the leaders, um, it tends to crash and revert back to the mean. So, uh, you know, I think that's coming. I think it's a bubble. I think whenever anyone asks if it's a bubble, it probably is a bubble, by definition. Um, I don't know if that answers your question though.
[01:00:31] Tony: Like, Apple does have an unassailable moat, although there's Google and Android, et cetera. Actually, duopolies can be as profitable as monopolies. You know, we see that in Australia all the time. Coles and Woolworths are very profitable. So yes, they're great investments, but at the moment I think they're in a bubble.
[01:00:53] Tony: Should they be broken up? I have some sympathies for the EU argument, because Apple controls, via the App Store, most of the developer market in the world, probably, and charges 30 percent on that, and there's very little competition. Yeah, that's a monopoly, and there is an argument to say it should be broken up. As you say, though, that may create more value, um, for investors, so you have to watch that space. Ha ha ha!
[01:01:23] Cameron: Right. Moving right along then. Um, let's skip over the next couple of stories, because I really want to get into the deep dive and we've already been talking for an hour. Um, the deep dive. So I'll restate my thesis and then we can get into it. The thesis I've been trying to figure out for the last month or so, on my blog and on the show, is what's going to happen
[01:01:50] Cameron: in the next five to ten years in our world, but particularly in our economy. And, stated simply, my thesis is this: AI is going to continue to improve rapidly, to the point where in the next five years we will have a form of AGI that is going to be widely available to businesses as well as consumers and governments.
[01:02:22] Cameron: And it will be able to do most things better than most people in terms of knowledge work and information work, and will also be integrated into a wide variety of robotic devices, including humanoid robotic devices, which, according to Goldman Sachs and McKinsey and organizations like that, will be available for purchase at a starting price in the vicinity of $10,000 to $20,000 in the next decade.
[01:02:58] Cameron: So a humanoid robot with an AI built into its brain. Maybe commercial versions of that might be a little bit more expensive. But my theory is that when a business can buy a robot with an AI brain for less than the cost of the annual salary of a human, uh, with all of the attendant issues that come with humans, like HR costs and health and safety costs and all of that kind of stuff, then they will start to replace manual labor with robots as well.
[01:03:32] Cameron: And so over the course of the next five to ten years, I expect to see a lot of jobs being taken, first by software AI, then by robots. And I think this is going to lead to large-scale unemployment. And when I think about what's happened historically, when technology has replaced a sector, um, those people, or the generation that comes after them in some cases, have found jobs in other industries that may not even have existed before.
[01:04:08] Cameron: But when you have AI that's smarter than all humans, and robots that can do anything a human can do better, more cost-effectively, more efficiently, I can't imagine what kind of jobs could be created that AI and robots won't immediately be able to do better than any human. So then, if, uh, you know, there is large-scale unemployment, and by large scale I'm talking 20 to 30 percent unemployment, you know, the last time we had 20 to 30 percent unemployment we had the Great Depression.
[01:04:42] Cameron: And what happens when you have that level of unemployment, obviously, is economies melt down. There's not enough cash being spent, uh, so all wealth decreases. You know, there's this idea that, oh, AI and robots will be available to the rich and the poor people will get screwed. I keep saying, if the economy melts down, everyone's screwed.
[01:05:03] Cameron: There are no rich people when people can't afford to buy goods and services. Businesses fail, small businesses fail, and that causes bigger businesses to fail, because the small businesses can't pay their bills to the bigger businesses, and there's the trickle-up effect, right? And governments don't have enough income tax coming in to pay for services.
[01:05:24] Cameron: So you end up with austerity measures like they’ve had in the UK for the last 15 years and we’ve seen what’s happened to the UK. My hairdresser, believe it or not, I just got a haircut a couple of days ago. Um,
[01:05:42] Cameron: Steve couldn’t cope with that, he had to walk away from the mic. Uh,
[01:05:45] Steve: you didn’t! Listen, we’re trying to
[01:05:48] Steve: make this
[01:05:49] Tony: Which hair did you have
[01:05:49] Steve: truth, about the future, and then you just pull a Kurzweil on me and tell me about your haircut?
[01:05:54] Steve: You and Ray need to have a cup of coffee and sit down and discuss things.
[01:05:58] Cameron: It was a good inch longer last week and he put layers in it. Layers, Jerry, layers. Um, he just got back from London, he was visiting his grandmother. He's a
[01:06:08] Cameron: Pom, uh, who's lived here for 25 years, and he was like, dude, it's so fucked over there. Like, she's 90 and she had to go to a hospital to get a brain scan done.
[01:06:17] Cameron: He said there were, like, staff fighting over the two available wheelchairs they had. Old people lying in corridors, just drooling, without getting any attention for five or six hours. He said plaster was falling off the walls in the hallways of the hospital. He said it looked like the Soviet Union in the 60s.
[01:06:35] Cameron: Like, it was just that decrepit. He was shocked. It was his first time back in quite a while. And he was like, dude, that country is fucked. Fucked. And this was in London. Like, we're not even talking about, was it, um, Manchester or Birmingham that's just gone bankrupt? Uh, like, eight cities in the UK have now just gone bankrupt because they can't afford to pay their bills for their services.
[01:06:58] Cameron: Anyway, getting back to my theory: I am predicting and forecasting massive economic and social upheaval in the next five to ten years. Tony thinks I'm bonkers. Discuss.
[01:07:15] Steve: I'll let Tony go first, I've got a whole flowchart on this.
[01:07:18] Tony: I think, well, inherently predicting is difficult. I'm not seeing a curve for your prediction, a Kurzweil curve for your prediction. But let me just say, to me it defies the laws of commerce. If you can buy a synthetic human for $20,000, a starving human will work for $19,000. There's a race to the bottom that goes on in commerce in terms of cost cutting.
[01:07:43] Tony: So I don't think you're going to see widespread unemployment. We've seen this happen before, right? Mexicans cross the border and do gardening jobs cheaper than the locals do. We outsource things to China because the manufacturing is cheaper. The production of a robot that can do a human's job is always going to be tested against the cost of it being done by a human.
[01:08:08] Tony: And if humans are
[01:08:09] Tony: being displaced en masse, like you're saying, they'll work for food, they'll lower their price if it's an issue. But you also said when we, okay, I'll stop there. I'll stop there.
[01:08:20] Cameron: Humans working for food still doesn't sound like a healthy, functioning economy.
[01:08:26] Tony: Yeah, but the economy will reset as well. I mean, if, when there’s less purchasing power in the economy, prices come down. So food will get cheaper. It may not get better. The quality of the food might decline dramatically. Um, and we’ve seen this as well. I mean, why is McDonald’s successful? It’s because it can feed a family for 20 bucks.
[01:08:47] Tony: It may not be a good feed, but it can. And, yeah, it's calories, exactly. You know, how does a Chinese person survive in the factory? Well, they eat rice and they get their calories cheaply. So it's kind of this law of commerce, or the economy, that the robot's got to compete for its job, even though it may be fantastic and can do it better.
[01:09:11] Tony: It's still got to pass the economic test.
[01:09:14] Steve: Well, I’ll just, I mean, that’s true. I just want to point out that
[01:09:18] Steve: the name of our second album of the Swivel Heads is going to be called Working For Food. I just had to point that out.
[01:09:24] Tony: We’ll have playing for food. We’ll play
[01:09:26] Steve: going to be our follow up album. Let’s hope it’s as successful
[01:09:29] Cameron: After the,
[01:09:30] Steve: of the dot plot. So yeah. I think it’s going to really have a nice flow on in terms of album names.
[01:09:38] Steve: It's right up there. Um, it's right up there with Garbage's Version 2.0, which I really loved as an album, uh, circa '99. Um, I think I agree with you, Tony, in that with tech adoption there's an economic paradigm that's required. Like, what is the cost of a robot? That said, let's assume that the robot, or the AI, or the combination of those two things, is better than a human, can work 24 hours a day, doesn't need annual leave, all of those things. If they're going to be, you know, the cost of AI online, which is kind of free, or a humanoid robot, which is, you know, 10 to 20 grand, the cost of a small car, geez, I tell you what, with the economics of that it doesn't seem like it's going to be hard
[01:10:21] Steve: for it to exceed human capability. But for me, there's a fundamental flaw in this idea. Um, and the number one thing that I think really matters, and it circles back to the Apple thing, is competition. That is the most important thing when you have an economic upheaval. You need large numbers of competitive players.
[01:10:42] Steve: And I'm going to explain why. In my economic thinking on this, you have technology adoption. It doesn't matter whether that's a combine harvester or AI robotics. The reason a company adopts the technology is to reduce its costs, right? Or to have greater output. One of those two things: reduce cost, or increase output, which reduces unit costs anyway.
[01:11:04] Steve: Um, so that's why you would do it. And why would you do that? To make things cheaper. And you make things cheaper for one of two reasons, and this is a fork in the road. One is to sell more to more people, because you've reduced the cost and increased your market scope. That's reason A. Or reason B: don't increase your market scope necessarily, but increase your margin on what you sell, because it costs you less but you don't put your price down.
[01:11:29] Steve: Now, if option A happens, right, that's great, because you increase the market scope, you reduce the cost, more people can afford it, and that frees up capital of the purchasers, the people buying things, to invest in emergent industries, of which we don't know what they are.
[01:11:47] Steve: But if we go to that second fork in the road, which is, no, they don't increase their market penetration, all they do is increase their margin, then we've got a problem. Because what happens is, they become more profitable, and money isn't freed up to go into new industries. And the only way part B can happen, companies choosing the profit-and-margin path instead of market penetration, is if there's no competition, in which case margins increase. And this is microeconomics theory 101: competitors with access to the market compete away profits, because they all have the same new price.
[01:12:21] Steve: And if you don't put your price down because you've got a lower cost of production, your competitors will, and you lose market share and people will substitute across. If we don't have competition, then part B can happen. If you do have competition, it's impossible for companies to increase their margin at the expense of consumers. That's actually the most important thing in your thesis right now, Cameron: whether or not we have competition. If we have competition, it's impossible for the economics to work that way, because that margin gets competed away, which frees up capital with humans, who can spend money on new and emerging industries and then populate those places with jobs.
[01:13:05] Steve: It's the same as what happened with the music industry. Classic example, right? I spent $30 on a CD for 12 songs. Where does the $30 go? As soon as I can download or steal music, that $30 goes into data, an iPhone, all of those other things, emergent new industries. But now those emergent industries for the music industry are all big tech, and big tech is an invasive species sucking up everything.
[01:13:30] Steve: If big tech chooses the profit path with automation and robotics, the whole system collapses. So for me, the number one issue in this is competition. It's actually not the technology. That's my theory.
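Steve's two forks can be reduced to a toy calculation. All the numbers below are hypothetical illustrations chosen for this sketch, not figures from the episode:

```python
# Toy sketch of Steve's fork in the road: automation drops a firm's unit
# cost from $10 to $7 on 1,000 units. All numbers are hypothetical.

units = 1_000
old_cost, new_cost = 10.0, 7.0
old_price = 10.0

# Fork A: competition forces the price down toward the new cost.
# Consumers keep the difference and can spend it on emerging industries.
competitive_price = 7.5
freed_capital = units * (old_price - competitive_price)

# Fork B: no competition, the price stays put, and the cost saving is
# captured as extra firm margin instead of freed-up consumer spending.
margin_captured = units * (old_cost - new_cost)

print(f"Fork A frees ${freed_capital:,.0f} for consumers")
print(f"Fork B keeps ${margin_captured:,.0f} as firm margin")
```

On Steve's argument, which fork obtains depends entirely on whether competitors exist to force `competitive_price` down toward `new_cost`.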
[01:13:43] Cameron: The problem with that, Steve, is you're saying prices get cheaper so people can redeploy
[01:13:47] Cameron: their capital in other places, but I'm saying that people are unemployed, so they don't have an income to redeploy anywhere. And as to Tony's point that humans will drop their price, you know, if you can buy a robot for $20,000, a human will do it for $19,000.
[01:14:07] Cameron: That may be true when the quality of the output of the robot is roughly equivalent to the quality of the output of the human. But if the robot, slash the AI system that you're using in a knowledge-worker sense, is twice as good for $20,000 as a human would be for $19,000, or three times or five times as good with its output for speed, reliability, et cetera,
[01:14:37] Cameron: there has to be a breaking point where, as an employer, you go, humans are just difficult. They need sick days. They need managing. Um, let's create a hypothetical company that has a department with 10 knowledge workers, let's say a marketing department. So you've got 10 people in the department.
[01:15:01] Cameron: You've got a manager overseeing those people, um, who has a salary. If I can get rid of all of those people and have an AI do all of that work, I can probably have a single AI replace the lot of them. If I get rid of those people, I don't need a manager, so I can get rid of the manager.
[01:15:22] Cameron: If I have, uh, an executive manager overseeing all of the managers, and I've gotten rid of all of the managers, I don't need the executive manager overseeing the managers anymore. There comes a point where the AI systems are just too good, and the economics of keeping humans don't stack up: you need an HR department, and you need a building with light and power, and you need a car park, and you need, you know, to pay holiday leave and all this kind of stuff.
[01:15:47] Cameron: Surely there has to come a point where businesses go, you know what? Humans just don't cut it anymore. I'm going to replace them all with AI and/or robots.
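Cameron's back-of-the-envelope case can be sketched as a break-even comparison. The salary, overhead, and AI-cost figures below are illustrative assumptions, not numbers from the episode:

```python
# Illustrative break-even sketch: a 10-person marketing team plus a
# manager versus a single AI system. All dollar figures are hypothetical.

def annual_human_cost(salary: float, overhead_rate: float = 0.3) -> float:
    """Salary plus on-costs: HR, office space, leave, car park, etc."""
    return salary * (1 + overhead_rate)

workers = [annual_human_cost(70_000) for _ in range(10)]  # knowledge workers
manager = annual_human_cost(100_000)                      # their manager

human_total = sum(workers) + manager  # total annual cost of the humans
ai_total = 20_000                     # assumed annual cost of the AI

print(f"Humans: ${human_total:,.0f}/yr, AI: ${ai_total:,.0f}/yr")
print(f"Humans cost {human_total / ai_total:.0f}x more")
```

On these made-up numbers the human team costs roughly fifty times the AI; Tony's counter is that displaced humans would cut their price, shrinking that gap.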
[01:15:56] Tony: And then.
[01:15:56] Steve: We've been down that path for a long time though, Cameron, of systems replacing multitudes of humans. Whether it's paperwork or whatever, software's been eating that for a really long time. In terms of the idea
[01:16:11] Steve: that with the adoption they move to other areas, and if they've lost their jobs, they can't pay.
[01:16:16] Steve: That always happens, but it doesn’t happen rapidly. I don’t think. It might do this time. It could be different this
[01:16:21] Cameron: Yeah, but
[01:16:22] Steve: But it happens like this, Cam. It happens like that, and a few people lose their jobs, but as that product gets cheaper, as people lose their jobs, that money goes across to this burgeoning industry, and then the people
[01:16:33] Steve: move across to that.
[01:16:33] Steve: You've always got that structural shift. It doesn't go boom, like that, where this one comes down and all these people are out. It hasn't so far. It might do. It might do. But so far, it
[01:16:42] Cameron: It never has, yeah, but in the past, okay, we've built, let's say, an ERP system,
[01:16:49] Cameron: which has replaced people
[01:16:50] Cameron: who were doing resource planning on pads with pens, and those people could be redeployed. We built a system here, but we didn't build a system that could do everything all at once.
[01:17:02] Cameron: That's the thing with AI: we're building a system that can do everything, all at once, at the same time, better than any human. We've never seen that before. It's like the, uh, the ultimate machine.
[01:17:15] Steve: But you've got to remember, the people who are being displaced also have access to this ultimate machine. And that thing that they had in their head that they were going to invent and create, that they couldn't create, well, now they can, and now they're going to go
[01:17:27] Steve: out and go, you know what? I've got this AI.
[01:17:29] Steve: I can just talk to it with natural language and it's amazing. And I'm going to create this new revenue stream in this new industry or job or company.
[01:17:38] Cameron: But I don't have an income, so I can't pay my rent. I can't buy
[01:17:41] Cameron: food while I’m doing that. And no one’s going
[01:17:44] Steve: Well, you got a redundancy, you got, like, six or twelve months' money because it's a big corporation, and then you, but you do, you
[01:17:51] Cameron: Yeah. Okay.
[01:17:52] Steve: And then you go
[01:17:53] Cameron: They pay them all out. And so some people take that money and use it to become entrepreneurs.
[01:17:59] Steve: Some do.
[01:18:00] Cameron: Some will,
[01:18:02] Steve: Maybe, maybe. Many, many of them do it. And we end up with, like, a
[01:18:09] Steve: system where we have
[01:18:14] Steve: lots of, let's just call them freelancers and independent workers. Like we had in the pre-industrial era, where most people, sorry?
[01:18:23] Cameron: Isn’t that the gig economy?
[01:18:24] Steve: Well, yeah, yeah.
[01:18:26] Tony: a, that’s a curve
[01:18:27] Steve: that does happen. I don’t know.
[01:18:29] Tony: I think the trends,
[01:18:30] Steve: Gig workers, pre-industrial.
[01:18:32] Tony: See, I still, you've jumped to the endgame. I still think about what happens between now and then, as AI increases computational power. It's still got to be sold to somebody. So, you know, who pays for ChatGPT? Because what's going to happen, I think, and it's already happening, is that, um, you know, Silicon Valley loves to have the frat boys throw money at these things even though they're not profitable, like Tesla, for example, and someday it's got to make money for the serious people to keep putting money in.
[01:19:10] Tony: So you'll probably see that. I guess the difference in this is that you've got, say, Apple developing AI, so they've got a cash flow to support it. Um, so people will still keep investing in AI, but I guess what I'm trying to say is AI will need to support itself economically.
[01:19:30] Tony: And that's got to happen on the road to this perfect device that can do all jobs cheaply. Um, and there's going to be so many secondary effects to that, that you may never get to the utopian world of a $20,000 robot doing all jobs, because, um, it might break down along the way.
[01:19:51] Tony: It might not get to $20,000, because, as Steve says, there might be a lack of competition in the marketplace and it never gets below $100,000, because Apple want to keep
[01:20:02] Tony: making a profit margin. Um, there's all sorts of things that have
[01:20:05] Tony: to happen economically or not happen economically to get to that utopia.
[01:20:10] Tony: And I, I find,
[01:20:13] Tony: you know,
[01:20:13] Cameron: Actually, I think it’s a, I think it’s a
[01:20:15] Tony: dystopia. Yeah. Well,
[01:20:16] Steve: I thought the exact same thing, to get to that
[01:20:18] Steve: dystopia. I love it. Cam will never let you get away with that one, Tony. Cam was on that.
[01:20:24] Cameron: Well, look, I think there is a utopian version of the future, and that's what Kurzweil is pitching. He's a utopianist: everything's going to be great, we're going to be immortal, and all the work's going to be done for us. And I'm trying to figure out what the economy looks like in that scenario.
[01:20:45] Tony: my, my starting question is who pays
[01:20:48] Cameron: for what?
[01:20:48] Tony: for your, for Kurzweil’s utopia? Who pays? Right? Because,
[01:20:53] Cameron: Good question. Yeah,
[01:20:55] Tony: that’s, and that’s why I’m saying I don’t know if we actually get there. It’s like, if you think about where we are now, someone’s still got to spend 2000 bucks on the Apple iPhone, um, if that’s where technology is, you know, along the curve now.
[01:21:09] Tony: So, which means they need jobs, and they need to be able to, you know, save and buy that. So it's a bit like the old Henry Ford thing: he paid his workers enough so they could buy the product, the car. So there is a bit of a circularity to this argument. You know, I know that if we get to utopia then you don't need money, but how do we get there?
[01:21:32] Tony: I just can't see it. I mean, the current laws of commerce and the current laws of human nature suggest that we may not get there, because there's always someone trying to get an advantage and make some money out of it.
[01:21:44] Steve: I mean, the question that Cam raised last time we chatted about this is: do the current laws of commerce apply? Because maybe we're going into uncharted territory.
[01:21:52] Tony: Yeah, and that's where I come back to extrapolating the curve: we've never had a situation where they haven't applied. So yes, they may not apply in the future. Um, I give that a relatively low probability, because they've always applied up to now. And having an AI, think about the power that goes with having an AI, or an ASI if you get there.
[01:22:12] Tony: It's really valuable, and you can charge a lot for it rather than give it away for free and make this utopia on Earth.
[01:22:20] Steve: Well, there's also the emancipation. If you make it available to the large populace, then you'd have to assume that there'll be a whole lot of new roles and industries that are invented, because people have superpowers: things that they were incapable of creating, they can now create, whether that's an industry or a company or just working for themselves.
[01:22:43] Steve: If it is as emancipating as we think, it doesn't just emancipate the organizations that create it, but also those that they sell it to, which gets distribution through the marketplace, which redistributes some form of the funds, so long as there's access and competition. They're my two things: access and competition.
[01:23:02] Steve: If we have that, we’ll be fine. If we don’t, then we could be stuffed.
[01:23:05] Tony: I think we’ll extrapolate where we are now, where if you compare our lives to what they were like 100 years ago or 200 years ago, we’re miles ahead. But do we feel like we’re in utopia? And I think it’ll be the same in 10 years time.
[01:23:18] Steve: You think we’re in Utopia, Cam?
[01:23:21] Cameron: Yeah,
[01:23:21] Tony: Well, Cam gets to sit
[01:23:22] Steve: Really?
[01:23:24] Tony: all day and
[01:23:24] Steve: Because after you
[01:23:25] Tony: playing with Chrissy and Fox. That’s utopia.
[01:23:27] Steve: But listen, after you eat and sleep and do a little bit of recreation, I feel like our minds have this heavy toll to carry. We're carrying around this cognitive burden of uncertainty and, you know, too much information.
[01:23:41] Steve: It's just too much of everything. I really feel a burden carrying all of this. And I'm more well off than I've ever been in my life. I've got everything I've ever hoped for: my arcade machine there, my BMX, my surfboards. I've got everything I ever dreamed of. And I don't feel as calm or as relaxed or, or
[01:24:02] Tony: Money never produces that result.
[01:24:04] Steve: Well, I'm not saying that I just go out and chase money. Coming from Tony, mate, Mr. Investor Extraordinaire. And I go surfing, and I do think. Like, the point is that, you know, I can afford things, but I'm not saying I was chasing material things. But I feel like the cognitive load in the modern era, and the uncertainty and all of that, is a real burden that reduces happiness.
[01:24:31] Tony: It's always been there, though. Like, I was brought up in the era of the Cold War, where every day you woke up was a good day if you weren't bombed by nuclear war. So I think that underlying anxiety might be a condition of human nature. You've just got to learn how to control it.
[01:24:46] Steve: Okay, thank you. Well, now you've just ruined everything for me. I just want to point out that I was having a really good time. And now what you're telling me is to just roll with the anxiety and the unhappiness, that it's a natural human condition that gets worse as I get older. Thanks.
[01:25:00] Tony: ignore the noise. Ignore the noise.
[01:25:03] Steve: Very easy to say, hard
[01:25:04] Steve: to do. It’s like anything in life. Well, how do you get fit? You go to the gym and you eat less. Easy to know, hard to do.
[01:25:09] Tony: Hard work. Yeah.
[01:25:12] Cameron: Okay, so back to the future. Let's think about the next couple of years. AI is going to continue to improve dramatically, I believe, in the next few years. It will start to take jobs. Do we agree on that?
[01:25:29] Tony: Uh, yes, but that may not be a bad thing. It might, as Steve says, supplant people into other
[01:25:36] Cameron: Okay, maybe. So,
[01:25:38] Steve: Take jobs and invent jobs, as it always has,
[01:25:40] Tony: Yeah. That’s a good way to say it. Yeah.
[01:25:43] Cameron: right. So, initially, the sort of jobs I think are going to go in the first
[01:25:48] Cameron: tranche. And I'm talking the next couple of years. At first slowly, while we're testing the capabilities and reliability of AI, and assuming they stand up to that test, I think they'll go quite quickly: low-level knowledge worker jobs and artistic jobs.
[01:26:07] Cameron: Graphic design? Gone, when you can just create something, um, using an AI in a second. Um, you know, low-level data analysis jobs, uh, gone. Um, writing: anything that involves writing marketing copy or anything like that is gone. Secretarial work? Gone, when you can just talk to your computer and it can rewrite it in any fashion that you want.
[01:26:36] Cameron: I mean, I already dictate most of my podcast notes to my computer. It's already translated via an AI from voice to text, but then if I want it cleaned up, I'll just dump it all into something and say, clean this up for me, and it will. Um,
[01:26:56] Tony: Can I just stop you there? I think that's very simplistic. I think there will still be some humans involved in each of those industries, for different reasons. Um, partly because in the next five years I don't think the AI will be perfect enough to replace everyone holus-bolus. But just take the secretarial support.
[01:27:14] Tony: So you've got an AI controlling your calendar. I've got an AI controlling my calendar. Um, we work for large organizations, and something fucks up and the meetings clash, and, you know, it's human nature to want to open your door and yell at the secretary to say, sort it out for me. There'll still be someone sorting out problems with the tech.
[01:27:34] Tony: So you want to supplant every role, every job in the secretarial field?
[01:27:39] Steve: Well, so Tony has solved it here. Basically, AI won't take over because humans want to yell at humans, and a lot of people don't know that. That is the savior!
[01:27:49] Cameron: I think that tells us, tells us more about Tony than it does about, uh,
[01:27:54] Cameron: Humans in the future.
[01:27:56] Cameron: The last thing I want to do is yell at someone, and you'll just yell at your AI anyway. I mean, who hasn't yelled at their AI before?
[01:28:05] Steve: And of all the things Siri isn't good at, given the woke organization running it. Well, I'm not one of those people, but the idea that it says, sorry, I won't answer that if you use, uh, abusive language.
[01:28:20] Cameron: racist
[01:28:21] Tony: Mine says, don't be like that. If you went through my Siri searches, it would just be, fuck off, Siri.
[01:28:28] Steve: I say that too, and it says, I won’t answer such profane language. It actually does that to
[01:28:32] Steve: me. And, as Cam has said, you've got to be nice to the
[01:28:35] Steve: robots. Cam’s getting in early. Cam is getting in early.
[01:28:39] Cameron: People living across the road from my mum have got, like, a Roomba-style robot mowing their lawn, which I've never seen in the wild before. Just all day it's pottering around mowing their lawn, then it docks on the side of the house to recharge. Super crazy. Anyway, uh, coding, obviously, in terms of the early jobs. And look, I'm not saying, Tony, that all the jobs are going to go immediately in these sectors.
[01:29:01] Cameron: I'm saying, okay, if you've got a team of five coders and a manager, you'll lose one, then you'll lose two,
[01:29:08] Tony: Yeah, no, you're right, there's an efficiency there. But that's my point: what does it cost to replace those people? How much is Sam Altman going to charge
[01:29:17] Tony: for the coding AI to replace the people who are now coders?
[01:29:22] Tony: And then compare that cost to people in India.
[01:29:25] Tony: Who do it for like two bucks an hour.
[01:29:27] Steve: So this is the competition
[01:29:28] Cameron: but they won’t be anywhere near as good as the AI.
[01:29:31] Tony: Oh, says you.
[01:29:33] Cameron: I’ve hired people in India and I’ve used AI and already the AI is better than
[01:29:38] Cameron: the people
[01:29:39] Steve: Cam, that’s so racist. Cam is so
[01:29:41] Steve: racist. just saying, brother. No, well,
[01:29:44] Steve: But this is the competition thing. How much will Sam Altman charge? It depends on how much competition he's got.
[01:29:52] Cameron: Yeah.
[01:29:53] Steve: humans and from other AIs.
[01:29:55] Cameron: But they've already established sort of the price point, right? We're already paying
[01:29:59] Cameron: 20 to 30 bucks a month for AI,
[01:30:01] Steve: Pretty good value.
[01:30:03] Cameron: premium level tools, uh, you know, whether you’re looking at U. S. or Australian, um, which will do all of that for you. They may have a higher level of access, but we’re talking.
[01:30:15] Cameron: You know, not a lot. I mean, they’re not going to start charging all of a sudden $10,000 a month for it to do what they’ve already shown you it can do for $20 a month.
[01:30:26] Tony: Well, they might because, I mean,
[01:30:28] Tony: we’re getting into the area of prediction, so this is a bit of what if,
[01:30:30] Cameron: Yeah, yeah,
[01:30:31] Tony: like, what if the general ChatGPT’s $20 a month,
[01:30:36] Cameron: Tony, the show is called Futuristic!
[01:30:39] Tony: but they actually,
[01:30:39] Cameron: all we do on this show is predict the future, Tony.
[01:30:43] Tony: some little gremlin in Sam Altman’s corporate structure says, Hey, you realize if we actually package up the AI to say it’s a coding specialist, we can charge $1,000 a month for it to replace coding people.
[01:30:57] Tony: Um, so there could be
[01:30:59] Tony: specialization in AI where
[01:31:00] Tony: they charge a premium, and then that’s got to be, you know, competed against other providers in the
[01:31:05] Tony: economy.
[01:31:07] Cameron: Anyway, my point is that I think the first layers that are gonna go will be low risk, high cost, high benefit jobs, right? So you can take out a coder here, you can take out a customer service person, replace them with an AI, you can take out an analyst, you can take out a writer, graphic design, industrial design.
[01:31:30] Tony: Look, I agree. Call centers are toast. They’re gone.
[01:31:33] Cameron: So still initially it’ll be incremental, you’ll take out one, then you’ll take out two, then you’ll, you’ll still have managers overseeing the work and
[01:31:41] Cameron: that kind of stuff until
[01:31:43] Steve: The big, big end too. Lawyers, investment bankers, a whole lot of
[01:31:46] Steve: analysts, like, we’re not just talking low end call center, you’re talking a whole cohort of, you know,
[01:31:52] Cameron: I think
[01:31:53] Steve: in a big law firm, where they’re doing discovery and all of that. Bam, AI, it’s done it.
[01:31:57] Cameron: Well, I think that’s the next level. So I say the first level is low risk. If I get rid of a graphic design person and then the
[01:32:04] Cameron: AI produces a bad graphic design thing, I can just say, do it again, do it again, do it again. If I have them, if, you know, if I replace a lawyer and they come up with a bad contract, um, that could be a bigger issue.
[01:32:16] Cameron: But I think the first iteration, we will be testing it in the workforce, and they already are, but I mean at a higher level, more visible level, and when we have the ChatGPT 5 level of reliability that we believe is going to be there, they will start to go, you know what, we’ve been testing it, replacing a human for six months, for a year. It’s been terrific.
[01:32:41] Cameron: Okay, now let’s replace 10, 20, 30 percent of the team until it just replaces nearly everyone. Then around 2027 to 2030, I sort of predict, is the next layer, the second layer, when we’ll have AGI sometime around this timeframe as well. Higher level jobs will start to be replaced, middle management, because there are going to be less people to manage.
[01:33:04] Cameron: So, you know, I can get rid of every second middle manager and just, you know, group all the people that we still have under this one manager. I think it’ll start to impact legal, accounting, HR, again, because there are less people to hire and manage, recruitment for the same reason, psychologists. I think more people will start to use AI as a psychologist.
[01:33:27] Cameron: And that may, I mean, we already have too few psychologists in the marketplace, so that may not have a short term impact on the, uh, incomes of psychologists in Australia. But eventually, uh, people won’t study to become psychologists if everyone they know is using an AI as a psychologist. Um, medical: everyone’s going to have a free GP.
[01:33:49] Cameron: Again, this is going to be, it’d be a good thing because it’ll take strain off the existing medical infrastructure initially, but are you really going to study to be, are you going to go into medical school today and study to be a doctor? If you’re looking at where AI is going in the next 10 years, do you really think it’s worth 10 years of study to become a doctor when we’ll have AI in 10 years that can answer any medical question better than a human can?
[01:34:14] Cameron: I’m not sure that people are going to, that it’s going to be a logical step. Animation will go, business strategy, and then acting and all sorts of work in TV and film production, I think, is going to go.
[01:34:28] Steve: Well, Sora, I mean, just extrapolate out what Sora can do. But also with acting, that’s interesting though. And I’ll tell you, acting, you might say, will go away. And we’ve seen that there are virtual influencers that already exist. This is where, do we want to see something, human creativity, because a human is doing it?
[01:34:51] Steve: Like that’s actually an interesting question. I don’t know the answer to. Do we just want to see creativity, and I don’t care if that
[01:34:59] Steve: actor doesn’t exist in the real world? If I like their
[01:35:02] Steve: persona, and it’s not Tom Cruise, it’s Billy Bloggs, who is an AI actor, who’s in a whole lot of movies, starring Billy Bloggs in the next movie.
[01:35:09] Steve: I don’t know.
[01:35:10] Cameron: films have already answered that question, Steve. And
[01:35:13] Steve: Yeah, good, good point. Although, although, we don’t just have, uh, Avengers movies. We have movies with real humans. The other contention
[01:35:24] Tony: that’s a note I’ve made, Steve. Sorry to interrupt, but, um, there’s an artisanal element. So, again, extending the curve, um, if AI gets so smart at feeding us, for example, that we get to drink a bowl of gloop
[01:35:39] Tony: every day, because it’s the cheapest, best way of getting our nutrients, then there’s going to be artisanal cafes pop up where they bake the bread by hand.
[01:35:47] Tony: Um, so I think,
[01:35:49] Cameron: But no one can buy the bread because no one has any money.
[01:35:51] Tony: well you say that, but, but who, AI, AI has got to be paid for, so people have to have jobs to pay for AI, otherwise it doesn’t exist. It’s, you know, Microsoft own half of ChatGPT; they’re not going to give it away. And that’s the other problem I’ve got with getting to where you want to get to, is that there’s a growth imperative in these big companies.
[01:36:12] Tony: Companies, they’ve got to keep growing. So if everyone subscribes to ChatGPT for 20 bucks a month, where does the growth come from? There’s got to be add-ons and other services, etc., like the doctors, etc. They’re going to be charged to allow Microsoft to keep increasing its revenues every year.
[01:36:33] Tony: It’s got to come up with new products and charge more for
[01:36:35] Steve: You tend to get bundling and unbundling. Like, this happens again and again and again. And we’ve seen this in computing. Like, at the moment, we’re moving to a bundling phase in tech where everything comes under the one banner. And then it fragments out because people do better versions of this and it’s, that’s the doctor AI and this is the architect’s AI and this is, it’ll be interesting to see if that happens.
[01:36:56] Steve: I think that’s inevitable because it’s a greater way to make money by refragmenting the market than re-bundling the market. And it’s just basically, it’s the breathing effect of how the markets respond to maximize revenue over the longer term by changing their product portfolio.
[01:37:09] Tony: Yeah, and that maximizing revenue is really
[01:37:10] Tony: important in this discussion. It’s not going to, AI is not going to be given away. You’ve got to pay for the psychologist on
[01:37:16] Tony: your phone. It’s not going to be free, and therefore not everyone will use it.
[01:37:21] Cameron: 60 percent of the AI models that have been released in the last year were open source. And a lot of those are very, very
[01:37:30] Cameron: close in terms of their performance capabilities to the top end models, the ChatGPTs, the Geminis, and that kind of stuff. There is a theory that we will have a lot of freely available open source models.
[01:37:45] Cameron: At the moment, you can’t really run them on your desktop PC easily. They require a lot of, uh, technical know-how to get them up and running and trained and all of that kind of stuff, but all of that will go away at some point where it’ll be click-install and you’ll be able to run your own AI locally.
[01:38:00] Tony: Yeah, but that’s like, we’ve seen that curve before as well. It’s like Linux, right? It’s, it’s an open-source operating system, but
[01:38:08] Tony: Microsoft was so
[01:38:09] Tony: much better at giving you
[01:38:11] Tony: one that worked a little bit better and it was free, it was cheap and came with your laptop, that it won the day.
[01:38:18] Cameron: Yeah, because it didn’t have an AI built into Linux. When you have an AI built into Linux that can do, you know, everything that Microsoft can do.
[01:38:26] Tony: It still may not win the day. It probably
[01:38:28] Cameron: It may
[01:38:29] Steve: Who’s gonna own the data center? I mean, you might end up with someone like Amazon Cloud and others. You know, basically this idea that computing really becomes an energy business, and the energy they provide is the energy to run the data centers and the information in, in processing the data, which then has AIs, which are open source and cheap, but any commercial AI fails.
[01:38:50] Steve: I don’t know. You could get a whole shift in the computation, uh, infrastructure,
[01:38:57] Tony: The other thing that I wanted to add to this debate is you’re talking about professional jobs being replaced by AI, which could potentially happen, but the personal insurance industry, or professional indemnity insurance industry, has to get comfortable with that before it will allow it, and it won’t allow it; it will charge unreasonable premiums until it gets comfortable with it.
[01:39:17] Tony: So it takes a long time for the insurance industry to see that AI is going to be effective as a doctor and not kill people before it will bring the premiums down to a low enough level to let AI operate at a cheaper rate than the current medical staff. So it’s going to take longer than five years, I
[01:39:34] Tony: think, just from the insurance point of view.
[01:39:37] Steve: but, but
[01:39:37] Cameron: who’s buying the professional indemnity
[01:39:40] Tony: The doctors. Doctors currently, one of their biggest costs currently is to get insurance, so if they make a
[01:39:45] Cameron: Yeah. But if, if I’m asking ChatGPT, my medical question, where does professional indemnity come into it?
[01:39:53] Tony: Oh, well, good question, but the provider of ChatGPT could be sued by the family of the person who was injured by the medical advice.
[01:40:03] Steve: They’ll just take what, they’ll just do exactly what big tech has done up until now, which is yeah, well, we’ve got 5 billion customers and we’ll go to war with whoever because
[01:40:11] Steve: our pockets are full and we might lose a few cases here and there, just like Mark Zuckerberg, you know, yeah, let’s go for live streaming, shoot up some people live, it’s all good, whatever.
[01:40:19] Steve: That’s, I think, what will happen there. But, but that
[01:40:22] Tony: But that’s the Wild West of
[01:40:24] Cameron: ever shot any people live on camera, which is what you just inferred there. I,
[01:40:28] Steve: didn’t infer that he should, I said he let people video it.
[01:40:33] Tony: Wild West of medicine though, right? So medicine has
[01:40:36] Steve: You know I love Mark.
[01:40:40] Tony: So that when I, so that when I go to the doctor, I know the doctor has been taught, has been qualified, has the right insurance, um, will likely provide the right advice ’cause I’ve gone through all these gates along the way.
[01:40:53] Tony: The wild west of
[01:40:54] Tony: medical advice is gonna be, no, I’ll just ask my phone. I don’t care if it, you know, may gimme the right answer. It
[01:41:01] Steve: doing it. We’re already doing it though, aren’t we? Pretty quickly.
[01:41:05] Cameron: yeah,
[01:41:05] Tony: I think it’s more likely that, I think it’s more likely that the phone will be able to provide me with prescriptions up to a point. You know, so there’ll be parts of the medical industry that
[01:41:16] Steve: I don’t think the medical profession goes away. It just shrinks. So, the first gate you go through is the AI gate, and then you take what you
[01:41:24] Steve: and the AI have agreed or understood or communicated to the doctor, who then goes, Okay, yeah, I like what the AI has done, and it just truncates and expedites the entire process, I think.
[01:41:34] Cameron: It’s the shrinking that I’m talking about. All of these sectors are going to
[01:41:40] Cameron: shrink and we have to figure out how we’re going to redeploy the shrinkage. Shrinkage, Jerry, shrinkage. Um, we have to,
[01:41:51] Cameron: like a turtle. We have to, we have to figure out how we deploy the shrinkage.
[01:41:55] Cameron: And my point of all of this is I think this is going to happen way faster. than governments are prepared for. I think it’s going to happen way faster than industry is prepared for. And I think it’s going to happen way faster than people are prepared for. I think it’s going to happen in this decade, massive shrinkage.
[01:42:15] Cameron: And we’re not talking about it, we’re not prepared for it, it’s gonna be, it’s gonna be like when the pandemic hit, and we were like, oh shit, we have not prepared for this, and the world went into chaos mode for a year. I think this is gonna be a similar sort of thing when it hits, cause no one’s, no one’s really taking it seriously.
[01:42:38] Cameron: And I’m starting to feel like St. Paul, running around saying the world’s about to end and no one’s listening, except the difference being I’m right and he was wrong.
[01:42:49] Tony: I don’t know if you are right. I don’t think it’s going to hit us like the pandemic hit us. It’s not all going to happen at once for a start. Not every industry will be disintermediated
[01:42:57] Tony: at once. Like you said, it’ll start off with graphic designers, etc, etc. And I take Steve’s point, if we extrapolate again from the curve, there’ll be a bigger gig economy when that starts to happen.
[01:43:08] Cameron: But when I say it’s not going to happen at once, I’m not saying there’s going to be a five year gap between level one and
[01:43:12] Cameron: level two. I’m saying a year between level one and level two. 2025 will be level one. 2026 to 27 will be level two.
[01:43:23] Tony: Yeah, I don’t see it happening that quickly, Cam. Level two. So are you saying that judges are going to accept me defending myself with a phone, so I can go in the court and say, Hey judge, listen to this, here’s my argument, and Siri reads out the case precedents and the judge goes, yeah, okay? That’s just going to take a lot longer than two years, even five years for that to
[01:43:48] Steve: You’re saying that the human gatekeeper is more poignant than we think and will stay longer within commercial settings, legal settings, governmental settings.
[01:43:59] Tony: I’m saying two things. I’m saying, yes, that there’s a human nature to this, but there’s also an economic component to this. And I don’t think it’s going to be such a quick evolution. I think it’s going to be a lot of competition and
[01:44:12] Tony: what’s more efficient and who’s paying, what price do you pay for this and how do
[01:44:17] Tony: they make money and who are they charging?
[01:44:19] Tony: All the stuff that goes on in the regular economy has got to play out as well,
[01:44:22] Steve: Well, certainly areas
[01:44:24] Cameron: I’m not suggesting that Sorry, I’m not suggesting trial lawyers are going to disappear. Again, I’m not saying that all jobs in all
[01:44:31] Cameron: sectors are going to disappear on day
[01:44:33] Cameron: one. I’m saying shrinkage. So, lawyers, low level lawyers, legal clerks, you know, the, the, your local guy that does a real estate contract or a conveyancing contract or a divorce agreement or an employment contract. All of those people will go first, not all of them, but increasingly they will all be replaced by an AI which can write a watertight contract faster and cheaper than any human can.
[01:45:06] Cameron: And then there’ll be a trickle-on effect as, uh, as people and businesses become increasingly confident that, oh yeah, this is robust. Yeah, we can’t pick any holes in this. We’ve had all of our, you know, top legal experts try and find flaws in the cases that the AI is providing and we can’t find one.
[01:45:27] Cameron: That’ll hit the media. People will go, Oh, did you hear that? Yeah, the AI is now better than any lawyer on the planet. Um, why are we still paying lawyers $800 an hour to represent us? It’s going to happen pretty quickly, but it won’t happen all at once. It’ll be like, it’ll be an avalanche, but it’s going to be a very fast avalanche, is my point.
[01:45:49] Steve: I think it’s going to
[01:45:50] Cameron: years, not decades.
[01:45:52] Steve: if there’s an avalanche, there’s going to be two sides to this mountain. You’re going to have the commercial, let’s call it unregulated avalanche, which organizations get to make the choices on who they deploy to get the work done. And then you’re going to have things which have a higher regulatory hurdle.
[01:46:12] Steve: And the doctors is a good one that’ll take, I think, a lot longer. And this is a classic case of Amara’s law, which states that we tend to overestimate the impact of a technology in the short run but underestimate it in the long run. Uh, and the example that I will cite there is that it took us 12 years before the government in Australia allowed us to see a doctor on the phone just to get a script renewed or something.
[01:46:35] Steve: It wasn’t until COVID that they did that, but the technological capability was there from about 2010. And it wasn’t until 2021 that we deployed seeing a doctor on your smartphone.
[01:46:46] Tony: Yeah, and that’s how I think this will all roll out, Steve. That’s exactly, really good analogy. It’s, um, the tech will get there before the economy does, I guess is what I’m saying. And that gives a bit of breathing space for the economy to work out what it wants to do with disruption and displacement.
[01:47:02] Steve: But I think what should happen is they should pay people like the three of us to give advice while we don’t know what’s going on to just come. And so just sing it, just sing out and we’ll just, we’ll
[01:47:12] Tony: yeah, and
[01:47:13] Steve: and just pay us an inordinate amount of money to pontificate on these points.
[01:47:17] Tony: and in the background Cam can ask ChatGPT and we can sell that one off as well. We can feed
[01:47:21] Steve: We’ll just have an earpiece in and Cam will just be asking it the question. We’ll just be saying, here’s what we think, because no one knows what we think anymore. We’re just asking the AIs in secret,
[01:47:31] Tony: But extrapolating the curve, right, we’ve seen, we just spoke before about the newspaper industry and how it’s disrupted. It’s just like, there wasn’t mass unemployment, people moved into other jobs, you know, because a lot of times these jobs like conveyancing or contract writing, they’re done by people who don’t really want to do those jobs.
[01:47:50] Tony: They don’t want to be treated like a robot just sitting
[01:47:52] Tony: there doing nothing. You know, turning out the same boilerplate every day. They, they see themselves as being better than that. So
[01:47:59] Cameron: but why are they doing it?
[01:48:03] Steve: because they’re not better.
[01:48:04] Cameron: can’t,
[01:48:04] Tony: or they want to
[01:48:05] Cameron: Not better, they can’t get a better job.
[01:48:06] Tony: it’s the first step along a promotion curve for them is often the reason.
[01:48:14] Cameron: I hope you’re right and I’m wrong, genuinely. But that’s the same thing I say to
[01:48:17] Cameron: Christians, uh, when they say, we’re gonna spend eternity in paradise with Jesus. I hope you’re right and I’m wrong, but so far, I’m never wrong, so.
[01:48:28] Steve: Cam, “I’m never wrong.” I love “I’m never wrong.” But what I, what I love though, is I, I think that you’re keeping your hair like that. So when Jesus sees you, he knows that he can be in a rock band with you because you have a similar haircut.
[01:48:40] Tony: I think you went to the wrong
[01:48:41] Steve: Alright, and that’s what you’re doing in
[01:48:43] Tony: where you’re wrong.
[01:48:44] Steve: You’re aligning.
[01:48:44] Steve: You’re aligning physically so that he may accept you despite the documentary films you’ve made about him not existing. That’s all I’m
[01:48:53] Cameron: When I first came into this, this is my old bedroom I grew up in,
[01:48:56] Cameron: by the way, to record this morning, this light was on above
[01:48:58] Cameron: me, and I thought that was too much of a halo effect, so I actually, I turned it off, so I didn’t, uh, accentuate the, uh, prophet-esque, uh, look of my hair. Thank you guys! Look, we’ve been talking for two hours.
[01:49:12] Cameron: Um, I appreciate both of you coming in and talking me out of my, uh, state of panic. Uh, Tony, thank you for coming on and, um, bringing your, uh, vast intelligence to, and experience to this. Steve, as always, your vast intelligence.
[01:49:29] Tony: You know, that’s the other point I wanted to make before we go is we’ve seen super intelligence before, but it’s still subject to commerce and human nature. Look at Einstein. He discovered the laws of physics which led to the atomic bomb, but he couldn’t stop it from being used.
[01:49:43] Tony: It’s a similar sort of thing when we get an ASI, I think,
[01:49:47] Tony: it’s going to have,
[01:49:49] Tony: it’s going to butt up against governments, it’s going to butt up against commerce before it
[01:49:53] Tony: eventually does what it
[01:49:54] Cameron: well, my final note is my, my problem with that is, I think, The idea that a superintelligence will be beholden to the desires of humans is like saying humans will be beholden to the desires of ants.
[01:50:13] Tony: No, I know, but what happens if the Chinese superintelligence gets built first and the Americans feel the need to build one, and there’s a superintelligence
[01:50:21] Cameron: I think that’s a very likely outcome
[01:50:25] Tony: it may mean that they cancel each other out.
[01:50:29] Cameron: Well, cancel everything out if that happens.
[01:50:32] Tony: Possibly. yeah,
[01:50:37] Cameron: a super intelligence, once we have a machine intelligence, that’s a million times more
[01:50:42] Cameron: intelligent. And the thing that Kurzweil, sorry, points out in this latest book, uh, maybe even in earlier books, um, is the turning point will be when the AIs are able to code themselves better than humans can code them.
[01:50:59] Cameron: When the AIs are in control of their own coding, that’s when they go from being smarter than us to a million times smarter than us, really, really quickly. There’s this recursive loop that happens with each generation, and the generations are days apart, not years apart. And that’s what they call FOOM in the AI industry.
[01:51:23] Cameron: It’s that hard takeoff. FOOM! Like in a superhero comic when a jet takes off. FOOM! And this is the sort of stuff that Eliezer Yudkowsky is scared of: when the AI goes from being just a little bit smarter than us to 10,000 times, to a million times, in a month, just because of the code optimization it does for itself,
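[Editor’s note: the runaway loop Cameron describes, where each AI generation is smarter and also arrives sooner than the last, can be sketched as a toy model. Every number below, the doubling per generation, the 2x speedup, the 365-day first generation, is purely illustrative, not a prediction.]

```python
# Toy model of a recursive self-improvement ("FOOM" / hard takeoff) scenario:
# each generation of AI improves its own code, so capability multiplies
# and the gap between generations shrinks. All parameters are made up.

def foom(start_capability=1.0, gain_per_gen=2.0,
         first_gen_days=365.0, speedup=2.0, generations=20):
    """Return a list of (capability, elapsed_days) after each generation."""
    capability = start_capability
    gen_days = first_gen_days
    elapsed = 0.0
    history = []
    for _ in range(generations):
        capability *= gain_per_gen  # each generation is smarter...
        elapsed += gen_days
        gen_days /= speedup         # ...and the next one arrives sooner
        history.append((capability, elapsed))
    return history

history = foom()
final_capability, total_days = history[-1]
print(f"{final_capability:,.0f}x baseline after {total_days:.0f} days")
```

Under these made-up parameters, twenty doublings leave the model about a million times its starting capability, yet total elapsed time stays under two years, because the generation gap keeps halving: the "days apart, not years apart" dynamic.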
[01:51:49] Tony: that’s the dark, that’s the dark forest problem, isn’t it? Because really, what that, what that, what that is, what that ASI is,
[01:51:56] Tony: is an alien. And, you know, again, we have a paradigm for dealing with aliens. We, we destroy them
[01:52:04] Tony: before they have a chance to grow up and destroy us.
[01:52:07] Cameron: but we’re not going to do that.
[01:52:09] Tony: Why not?
[01:52:10] Steve: Because they’re our children. And,
[01:52:12] Tony: To our commercial interests.
[01:52:14] Cameron: Yeah, the same reason. I mean, this is the question, like, Eliezer Yudkowsky and the like have been saying this for the last year or
[01:52:21] Cameron: two now is, We need to stop this right now! stop
[01:52:24] Cameron: building AI! Stop build Stop spending hundreds of billions of dollars a year to build something that’s gonna kill us!
[01:52:31] Cameron: And everyone’s like, Nah, nah man, we can’t.
[01:52:35] Steve: Elliot, have you seen how good that last email was to my manager? I don’t think you understand the implications here, brother.
[01:52:42] Cameron: yeah, the love poem that I just wrote to this girl that I like. Come on, man. It was, like, pentameter. It was a
[01:52:50] Steve: seriously,
[01:52:51] Cameron: I think you know, if we know anything about the laws of industry and commerce is that if there’s a buck to be made, we will push ourselves to the brink of extinction to make that buck. Um, hence, uh, the nuclear bomb that you said before is exactly the point. We built that to stop the Russians. Why? Because they were threatening the capitalist economy.
[01:53:15] Steve: on that capitalism thing with the AI war across nations, one interesting thought, and Cam, you’re probably the best person to ask this, given your work on the Cold War podcast and so on. The idea that different nations are building out AIs, I do wonder if the AIs from nation states have some sort of a systemic competitive advantage.
[01:53:40] Steve: An evolution where it’s commercial AI, which has all of the world’s knowledge and data in it, versus the Chinese AI, which is segmented behind the Great Firewall of the internet. Do they train their AIs on the type that they deploy in their own economy? Or do they stealthily bring in everything from the world and the West to train their AIs?
[01:54:00] Steve: Because you might end up with two typologies. A communist typology AI, which sort of has a very draconian viewpoint of the world, versus the open-source AI model from the U.S., which has everything in it from fake news to whatever to train it. Like, do you end up with two typologies of
[01:54:17] Tony: has a draconian worldview of the world as well.
[01:54:20] Steve: Well, yeah, but do you end up with two different
[01:54:22] Steve: AIs? And if you do, is one more intelligent and defeats the other? Is it kind of like a
[01:54:27] Tony: that’s Elon’s. That’s Elon’s.
[01:54:28] Steve: competing
[01:54:30] Tony: He wants, he wants to build an AI because ChatGPT is too liberal.
[01:54:35] Steve: Well, I’m just interested in that, if you end up with this AI Cold War with them
[01:54:39] Tony: Yeah, it’s an extension of human
[01:54:41] Tony: nature. But of course, ASI may just, just go off on a different, might go off and swim
[01:54:46] Tony: with the whales and forget about us. Who knows?
[01:54:48] Steve: Yeah. Yeah. Like in Her.
[01:54:50] Cameron: well, I, I want the communist AI to win, because I want Star Trek communism, um, in place, where everyone is given everything that they need, and no
[01:55:00] Tony: China doesn’t have Star Trek communism.
[01:55:04] Cameron: Not yet, that’s why they’re building the AI, to get us there.
[01:55:08] Tony: Oh, and, and the,
[01:55:09] Tony: the head of the CCP will just hand over power to the AI, will he?
[01:55:14] Cameron: That’s what Xi Jinping’s lifelong mission has been, is to give the world Star Trek communism, that’s his,
[01:55:20] Steve: Did he really say that?
[01:55:21] Cameron: and, no, I said that, but, you know, I’m telling you. Xi Jinping,
[01:55:25] Steve: I said it.
[01:55:26] Cameron: Xi Jinping is a true believer in, uh, the, the
[01:55:31] Cameron: utopian communist, uh, ideal, as am I.
[01:55:35] Tony: Yeah, but he’s wearing a red shirt and we all know that red shirts never survive Star Trek episodes.
[01:55:41] Cameron: As for the ASI just leaving us all behind, Tony, I, I, I, I’ve thought a lot
[01:55:46] Cameron: about that argument and I don’t think it’s necessarily an either/or. The ASI can go off and explore the universe and turn all of the planets into computronium, but it can stay here as well and keep running things here. I don’t think it has to be an either/or.
[01:56:02] Cameron: Anyway, fun chat. I, I really enjoyed talking to both of you guys. Um, nice for Tony and I to be able to spend two hours talking about this instead of the half hour we normally do at the end of QAV these days. I’ll talk to you on QAV tomorrow, Tony and Steve. Um, I’ll talk to you whenever I talk to you. Have a good week, both of you.
[01:56:20] Tony: Thanks guys. Well done.