By Cameron Reilly
In this episode of Futuristic, Cam and Steve discuss OpenAI’s groundbreaking o1 model and Advanced Voice Mode. They voice concerns over social media’s impact on youth, touching on legislative measures and the Instagram Kids launch. Cam talks about how he’s used ChatGPT on his weight loss journey. The episode also looks back at the 1973 launch of the first personal computer, the MCM/70, the relevance of Moore’s Law and Marvin Minsky, the use of AI to digitise handwritten notes, and Sam Altman’s vision for the Intelligence Age.
02:36 The Impact of Social Media on Youth
15:46 Using GPT to lose weight
21:07 OpenAI’s New Model: O1
34:31 Advanced Voice Mode in ChatGPT
41:03 The Future of AirPods: Integrating AI
43:54 OpenAI Leadership Changes and Implications
46:43 The Evolution of Silicon Valley
56:45 The Bystander Effect and Social Media Legislation
01:03:43 Sam Altman’s Vision for the Intelligence Age
01:16:15 Technology Time Warp: The First Personal Computer
FUT 30
[00:00:00] SS: Freestyler, futuristic microphone, straight from the top of our domes. We got the Cam Reilly, a Rocker microphone and a Sammotron, s s s s s sammotron.
[00:00:17] CR: Uh, welcome back to Futuristic, uh, episode 30. I think this is, it’s been a while, Steve. Our last show was August 3rd. We’re recording this on August the, uh, September the 27th. So almost two months since we’ve done a show and partially because you had a lot of stuff going on, partially because I had a lot of stuff going on and partially because there was a lot going on in the.
[00:00:47] CR: AI world up until about a week ago, and then everything happened. So, you know, look, I follow a lot of, uh, AI-based YouTube shows and podcasts, and a lot of the time it’s like watching cable news. There’s just filler stories happening. There’s a lot of gossip, there’s a lot of rumour, there’s a lot of minor news, and I’m like, nah, I’m not going to waste your time, my time, our audience’s time with minor stuff.
[00:01:21] CR: I’m really here to talk about the big stuff and so if there’s no big stuff, I’m saying, we don’t do a show. There’s nothing really worth talking about, but of course in the last week or two, the big stuff has hit. Stuff that had been rumoured for a long time. Some of it finally landed and it’s definitely worth talking about.
[00:01:40] CR: What do you think, Sammartino?
[00:01:43] SS: You made a really good point on filler stuff in media, and so much of our social feeds are like that as well. And even podcasts that I like to listen to, I’m like, yeah, it was a whatever episode, cause they’ve got to do one every week. You know, the game is all frequency. We used to say that columns need to be filled, and now it’s, you know, pixels need to be put across screens. But I like that we’re pretty straight up on it. But definitely a lot happening now.
[00:02:09] SS: And it feels like. AI has kind of crossed another bridge where it’s starting to piece together the islands and the archipelago of intellectual riches that AI is becoming.
[00:02:25] CR: Whoa, there’s some wordsmithing from Steve Sammartino. It’s beautiful. There’s a, all those years of getting up on stage, spitting your rhymes. Uh, let’s, before we get into the news, Steve, let’s, uh, talk about what’s been going on in your personal life from a futuristic perspective. What do you want to, how do you want to start off the show?
[00:02:47] CR: What insights do you want to share with
[00:02:50] SS: I’ve got, got a few insights. My little boy’s been a little bit ill, um, which has kind of kept me out for four weeks. When he was ill, he was using his smartphone a lot because it was one of the only things where he could have a reprieve because he was quiet. Um, you know, without going into too much detail, couldn’t move around much at all.
[00:03:07] SS: Um, and, yeah, talking to a lot of the doctors and, uh, various psychologists and people recently, I think we don’t take social media’s impact on youth seriously enough. Now, I know that in Australia and in the U.S. they’re looking at passing legislation. I actually still don’t think they, despite looking to pass legislation, know the potential serious impact this could have.
[00:03:36] SS: I feel like we might be heading towards a quasi lost generation of sorts. The Australian government has looked to ban social media for children under the age of, I think it was touted as 14 and potentially 16. It’s got to be 18, because I’m convinced that this is potentially more dangerous than underage drinking or underage driving.
[00:04:02] SS: Uh, we know that children’s brains, and certainly boys’ brains, don’t develop till they’re about 27. The impact it has on the frontal cortex, the lack of ability to delineate what’s worth watching, the fact that the algorithms put them down wormholes of terrible information, uh, misogyny, uh, rudeness, even quick viral loops.
[00:04:25] SS: One of the other things that we see a lot is that the speed of content and the truncation of content length is having a really significant impact. I won’t be surprised if in 10 years we say, wow, that has really had a developmental impact on the youth’s brains and their ability to think and their ability to maintain an attention span, which is required for a lot of things to function as an adult human.
[00:04:53] SS: Uh, it’s not just a joke of, I’ll make something short and make sure you have a hook. Yeah, that’s all good to get views. Um, but a little bit like junk food, like alcohol or driving dangerously, I would be flummoxed, um, if we don’t come back and find out that that’s really bad. I did one LinkedIn post which had so much resonance, um, which I’ll read pretty quickly for the audience here.
[00:05:20] SS: The words that I used were: I’m just going to say it. Social media could be more dangerous for our youth than underage drinking or driving, both of which are age-gated at 18. Kids spend five hours plus a day, which is a stat based on Australia, it’s probably more in some areas, on tech purposely designed to create addiction.
[00:05:37] SS: And I said that I’m going to write a manifesto on how easy this is to fix, which I’m going to do this week. And I’ve said that we just need the political and social will to do it, to care about our kids more than we do about corporate profits. And the reason that that’s really important is that it had like six reposts, more than 10,000 views, hundreds of comments with
[00:05:55] SS: anecdotal stories of so many, uh, kids and families having major issues with this. And, you know, it’s touched me personally in my house. And we had kept my boy away from social media for a long time. And it really just opened my eyes. The conversations that I’m having are that this is really causing a whole lot of social issues.
[00:06:21] SS: So ironic, social
[00:06:23] CR: Yes. And I, and I see it at two different age levels, right? Cause my older boys, uh, are nearly 24 and Fox is 10. And Fox doesn’t have a device of his own, but he borrows our devices to do things with. He used to have a device, uh, for, for a couple of years, he had devices, but then he broke his last iPad and we said, that’s it, you’re not, not getting another one for a while anyway.
[00:06:51] CR: And so he borrows our things, but we try to limit it. Like, one of the things that I limit a lot is YouTube, because it’ll just take him down rabbit
[00:07:05] SS: Bad rabbit
[00:07:06] CR: If he’s sitting there watching something that I deem is good-for-your-brain content, I’ll let him watch it, but if I leave the room for half an hour and I come back and he’s watching shorts, I’m like, dude, this is just bad for your brain.
[00:07:20] CR: Get off it right now. You know, this is rubbish. And we don’t let him on, you know, TikTok or anything like that. My older boys, Hunter and Taylor, you know, they sort of grew up in the early stages of this stuff. Um, and of course they’re both in the TikTok world now. Hunter’s a creator, Taylor’s a manager of creators, and, you know, we have constant debates about short-form content and the value or the danger of it.
[00:07:53] CR: They tend to agree, by the way, that there’s a lot of bad stuff associated with this. Um, but, you know, I see them, like, they don’t read. They struggle with long-form content, paying attention to it. Um, and they’ve both had a certain amount of success in their early twenties, which is great, they’re doing okay in many ways, but, you know, Hunter suffers a lot from
[00:08:24] CR: loneliness. Um, and it’s cause he’s in his room all the time making content for strangers, for 2 million followers, but he doesn’t have any real friends.
[00:08:33] SS: That’s funny, because my boy, um, is really struggling for connection. He’s been quite isolated through his illness, and the thing that he’s struggling for is on offer from the people in his house, and yet he looks for it from strangers. There’s some really interesting element there. And the struggle with the short-form content.
[00:08:50] SS: You know, we know that it has like a dopamine cycle and you’re getting a dopamine hit. I wonder how similar it is to the physicality of junk food and fatty and sugary foods, where it gives you a short-term hit, versus the broccoli and the protein, long-form food, let’s call it, and long-form content.
[00:09:08] SS: I do wonder again, and we’re no experts in this, whether or not there’s some sort of physical analogy where those things, even though one is food-related and one’s content-related, have the same chemical impact, um, on your physical development.
[00:09:23] CR: I think the science is already in on that. I don’t think there’s any debate about that. I mean, the brain has a, um, limited number of hormones and neurochemicals that it uses to motivate behaviour, and the social media companies have
[00:09:45] CR: paid lots of money to consultants and gaming development companies, and it’s all of them, not just social media companies, to develop the technology in a way that creates those addictive responses. So you want more and more of it. It’s been, as you said earlier, deliberately designed to hack our brains, to bring us back, to keep us on the devices. No doubt about that. And I can remember the early days of this sort of stuff, like 2007, 2008, when Twitter was fairly new.
[00:10:16] SS: Yep. I
[00:10:16] CR: I can remember spending a lot of time on Twitter, having conversations, having debates, and when I wasn’t on my phone, when I was in my car, driving somewhere, feeling the need to have Twitter open to see what the last comment was, or the last part of this conversational debate, and feeling
[00:10:36] CR: increasingly icky about that. Like the same feeling I get if I’m playing an Xbox game, you know, you get addicted to an Xbox game or something, and then you realize you’ve been playing it for two hours and you’re in a frustrating level and you’re trying to get through the level. I remember this most recently with Red Dead Redemption 2, which is a terrific game, a really beautiful, well-designed game.
[00:11:05] CR: But I remember getting to a point where, um, I just felt kinda queasy after playing it for an hour, and I’d be like, you know what? Fuck this. I’m off. And, you know, I got the same with Twitter 10, 15 years ago, whenever it was, where I just was like, you know what, this makes me feel bad, and I had to go cold turkey off of it. Like, you know what, I don’t want this in my life. I don’t want to be part of this. It was sort of a cycle. And I’ve said this, Chrissy and I have had these debates for years and years, the pros and cons of these new technologies. You know, when they’re new, as adults, we can get
[00:11:46] CR: swept up in them, but then as adults, if you aren’t malfunctioning, I think you have the ability to go, you know what? This isn’t good for me. It’s like eating Tim Tams. Not good for me. I’m gonna step away from this and do something healthier. Kids! You know, as you said before, we know boys’ brains don’t fully mature until 25, something like that.
[00:12:08] CR: Particularly younger kids, their brains literally don’t have the ability to make wise decisions, which is why we don’t let them drink, why we don’t let them drive. We’ll send them off to war, but outside of that,
[00:12:21] SS: Look, look, I’ve always said, Cameron: A toy is his gun, and all he’ll ever know is to fight forever, fight forever. Little bit of Noiseworks, circa 1987. Welcome to the world. I’ve always said that,
[00:12:33] CR: And we’ll teach them religion when they don’t have the ability to discern between what’s true and what’s bullshit. Nah, look, I’ve seen the gnashing of teeth over the social media ban, and I’m actually all for it. I don’t have any issue with a ban on this stuff for kids. I don’t think it’s good for their brains.
[00:12:53] CR: I don’t think it’s good for any of us, quite frankly. Um.
[00:12:57] SS: It’s actually really, really easy to do. And there are two things I want to say. The first one is something I despise. And as you know, this is Futuristic, we talk about the future. And people often say to me, I thought you would be all for technology. Aren’t you a futurist? Isn’t that about technology?
[00:13:16] SS: And my answer is, well, we need to be better than that and think better. The future is about creating human flourishing, which may involve new technology and may involve some of the things which are the oldest things we’ve had. Like, uh, slow food is a classic example. In the 60s and the 70s it was irradiated food and packaged goods and just-add-water, and that was the future.
[00:13:40] SS: And then we went, actually, I like slow-made coffee and growing our own vegetables. We learned that with food, right? And, uh, you know, meditation, thousands of years old. A flourishing future isn’t just about technology. Often it will involve technology, and technology, as we hear from Kevin Kelly, is 51 percent good most times. But what we need is the wisdom to delineate where technology use is good, and where we need to say, actually, now we’ve learned this, we need to, uh, moderate that, or legislate, or put regulation around it, like we did with cars, like we did with factories, like we did with so many other things. And for some reason, society has this view that you’re a Luddite if you don’t embrace all new technology. No, no, no, I’m far more thoughtful than that, and I’m certainly not a Luddite. In fact, not being a Luddite is finding ways to civilise the technology, and that’s one of the things that we need to get better at.
[00:14:35] CR: Look, I’m very cautious about, um, government involvement in personal lives, and censorship, and having to legislate behaviour. Uh, you know, I think it’s necessary in certain places for a safe and civil society, but you want to be really careful about how much of it you allow and where the line is. It’s something that needs constant vigilance.
[00:15:03] CR: I would rather see parents be responsible for their children’s access to this sort of technology. The parents be the gatekeepers and decide what is good or what is bad for the children. But if parents aren’t doing that because they themselves are addicted to this sort of stuff and they’re not able to turn it off, then perhaps there is a role for the state.
[00:15:27] CR: Because if the parents aren’t protecting the children, it’s a bit like, uh, I don’t know, paedophile priests. If the parents are allowing the paedophile priests
[00:15:37] SS: Wow,
[00:15:38] CR: close to
[00:15:38] SS: you just Cameron Reilly,
[00:15:40] CR: then the
[00:15:41] SS: Reilly will go
[00:15:42] CR: to step in and do something about it. Well. On my front, Steve, um, since we last spoke, in the last two months, I’ve lost 12 or 13 kilos.
[00:15:53] CR: And I have to say that you are partly the inspiration and my mentor in this. We had this conversation on the show a while back. I said, how do you stay skinny? You said, I don’t eat until the afternoon. So. I started doing that. I started just not eating until after lunch, but the other side of it is Kung Fu.
[00:16:16] CR: I’ve got a grading coming up in a couple of weeks, and I decided that I wanted to get down to a lower weight for my grading, uh, for my fitness and my health. But also, ChatGPT. Now, as you know, cause we’ve talked about this on the show, I started using ChatGPT as my calorie tracker about a year ago, because I wanted to be more conscious of how much I was eating, for my health and to lose weight, and it wasn’t working.
[00:16:41] CR: I wasn’t losing weight or even with the amount of Kung Fu that I do, which is insane. Like I’m doing eight to 10 hours of Kung Fu a week. I still wasn’t losing weight. So, A couple of months ago, I jumped into GPT and I said, look, I’ve been tracking my calories for the last 10 months, nine months, whatever it is.
[00:17:01] CR: This is the amount of calories as my input. This is the amount of Kung Fu that I’m doing. I’m not losing weight. What should I do? And it said, well, try cutting your calories back a little bit extra. I was like, okay. I said, I want to lose about a kilo a week. Can I do that? Yeah, you can do that. You have to cut it down by about 1,100 calories a day.
[00:17:25] CR: So I was eating about 2,800 calories a day on average. I cut it back, um, to about 1,700, and weight just started dropping off me. So I’ve lost 12 kilos. I’ve lost eight centimetres off my waist, jumped into my swimming trunks for the first time in six or seven months on the weekend, and they fell off. I went into a clothing store the other day.
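For anyone wanting to sanity-check those numbers, the arithmetic behind the suggestion is straightforward. Here is a rough sketch in Python; the ~7,700 kcal-per-kilogram-of-body-fat figure is a common rule of thumb and an assumption on our part, not something quoted in the episode:

```python
# Rough energy-balance arithmetic behind "cut ~1,100 kcal/day to lose ~1 kg/week".
# Assumes ~7,700 kcal of stored energy per kg of body fat (a common rule of thumb).
KCAL_PER_KG_FAT = 7700

def daily_deficit_for(kg_per_week: float) -> float:
    """Daily calorie deficit needed to lose kg_per_week kilograms."""
    return kg_per_week * KCAL_PER_KG_FAT / 7

# One kilo a week works out to an 1,100 kcal daily deficit,
# which matches cutting from ~2,800 down to ~1,700 kcal/day.
print(round(daily_deficit_for(1.0)))  # 1100
```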
[00:17:53] CR: I’ve dropped down from an XL to an L in two months. Um, and, you know, again, partly because of you, partly because of Kung Fu, largely because ChatGPT’s given me the ability to track that. Like, I used to use MyFitnessPal and some of these calorie-tracking apps, and you have to look up everything that you’re eating and, you know, plug it in.
[00:18:16] CR: And it used to take, longer to plug in what I was eating than it actually took to eat what I was eating. And there’s a certain level of commitment that after a while just gets difficult. ChatGPT, I just go, yeah, I had a plate of this. I had some chicken and some vegetables and a bowl of lentil soup and a slice of bread with butter and honey and a coffee and whatever.
[00:18:38] CR: And it just works it out. Now is it completely accurate? Probably not, but it gives me a rough indication. Yeah, so when I said, hey, I need to cut back a bit, it goes, okay, well try cutting back the yogurt and frozen berries and honey that you’re having for dessert every night and replace X with Y. And, you know, not eating until after lunch helped as well.
[00:19:00] CR: And so, although now, on Tim Ferriss’s advice, I’m having two boiled eggs first thing in the morning when I get up. He reckons protein first thing in the morning, um, puts your body into a different mode, so when you have carbs or sugars later on in the day, it doesn’t, uh, set off your insulin response if you’ve had protein. Anyway, so that’s been
[00:19:21] CR: how I’ve used technology most in the last couple of months, just to lose weight. And it’s been fascinating, because I’ve been trying to lose weight for 15 years. I haven’t been this light for 20 years. And it,
[00:19:36] SS: it’s easy. Like, I mean, the basic thing is you can never exercise away a bad diet. That’s number one. Often when you’re hungry, you just can’t do it. No amount of exercise, right? Diet is everything, because I think 80 percent of your calories are just keeping your body alive.
[00:19:54] CR: Yeah. Just your base
[00:19:55] SS: yeah, just basic,
[00:19:56] CR: through the day, right?
[00:19:57] SS: metabolism. A couple of other things, just simple hacks, is often you think you’re hungry when you’re really thirsty, so have a big chug, a big thing of water, and you won’t be hungry for an hour. So, that’s a good one. And the other one is the density of your food. So, food density. So, for me, high fat, high protein works.
[00:20:15] SS: And I think it works for most people, but most people just don’t have the discipline. You know, carbohydrates are low-density foods, like breads and all those others. So, you know, broccoli, even bananas, they’re carb-based foods, but they’re fine. Um, and basically, yeah, the less processed it is, the better it is.
[00:20:32] SS: But I just have high protein. I can lose half a kilo a day if I have a bad week, pretty easy, because I’ve never really counted calories. I just don’t eat until the afternoon and I just have protein only. I won’t really have any carbs. I just avoid bread, pasta, um, rice. And I eat as much fat as I want.
[00:20:52] CR: Yeah.
[00:20:53] SS: And
[00:20:54] CR: There you go.
[00:20:54] SS: that. Anyway.
[00:20:55] CR: you’re a good looking skinny dude. Let’s get, let’s move,
[00:21:00] SS: with skinny.
[00:21:01] CR: speaking of Kung Fu, I’ve got a hard stop, I’ve got to get out of here to get to training soon. So let’s, uh, get into the tech news. Steve, obviously I think the biggest thing that’s happened in AI news in the last month has been OpenAI finally releasing the model formerly known as Strawberry, uh, which they have called O1. Now, um, O1, Strawberry. We started hearing about Strawberry in November last year, when Sam Altman was fired from the company. There were these rumours that Ilya saw something, and there were rumours coming out about something that was codenamed Strawberry or Q-Star, and we’ve been hearing rumours about it ever since. It was this new model that they had developed that could do thinking and reasoning, but they were very tight-lipped about it until it finally dropped a week or so ago: O1 Preview and O1 Mini.
[00:22:12] CR: It’s called Preview because it’s not the full model. There are strict limits, which they have increased, but I think you get 50 queries to O1 Preview a week at the moment, and 50 a day for O1 Mini, and then you get cut off and you have to wait a week. But these are the models, for people that haven’t heard about it or haven’t played with it yet.
[00:22:37] CR: It does advanced thinking and reasoning before it answers. And look, it’s not perfect. In fact, on the launch day, Sam tweeted, ah, it seems impressive when you first use it, but over time, not so much. But they’re also calling it PhD-level intelligence, and
[00:22:58] SS: I’ve been using that on stage. They stole my bit. You’ve all got a PhD in every single subject now,
[00:23:05] CR: They did put out a bunch of launch videos with, um, physicists and chemists and biologists and coders talking about, uh, how impressive they had found it. They’d been early testers of the model. And, you know, I’ve seen some other videos.
[00:23:23] CR: There’s a physicist on YouTube who gave it a question out of a PhD-level physics textbook, which the textbook said should take a week and a half to complete, and O1 did it in 122 seconds and got the
[00:23:45] SS: Now we’re all, now we’re all goodwill hunting. Physicists and mathematics experts just on a chalkboard. I’m just going to, I’m going to get ChatGPT in my pocket and just become a janitor at MIT or Harvard and just try to smash up on the chalkboard some answers. Just bam.
[00:24:02] CR: like apples? Do you like apples,
[00:24:04] SS: like apples?
[00:24:05] SS: That’s what they should call the next one. Do you like apples? That should be called, Do you like apples? I
[00:24:12] CR: Uh, and the fascinating thing for me playing with O1 is you can watch its reasoning, particularly O1 Preview. You can give it a question and then watch it go through the reasoning steps, how it’s thinking about it. And the physicist I mentioned who did the textbook example, he showed the reasoning.
[00:24:34] CR: It started to try and solve this problem, and it spent like a minute going down one path, and then it went, no, no, this is the wrong approach, so it went back to the beginning and started again. So, uh, now, they haven’t explained how this works, how they’ve done it. I think the Q-Star methodology came out of a university originally, and I haven’t bothered to drill into how it works, but
[00:24:57] SS: was just about to ask you to take me through the QSTAR methodology. You can’t drop that on me and not take me through it, Cameron.
[00:25:03] CR: Nah, man, nah. I mean, I haven’t had the time to drill down on that yet. I do plan on trying to get my head around it, though. But it’s a huge step forward in the ability of LLMs, and people like Sam Altman are now saying they have reached the conclusion that there is no upper limit to what we can do with LLMs.
[00:25:25] CR: That LLMs are now capable of way more than just, uh, fancy autocomplete, as people were saying two years ago. It’s,
[00:25:35] SS: That is just a bugbear of mine. Hinton and some of the others say that. It’s like,
[00:25:41] CR: oh, Hinton
[00:25:42] SS: what all intelligence is: an advanced level of autocomplete. You work out what comes next. It’s just so mind-numbingly frustrating that people do not understand. Sorry.
[00:25:54] CR: Cory Doctorow, not Hinton. Geoffrey Hinton’s,
[00:25:56] SS: Oh, sorry. Not Hinton.
[00:25:57] SS: Doctorow. My bad. My bad. Doctorow. Stand corrected. Which I love Doctorow too, by the way, which shows you can still love someone and disagree with some of their bits, right?
[00:26:06] CR: should be able to. I’m exactly in the same camp. I love Ayn Rand. I love Ayn Rand’s novels. I’ve read her letters. I love Ayn Rand. I think she was completely misled and wrong on lots of things. But then, you know, I appreciate a lot of the stuff that she
[00:26:20] SS: Yeah, exactly.
[00:26:23] CR: Um, okay, so let me read a little bit from OpenAI’s launch document, uh, for O1.
[00:26:30] CR: How it works. We train these models to spend more time thinking through problems before they respond much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes. In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology.
[00:26:52] CR: Note PhD students, not PhD graduates, PhD students. Big difference between somebody who’s had a PhD for 20 years and somebody who’s trying to get a PhD, right?
[00:27:09] SS: Yep, okay.
[00:27:11] CR: We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad, GPT-4o correctly solved only 13 percent of problems, while the reasoning model scored 83 percent. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions.
[00:27:34] CR: Now, um, you know, there’s been some benchmarking of this done by other people, and they agree or disagree to varying degrees, but everyone seems to be blown away by what O1 can do. My personal experience with it from a coding perspective is some days I’m super impressed, other days I’m not so impressed, which was also true of, say, Claude
[00:28:02] CR: 3.5 Sonnet, which I was using extensively before O1 came out. But if you go into Reddit and look at the OpenAI subreddits, et cetera, people tend to be fairly impressed with what O1 is able to do. And again, it’s only sort of a week old, but, uh, people are pretty impressed. September 12th, it came out.
[00:28:26] CR: So, uh, yeah, two weeks, maybe. Mm
[00:28:29] SS: Yeah, I mean, with the coding thing, just so often now I’ll just get it to create a little script for what I need. Even this morning, I’ve been trying to get my Substack back and start publishing again on a weekly basis. And I’ve got some old MailChimp emails, uh, from people who have been subscribing there and not to my Substack.
[00:28:49] SS: And I couldn’t get things uploaded, so I just grabbed the data and said, write me a little CSV, give me some scripts, extract these emails from this, double-check some things that you would have to muck around with a hundred times within the Substack or the MailChimp format to find its formatting.
[00:29:05] SS: I just grabbed the raw data and then asked it to format it how I needed. And I also asked it, what is the best format for Substack? And it said, this is the best format. So can you make it? Yes. And it just starts writing the Python script and then executes against it. Just doing that scripting in there for you really is a magical feeling.
[00:29:23] SS: Cause it’s not like you have to download it onto your client; just getting it to do it in Python, in the interface, is really kind of special. It reminds me of the early days of the first web browsers, where it does it in the browser: it writes the script, executes against it, and gives you the thing that you need.
[00:29:39] SS: I don’t need it downloaded on my client. I don’t want to have that script sitting there and then have to put data in it and get it to execute against it. It’s a really empowering feeling. And that’s like something I think we should be getting people to use in companies. And one of my clients, I’m like, well, take information from your different data, like dump it in there and, and just get it to execute against and find the links that you need.
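The extract-and-reformat job Steve describes might look something like this. This is a guess at the shape of the script, not what ChatGPT actually wrote for him; the regex, the filenames, the sample data, and the single-column CSV layout are all assumptions:

```python
# Pull email addresses out of a raw MailChimp export and write a clean
# single-column CSV for Substack's importer. Filenames are hypothetical.
import csv
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(raw_text: str) -> list[str]:
    """Return unique, lowercased email addresses in order of first appearance."""
    seen = {}
    for match in EMAIL_RE.findall(raw_text):
        seen.setdefault(match.lower(), None)
    return list(seen)

def write_csv(emails: list[str], path: str) -> None:
    """Write a one-column CSV with an 'email' header row."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email"])
        writer.writerows([e] for e in emails)

raw = "Jane <jane@example.com>, bob@example.org, jane@example.com again"
write_csv(extract_emails(raw), "subscribers.csv")
```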
[00:30:00] SS: It’s really good at that. And the answer that I always get back is, oh, we’re too worried about the, uh, the safety and security concerns. I’m like, listen, no one’s really worried about your company. No one’s going to go in there and steal your little piece of sales information. Seriously, dudes, just be that company that waits 10 years.
[00:30:17] SS: That’ll work out. That always works out. What a great
[00:30:19] CR: that always works out. That’s never a bad idea. Yeah, I’ve used it. I’ve had a lot of, um, server issues. Um, increasingly over the last six to 12 months, my server that I host all my shows on has been struggling, and it got to a point where it was going down two or three times a day, for five to 10 minutes at a time.
[00:30:37] CR: My IT guy, the company that hosts the server that’s supposed to do support has been completely useless. Um, not able to give me any sort of guidance or fix anything. It’s in the too hard basket for them. And I tried to use GPT in the past and, and didn’t get very far. I jumped into O1 and I said, here’s the situation.
[00:30:59] CR: It said, download your log and give me a look at your log. So I downloaded the log file, which, if you’ve ever seen a log file, is full of unintelligible bullshit. Uploaded that to it, and it goes, okay, I can see you’re getting hit by bots, and there’s certain processes running cron jobs that are taking up a lot of bandwidth, a lot of, uh, CPU time.
[00:31:25] CR: It got me to run some reports. It read the reports. It gave me the fixes and then told me what to do. All right, install this plugin and set it up like this. Go into Cloudflare, set up these rate-limiting options. Do this. I spent a day following its instructions and so far, so good. Fixed all of my, um, server performance issues.
[00:31:45] CR: So it was like having a. First tier, um, sysadmin to walk me through the, it couldn’t do it for me. And that’ll be the next stage is when it’s integrated into our OS’s. Yeah. But it could tell me what I needed to do, guide me through it, answer my questions, hold my hand in a friendly way, and enable me to solve problems that I would, no way could I solve them myself.
[00:32:14] CR: It’s like, again, it’s, I said this many times. It’s like having a superpower. Yeah.
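For listeners curious what that kind of triage looks like in practice, here’s a minimal sketch. The log lines, the log format (a common Apache/Nginx "combined" layout), and the bot signatures are illustrative assumptions, not what o1 actually produced:

```python
from collections import Counter

# Hypothetical sample access-log lines standing in for the real file;
# in practice you'd read them from the server's access log.
log_lines = [
    '1.2.3.4 - - [27/Sep/2024:10:00:01 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '5.6.7.8 - - [27/Sep/2024:10:00:02 +0000] "GET /wp-login.php HTTP/1.1" 200 128 "-" "python-requests/2.31"',
    '5.6.7.8 - - [27/Sep/2024:10:00:03 +0000] "GET /xmlrpc.php HTTP/1.1" 200 128 "-" "python-requests/2.31"',
]

# Count requests per client IP; an abusive bot usually dominates this list.
hits = Counter(line.split()[0] for line in log_lines)
print(hits.most_common(1))  # the busiest client

# Flag user agents that look like scripts rather than browsers.
bots = [line for line in log_lines if "python-requests" in line or "curl" in line]
print(len(bots), "bot-like requests")
```

In a real session you’d run this over the actual log file and then apply the suggested fixes, such as Cloudflare rate limiting, against whichever clients dominate the counts.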
[00:32:18] SS: Yeah. The way that it’s sewing together the pieces of the puzzle and, as I said earlier, building bridges between those islands of what it can do. Uh, the fact that you can ask it, how do I solve this problem? And then it comes up with the idea and then it helps you execute against the idea.
[00:32:35] SS: That’s super powerful, but it makes me feel that, you know. And I’ve been reading a lot about the emergence of agents, and I’ve been using AgentGPT a little bit, and I still demo it on stage in keynotes to show how it’s objective driven rather than task driven. I mean, I think that’s gonna be the next phase at some point in 2025.
[00:32:55] SS: ’cause so many people are talking about AI agents, and it seems like it has all the pieces of the puzzle to make agents, where something actually manages a situation and a problem for you. Just, just for the listeners to understand: instead of doing discrete tasks, it accumulates all of the tasks to get from point A to point Z, let’s say.
[00:33:16] SS: And it’ll do all of those tasks that achieve the objective. And that move towards agency-based AI is going to be, I think, the really big part. And that’ll be big for a couple of reasons. It enables individuals to have an agent infiltrate their OS and have a look at all the various data points and find those.
[00:33:34] SS: You won’t even necessarily have to upload the file like you’ve just described. You’ll say, hey, are you in there? Have a look around. And then it finds it. Think about how many calculations it can do per second, you know, trillions of calculations. It’ll be able to find the pieces of the puzzle that it needs and then execute against it.
[00:33:53] SS: And if you do that in a corporate sense, with the administrative burden of modern life, you’d have to feel like 90 percent of that is just going to go away. Like, do my tax: go through everything I’ve spent, understand where it was spent, the places, what my benefits are, where I go, what can I claim back?
[00:34:12] SS: I mean, that’s a really complex task that you feel that agents would be able to do.
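The objective-driven idea Steve describes, accumulating tasks from point A to point Z until the objective is met, can be sketched as a toy loop. The planner, the task names, and every function below are made up for illustration; this is not any real agent framework’s API:

```python
# A toy objective-driven agent loop: instead of executing one discrete
# task, it keeps generating and executing tasks until none remain.
# plan_next_task and execute are stand-in stubs, not a real framework.

def plan_next_task(objective, done):
    # Stub planner: in a real agent, an LLM proposes the next step.
    steps = ["gather receipts", "categorise spending",
             "compute deductions", "file return"]
    remaining = [s for s in steps if s not in done]
    return remaining[0] if remaining else None

def execute(task):
    # Stub executor: a real agent would call tools, APIs, or files here.
    return f"completed: {task}"

def run_agent(objective):
    done, results = [], []
    # Loop until the planner decides the objective is satisfied.
    while (task := plan_next_task(objective, done)) is not None:
        results.append(execute(task))
        done.append(task)
    return results

print(run_agent("do my tax"))  # runs every step from A to Z
```

A real agent would have a model propose each next step and real tools execute it; the control flow, looping until the planner has nothing left, is what distinguishes an objective-driven agent from a single discrete task.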
[00:34:17] CR: Yeah, particularly when they have access to your bank accounts, your emails, your calendar. They know all about, you know, your Xero, when it’s integrated into Xero, you know, all that kind of stuff. So the other thing that happened, Steve, this week is that OpenAI released what they call Advanced Voice Mode for nearly all Plus subscribers. This has been coming for months.
[00:34:47] CR: People have been complaining endlessly that they hadn’t launched it as fast as they suggested they were going to months and months ago. I think they’re still yet to roll it out to the EU for some unknown reason. But, uh, there’s a whole bunch of new voices. Now the voices have supposedly got a lot more capability of expression and personality in them.
[00:35:11] CR: People have been doing all sorts of demos of getting them to sing or to whisper or to do accents, but I just tried and I can’t get my, uh, ChatGPT to do any of those things successfully. So I don’t know if I don’t have that new functionality, if it hasn’t rolled out into this geography yet and we’re just getting the stock standard voices.
[00:35:34] CR: So, Cam in the editing booth here, folks. Um, I did play around with ChatGPT on my desktop when we were recording this and tried to get it to do the advanced voice mode stuff, and there was really no difference between it and the normal voice mode, except there were new voices. However, after we finished the show, I jumped onto my iPhone version and, oh yeah, it worked. It was far more expressive and I could interrupt it in the middle of it talking. Which is actually kind of useful.
[00:36:10] CR: I don’t know if you’ve ever been using the voice mode and then you pause to think about something and it assumes you’re finished and it starts replying. This new version does the same thing, but you can just stop it and say, hold on, I wasn’t finished and it will pause and you can finish your conversation. The answers are far quicker than they were before.
[00:36:30] CR: It used to be, you would talk to it, then you’d wait five seconds and then it would reply. Now it’s almost instantaneous. And I played around with the usual stuff. I asked it to whisper something to me, and it did. Um, my son Taylor said he tried to get it to do an evil laugh and it did. I haven’t played around with that. And look, a lot of that stuff is kind of gimmicky right now, but over time, I’m sure we will find it just more human-like to have an AI that has a full range of emotion and expression. So, if you haven’t played around with advanced voice mode on your phone or your iPad yet, uh, definitely give it a go. I’d stay away from the desktop version for now. Stick with the, um, uh, the, the phone
[00:37:20] CR: Uh, versions of it. Uh, have you had a play around with any of the advanced voice stuff?
[00:37:27] SS: I’ve had a play around with the advanced voice stuff, uh, not any of the singing or the hacks or tricks. I didn’t notice a huge difference in the way it interacts, its pauses and its intonation. Um, because it was launched, I played with it more than I have previously. Normally I would only really use it, uh, talk to it, when I’m on my mobile, but when I’m on my desktop, I would always type.
[00:37:53] SS: But the one thing that I did notice is that when you get conversational with an AI, it creates almost like a collective thinking process. You know how you and I, when we chat on here, we’ll, we’ll propose different ideas that put us on different trajectories. When you’re typing, I think it slows down what you’re doing.
[00:38:13] SS: You become more specific and put more pieces into the puzzle that you want to be considered. But when you’re speaking, it has this live thought process where there’s an interchange. And I actually think it helps you go on different paths just by having that.
[00:38:34] SS: And also the pauses of the AI, to me. I do wonder if it is actually thinking harder and looking at more of the data to give its answer, or whether it’s just been designed that way to give it a human feel. I don’t know the answer to that. Uh, but it does, I feel like, put you on a different trajectory that you otherwise might not be on, because of the way that thoughts happen when they’re expressed verbally versus written.
[00:38:58] CR: Yeah, I don’t know, but I think you’re right. Even though I’ve had the basic voice for quite a while now, I tend to use typing more than I use voice. I use voice in the car. Sometimes I use it in the kitchen if I’m cooking and I want to ask it something, but I will default to using text more than voice for some reason.
[00:39:23] CR: Now, this advanced voice stuff, when they did the demo of it, uh, whenever it was, months and months ago, it was supposed to be much smoother in the conversation, not as many pauses. You could interrupt it in the middle of a conversation just by talking. Uh, it was multimodal, so it could see what was going on in the front and back cameras of your phone.
[00:39:51] CR: And you could integrate those into the conversation. I haven’t noticed any of that in this new rollout. Really, all I’ve got is new voices. There’s, uh, nothing really that different that I can tell about how it’s performing. So, I don’t know if they’re doing it in increments, releasing the voices first and then additional technology in different geographies as they go.
[00:40:20] CR: Maybe it’s just a bandwidth thing, they’re rolling it out slowly and incrementally, I don’t know.
[00:40:26] SS: Yeah, I didn’t notice a huge amount, but it does seem, and I’ve always said that language is our killer app, we’ve talked about that a lot. And also, the ability to speak is just so quick, and I think it’s the required element. Again, we talked about screens and addiction earlier on, but that idea that you can talk to something and have an intelligence, uh, for me is really interesting.
[00:40:53] SS: Uh, and I think that this speaking thing could potentially be an interesting play for Apple. And the reason I think it could be interesting for Apple is that the AirPods are such an extraordinary product. And the revenue, I can’t remember what the revenue was, but just the revenue of Apple AirPods alone would, I think, make it a top 20 company in the world.
[00:41:20] SS: Just that. You talk about extraordinary product launches. I can see you’re wearing yours. I wear mine all the time. Imagine if you had an intelligence in your ear. You don’t have to touch anything, talk to anything, or pull your phone out. Uh, there’s even potential for little cameras to be on the side of the AirPod, sort of, you know, looking forward, um, pointing at my ears with my hands for everyone who’s just listening.
[00:41:44] SS: It would be an incredible form of AI if Apple could get that right, uh, with the voice, and integrate that into the AirPods. That would be an absolute product necessity. It would certainly delineate it and differentiate it from all the other white AirPod copycats out on the market for, you know, 20 bucks and a Mars bar.
[00:42:07] SS: Feels like that could, that could be extraordinary.
[00:42:10] CR: yeah. Speaking of Apple, my son Taylor got the new iPhone 16 Pro Max the other day. And, uh, after he’d had it for a few days and he’d travelled with it, he’d gone down to Melbourne to meet Katy Perry. And he came back, I picked him up from the airport. I said, uh, what do you think of your phone? He goes, don’t get it, man.
[00:42:32] CR: It’s like, there’s nothing. His battery life was a bit better. He said, instead of it being at 30%, it’s at 50%. But, uh, apart from that, he’s not impressed with the level. But you know, he got it because he knows Apple Intelligence will be rolling out and he wants to have that when they finally start to deploy it.
[00:42:51] CR: But right now, he said, he’s just gone from a 13, and he goes, yeah, it’s barely noticeable.
[00:42:58] SS: Oh yeah. Yeah. They’re definitely in their bumper bar era. Uh, 100%. Well, sales of AirPods were $14.5 billion in revenue in 2022, according to the SEC. I mean, that’s,
[00:43:12] CR: and I live in our AirPods. Our AirPods are working most of the day, you know.
[00:43:19] SS: Yeah, so I think that that’s a really interesting potential overlap. Uh, whether or not Apple, and I think that they should, because they’ve got every resource in the world, they should be doing whatever they can to make an OpenAI-like, uh, AI engine. And then if they plug that into their phones and, I think, even their AirPods, that’s kind of where we would go, and that would enable us to have everything that we need, you know, without the screen necessarily, I think.
[00:43:50] CR: Yeah. Well, I’m sure those sorts of things will happen. Meanwhile, more OpenAI news. Mira Murati, the Chief Technology Officer of OpenAI, who’s been pretty much the face of them from a technological front since Ilya left, well, actually, for the last year since the attempted coup against Sam, just announced yesterday that she’s leaving the company as well.
[00:44:19] CR: So, of the, uh, sort of original team that was there a year ago, um, Sam, Greg, Ilya, and Mira, Now there’s only Sam left and he’s not a technology guy. He’s a marketing, venture capital CEO guy, which I’m not
[00:44:40] SS: I didn’t know that. I thought he was a developer as well. Like
[00:44:44] CR: No, Sam is not a developer.
[00:44:47] SS: like Jobsy, ask the average man on the street and they, they think, they think that Elon Musk has got the, the, the Sigweld out, and he’s out just welding up a couple of rockets, and Steve Jobs had the soldering iron just putting a couple of microchips in the back of a smartphone, that’s what people think, ask the person on the street, he’s a genius, he invented everything,
[00:45:06] CR: he is a coder though. He is an engineer and a coder. He’s not
[00:45:10] SS: uh, Elon is.
[00:45:11] CR: Altman. Yeah.
[00:45:12] SS: yeah, Steve Jobs wasn’t, but I just always loved that Bill Burr when he said Steve Jobs genius, he told other people what to invent.
[00:45:21] CR: That is the genius though. Yeah. That guy, you want to, have a look at what Apple did in the 10 years Steve wasn’t running the
[00:45:31] SS: know. No, he definitely did great
[00:45:33] CR: it did before and after.
[00:45:35] SS: Yeah,
[00:45:36] CR: You can’t, you can’t
[00:45:37] SS: no, no. They’re cool. Well, I’m not dissing him. I’m just trying to, I’m just, I didn’t know that Sam was a marketing business guy. I actually thought he was a developer, but anyway, that’s okay. You learn something every day, don’t you?
[00:45:49] SS: Can
[00:45:50] CR: Sam’s just an entrepreneur. And, but look, again, I’m not taking anything away from that. You know, the suggestion seems to be that, I mean, Ilya obviously left because of the political situation, but he’s since announced his own startup. Probably Mira is going to as well. Like, she must be getting, you know, unlimited amounts of money thrown at her to go and do something.
[00:46:14] CR: So it’s going to be interesting. There is this Cambrian explosion already happening in the AI space. Now we’re going to have the startups, and we already have that with Anthropic. Anthropic, the company that put out Claude, is run by ex-OpenAI people. And the ex-OpenAI people are all ex-Google DeepMind people in a lot of cases.
[00:46:35] CR: So Google spawned OpenAI, OpenAI spawned Anthropic, and now you’ll have people leaving from there to go and do stuff as there’s just unlimited money.
[00:46:43] SS: was it, was it the Fairchild 8? Was that what they were called?
[00:46:47] CR: Yeah, I think so. Yeah. That started the entire Silicon Valley
[00:46:52] SS: so Silicon Valley emerged out of Fairchild, was it Fairchild Semiconductor? Which was one of the first microchip plants, which just so happened to be in a particular area near some orange groves, uh, a valley, uh, in California.
[00:47:06] SS: Uh, which then spawned, uh, Intel, and a whole lot of other technology firms, and Hewlett Packard, and all of these guys emerged out of that.
[00:47:17] CR: the, the Traitorous Eight were the guys that left Shockley Semiconductor in 1957 and founded Fairchild Semiconductor,
[00:47:30] SS: Right.
[00:47:31] CR: right?
[00:47:32] SS: then there were those that left Fairchild and founded Intel. And it is really interesting. And this hearkens back to one of the things that I say to people when they ask, how do you get such a good view of the future? And I know that you do the same, Cam, you more than anyone: be a student of history. While the technology changes, the patterns don’t.
[00:47:54] SS: Because humans haven’t had a software upgrade in 200, 000 years. Although, it looks like we’re about to have one with some BMIs coming in real soon. So we’re about to have our first upgrade when we merge with the machines, according to Kurzweil. So
[00:48:07] CR: Mm.
[00:48:08] SS: Uh, yeah, but it is interesting that we’re seeing this same pattern again with the AI Cambrian explosion.
[00:48:13] SS: And we saw that, I think, we definitely saw that with the semiconductor explosion. And we also saw that with, um, a lot of the big tech companies as well, where they all spawned out in the early 2000s and a lot of startups have since come from there.
[00:48:29] CR: Do you remember FLAIR, by the way? F-L-A-I-R? FLAIR
[00:48:34] SS: No, I don’t. You better tell me.
[00:48:36] CR: was the Fairchild Laboratory for Artificial Intelligence Research started in 1980 at Fairchild.
[00:48:46] SS: Wow. That’s
[00:48:48] CR: And you could probably draw a line from Fairchild to OpenAI, right?
[00:48:54] CR: so, uh, anyway, expect to see more announcements in the not too distant future from Mira Murati about what she’s doing. Um, of course, the other big news that’s just happened in the last 24 hours as well, coincidentally, when she announced she was leaving, is that OpenAI, uh, according to rumours, are removing their non-profit control and giving Sam Altman equity.
[00:49:20] CR: Don’t know if there’s any connection between those two stories, but what we all knew was gonna happen at some point.
[00:49:26] SS: feel like there could be a connection there. One of the things that clearly has happened: OpenAI obviously started, uh, it was meant to be open artificial intelligence for all, non-profit, and then it just became incredibly profit centric. Uh, when there’s a lot of money involved, it’s very, very hard for something to remain non-profit.
[00:49:46] SS: It’s a miracle that Wikipedia has remained as it is, and it’s still, I think, a pretty important, uh, website. It’s the only one that I can really think of. Uh, but that non-profit thing was inevitable. When you have that much venture money chasing it, remember, uh, venture capitalists only ever get married with divorce in mind, and, you know, the prenup is big and the numbers are big when they make that split.
[00:50:09] SS: And I’ll tell you what, a non-profit just, just does not fit that at all.
[00:50:15] CR: Yeah. And you know, you’re right. You really got to give Jimmy Wales credit for what he’s done. The establishment of Wikipedia and how he’s maintained that. I know a lot of people, including Jason Calacanis, have been telling him for years that he should take advertising. He should take money and should do this and that.
[00:50:33] CR: I think Calacanis tried to set up his own competitor to Wikipedia, uh, 10 years ago or so, which didn’t get very far. But, uh, Jimmy Wales set up a good board of trustees that have maintained the
[00:50:48] SS: rare. I can’t think of any kind of commercial media format that is as powerful. And I think it’s still a top 10 website in the world. And there’s no doubt that if it was commercial, it would have been poisoned. No doubt.
[00:51:05] CR: And you got to wonder how long it has left now that, uh, you know, AI is where people are going to go to get their questions answered. Um,
[00:51:14] SS: good point. I find now, in hindsight, now that you’ve mentioned it, that a lot of things I would go to Wikipedia for I now go to ChatGPT for, because that’s actually what you’re after, a truncated answer to a particular topic, which it can give you, but you can also guide which parts of that you want, how long you want it to be, and the format.
[00:51:33] SS: Whereas Wikipedia is in a fixed format, let’s say. Um, I guess you could take Wikipedia pages and summarize them. Uh, but actually, I’d be interested to see if Wikipedia’s traffic has declined since then.
[00:51:50] CR: Yeah. But you know, you could also, uh, try Perplexity if you haven’t. Are you using Perplexity as
[00:51:57] SS: I have used it. I have used it, but not
[00:52:01] CR: I go to Perplexity if it’s something that I would Google, because its answers are much more truncated than OpenAI’s ChatGPT’s, but it, um, provides you with sources and links as standard; that’s how it’s been designed.
[00:52:18] CR: So you can ask ChatGPT for sources, but it won’t always provide them straight up. Perplexity provides them straight up every time. So you can go and check the validity of the information. Uh, Steve, moving right along. Do you want to talk about Instagram Kids?
[00:52:35] SS: Yeah, look, we opened up on the idea that around the world, America and Australia are moving pretty rapidly to get some form of legislation. I just think it’s kind of disgraceful that Instagram Kids, by miracle, gets launched on the eve of legislation. And what it says to me is that we have a very nefarious corporation that understands the dangers of exposing kids and teenagers to a product.
[00:53:09] SS: And at the 11th hour, they’ve, by miracle, started to put in some of the features that we’ve been asking for for a long time. It feels like a corporation getting on the front foot to further delay and obfuscate potential legislation when they know that they have a potentially very damaging product. Um, you know, some of the features that go into Instagram Kids:
[00:53:29] SS: Under-18s’ accounts set to private by default. Wow, what a miracle. And under-16s only able to change the setting with parental permission. Teens only able to receive messages from people they’re already connected with, as opposed to all the sexual predators and, uh, scammers they’re getting now.
[00:53:49] SS: Content restrictions, blocking of offensive words and messages, set to the maximum possible. Notifications turned off between 10 PM and 7 AM, because, yeah, we know it’s bad for kids when they should be sleeping. A reminder to leave Instagram after 60 minutes of use each day. Ironic, we’ve, uh, given my boy a 60-minute screen time limit across all screens while he’s trying to get better.
[00:54:15] SS: Uh, I mean, for me, this really tells us that we’ve had a flaccid set of legislators who really should have acted a lot quicker, and a very deceptive set of corporations, which are driven by what corporations are always driven by, which is to maximize profit and to obfuscate the difference between something that’s possible and profitable.
[00:54:40] SS: And if anything, it should make us far more suspicious that it is even worse than what we think, based on the fact that they’ve flicked a switch and put these forward. And anyone who went out there and clapped Meta Corporation for doing this really needs to have a good hard look at themselves.
[00:54:58] CR: But you can trust Zuck now, Steve. He’s got his, he’s grown his hair out
[00:55:03] SS: Well, listen, I think the fact that he’s got the chain, the gold chain, and the curly hair, and, you know, he’s a surfer, apparently he’s a cool surfer. And look, we just
[00:55:16] CR: between a surfer and a rapper.
[00:55:18] SS: And a rapper, and then we just put Jensen
[00:55:20] CR: And there’s nobody we trust more right now than hip hop stars. I think if
[00:55:27] SS: They’re right up
[00:55:28] CR: us anything,
[00:55:29] SS: can we just, can we just divert to a little bit of P. Diddy? Look, P. Diddy wasn’t really on my radar, but it sounds like he’s really done some iniquitous work there. I mean, what’s going on here?
[00:55:44] CR: uh, yeah, let’s not go down that rabbit hole. Anywho, yeah, Instagram Kids. Like you’re right, these tech companies, um, A, can’t be trusted, and B, are going to try and find ways to manipulate governments and, um, to, you know, get around any legislative changes that are made. Um, that’s what we expect corporations to do, right?
[00:56:10] CR: Is to try and manipulate things to their advantage, to enable them to keep generating profits. And, uh, yeah, but again, I think the primary responsibility in this area is, and should be, on parents to be the first line of defense for their kids. Uh, but then if parents aren’t doing the job, then, uh, I do think governments have a responsibility to protect, uh, the people
[00:56:42] SS: look, you’re right about the parents. You’re right about that. Parents, what is that social phenomenon where if someone’s being injured, people all stand around and no one does anything about it? It’s got, there’s a certain psychological phenomenon that has a name for it. I can’t remember what that is.
[00:56:58] SS: You can think of it while I explain why I think this is really important that legislators
[00:57:02] CR: Samaritan, you’re talking about the Samaritan legislation, Good Samaritan?
[00:57:07] SS: No, I’m not talking about good Samaritan legislation. I’m talking about the idea that when someone’s getting injured or something and no one acts and everyone just stands around and watches it, even though they know that they should be doing something, there’s
[00:57:19] CR: That was it.
[00:57:20] SS: for it,
[00:57:21] CR: That’s what the Seinfeld episode was based on, the Samaritan laws that were introduced. Yeah, if you stand by and watch someone get hurt, and you’re not a Good Samaritan, you don’t step in and help them, it’s based
[00:57:33] SS: But there’s a psychological phenomenon where there’s a number, a large number of people not helping. Uh, you can go to ChatGPT and ask that while I explain why this is important. So, so,
[00:57:45] CR: can ask ChatGPT. Hold on, let’s ask ChatGPT live.
[00:57:50] CR: What’s the phenomenon called where a lot of people will stand around and watch somebody getting hurt and do nothing about it? Um, the phenomenon you’re describing is called the bystander effect. It happens when individuals are less likely to help someone in need when there are others around.
[00:58:07] SS: I thought it had a far more fancy name like that, like the Dunning Kruger or something, you know, where two psychologists named it after themselves, which if I ever invent anything, I’m going to call it the Sammartino. Although we’ve already got one. We’ve already got the Sammartino
[00:58:19] CR: Yeah, I coined that on your behalf. Yeah.
[00:58:22] SS: So look, I think what happens is you get this, um, bystander effect, where it’s a real challenge socially. And Scott Galloway has said some really important things on this. He, I think, is really courageous. Again, I don’t agree with everything he says, but on this one he said there’s a challenge, because if anyone is on these social forums, the deep desire for connection, for other teenagers to be on it and not be the one left out, has this collective bias where it becomes increasingly difficult for parents to isolate their children, because some are on it.
[00:58:55] SS: If you make it a law that none of them are on it, it makes it easier for the parents to implement that, because they can use the law as a way to show that this isn’t just us. This is what society agrees upon. These are the values of society, and there are reasons behind this, as it is with alcohol, as it is with cars.
[00:59:15] SS: And I think all of the research shows that a large majority of parents wish their children weren’t on social media up to the age of 18. And this is where the law becomes our proxy to enable us to collectively implement something which is difficult because if you even get 10 or 20 percent of the cool kids or whoever on it, others feel isolated and left off and you get this social paradigm where they all go down into it.
[00:59:38] SS: So I think that this is where legislation is important, because it’s not the government handing down a bunch of, uh, laws which reduce free speech or our ability to connect. This is what the government should do, which is implement against the collective value systems that we have so that we can have a more civilized society.
[00:59:57] SS: Rather than them deciding that they want that for their benefit. We collectively, and the research shows it, don’t want our kids exposed to this. The help of a government implementing laws that we would want would make it easier for us to develop a society that benefits our kids by reducing their exposure to a negative tech.
[01:00:16] SS: That’s my thought.
[01:00:18] CR: Yeah, I agree. The challenge though, and um, I talked about this in the Psychopath book, is we don’t have, let’s take Australia as an example, we don’t have a commonly agreed upon set of values. We don’t have a ministry of ethics or a ministry of values.
[01:00:39] SS: No, you never do. And it’s always a moving, it’s always a moving beast as well. Values
[01:00:44] CR: how do we know what the common values are of a society at any given point in time?
[01:00:49] SS: Well, this is what research is for, right? The way that we know is that we do, we conduct robust research where we can assign an entire population to this. And those research methods have been long established for over a hundred years where we can understand what the large majority feel and we need to govern to the majority.
[01:01:11] SS: I mean, that, that’s not particularly hard. Now, values as a term is difficult because you and I might agree on some things and not others, but on a particular singular issue that we can isolate like this, exposure to social media, it’s actually not that difficult. The only difficult thing I think in this legislation is what is social media?
[01:01:32] SS: The definition of that, I think, is probably the most difficult thing. And I would take YouTube as the example. Is YouTube social media? I mean, that’s an interesting question. Is social media something where you can publish? Maybe if you can publish, then it is social media.
[01:01:50] SS: I mean, that, that’s actually the hardest part of this equation. It’s determining what it is.
[01:01:55] CR: You know, I would love to take it up a level and say that we need a Department of Ethics, and then it should be responsible for looking at Murdoch media. Uh, you know, I mean, we already have some bodies that are supposedly there to restrain media companies from stirring up the shit or doing falsehoods, but, uh, it’s not very effective.
[01:02:20] CR: I mean, there is, again, there’s a line between censorship and holding media organizations, be they traditional one-to-many media or social media, to certain values and standards that we as a society believe are important to uphold. Not that it’s the government just implementing things because the people in government think that this is good or this is not good.
[01:02:47] CR: That we as a society, the majority of a society, believe these standards matter in order to live together. There are always going to be people bitching on the sidelines who want ultimate freedom of speech and ultimate freedom of behaviour. But as we were saying during the COVID times, I don’t have the freedom to walk up to your table at a restaurant, drop my pants and take a dump on your meal.
[01:03:10] CR: So there are restrictions. If we’re all going to get along and live together in a society, there have to be certain standards that we all agree to. And it’s the same when it comes to freedom of speech as it does to freedom of behaviour. You can’t do 150 kilometres an hour down past a school, I mean, for good reason.
[01:03:33] CR: And I think the same applies to what we allow publishing and what we should allow with social media. So anyway. Instagram for kids. What’s next, Steve? I wanted to talk about one other, I know it’s all open AI today, but Sam Altman published a piece on his blog a couple of days ago. It’s called The Intelligence Age.
[01:03:54] CR: I’m just going to quickly flick through it for people and then you and I can discuss. He says, In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents. This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time.
[01:04:12] CR: We can already accomplish things now that our predecessors would have believed to be impossible. I like to tell the story that my, my mother grew up in a house with no electricity until I think she was four years old when they got electricity. Now she has AI on her mobile phone. Like it’s an incredible change in 75 years.
[01:04:34] CR: This phenomenon, oh no, we are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us. In an important sense, society itself is a form of advanced intelligence. Our grandparents and the generations that came before them built and achieved great things.
[01:04:55] CR: They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out. on our own. The story of progress will continue and our children will be able to do things we can’t.
[01:05:16] CR: Um, and then later on he says, how did we get to the doorstep of the next leap in prosperity? In three words, deep learning worked. And this, I think, is the crux. In 15 words, deep learning worked, got predictably better with scale, and we dedicated increasing resources to it. That’s really it. Humanity discovered an algorithm that could really, truly learn any distribution of data, or, really, the underlying rules that produce any distribution of data, to a shocking degree of precision.
[01:05:53] CR: The more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is. There are lots of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge.
[01:06:14] CR: Deep learning works and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world. You know, it reminds me of conversations I was having with my friend Dennis Bastas.
[01:06:37] CR: 20, 25 years ago, Dennis is now billionaire owner of one of Australia’s largest pharmaceutical companies. But back then he was the CEO of a startup and we used to hang out a lot together in Melbourne. We used to talk about consciousness and, and, and, uh, artificial intelligence and how we thought it would play out.
[01:07:00] CR: And we agreed back then that consciousness and intelligence were probably just very, very high levels of computation, in that any sufficiently advanced system of computation would exhibit something that looked like consciousness and/or intelligence, and it would be increasingly difficult to discern between what is really conscious and what is not, what is really intelligent and what is not.
[01:07:30] CR: Sam is basically saying that deep learning works. Throw enough compute and enough data at a system and it becomes intelligent.
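[An aside for readers: “got predictably better with scale” refers to empirical scaling laws, where loss falls roughly as a power law in model size. The constants in this sketch are invented for illustration, not fitted to any real model.]

```python
# Hypothetical power-law scaling curve: loss L(N) = a * N**(-alpha).
# Both a and alpha are made-up constants for illustration only.
def loss(n_params, a=10.0, alpha=0.076):
    return a * n_params ** (-alpha)

for n in (10**6, 10**9, 10**12):
    print(f"{n:>16,} params -> loss {loss(n):.3f}")

# Because the curve is smooth, you can forecast a bigger model's loss
# before spending the compute to train it -- the "predictable" part.
```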
[01:07:40] SS: It’s funny. When you describe it that way, this is where my mind went. Not to say that this is right, but it’s almost like consciousness is the space between thoughts. Just a general awareness of being, like you have a bunch of thoughts and then you’re there, I’m here, I’m not thinking of anything in particular, but I have this general awareness. And it’s almost a bit ironic that we were talking about some of the recent evolutions in the large language models, obviously OpenAI and ChatGPT, with chains of thought and deep thinking in the spaces between. It’s like it’s progressing towards that.
[01:08:20] SS: The way that you described it then kind of reminds you of the human neural network and how it has this idea of consciousness. And I often think about this when I’m gardening, Cameron. I’ll dig up a worm and I’m like, does this worm know that it’s here? Where does consciousness kick in, in terms of creatures, whether it’s plants or grass or worms or bees or birds or other mammals?
[01:08:46] SS: When do they, you know, what level does it kick in?
[01:08:52] SS: Because we already have computation, which far exceeds many organic species in their computations per second and the size of their neural networks in their brains versus what we have now in large language models.
[01:09:04] CR: Yeah. I mean, my thinking on this back then was led by Marvin Minsky’s book, The Society of Mind, which I read at some point in the 90s. Marvin Minsky, for people who don’t know, was one of the guys at the beginning of artificial intelligence research and cognitive science. He was at MIT, co-founded MIT’s AI Laboratory.
[01:09:36] CR: You know, he was one of Kurzweil’s teachers and mentors. K. Eric Drexler, who wrote the first book on nanotechnology, came out of Minsky’s lab. Danny Hillis, who set up the Clock of the Long Now, or whatever it’s called. A lot of the greatest current thinkers around AI and cognitive science came out of Marvin Minsky’s lab.
[01:10:05] CR: And in his book, Society of Mind, he basically just talked about how he believed that the concept of self was just a society, it was a society of thoughts. Basically, the brain is just a whole bunch of single unit processes happening, billions of single unit processes that somehow become aware of each other.
[01:10:29] CR: And the interaction between those agents, as he referred to them, is what we call mind. It was a society of agents, billions of agents operating in conjunction, which became a society, like millions of people living together, doing their own thing, become a society. There’s a meta level of behaviour that emerges, emergent behaviour that comes out of enough people doing their own thing, and the brain, the mind, is really no different.
[01:11:05] CR: So, you know, these ideas go right back at least to his book in the 80s, 1986, when he came out with The Society of Mind, one of the foundational books on artificial intelligence. So anyway, I thought it was really interesting, and I remember talking about this a long time ago, about an early interview that I saw Jensen Huang at NVIDIA do with Ilya Sutskever.
[01:11:30] CR: He asked Ilya what the most surprising thing about ChatGPT was, and Ilya said it was that it worked. Like, we had this idea that if you threw enough data and enough compute at a large language model that it might show intelligence, and it did. That was still surprising to him, a year and a half ago.
[01:11:53] SS: Yeah. It’s interesting. I mean, again, in Kurzweil’s book, which I’m savouring the words of, I almost don’t want to read it because I’m enjoying it so much, because once I’ve read it, it’ll be read. The Singularity Is Nearer. I just love that he talks about the connectionist versus symbolic AIs, and that connectionism was one of the first types of AI they tried, back in ’52, but they just didn’t have the computation to make it possible. They thought it would be the right approach because it had a sense of biomimicry about it, which is just: if you have enough computation and enough input. That’s really all there is, computation and the data inputs, something to mix it all up.
[01:12:34] CR: But he talks about how this, what became the LLM model, where there is no hard architecture between symbols and concepts, you just feed it data. Everyone thought that was the wrong approach and wouldn’t work. And so for 30 years we just gave up on that.
[01:12:55] SS: Yeah, for a really long time. I mean, the simple example is words that have the exact same sound and spelling but two different meanings, like boot and boot. If you feed it enough information, boot, car boot, put it in the boot, the boot was full, it’ll eventually work out that boot in that context is to do with cars, versus climbing up a mountain with a boot, chewie on your boot, all that. And it can work it out just by the words that are near it.
[01:13:23] SS: Yeah,
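[Steve’s boot example is essentially distributional semantics: a word’s sense is pinned down by its neighbours. A toy sketch, where every context-word list is made up for illustration:]

```python
# Toy distributional disambiguation: guess which sense of "boot" a
# sentence uses by counting overlap with words typically seen near
# each sense. The word lists are invented examples.
SENSES = {
    "car boot": {"car", "luggage", "open", "spare", "tyre", "put"},
    "footwear": {"hiking", "mountain", "lace", "leather", "wear", "climbing"},
}

def disambiguate(sentence):
    words = set(sentence.lower().split())
    # Pick the sense whose typical neighbours overlap the sentence most.
    return max(SENSES, key=lambda s: len(SENSES[s] & words))

print(disambiguate("put the luggage in the boot of the car"))      # car boot
print(disambiguate("climbing up the mountain in a leather boot"))  # footwear
```

A real model learns these neighbourhoods from data rather than hand-written lists, but the principle is the same: context picks the sense.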
[01:13:24] CR: Yeah, but those three words, I think, are incredibly profound. As Sam said in the article, deep learning worked. It’s one of the pivotal statements. It will be one of the pivotal turning points, tipping points, of not just human society and human history, but the history of intelligence on this planet.
[01:13:54] CR: And potentially in the entire universe, because we don’t know how much intelligence there is in the universe; we can’t assume that there’s anything outside of this.
[01:14:04] SS: Wow.
[01:14:04] CR: Deep learning worked. To me, that is going to be as profound a discovery as the taming of fire, or the wheel, or electricity, or general relativity, or quantum mechanics.
[01:14:24] CR: It is one of the most profound, and it’s just slid by. I mean, most people still don’t know how electricity
[01:14:32] SS: I too, I too am absolutely astounded by the capability and I think that anyone who isn’t maybe hasn’t really played with it enough. Really?
[01:14:42] CR: Oh, absolutely. Yeah. Anyone who’s dismissive. I had lunch with one of my listeners from Boston, shout out to Doug if he’s listening,
[01:14:50] SS: Aggie boy!
[01:14:52] CR: He was here in Brisbane a couple of weeks ago. I had lunch with him and he, he runs a biotechnology company, which is really interesting and we should get him on to talk about the work that they’re doing in solving
[01:15:02] SS: that’d be cool.
[01:15:04] CR: But I asked him about how much they’re using AI, and he was still kind of dismissive about it, and I said, have you played around with it much? He goes, no, not really. And I was like, man, I pulled out my phone and did my usual parlour trick, which is, let’s ask it a question about oncology or biotechnology or cancer research and have a voice conversation.
[01:15:24] CR: This is a few weeks ago, before advanced voice came in. I think I just said, have you played with the voice? He goes, no. And I was like, seriously,
[01:15:33] SS: Get
[01:15:33] CR: just have a listen to this, talk to it, man. Anyway. People who aren’t blown away either haven’t played with it or don’t understand it. But as I keep saying to people, the most profound thing is that we didn’t know this would work.
[01:15:51] CR: We just had a, we, I mean, the people who developed it
[01:15:55] SS: Well, I mean, when you and I were developing the LLMs back in 2007, we couldn’t believe what we stumbled upon. Could we,
[01:16:02] CR: Using the, the broad we of humans,
[01:16:05] SS: way. Look, I’m part of the human species. I’m one of the developers. If humanity is the
[01:16:10] CR: we. Yeah. Well, yeah. All right. Let’s move on. Let’s move on. Technology Time Warp, Steve, what have you got for us
[00:16:18] SS: I just wanted to put in a titbit. Every now and again, I’ll just say, today in technology. And I did find out that the first ever personal computer was launched on this day in 1973, and it was the MCM/70 from Micro Computer Machines of Canada, which had the Intel 8008 processor in it, but it failed. No one really knows about the Micro Computer Machines of Canada corporation.
[00:16:45] SS: Big companies, Chevron, Mutual Life Assurance, NASA, and the US Army were its first customers, and it was launched at a microcomputer programmers’ conference in Toronto in 1973. So it’s interesting how often it’s kind of 10 years before it gets in the zeitgeist and a product then becomes available to general users, late seventies, early eighties.
[01:17:13] SS: I just thought that was an interesting little titbit as a technology time warp.
[00:17:18] CR: The company, Micro Computer Machines, was founded by Canadian inventor Mers Kutt. Mieczyslaw Kutt, born 1933, still alive.
[01:17:30] SS: Oh,
[00:17:32] CR: A former professor of mathematics at Queen’s University, he worked at Philips, IBM, and Honeywell. Wow. He was a professor of mathematics at Queen’s during the 1960s, served as president of the Canadian Information Processing Society, was the director of the university’s computing centre, and then in 1968 he was interacting with punched-card-based input systems and thought, maybe there’s an idea to put a keyboard on one of these things.
[00:18:10] SS: It’s one of the crazy things about all of the early computers: none of them had screens, and most of them just had tapes. You would do little printouts of what you get, a little punch card analysis. Which is an interesting idea, a screen, and then from a screen to a voice.
[00:18:31] SS: I mean, it actually is really interesting how things have this anthropological, natural flow. First, it’s something physical, whether it’s a cave wall drawing or whether it’s a printout or a punch card, then it’s a screen, then it’s a voice. It’s really interesting the way that flows. And it almost shows that in the first instance it’s hands-driven, it’s typing, punch cards, physicality, and then it’s screen, it’s visual, it’s eyes.
[00:18:58] SS: And our eyes were working before our language developed. And then it’s language. There’s almost this anthropological pattern that technological developments have followed, and they’ve done it with AI again. Now we’re at voice. Actually, that’s ironic. I didn’t plan this. It sounds like I planned that this technology time warp would fit into all the other things, but I didn’t, it just turned out that way, just like large language models.
[01:19:23] SS: I can’t believe it actually worked.
[00:19:26] CR: And now we have vision, like visual inputs into the AI as well. Not just a screen as an output. I mean, I do this increasingly with ChatGPT: take a screenshot of whatever it is I’m working on, a report or a spreadsheet or an error message, and just drop it in and say, what do you make of this?
[01:19:50] CR: We can’t do live video input yet, but that’s what we’ll have. Oh, you do it with
[00:19:54] SS: my scrappy notes. I’m writing a piece on dead internet theory, and you would be astounded at the accuracy with which it can decipher my scribbles, which I don’t think a lot of humans would be able to make sense of. And I’m guessing that the reason it can do that is that it looks at the words around it and guesses what the word should be, even if it’s not spelled right.
[00:20:18] CR: You just gave me a great idea. I’m going to scan all my old paper diaries
[01:20:23] SS: I do that. I do it all the time.
[01:20:26] CR: and say,
[00:20:26] SS: When I’m doing a briefing with a client on the phone for an event or some consulting work, I’ll write it all down by hand. I take a photo of it, dump it into ChatGPT, turn it into a Word doc, bam, I’ve got it all done, and then I’ll file it.
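[For readers who want to try the photo-of-notes workflow programmatically rather than in the app: OpenAI’s chat API accepts images as base64 data URLs. This sketch only builds the request payload; actually sending it needs an API key, and the model name here is an assumption to check against current docs.]

```python
import base64
import json

# Sketch of "photo of handwritten notes -> text". build_request only
# constructs the JSON body in OpenAI's image-input message format;
# nothing is sent over the network.
def build_request(image_bytes, prompt="Transcribe these handwritten notes."):
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # assumed model name; check current docs
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

payload = build_request(b"\xff\xd8fake-jpeg-bytes")
print(json.dumps(payload)[:80], "...")
```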
[00:20:42] CR: Wow. Good thinking. Just gave me another project to waste my weekend on. Uh, by the way, that computer we talked about, the MCM/70, was based of course on Intel’s 8008 processor, one of the earliest eight-bit microprocessors, which could address 16 kilobytes of memory
[01:21:07] SS: it was.
[00:21:08] CR: introduced in April 1972.
[00:21:13] SS: And my first computer was the 16K RAM TRS-80, which had the same microchip in it, way back in about 1981. So the refresh cycle then was a lot slower, wasn’t it? I mean, you were still selling the older, slower ones for quite a long period of time, whereas now I imagine things don’t even last 12 months before they’re off the shelf.
[01:21:33] SS: Uh, but that, that was interesting.
[00:21:37] CR: Just the leap from the 8008 through to what Apple are putting in their iPhone 16s, the processing power and the chips, all through to the NVIDIA chips that are running AI now, it’s just a mind-boggling demonstration of Moore’s Law. I was talking to my mum about AI the other night, and I was explaining Moore’s Law.
[00:22:04] CR: And how over the last 20 years there’s been so many people saying, oh, Moore’s Law is coming to an end, we can’t get the gates small enough now without electrons tunnelling through them. And then everyone’s like, no, well, now we just put them together. You know, what NVIDIA is doing where they’re just slamming these things together.
[01:22:23] CR: Uh, Moore’s Law has not, uh, died. It continues to speed up our ability
[00:22:28] SS: Yeah. I mean, I always liked Kurzweil’s take, and I think he did it in How to Create a Mind, where he talked about the transistor being one of several iterations of ever more compact ways of transmitting information, where he starts with punch cards, and then he goes from punch cards to vacuum tubes, vacuum tubes to transistors, transistors to microprocessors, and that eventually you’ll invent a new technology or a new way to weave information together to hold and process more information.
[00:22:58] SS: People confuse Moore’s law with the overriding idea of the law of accelerating returns, where technology finds a way to fit more information into less. You could probably go all the way back to cave wall drawings, you know, bits of
[01:23:12] CR: Yeah.
[01:23:13] SS: right?
[00:23:14] CR: I think Kurzweil does, right? I think he goes back that far. Yeah. All right. Uh, well, how are we doing for time? An hour and a half, Steve. Geez. And we’re not
[01:23:25] SS: Jeez. Well, what happens when you have
[01:23:27] CR: a line under it.
[01:23:29] SS: we tidy this up,
[01:23:32] CR: Yeah. Like, um, I think we’ve, we’ve talked about social media gating anyway. I think that’s pretty much it.
[01:23:39] CR: Um, yeah. Big couple of weeks. Let’s see how long it is before we do another one of these, Steve. But, uh, good to chat and
[01:23:48] SS: Oh, so enjoyable. Look, just, you know, exploring things with you, and hopefully the listeners have enjoyed some of the explorations we went on. So good. Mate, appreciate the chat.
[01:24:03] CR: Yeah. Thanks for taking time, man. And, uh, anyone out there, you got any ideas you’d like to, like us to cover in future episodes, uh, let us know, reach out. You know where to find us. Thanks, Steve.
[01:24:15] SS: Thanks, Cam.
In this episode of Futuristic, we chat about balancing screen time with reading physical books, Mistral Large 2, another bloody AI pendant (The Friend), Chinese text to video app Kling, and the use of crypto financing in the U.S. Presidential election. Our guest, Martin Murray, helps us understand the potential role of AI and humanoid robots in agriculture. Finally, we discuss the role of open-source AI, Mark Zuckerberg’s push against Apple’s closed ecosystem and how this affects industries beyond tech.
00:59 Steve’s Desire to Reduce Screen Time
07:49 Cameron’s levelled up his coding skills
12:19 Mistral Large 2 for Coding
14:59 New AI Pendant “The Friend“
21:48 AI in Agriculture with Martin Murray
52:25 Kling, the Chinese TTV product
01:05:11 Crypto and Political Influence
01:11:26 The Future of Open Source AI
01:19:48 Nanotechnology and Radical Abundance
FUTURISTIC 29
[00:00:00] Cameron: Welcome back to the Futuristic, Steve Sammartino, episode 29. You’re looking sharp as always, man. Look at you. You look like you’ve just stepped out of a venture capital meeting in Silicon
[00:00:20] Steve: Finance, 6’5”, trust fund, blue eyes. Really, finance, 5’6”. Finance, 5’6”, not 6’5”. I’m not 5’6”, but anyway.
[00:00:31] Cameron: I look like I’ve just come off stage at a Grateful
[00:00:33] Steve: You actually do, you’ve got the, what have you got on there? The bandana.
[00:00:38] Cameron: It’s a beanie. One of the few days in Brisbane cold enough to wear a beanie. I’m going to
[00:00:44] Steve: What have you got on there?
[00:00:45] Cameron: I’ve got a bit of,
[00:00:46] Steve: A lovely 23 degrees or something.
[00:00:48] Cameron: yeah, that’s
[00:00:49] Steve: is real.
[00:00:50] Cameron: I’ve got an annoying cough, so I’m going to have to keep muting my mic to cough. Hold on. Yeah. All right. Uh, Steve, tell me one thing of note you’ve done that’s futuristic-y since we last spoke.
[00:01:03] Steve: Futuristic-y. I’m trying to balance my screen usage. I’ve never really looked at how long I spend on the screen, because it was always justifiable, well, it’s work for me. And to be fair, I don’t really spend a lot of time doing things on the screen that aren’t work. But I looked, and it’s an average of 5 hours and 46 minutes a day on my phone.
[00:01:27] Steve: It’s an average of 92 pickups. And so, I went out and bought a few books. This week I asked you for a couple of recommendations and I have some on the way and every time I go to pick up my phone when it’s not an absolute necessity to do a task on it for work, I’m going to instead pick up the book. Uh, because I actually think that there’s a really big difference between knowing about, hearing about something and knowing about it.
[00:01:58] Steve: And I think we hear about a lot of things, and the screen lets you hear about a lot of things. I don’t think it enables deep understanding, like reading a book and really going deep into a topic.
[00:02:13] Cameron: Well, one of the things I was going to add to our talking list, but we had too many things already. I started watching a video last night, an interview with Dwarkesh Patel, who does a lot of AI videos on YouTube, but he was being interviewed on another YouTube channel. And he was talking about how he thinks you don’t learn very well from books.
[00:02:35] Cameron: And I think we talked about this in an earlier episode. He said, like, when I interview people that are experts in a topic, I need to get smart on that topic really quickly, and often they’re topics I don’t know a lot about. He said he had a geneticist on his show recently. So he’s developed this workflow that he puts into Claude, his AI tool of choice at the moment, that creates spaced repetition.
[00:03:03] Cameron: And he said he can upload an EPUB of somebody’s book into Claude and say, tell me what the key points are that I should know before I interview this person, and then test me on those. You know, give me questions and answers and test me on it to test my basic knowledge. And he’s developing a system, and I haven’t fully finished the video yet, but I think we’re moving into an era where books are going to be a component of how we learn, but there’ll be an AI layer wrapped around that, that will create an educational context.
[00:03:42] Cameron: What do you call it when somebody’s an educator? You call it, uh, uh, there’s a word for it that’s escaping my memory. Hold on, I should ask GPT. Um, what, what, what’s, what do you call a system of education that, uh, leads you through the learning process?
[00:04:02] Cameron: Scaffold.
[00:04:04] Cameron: Or pedagogy was the word I was trying to think of.
[00:04:07] Cameron: He’s talking about it’s a scaffold
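[Dwarkesh’s actual Claude workflow isn’t public; as a sketch, the scheduling half of spaced repetition, the part a pipeline like his would wrap around the generated Q&A cards, can be as small as this, loosely modelled on the SM-2 family of algorithms:]

```python
from datetime import date, timedelta

# Minimal spaced-repetition interval scheduler (illustrative, not any
# specific published algorithm): remembered cards get widening gaps,
# forgotten cards restart tomorrow.
def next_interval(prev_interval_days, remembered, ease=2.5):
    if not remembered:
        return 1                      # forgot: see it again tomorrow
    if prev_interval_days == 0:
        return 1                      # brand-new card
    return round(prev_interval_days * ease)

interval, today = 0, date(2024, 9, 27)
for remembered in (True, True, True):
    interval = next_interval(interval, remembered)
    today += timedelta(days=interval)
    print(f"review again in {interval:>2} days, on {today}")
```

Each successful review pushes the next one further out, which is the whole trick: test just before you would otherwise forget.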
[00:04:09] Steve: yeah, and that’s a nice way, and that’s a nice visual of Understanding the elements and almost seeing a matrix design of where you could go to get different types of learning. And just, uh, look, I think that what you’re saying is right. There’s a multitude of ways of learning. I mean, I’ve always been very good at kinesthetic learning, where actually hands on doing, and it’s one thing with startups that I’ve always realized.
[00:04:30] Steve: It’s one thing to read about startups and it’s another one to get out there in the battlefield and try and get people to use your product, and raise capital, and, you know, exit. Theory is really important, and you always learn a little bit more by doing something than actually reading about how to do it.
[00:04:46] Steve: Like you can’t learn how to ride a bike by reading about it. You have to hop on the bicycle. You can probably get some tips. Sure. Uh, the context of what I was saying, and I, it’s probably good that you brought this up because I can clarify. I’m not saying that I want to read more books to learn more. What I’m saying is, of what I’m doing with reading, I want it to be physically on paper.
[00:05:11] Steve: So the context is the reading context. And I actually looked up some research on it because I was curious about it because I have this theory that I get more out of a book and we all know about the distraction side of it. It’s easy to get distracted when you’re on a phone and not stay in the book as deeply because it’s singular versus a phone which has a multitude of uses.
[00:05:31] Steve: But some of the research suggests that the subconscious impact of the light, and tiring out your brain, and the way that the words are absorbed is very, very different to ink on paper. And another one, which blew my mind, I’d never thought of it, is that the pixels on a screen are flashing lights, really, really quickly.
[00:05:52] Steve: And apparently your brain doesn’t digest the words as well. So there’s some research coming out, uh, in relation to screens not enabling people to digest information as well as books. So there’s that part to it. And there’s also that you go deeper into a book than you do just scanning the news on tech.
[00:06:09] Steve: So I know about a lot of stuff or hear about a lot of stuff, but I want to know more deeply about
[00:06:13] Steve: it. So that’s that’s what I’m really trying to do.
[00:06:18] Cameron: I think you’re probably right. I mean, I think, hey, well, let’s, let’s break that down. Like, um, yeah, reading rather than scrolling. Um, you know, I, I’ve developed the habit over the last few years where if I have some spare time and I’m not working and I’m picking up my phone, it’s usually to do Duolingo, to practice my Italian,
[00:06:39] Steve: is great.
[00:06:40] Cameron: or it’s, or it’s chess.
[00:06:43] Cameron: I’ll be playing a game of chess against somebody on Chess. com and I’ll have to go and make a move. Oh shit, that reminds me. Oh no, I did make a move this morning. And. Or I’ll read. Um, sometimes it’s Reddit, um, but often it’ll be a book that I’m in the middle of. I’ve always got like ten books on the go, so it’ll be one of those.
[00:07:01] Cameron: Um, but for me, and I think you’re probably right in terms of reading paper, but for me the upside is, I’ve got a shortcut on my iPad so that if I copy a block of text in a book and I run this shortcut, it’ll paste it into a running note that I have in Obsidian.
[00:07:19] Steve: That’s got benefits that a book will never have. Absolutely. No doubt. For sure.
[00:07:22] Cameron: I can pull up GPT and say, what’s this word mean? Who was this person? What’s that thing I don’t know? So I think there’s pros and cons, but I think you’re probably right that ink on paper is
[00:07:33] Steve: Just something about, I just want to extricate myself from the screen somewhat because I just think I’ll get reading done as well because it’ll be more purposeful and less distracting. So there’s, there’s a multitude of reasons, but that’s, that’s what I’m trying to do more
[00:07:46] Steve: of and just pay attention to the physical world.
[00:07:49] Cameron: Well, I’ve, I, you know, I know every
[00:07:51] Cameron: time we do this, I talk about coding because that’s where I spend a lot of my time, but I really felt like I leveled up my coding in the last couple of weeks. There’s a particularly big thorny problem that I’ve been working on and getting nowhere for days and days and days on it.
[00:08:06] Cameron: And then I learned, if there’s a particular problem... See, the way the AI tools work at the moment, if you’ve got a really big script and it’s doing like 25 functions, but one function’s not working properly, and you tell the AI, I got an error code when I tried to do this, its immediate thing is it’ll rewrite the entire script and go, okay, run it again. So I’ll wait till it finishes and then I’ll run it again and it still won’t work, and I’ll give it the error code and it’ll go, oh, okay, let’s put some more logging in. And then it’ll give you the whole script again, and you run it again, and you give it the error log. And it goes, oh, okay.
[00:08:45] Cameron: Let’s put some more logging in. And you get in this loop and it’s really frustrating. So what I learned to do is to isolate the bit that’s not working and say to the AI, no, no, no, just stop. Write me a script that has three or four ways in it to solve this problem, like the particular thing that’s not working.
[00:09:07] Cameron: And let me just run those independently, so I’m not running the whole script. I isolate, test, figure out the method that actually works, and then integrate that back into the bigger script as a new function. And I did that over the last week or so, and it’s solved a whole bunch of problems.
[00:09:28] Cameron: Just isolate, test, and then combine or integrate it back into the main thread. That has really,
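[Cameron’s isolate-test-integrate loop, in miniature: rather than re-running the whole script, pull the broken step out, race a few candidate implementations against a known-good fixture, and only integrate the winner. Every function body here is an invented example.]

```python
# Three candidate implementations of one broken step, tested in
# isolation against a small fixture instead of re-running everything.

def candidate_a(items):          # broken: drops the last element
    return sorted(items[:-1])

def candidate_b(items):          # correct: plain numeric sort
    return sorted(items)

def candidate_c(items):          # broken: sorts numbers as strings
    return sorted(items, key=str)

fixture = [10, 2, 33, 4]
expected = [2, 4, 10, 33]

for fn in (candidate_a, candidate_b, candidate_c):
    ok = fn(fixture) == expected
    print(f"{fn.__name__}: {'PASS' if ok else 'FAIL'}")
# Only the passing candidate gets folded back into the main script
# as a new function.
```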
[00:09:35] Steve: It’s a really good point, Cameron. And I don’t know how the large language models work.
[00:09:40] Steve: But it seems to me as though you cannot iterate verbally on the previous output. And I know that OpenAI is quite guilty of this. I was doing a presentation, getting ready to go to New Zealand early this week, and I had an image that I wanted in 16... or in widescreen format, is it 4:3, is it? And I said, this image that you’ve developed, now go,
[00:10:09] Steve: 16:3? I’m way off, um,
[00:10:12] Cameron: It’s 3:4 and 16...
[00:10:13] Steve: There you go, 9, okay. I said, develop this image in widescreen format, redo it. It gave me a new one. I’m like, no, no, this image here, exactly the same, widescreen. Could it do it? It actually couldn’t. And I’m wondering if what you’ve found as well is that it’s very, very poor at verbal iterations. Unless you take out the piece of the puzzle and get it to reframe that exact piece and then insert it back in, it always goes back to the well and starts again.
[00:10:42] Steve: And I’ve noticed it can’t do the same thing twice. It’s very, very bad at doing the same, which is interesting and cool. Uh, but if you ask it to recreate an image or create that image again, it won’t do it. It’s always a little bit different. And I think that comes down to the probabilistic nature of it and the way that it works.
[00:10:59] Steve: And it’s really noticeable, especially from an image perspective, and I imagine from a code, based on what you’ve said, and it reruns the code base again, because it goes, oh, you didn’t like that, I’ll just do the whole thing again. I’m hoping that ChatGPT 5 and others come out with a greater iterative ability, and for it to understand what’s required within the iteration of the previous development, and it doesn’t seem to be there.
[00:11:24] Steve: And this goes to show What, what you’ve been doing and what I’ve been doing, and I’ve been saying this on stage so much is getting to work well with the AI is an absolute art form that cannot be understated. And the only way you learn that art form is by experimenting with it and playing with it. And I wonder, If that’s always going to be the case, even if we have extraordinary AIs that are close to the general level, because it, it doesn’t, it’s trying to satisfy your needs and it is not inside your brain.
[00:11:56] Steve: And so that is the complexity that requires the ability to prompt well and to work with it and iterate and, and craft the angles and take pieces of the puzzle out and then reframe those and then put them back into the puzzle. It’s a really interesting insight.
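[Steve’s “can’t do the same thing twice” observation falls out of sampling: each output token, or image patch, is drawn from a probability distribution, so unseeded runs diverge while seeded ones repeat exactly. A toy demonstration with Python’s random module, using a made-up vocabulary:]

```python
import random

# Toy "next-word sampler": each word is drawn at random, standing in
# for a model sampling from its output distribution at each step.
VOCAB = ["sunset", "over", "the", "harbour", "city", "skyline"]

def generate(rng, length=4):
    return " ".join(rng.choice(VOCAB) for _ in range(length))

run1 = generate(random.Random())       # unseeded: varies run to run
run2 = generate(random.Random())
seeded1 = generate(random.Random(42))  # seeded: fully reproducible
seeded2 = generate(random.Random(42))

print(run1, "|", run2)
print(seeded1 == seeded2)
```

Hosted image and chat models generally don’t expose a seed (or don’t guarantee one), which is why “create that image again” drifts.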
[00:12:10] Cameron: Mistral. Mistral. Large 2.
[00:12:15] Steve: You’re going to have to give me a download on this, Cam, because I didn’t test it. I had a look and read.
[00:12:19] Cameron: I’ve been using, I’ve been using Mistral for my coding for the last couple of weeks, their latest model, Large 2. Mistral’s a French company, um, I believe made up of former employees of Meta and Google, founded in April 2023.
[00:12:41] Cameron: And, um, they, uh, have an open source model. Well, we’re going to talk a little bit of open source on this show, if we have time. Their latest version, um, Large 2, is absolutely stunning when it comes to coding. I haven’t used it for much else, but it shits all over GPT-4 and 4o when it comes to coding. Just gets it right almost every time, understands what I want, nails it. So much less stress, uh, using it for coding, and, uh, that in and of itself is great.
[00:13:23] Cameron: So if you, if you haven’t tried it, you can use it for free. If you go to mistral.ai, M-I-S-T-R-A-L, they have something called Le Chat, and La Plateforme; La Plateforme is the API. Weirdly, I signed up for the API and I haven’t been able to get it to work. Because one of the problems with Le Chat is, as my thread gets long, which it can often do with coding, it slows right down.
[00:13:52] Cameron: It’s certainly not fast enough. It gets really slow, like one letter at a time typing out, like my, you know, my dad trying to type something on a keyboard 20 years ago. Um, uh, so I tried to use the API, and I can’t get the API to work for some reason, and their tech support has not been great.
[00:14:12] Cameron: I’ve been asking their tech support for a week to figure out what’s wrong with my API access, and they’re not really very responsive, because they’re French. But the Le Chat service in and of itself is pretty good, if you don’t have massively long threads in the chat, so I highly recommend it.
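[Editor’s note: for anyone who wants to try La Plateforme rather than Le Chat, here is a minimal sketch of calling Mistral’s chat completions API from Python. The endpoint URL and the `mistral-large-latest` model alias are taken from Mistral’s public API documentation; double-check them against the current docs, and treat everything else as an illustrative assumption.]

```python
import json
import urllib.request

# Assumption: endpoint path and model alias as documented on Mistral's
# La Plateforme docs; verify both before relying on this sketch.
API_URL = "https://api.mistral.ai/v1/chat/completions"


def build_request(prompt: str, model: str = "mistral-large-latest") -> dict:
    """Build the JSON payload for a single-turn chat completion."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def ask_mistral(prompt: str, api_key: str) -> str:
    """POST the prompt to La Plateforme and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With a valid key from La Plateforme, `ask_mistral("...", key)` should return the model’s reply; `build_request` can be inspected offline without a key.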
[00:14:30] Steve: I said it’s right up there with “ordinateur”, which is what they call a computer, because they’ve got the French language division which makes sure they don’t have Western words.
[00:14:40] Cameron: Ah,
[00:14:41] Steve: And that’s the computer, is it already
[00:14:43] Cameron: So anyway, Mistral Large 2 is great, and if we have time after our guest comes on in six or seven minutes, I want to talk more about open source versus closed source, because it’s been a big topic in videos I’ve been watching this week. Um, The Friend, Steve, another bloody AI pendant, uh, has hit the market.
[00:15:06] Steve: Yes.
[00:15:07] Cameron: Launched by a 21 year old guy called, uh, Schiffman. Uh, did you see it?
[00:15:14] Steve: I did, he’s an interesting cat. You know, he, it was interesting. His turn of phrase was interesting. What I did like was that he said productivity’s over in his Wired interview, and I kind of, I do feel as though there’s so many productivity tools, we need productivity tools to handle the productivity tools.
[00:15:32] Steve: It’s like, it’s eating its own tail. And at least he had a point of difference, where this is to help people have a friend, and for loneliness, which is a real problem in the world. I mean, he just forgot to put the word “imaginary” in the name of the product, before “friend”, which would have been so perfect that I would have loved that.
[00:15:51] Steve: And I think he really should have done it, missed an opportunity.
[00:15:54] Steve: Also really curious that he spent almost half the venture capital on getting the dot com.
[00:16:00] Steve: Uh, he spent 1.8 million dollars on buying friend.com, which I thought was interesting. Look, this is one of those ones where I can see that there could be some utility for it. Definitely. Yep.
[00:16:14] Cameron: So tell people, tell people what
[00:16:16] Steve: So, it interacts with you on your day to day, where you can talk to it. It sends you messages, which I think appear on your phone, where it talks back to you. It might even just make comments without asking, saying, hey, it’s great to be outside, it’s good that we’re walking around getting some fresh air.
[00:16:31] Cameron: You wear it around your neck, like a little necklace.
[00:16:35] Steve: Round your neck, and it observes and listens. It’s always on, as is Siri and Google and everything else. It’s always on, and it makes comments about your life and talks to you and sends you text messages and interacts with you in a way that is kind of like, let’s say, an imaginary friend, or a little AI friend that comments on your day and has opinions and can summarize: hey, you know, you had that chat with whatever.
[00:16:59] Steve: Did you follow up and do that email? I don’t know, stuff like that. It seems interesting, because I don’t think it’s over promising in what it can do, or trying to replace a screen, which none of them have been able to do. All of them have been technologies in search of a problem, where the smartphone usurps their utility.
[00:17:15] Steve: This one, I think, I think it could work. I don’t think it’d be anything ever more than niche, something incredibly niche. I could, I could see it potentially working. Um, just because it’s a little bit more single minded on what it does. Because the smartphone is the ultimate Swiss Army knife that does it all and does it incredibly well.
[00:17:35] Steve: And it’s very, very difficult to see that being supplanted. But I think that the single minded proposition of it, commenting, talking to you, helping lonely people, You know, you can, I feel like it could have some utility. So, look, that said, do I think that having an imaginary AI friend is better than sitting across the table or me chatting with you or picking the phone up?
[00:17:59] Steve: No. But maybe some people don’t have that opportunity, or that wherewithal, or that confidence, or have a base of friends. So that’s kind of my take on it. Yeah.
[00:18:09] Cameron: And I don’t think it’s an either or. I think in the near future, we will have an AI friend or multiple
[00:18:15] Steve: Yeah, right. Yep.
[00:18:16] Cameron: As well as real friends, if you’re lucky enough to have real friends. Personally, I don’t like real people that much. I like you. I like Tony. I like Ray. I like my wife.
[00:18:28] Steve: I’m not even sure if you like me. You say that you do, but you’re, you know, you’re an unusual cat, you know that.
[00:18:37] Cameron: There’s a small group of people that I actually like to hang out with. But, getting back, I look, I think, um, trying to sell these sorts of, uh, single purpose AI widgets is probably a path to nowhere. But, I do expect that in the very near future, my watch, or my phone, or my iPad, or all of those, will have an AI, or several AIs, that will be my friend.
[00:19:09] Cameron: They’ll be listening to everything that happens, watching everything that happens, and maybe I’ll have different friends. Maybe I’ll have the friend that lends a sympathetic ear when I’ve had a fight with Chrissy. Maybe I’ll have a friend that’s like, dude, you, you really weren’t very productive today.
[00:19:27] Cameron: You know, I’ll have my coach, you know, the one that tells me that,
[00:19:31] Steve: Good idea. Your own personal Jim Rohn, strapped around the neck.
[00:19:36] Cameron: yeah, yeah. Although the one that’s telling me that, uh, I need to be, I need to spend more time with my kids. I’ve been
[00:19:43] Steve: You’ve only hung out with Fox.
[00:19:44] Cameron: Or maybe it’s all wrapped up into one.
[00:19:46] Steve: You’ve had one hour with Fox today, your average is 2.3 hours, you’ve let him down.
[00:19:49] Cameron: I was thinking of my older kids, my 23 year olds, but, uh, yeah, they’re always calling me up like. Like, one of them said, I want to take you to Deadpool, the Deadpool film on the weekend. I’m like, uh, I’ll just wait till it comes out on streaming.
[00:20:02] Steve: Go out and have some popcorn with your boy, what’s wrong with you?
[00:20:04] Cameron: Hang out, hang out with your sons. I was like, oh yeah, okay, good point, good point. Um, so I do think we’ll have AI, uh, friends that’ll be listening. I don’t think these devices are really going to be where it’s at, but I do think we’re going to have these AI friends, coaches, mentors, guides, uh, counselors, that will end up playing a really significant role in our lives. They’re not going to replace people, but they will be there on top of that.
[00:20:34] Cameron: It’ll be like having, uh, the smartest person that you know just hanging out with you all day.
[00:20:43] Steve: I can also see a B2B context for these being valuable. You can see a context where you might have an AI pin that is hanging around your neck because you’re a Boeing engineer, and it’s like, hey, did you do the X? Don’t forget the Y. And that way we’ll have less windows popping out of Boeing 737 MAXes, and that’s the kind of plane I want to be on.
[00:21:04] Steve: I’ve always said that. But, yeah, different AI contexts for some of these pendants and AI friends could be useful, like you say: coaching, training, uh, work contexts, technical skills. Which are, uh, a fragment, but a far more knowledgeable AI within a certain sliver. And you tend to see those categories split down in ways as well. So there’s some potential there.
[00:21:33] Cameron: He’ll be like, Hey Cameron, do you really want to buy that box of Tim Tams that you just picked up at Coles? Really? You know, you’re trying to lose 10 kilos right now. Do you really want to buy that? Is that going to help with your black belt grading if you get a box of Tim Tams? Maybe, maybe think twice.
[00:21:46] Cameron: Alright, we’ve got a guest! Uh, it’s a long time since we’ve had a guest on the show, uh, but Martin Murray has been, uh, tweeting up a storm from his farm in, I’m gonna say, somewhere in New South Wales, I’m taking a guess, and he’s been talking recently about how he was testing ChatGPT on the farm, and I thought, wow, that’s something I know nothing about. We should get him on. Welcome to Futuristic, Martin.
[00:22:17] Martin: Yeah, g’day, it’s good to be here, and yeah, you are right, northwest New South Wales, just around Delungra, which is sort of between Armidale and Inverell, if you, sorry, between Armidale and Moree, if you, yeah, you know that area at all.
[00:22:33] Cameron: I have driven through there a number of times. Yeah. It’s a lovely part of,
[00:22:37] Steve: It’s a New England highlight.
[00:22:38] Cameron: And you are talking to us from the cab of your tractor, by the looks of it.
[00:22:44] Martin: Yeah, literally in the spray rig. So, um, yeah, hopefully I didn’t ruin your audio quality when I jumped on earlier, uh, but I’ve shut it down, but kept the time booster on. So hopefully this conversation holds.
[00:22:56] Steve: I have to ask the mandatory, uh, AI question. Are you on a John Deere, with the right, no, the right to fix it if it breaks down? That’s, that’s my question.
[00:23:09] Martin: Uh, can I swap the camera around? Because I am, yeah, you are banging
[00:23:14] Steve: There you go. Do you have the right to repair? Because that was something that really had a head of steam a few years ago, for quite a bit. I don’t know what’s happened on the right to repair. Maybe that’s an interesting starting point for us, before we get into some of the depths of it.
[00:23:28] Martin: To be honest, it’s not much of a, um, yeah, an issue for me personally, cause if I had the right to repair, I’d still lack the
[00:23:35] Steve: Okay.
[00:23:36] Martin: So,
[00:23:36] Steve: Yeah. Yeah.
[00:23:38] Martin: um, But, I think you are right, yeah, I think, like, in terms of diagnostic tools, and, um, all of that sort of software side of things, uh, which, I mean, these things are pretty well just computers now, computers with an engine.
[00:23:52] Martin: Um, yeah, we still don’t have the right to repair that and access that diagnostic software.
[00:23:58] Steve: I think that’s it. With every sort of machinery, I think if you buy something, you should own it. I don’t care if it’s got software inside it. The Digital Millennium Copyright Act is an absolute disaster.
[00:24:07] Cameron: Well, let’s move on to AI on the farm, Martin. Tell everybody about how you’ve been playing with it, your motivations and your discoveries.
[00:24:21] Martin: Yeah, so, I’ve been a listener of your podcast for a little while now. Um, Oscar Pierce got me onto it maybe six months ago or so, a bit of a shout out to him. And just from, yeah, talking to him and listening to your show, I just, I figured I’d give it a go. Um, yeah. I was coming up to the current cropping season, got my soil test back, and look, I’m an agronomist by trade, like, that’s my background, so, I know what I’m looking at, I just thought, well, it’s a rainy day, I don’t have much else to do for the fun of it, let’s load these soil test results into ChatGPT. And, um, see what recommendations it comes back with. And I’ve got to say, I couldn’t quite follow the math on it. Um, I don’t know if it was using different formulas that are used elsewhere in the world to what I’m used to, but. At the end of the day, the end result was pretty well the same with what I recommended, um, putting in, you know, this is our crop, this is our target yield, this is our target protein percentage, all those other, those other inputs that you put in yourself, and, um, yeah, it came out pretty well bang on, and it just sort of went from there.
[00:25:31] Martin: I’m like, well, okay, this is the crop, uh, can you give me an agronomy plan, you know, how much starter fertilizer, how much urea, when should I be planting it, Uh, what sowing rate should I be at? And, uh, you know, what, what herbicides and fungicides am I likely to be using in the season? And to be honest, it was, you know, it wasn’t 100%, but it was pretty damn close.
[00:25:55] Martin: Like, it was, um, just an interesting experiment, and we sort of took it from there. Had it, uh, taking pictures, uploading pictures from the cattle yards, asked it to identify the breeds of cow. We’ve got a couple of different breeds, and, um, you know, things that are obvious and easy, like the Angus, it was able to pick them out pretty well. And then other ones that I thought might, you know, might throw it a bit, not as common a breed, uh, like, uh, there’s a breed called Speckle Park, which is sort of black, but it’s got all these little white speckles through it; it was able to pick that out and identify it. Yeah, it was just, just an interesting experiment, and I’ve done a few other things with it, but that’s basically where it’s at.
[00:26:36] Steve: It’s so interesting.
[00:26:38] Cameron: Didn’t you also ask it for, like, uh, dry hire rates for a tractor or something?
[00:26:44] Martin: Yes, I have done that with it.
[00:26:46] Martin: Um, and it was on the money. Yeah, I think I put that one up on Twitter there a little while ago. Over sowing, I hired a tractor off the neighbour, and neither of us really knew the rate. Um, so we asked ChatGPT and we asked Twitter, and then we eventually found someone that does commercially lease out the same horsepower tractor, and the rate ChatGPT came back with was the same rate he was on.
[00:27:14] Cameron: Amazing.
[00:27:15] Steve: Oh yeah, I was just going to say, what I love about your approach is to ask it something you know the answer to already, because what that does is it gives you context of the type of accuracy you can expect, and then it gives you confidence to do the next thing. Like, I’ve never done that on stage, but the idea of saying, well, okay, what’s something you have a lot of knowledge on, start playing with the tool on that. Because what that does is, without you worrying about the information you’re getting back, you’re developing a relationship of usage, which gives you the back and forth to see how it works and the type of questions you need to ask it to get an answer that you already know. And I think it’s a really intuitive way to do it. It’s actually quite smart. And I don’t think we ever really teach anyone that.
[00:28:01] Steve: It’s really insightful. And then after that, you’re going to go, well, if I can do it for that, here’s the piece that I don’t know, that I might be able to use it for. Because, as you would know as a listener, Cameron and I are always talking about how whatever AI can’t do today, it can probably do tomorrow. So, sort of with that in mind, what are some of the sticking points within your business on the farm that you’d like quicker solutions on, or that take a lot of time, whether it’s administrative or, you know, planning what you do out in the field or with your crops? Are you mostly crops, or is it always cattle?
[00:28:35] Martin: Yeah, so our business is primarily cropping. We do have, we’ve got like 30 cows, that’s not much. Depending on the prices each year, they probably make up somewhere between, like, 1 and 10 percent of the farm’s income. But yeah, look, for me, I think with AI, it’s not so much what it can do for me now that I can’t already do, it’s what’s going to be coming. Um, like, in terms of regulation, red tape, reporting, emissions reporting, biodiversity reporting, all of these sorts of things. I mean, at the moment, these are things that we’ve got a vibe are coming, but we don’t know what exactly is coming and how it’s going to impact us. But it looks like, you know, whatever happens, there’s going to be a lot of office work, a lot of reporting, a lot of collating of data. Uh, you know, how much herbicide have you used, how much, um, diesel have you used.
[00:29:38] Martin: Uh, just, just all this sort of stuff. You know, like, my diesel records, they’re in, you know, MYOB, they’re in the accounting software, because you pay for that by the litre, um, but I don’t record how that goes out on the paddock. Um, you know, maybe there’s a way in the future that AI will be able to take that diesel input and go, you know, you’ve made 10 passes this year in the spray rig over this paddock, this is a typical number.
[00:30:07] Martin: Um, you know, you’ve made your one sowing pass, your harvest pass, these are also typical fuel numbers, and work out, from the total, you know, diesel that you bought, um, what you’ve actually used in each paddock. And likewise with, with biodiversity, I was just talking to someone about this, um, before the podcast. You know, putting microphones and things around your farm, um, and recording the sound of the bush, like, what animals are there, what birds are chirping, what frogs are croaking, all of that sort of stuff, and then having AI analyze that. And then you get a pretty good picture of what’s on farm, and, you know, can come up with a bit of a prescription of how to manage for those species. And again, you know, as I said, it’s not something that’s an issue now, but I can see it being an issue in the next 5, 10, 20 years. Something that we’re going to have to be on top of as land managers.
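[Editor’s note: the diesel-apportioning idea Martin describes, working backwards from total fuel purchased to per-paddock usage, is essentially a proportional allocation. A sketch follows; the litres-per-hectare figures are invented placeholders, not real agronomic numbers.]

```python
# Hypothetical typical fuel consumption per operation, litres per hectare.
TYPICAL_L_PER_HA = {"spray": 1.5, "sow": 8.0, "harvest": 12.0}


def allocate_diesel(total_litres, passes):
    """Split a season's total diesel across paddock passes,
    proportional to each pass's typical consumption.

    passes: list of (paddock, operation, hectares) tuples.
    """
    # Weight each pass by its expected fuel use.
    weights = {(paddock, op): TYPICAL_L_PER_HA[op] * ha
               for paddock, op, ha in passes}
    total_weight = sum(weights.values())
    # Scale the actual purchased total by each pass's share of the weight.
    return {key: round(total_litres * w / total_weight, 1)
            for key, w in weights.items()}
```

For example, `allocate_diesel(1000, [("North", "spray", 100), ("North", "sow", 100), ("South", "harvest", 50)])` splits 1,000 litres across the three passes, with the heavy sowing pass getting the largest share.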
[00:31:03] Steve: It’s funny, because the administrative burden on small business has really been increasing. GST was introduced, and all of a sudden, you know, there’s the administration you need to do with products and services coming in and out with GST, which gave rise to, uh, SaaS for accounting, because it just becomes too complex to do.
[00:31:22] Steve: But I like that idea of the internet and, and AI actually not just being a filing cabinet, where you put stuff in and you can pull stuff out of it; it’s a brain. So you’ve got to upload your data, which it can then analyze, and you can give it directives on: okay, we’ve got this data here, what we need to find out is emissions X, Y, and Z.
[00:31:41] Steve: Here’s the data, here are the databases, here’s the GPS from the tractors I’ve driven around, if John Deere give it to you, of course. Here’s all of the raw materials that I’ve bought, here’s the head of cattle, here’s the output, and it can calculate all of those elements there and reduce the administrative burden. Because we tend to be looking at AI right now as a creation tool, like, go and create more of the work that I do, but just that underpinning of uploading data to reduce administrative burden, um, it’d probably be more accurate than what we deliver today, I would imagine, you know, in, in calculations.
[00:32:14] Steve: A simple one is with your car. It’s like, how many kilometers do you do in your little log book? It’s going to do a better job than you would, just by looking at GPS and petrol and acceleration and all of that stuff.
[00:32:27] Martin: Yeah, I’d imagine so. If it’s got access to that data, it would definitely have more attention to detail than I would. I mean, no one gets into agriculture to be, uh, you know, an auditor. We’re here to grow things, uh, and drive tractors, chase cows, do whatever you love. You’re not getting into the game because you want to audit yourself and do all this reporting.
[00:32:50] Cameron: So outside of, uh, administrative red tape stuff, what are the two or three biggest issues that a farmer such as yourself is having to deal with in terms of running your business every year, Martin?
[00:33:08] Martin: Um, yeah, uh, I mean, personally, my biggest issue is interest rates, and I don’t know if AI can help me with that. But, um, there’s other things. Like, in other cases, I haven’t done this personally, but I’ve spoken to a fella that has. Because he knew the parts catalogue was online, he took a picture of a broken part, and, you know, you can spend hours on the phone to dealers, like machinery dealers and repairmen, that sort of thing, trying to find a part number.
[00:33:38] Martin: He told me that he was able to take a, um, picture of this part, tell it where to find the parts catalogue online. And, you know, the make, model, everything else of that particular machine, and it was able to pretty quickly get back to him with the correct part. Um, personally, like, without giving ChatGPT a lot of direction, just taking pictures of machines, uh, around the farm, and asking it, you know, what’s this, what’s that, it’s probably about as good as my four year old.
[00:34:09] Martin: Like, it could tell that it was a John Deere tractor, and it sort of got the, like, the family right, if that makes sense. But, um, like, you know, say you gave it a picture of a, you know, a 2002 Toyota Hilux, it would tell you it’s a Toyota Hilux, but it wouldn’t be able to quite pick up on whether it was an, an SR5, and, um.
[00:34:33] Martin: You know, the thing with John Deere, and this is the thing that I found really surprising, is the model number is printed on the bonnet, and same with my Case tractor, and it wasn’t able to pick up on that. It was able to tell me that it was a John Deere or a Case, and give me the approximate, you know, generation, but not have the details. So I haven’t been playing around with it too much in that spare parts, machinery space, but that is a big headache to solve, if it can solve that.
[00:35:00] Steve: I should mention that, because one of the things that I’ve been doing lately is, when ChatGPT, OpenAI, the one that I use the most, and after today’s chat I probably should go to Mistral, can’t do something, I ask it to write code so that it can do that. It says, sorry, I can’t do this, I don’t know what the part number is. I would say, can you write some code to understand all of the part numbers, or, uh, tractor numbers, whatever they’re called, of John Deere tractors? Go onto this website.
[00:35:28] Steve: Here’s where you’re going to find it. Do a screen scrape and visuals and then add into your database and create your own little, uh, GPT on it. So that you can use it to scan things and it will know it thereafter. So one of the things that, and this is this recursion, I’ve been thinking a lot about recursion lately, which is the idea that the technology can create technology that the technology can’t already do.
[00:35:49] Steve: Because it’s a functional tech, it’s not just like a steam engine where you turn it on. It can create new things that it can’t do. So you go in and ask it to write a script, To be able to identify X number of Toyotas or John Deere tractors, because that information is available on the web. It just hasn’t put it into the database in a format, which makes it findable.
[00:36:09] Steve: So you direct it to create that script and then go and do it, and then you ask it to put that in. From now on, whenever you ask that, it can refer to that inside your, your database. So that’s, that’s kind of one of the things that I think, and this is what agentic AI is pretty good at as well. Uh, one that you might want to try out, which is pretty good, Martin, is, uh, AgentGPT.
[00:36:32] Steve: And that does things like that. So you set it an objective. You could say, give me all the part numbers and pictures of X, and it will go out and do it.
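[Editor’s note: Steve’s scrape-the-catalogue idea boils down to turning a parts page into a local lookup table the assistant can be pointed at. Here is a sketch using only the Python standard library; the HTML layout and the part numbers are hypothetical, since every real dealer catalogue would need its own parsing logic.]

```python
from html.parser import HTMLParser

# Hypothetical catalogue markup; a real dealer site will differ.
SAMPLE_PAGE = """
<table>
  <tr><td>RE504836</td><td>Engine oil filter</td></tr>
  <tr><td>AL215054</td><td>Fuel filter element</td></tr>
</table>
"""


class PartsTableParser(HTMLParser):
    """Collect (part_number, description) pairs from two-column table rows."""

    def __init__(self):
        super().__init__()
        self.cells = []     # text of cells seen so far in the current row
        self.parts = {}     # part number -> description
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        # Track whether the next text chunk belongs to a table cell.
        self._in_td = tag == "td"

    def handle_data(self, data):
        if self._in_td and data.strip():
            self.cells.append(data.strip())
            if len(self.cells) == 2:        # number + description complete
                number, desc = self.cells
                self.parts[number] = desc
                self.cells = []


def scrape_parts(html: str) -> dict:
    """Parse a catalogue page into a part-number lookup dict."""
    parser = PartsTableParser()
    parser.feed(html)
    return parser.parts
```

`scrape_parts(SAMPLE_PAGE)` returns `{"RE504836": "Engine oil filter", "AL215054": "Fuel filter element"}`; a fuller version would fetch pages with `urllib.request` and persist the dict for later lookups.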
[00:36:40] Martin: Righto, I’m gonna have to give that a go.
[00:36:42] Martin: Yeah,
[00:36:43] Steve: You’re going to wish you’d never talked to us. You’re going to have crop failure; you’re going to be mucking around with AI for the next two weeks.
[00:36:49] Martin: Yeah, nothing’s gonna get done. But I do know, well, I’ve just actually got the replacement part, so I know the part number, I know where you can find the catalogue, um, so I’ll have a play with it and see if it can, um, yeah, if it can do that.
[00:37:08] Cameron: It’s a lot of work, you know, to fuck around with it and try and make it do things that it can’t do. I guess we’re sort of assuming that the sorts of things that it struggles with today, it won’t be struggling with a year from now, or two years from now. But I’m, I’m really interested in how AI is going to revolutionize different industries, agriculture being one of those.
[00:37:32] Cameron: You and I were chatting on Twitter over the last couple of days. I mentioned that an old mate of mine from my Microsoft days, I had lunch with him a couple of years ago and asked him what he’d been doing, and he told me about a project that he had done for a client where, I think, it was putting RFID chips into, uh, sheep and lambs, ewes and lambs.
[00:37:58] Cameron: Cause, for some reason, these farmers needed to know, or wanted to know, which ewes belonged to which lambs, or vice versa, and they couldn’t tell. But they were able to figure out that, by measuring proximity, the amount of time that they spend close to each other using the RFID chips, they could figure out whose lambs belong to which ewes.
[00:38:20] Cameron: So he was building the RFID system just to capture all that data, and then code to figure it out at the end of the day and write reports. I’m just wondering about all of the potential data that can be captured on a farm these days with RFID chips and, as you mentioned before, visual, audio, um, and if all of that could be captured and just fed into an AI system that could come up with ways to help you be more efficient
[00:38:48] Cameron: on the farm. Look for ways of, uh, I don’t know, using less fertilizer, or increasing crop yields, or working less hours. And that’s even before humanoid robots, before Elon starts, uh, giving you a Tesla robot. Uh, what would a couple of humanoid robots mean for you on a property like yours?
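[Editor’s note: the ewe-lamb RFID trick Cameron describes reduces to a cumulative-proximity vote: each lamb is assigned to the ewe it logged the most time near. A sketch, with the event format and IDs invented for illustration:]

```python
from collections import defaultdict


def match_lambs_to_ewes(proximity_events):
    """Assign each lamb to the ewe it spent the most cumulative time near.

    proximity_events: iterable of (ewe_id, lamb_id, seconds) records,
    as an RFID proximity logger might emit them.
    """
    # Sum up time each (lamb, ewe) pair spent together.
    time_near = defaultdict(float)
    for ewe, lamb, seconds in proximity_events:
        time_near[(lamb, ewe)] += seconds

    # Pick, for each lamb, the ewe with the largest total.
    best = {}
    for (lamb, ewe), total in time_near.items():
        if total > best.get(lamb, (None, -1.0))[1]:
            best[lamb] = (ewe, total)
    return {lamb: ewe for lamb, (ewe, _) in best.items()}
```

Given events like `("E1", "L1", 600)` (ewe E1 near lamb L1 for 600 seconds), the function returns a lamb-to-ewe mapping, which is all the end-of-day report needs.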
[00:39:11] Martin: Yeah, look, if I could get humanoid robots, um, that are, you know, just as competent as a human, you know, they’re able to react the same as a human, operate the same as a human, that would be brilliant, I mean, that would free up a lot of my time to go play with these sort of things with AI and whatever other tools are going on.
[00:39:32] Martin: Um, I mean, at the moment, like, you know, we’ve got GPS driven tractors. Like, obviously, the one I’m in, you know, drives in a straight line forward and back, but I still need to be in here to operate other controls, other important factors. There are now, um, you know, there’s, there’s robots, like SwarmFarm do swarm bots.
[00:39:50] Martin: They’re completely autonomous, uh, spraying robots; well, they’re mainly used for spraying, but you can use them for other things too. But at the end of the day, like, all my machines are set up for people. So, you know, we live in a human built environment, well, an environment that’s built for humans, so if we can fit humanoid robots in it, it would mean, say, instead of replacing, you know, three or four different machines, I’ve now got one robot that can drive all those machines.
[00:40:20] Martin: Uh, that would be a huge cost saving there. And at the moment, like labor’s hard to get. Um, it’s getting harder to get. Uh, it probably makes more financial sense to have a robot.
[00:40:32] Steve: such a strong
[00:40:33] Martin: And going back to
[00:40:34] Steve: It’s so strong. Just on that, I’ve been presenting humanoid robots in my keynotes lately. I show them now, the Figure 01 and some of the humanoids, which are really extraordinary, and Jensen Huang, who we discussed on the podcast before, says that before the end of this decade they’ll cost, you know, like a small car, which is, who wouldn’t invest in one of those? People ask me, why would we want a humanoid robot when you can have a self drive car? And if you look back to the old self drive cars in sci fi movies, they had a robot driving the car. And your answer is really the one, is that we have a human shaped world, right? And if you have humanoid robots, then they can have all of the capacity and work in the existing infrastructure that we already have.
[00:41:14] Steve: The insight that one humanoid robot would be a better investment than buying four new tractors, that are probably 200 grand each, I don’t know, I imagine they’re quite expensive, there you go! So, the idea that you have a humanoid robot that becomes like a smartphone, a general purpose computer, but it can go into the shed and organize feed, and then drive the tractor, and then work with the cattle, and walk around on the farm in humanoid gumboots. Like, that is actually a strong reason for humanoid robots. And I think that the marrying up of AI and physicality is a lot like when we used to have, you know, uh, cattle pulling things along on the farm to, you know, grade the soil, before we had the combine harvester. It’s the idea of putting the engine inside the horse and cart, or the engine inside, uh, the grader, or whatever you had on the farm.
[00:42:15] Steve: I feel like the humanoid moment is the intelligence inside the dexterous physical robot is going to be maybe even
[00:42:22] Steve: the biggest part of the AI revolution.
[00:42:25] Martin: And yeah, and as I said, if they are
[00:42:27] Martin: just, you know, if they are as capable as a person, uh, it’s a no brainer. Even if they’re, you know, two, three hundred, four hundred, even half a million dollars. Um, I mean, that wouldn’t suit my operation, but if you’re a larger operation, uh, half a million on a robot, you know, compared to wages,
[00:42:46] Steve: Well, they’re 24 hours a day too.
[00:42:47] Martin: know, the cost of, exactly,
[00:42:51] Steve: You’ve got the humanoid robots, uh, out on the front doing work 24 hours a day and just swapping over their batteries. Like, you think that through, it actually doesn’t have to be that cheap for it to be quite functional in industrial and commercial contexts.
[00:43:06] Steve: It’s not so much a
[00:43:07] Steve: domestic context, but it’s certainly in industrial contexts.
[00:43:11] Martin: No, you’re not gonna have it making you coffee and
[00:43:13] Martin: doing the laundry, but um,
[00:43:15] Steve: Well,
[00:43:15] Steve: it’s hard. Cam and I were like, wow, actually, yeah,
[00:43:20] Steve: mowing the lawns, yes, and putting the washing out,
[00:43:25] Cameron: but if they’re, if they’re 20, 30, 50 grand, I think people, uh, will have them doing domestic stuff too, um, some people will. All right, well, let’s leave it with this, Martin. If there was one problem that you wish AI could solve for you in the next, say, two years, if there was one sort of killer app for AI, not robots, just AI, on a property or running a farm,
[00:43:58] Cameron: what would it be?
[00:43:59] Cameron: One of the things we’ve already mentioned or something else?
[00:44:02] Martin: Well, yeah, I don’t know. I mean,
[00:44:05] Martin: I guess it comes back to, you don’t know what you don’t know. Like, what insights haven’t you come across yet? Um, like one of the spots I’ve been thinking about, you know, a lot, I know I said the livestock make up a very small part of our business and, like, just business wise, we’re actually looking to change that so they’re a bigger part.
[00:44:24] Martin: Um, it’s, where can we use AI there? What data can we get? What insights, what patterns can we, um, you know, find that we’re not seeing, that we’re not looking for? Uh, we’re entering a world now where, you know, quite cheaply you can have, um, weigh bridges, like weigh bridges for cows, in the paddock.
[00:44:46] Martin: There’s a great company called OptiWeigh. Um, we’ve got GPS cattle tags which, uh, you know, they’re still a bit expensive. Um, like every cow has already has an RFID tag. Um, that’s, that’s a legal requirement for supply chain traceability, all that sort of thing. It’s now becoming a thing in sheep. But GPS tags for about 50 a tag, you know, it’s not quite economic, but what sort of insights are we missing out there by not having it?
[00:45:14] Martin: Um, like there was a fella on Twitter, Nigel Caron, um, brilliant farmer in the Central West. He’s got an OptiWeigh, so an in-paddock type weigh system. The cows just hop on, they weigh themselves, hop off, it’s all a bit of a novelty for him. Um, he was able to pick up, you know, from that, that all of a sudden they stopped gaining weight, and they didn’t know why.
[00:45:37] Martin: So they did some fecal tests and found that they now had a worm problem. And so just from that quick insight, they were able to get on top of that problem. And, um, he said what it would’ve otherwise cost them in lost weight gain, they were able to save just in that, um, that one worming, that one event.
[00:45:58] Martin: And it’s things like that. So, like, I’d be keen for, you know, once AI hits a point where it’s insightful and not just, like where I feel it is at the moment, just regurgitating what it can find on the internet, it’s just, you know, an enhanced search engine. It’s what is it going to be able to tell me? So similar to the tags you mentioned earlier, learning, you know, what tags are in proximity. That way you can come back to, you know, which animals are your better performing animals, and which cows did they come from, so then you can breed more from those cows and get rid of your lower performing cows.
[00:46:35] Martin: Like, there’s a whole world of possibilities, and yeah, I don’t know where the answer’s going to be, because I don’t quite know what questions to be asking.
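As a side note, the kind of stall Martin describes the OptiWeigh picking up, a mob suddenly flattening out on the in-paddock scales, is simple to flag once the weights are flowing in. Here is a minimal Python sketch with invented numbers and function names, not anything from OptiWeigh’s actual system:

```python
def average_daily_gain(samples):
    """Average daily gain (kg/day) between the first and last (day, kg) samples."""
    (d0, w0), (d1, w1) = samples[0], samples[-1]
    return (w1 - w0) / (d1 - d0)

def gain_stalled(samples, window=14, threshold=0.2):
    """Flag a mob whose daily gain over the last `window` days fell below threshold.

    samples: chronological list of (day, kg) readings from an in-paddock scale.
    The 0.2 kg/day threshold is illustrative, not an agronomic recommendation.
    """
    last_day = samples[-1][0]
    recent = [(d, w) for d, w in samples if d >= last_day - window]
    if len(recent) < 2:
        return False
    return average_daily_gain(recent) < threshold

# Synthetic mob average: steady ~0.8 kg/day gain, then two weeks flat.
healthy = [(d, 300 + 0.8 * d) for d in range(30)]
stalled = healthy + [(d, healthy[-1][1]) for d in range(30, 45)]

print(gain_stalled(healthy))   # False: still gaining normally
print(gain_stalled(stalled))   # True: recent window is flat
```

The breeding idea Martin mentions is the same data sliced per animal rather than per mob.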
[00:46:45] Martin: Sorry for the bit of a rant, but that’s,
[00:46:47] Cameron: No, that was perfect. I thought you were going to say the cows stopped putting on weight because they were looking
[00:46:52] Steve: that’s what I thought I’d
[00:46:53] Steve: do it for.
[00:46:53] Cameron: fuck, I weigh, I weigh how much? Holy shit. I got to go on a diet.
[00:46:58] Cameron: That’s like me jumping
[00:46:58] Steve: I’ve got to get off the grass.
[00:47:00] Cameron: Holy shit.
[00:47:01] Steve: That’s funny.
[00:47:02] Martin: I know, we keep that information from them. We can’t have them knowing too much.
[00:47:05] Steve: Here’s another one I like too: the McKinsey cow strategy. We get rid of the bottom 20 percent of performers every year, just like McKinsey.
[00:47:14] Cameron: Here’s my suggestion.
[00:47:16] Martin: And that is how you run a business. That is how you run an agricultural business. That’s why we don’t have the same cows that we had, you know, 50 years ago. Like, the same type of cows. We don’t have the same breeds of wheat from 50 years ago. That is how it works.
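Mechanically, the “cull the bottom 20 percent” strategy Martin endorses is just a ranking problem once you have per-animal performance data. A toy sketch, with made-up tags and gain figures:

```python
def cull_bottom(performance, fraction=0.2):
    """Split a herd into (keep, cull) lists, culling the lowest `fraction` by score.

    performance: dict mapping animal tag -> score (e.g. kg/day weight gain).
    """
    ranked = sorted(performance, key=performance.get)  # worst first
    n_cull = int(len(ranked) * fraction)
    return ranked[n_cull:], ranked[:n_cull]

# Hypothetical RFID tags with daily weight gains (kg/day).
herd = {"A12": 0.9, "B07": 0.4, "C33": 0.7, "D41": 0.8, "E55": 0.3}
keep, cull = cull_bottom(herd)
print(cull)  # ['E55'] -- the bottom 20 percent of five animals is one
```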
[00:47:31] Cameron: So my suggestion, Martin, if you haven’t already done this, this is something that I do with GPT all of the time: I will ask it to give me ideas about what I could be doing. So I’ll throw a whole bunch of data at it. Like I do this with my diet, right? I track my diet every day using a combination of Obsidian, my notes app, and some shortcuts and GPT.
[00:47:55] Cameron: I have it all in a spreadsheet going back sort of a year now. And I’ll throw it into GPT every few months and I’ll say this is what I’ve been eating, this is what my exercise looks like, this is how much weight I’ve been losing, um, what should I be doing differently? Give me, give me some suggestions about ways that I can modify my diet, modify my exercise, et cetera, et cetera.
[00:48:17] Cameron: With coding, I’ll say, here’s, here’s all of the tasks because I track all my tasks every day and how much time I spend on them. I’ll throw it into GPT every now and again and say, here’s all the things that I do during a day and how much time I spend on them. Have you got any suggestions for how I can be more productive?
[00:48:33] Cameron: What should I be doing that I’m not doing? What should I do more of? What should I do less of? And it’s not, uh, an amazing productivity coach. A lot of what it gives you is sort of generic stuff still, but every now and again, it does give me a really good idea, a really good insight.
[00:48:55] Cameron: And it only, I only need one every now and again to make it worthwhile for it to get me thinking about something that I wasn’t already thinking about. And that spurs me to go, Oh, well. Not that, but what if I did this, what would that mean? And then it’ll go, yeah, that’s a great idea. And sort of build on it.
[00:49:10] Cameron: So I use it a lot as a brainstorming tool, to start me off with ideas and then build on them and build on them. And, uh, you know, even using a little bit of, um, hold on, I gotta mute for a second. A little bit of, uh, Edward de Bono lateral thinking with it too. Yeah, you start a trend, you say, okay, now give me like 10 really out there random suggestions and, uh, see what that spurs.
[00:49:42] Cameron: Like, just give me a whole bunch of random words and add that into the idea mix and see what it generates. It’s really good at doing things that humans struggle with, which is thinking outside of the box. Um,
[00:49:55] Steve: You have to ask it that. And I often use it as well to generate, like, give me 30 ideas on this topic here, and I won’t like the ideas, but those ideas give me an idea of where I leverage my depth. That’s what always happens.
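Cameron’s random-word trick, seeding the model with unrelated words to push it off its default paths, is easy to script as a prompt generator. The word list and wording here are invented for illustration:

```python
import random

# Edward de Bono-style lateral thinking: force the idea generation
# through a few randomly chosen, unrelated seed words.
SEED_WORDS = ["lighthouse", "origami", "compost", "jazz", "magnet", "tide"]

def lateral_prompt(topic, n=3, rng=random):
    """Build a brainstorming prompt mixing `topic` with n random seed words."""
    words = rng.sample(SEED_WORDS, n)
    return (f"Give me 10 really out-there ideas about {topic}, "
            f"each connecting it to one of: {', '.join(words)}.")

print(lateral_prompt("promoting an investing podcast"))
```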
[00:50:09] Cameron: I’ll tell you
[00:50:10] Cameron: something. I was having a marketing meeting with my son Taylor the other day about how to use TikTok to promote my QAV, my investing podcast. And he said, well, you need to be talking, you need to figure out a way to make it relevant to whatever’s big at the moment. And I was like, oh, sure.
[00:50:24] Cameron: Well, what’s big at the moment? He said the Deadpool film. So I said to GPT. How can I use the Deadpool Wolverine film that’s big at the moment to talk about investing? And it said, well, you could talk about what kind of investors, Deadpool and Wolverine would be in real life. Deadpool would probably be a high risk, high reward.
[00:50:45] Cameron: And Wolverine, because he’s immortal, probably takes a long view and, you know, he could have a long-term view in investing. He doesn’t have to hurry. And I was like, oh shit, that’s a great idea. You know, that’s brilliant. I never would have
[00:50:57] Steve: The other one is, too, you can break down how much Deadpool cost to make. What’s the ROI on the average film? Is it better than average? Why do films like, uh, animated ones keep getting recycled? Because there’s better ROI and there’s lower risk. Like, there’s a lot of different things you can do.
[00:51:13] Steve: That’s good advice. That’s
[00:51:15] Steve: really good.
[00:51:16] Steve: Yes,
[00:51:19] Cameron: Yeah, anyway. Well, Martin, we’ll let you go. You’ve got food to make, um, so I can eat it.
[00:51:25] Steve: we need it.
[00:51:26] Cameron: Thanks for coming on and sharing that with us, Martin. Really interesting.
[00:51:31] Steve: Keep us up to date.
[00:51:32] Martin: Nah, that’s all good.
[00:51:34] Cameron: If you have any more ideas or breakthroughs with the tech as it evolves, let us know. I’ll follow you on Twitter. We’ll continue to follow you on Twitter.
[00:51:43] Cameron: What’s your Twitter ID for people
[00:51:44] Cameron: listening? It’s just
[00:51:46] Cameron: martinmurray__ag, yeah?
[00:51:48] Martin: That’s it, Martin Murray Ag. That’s the one.
[00:51:52] Cameron: So they can go and follow you on Twitter and, uh, yeah, keep up the good work, Martin. Come and come back on and tell us if you, uh, have
[00:51:57] Cameron: any more ideas about AI and Ag, tech and Ag.
[00:52:03] Martin: Righto. Will do. Thanks for having me on. It’s, it’s been great. And yeah, been really good to talk with you too. And I don’t know what I’m doing on the next rainy day. I’ll be playing with a few of those brainstorming ideas. Cheers.
[00:52:14] Cameron: great. Let us know how it goes. All right. Take care, man.
[00:52:19] Cameron: Fun.
[00:52:19] Steve: good.
[00:52:20] Cameron: Cool
[00:52:20] Cameron: dude. Alright, let’s get back into news stories. Steve, Chinese text to video. Now, we’ve talked a little bit about where China’s at with, um, AI. There’s been a lot of, uh, talk about how they’re, you know, quite a, quite a ways behind some of the US companies, but making a lot of progress. A company called Kling, K L I N G.
[00:52:47] Cameron: Out of, uh, China, launched their text to video model, uh, in the last week, and it is right up there with Sora and, um,
[00:53:04] Steve: Runway.
[00:53:05] Cameron: whatever the other one is. Yeah, Runway. Yeah,
[00:53:09] Steve: thought it was as good as Runway’s previous model,
[00:53:14] Steve: but, but not better than Runway 3, the
[00:53:16] Steve: new one. And it’s, it’s, yeah, it’s videos, it gives you a maximum of five seconds at the moment, it’s not a lot, um, in the ones that, that I saw, the samples that it had, uh, I thought it was okay.
[00:53:29] Steve: I didn’t think it was better. Yeah. But, look, if they were nowhere, they’re certainly not far behind now.
[00:53:36] Cameron: It can do five seconds image to video, but it can do three minutes in length for videos.
[00:53:45] Steve: Image to video. I can’t correct it.
[00:53:48] Cameron: Which is a long time for a video. If you go through the demos that they have on their website, um, video extension, uh, yeah, I mean, they’re quite long. So, um, look, it’s just amazing with all these tools coming in.
[00:54:07] Cameron: Now, this one’s publicly available too, unlike Sora, although you have to have a Chinese mobile phone number in order to be able to register, which makes it tricky,
[00:54:17] Steve: Makes it difficult if you’re
[00:54:18] Steve: uh, doing the English version, doesn’t it? Need a plus eight
[00:54:21] Steve: six.
[00:54:23] Cameron: And, of course, as we’ve talked about, there’s the, uh,
[00:54:27] Cameron: Chat XiPT, I think I heard it.
[00:54:33] Steve: Did someone say that? XiPT.
[00:54:37] Steve: That’s a Cameron political joke.
[00:54:40] Cameron: XiPT. The, um,
[00:54:43] Cameron: there is going to be this issue about how their AI products navigate, you know, some of the Chinese censorship and restrictions on content. But at the end of the day, you know, we’re going to be in a world where I don’t really need my AI to comment on the Chinese Communist Party or, uh, tell me about Tiananmen Square in 1989.
[00:55:10] Cameron: I mean, the range of things that I’ll want my AI to do on any given basis, any given day, I don’t think is really going to be a problem for Chinese AIs to generate. It’s going to be able to talk about science and maths and history, outside of maybe, you know, Chinese 20th century
[00:55:31] Steve: If it is true, if it is true that the AIs are trained on certain databases, and OpenAI and, uh, Gemini and Anthropic are all trained on the general web of English, you could even make the argument that the Chinese apps, if they have a proclivity, because they’re training on the Chinese language internet, which is very censored in China, they could become more scientific AIs because there’s less dissent and trolling and misinformation.
[00:56:06] Steve: Potentially, uh, they could become more scientific with the output from those AIs than what you might get from the training on an English based AI, which is, as we know, being, uh, washed via social media, which has the highest frequency of posting.
[00:56:23] Steve: Lots of inaccurate information.
[00:56:26] Cameron: Yeah. And you know, something that, um, I’ve heard, uh, Zuckerberg say a lot, I’ve watched a lot of Zuckerberg interviews recently, cause they just came out with the latest
[00:56:38] Steve: How do you feel about Zuckerberg
[00:56:41] Steve: 2. 0? Let’s just, Let’s just, get into Zuckerberg 2. 0 with the curly hair, mate. The curls get the girls. And Zach is, he’s right up there with
[00:56:49] Steve: gents in one, what is, Leather Jacket coming out next? What are we talking here?
[00:56:55] Cameron: I love it. I love it. Um, yeah, in one interview, he’s looking like a surfer dude. He’s just got, you know, a tan, beautiful big curls. He’s looking like he just stepped off the snow fields or the beach. Next interview, with Jensen Huang, he looks like Run DMC
[00:57:15] Steve: Ha ha
[00:57:16] Steve: ha! He’s just all in
[00:57:17] Cameron: black with a big gold chain.
[00:57:19] Steve: Jacket. Well, here’s the question, is he going to pull a Jeff Bezos and get on the gear and just get some big guns, and get rid of Priscilla, and, and, and, and just get
[00:57:29] Steve: himself a nice sexy newsreader? Because that’s what Uncle Jeff did.
[00:57:32] Cameron: yeah, like, whoever his PR
[00:57:34] Cameron: people are that’s doing his makeover, I love it. I think they’re doing a great job. But anyway, um, that aside, and he’s had a personality upgrade too, he actually can talk like a human a little bit
[00:57:44] Steve: the upgrade. So,
[00:57:45] Cameron: The AI, the AI upgrade that he’s had in his chip is doing well. But, getting back to, um, the point is, like, what he’s saying, ’cause they just launched their latest Llama 3.1, which is open source, and he’s a big supporter of open source for AI, and he talks about why, and we’ll get to that in a later segment.
[00:58:09] Cameron: But one of the arguments that he’s making is that he doesn’t believe the future is all about the one big overarching AI tool that does everything. He’s talking about AI agents that are smaller and more nimble and do, you know, one thing really, really well, that’ll be able to talk to other AI agents.
[00:58:31] Cameron: And this has been sort of the vision I’ve been talking about on, on the show ever since we’ve been doing it. That’s how I see it sort of playing out as well. I don’t think we’re going to require a source of all knowledge, like GPT is turning into the one thing that knows everything about everything. I think we will end up with millions of AI agents that have deep knowledge in a particular domain and they just talk to each other.
[00:58:59] Cameron: Your experience might be like you’re talking just to the AI, but it’s actually a million AIs talking to each other behind the scenes. And if
[00:59:07] Steve: Conceptually that’s interesting for a couple of reasons. I think the thing that is interesting to me is that the cache that it needs to carry is so much smaller if you have a specific AI about 8 bit video games from the 1980s. Does the big AI need that? Or does the big AI just need the ability to go to that AI and procure the information it requires for that moment, and then transport that information? And you would have to assume that that would be one of the ways that we could become more efficient with the average search in terms of the computation it requires and the electricity.
[00:59:38] Steve: If it uses an internet style strategy where it finds the right path to what it needs and then comes back rather than having all of it to carry the entire information load within a one giant AI.
[00:59:51] Cameron: Yeah, like, we’ve been talking about the LUI, the Language User Interface, for a long time now and, you know, I see that as the magic glue. It’s like the new version of the API that enables
[01:00:06] Cameron: AIs to talk to other AIs, because they all just talk the same language. Doesn’t have to be English, but whatever the common
[01:00:12] Steve: I know what it is. Pull the
[01:00:13] Steve: plug.
[01:00:15] Cameron: they can, you know, so my AI on my phone, let’s
[01:00:18] Cameron: say it’s Apple, Apple’s AI, Or it’s just an open source one.
[01:00:23] Cameron: ’Cause, you know, Microsoft has released a mini version now of their AI. There’s a Mistral mini. There’s a Llama mini. There’s a lot of mini models that are out there. GPT’s got a mini model. If you have an open source mini model that runs on your devices and knows everything about you, and it can understand language,
[01:00:42] Cameron: It can understand what you ask it. It can then go out to, oh, you want to know a question about, uh, the latest in, I don’t know, uh, medical advancements for Alzheimer’s? It’ll go talk to the Alzheimer’s, uh, medical research
[01:00:59] Cameron: databases AI,
[01:01:01] Cameron: right? And it’ll find that information. It doesn’t need to know
[01:01:04] Cameron: everything. It’s the Henry Ford model. You know, the famous story about Henry Ford.
[01:01:08] Steve: Pick up the phone.
[01:01:11] Cameron: For people who don’t know, we’ve probably mentioned it on the show, but, uh, a journalist had said he was stupid or something, and Ford sued him for libel, and then they were asking him a whole bunch of questions about his business that he couldn’t answer.
[01:01:25] Cameron: He said, but you don’t realize I got a phone on my desk. I don’t need to answer. I don’t need to know everything. I can just ring one of my managers. They know the answer to those questions. So it’s the same, it’s the Henry Ford model, right? The AI will just call whichever AI it needs to answer your question.
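The “Henry Ford model” Cameron describes, a front-end AI that doesn’t know everything but knows which specialist to ask, can be sketched as a simple dispatcher. The specialist names and keyword routing below are invented for illustration; a real system would route with an LLM or function calling, not string matching:

```python
# Hypothetical specialist agents, each deep in one domain.
SPECIALISTS = {
    "alzheimers": lambda q: "Latest Alzheimer's research summary for: " + q,
    "investing":  lambda q: "Market analysis for: " + q,
}

def route(question):
    """Dispatch a question to whichever specialist agent matches, like
    Ford picking up the phone to the manager who knows the answer."""
    for topic, agent in SPECIALISTS.items():
        if topic in question.lower():
            return agent(question)
    return "No specialist found; answering from general knowledge."

print(route("What's new in Alzheimers treatment?"))
```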
[01:01:39] Cameron: But my point was going to be: if one of those, or many of those AIs are Chinese AIs that are handling all of this for me, it doesn’t really matter. As long as I’m not asking it questions that pertain to stuff that the CCP doesn’t want you to know about, or doesn’t want its citizens to know about, it’ll be able to answer 98 percent of everything that I need to know.
[01:02:04] Cameron: For those other 2%, I’ll go to the, you know, an American AI that’s happy to dish the dirt on the history of the
[01:02:12] Cameron: Chinese
[01:02:13] Steve: I wonder what will happen with the AIs. We know about the great firewall of China, where Google and Facebook and some other properties are banned, uh, over there. We’re quite open to having Chinese firms operate in Western markets. That’s a topic for another day, but I don’t think, uh, America should be as open; it should be, well, you let us do business in yours, or we don’t let you in ours.
[01:02:39] Steve: I do wonder if with the AIs, they’re going to end up with a closed shop as well, and whether they’re going to be interchanging with each other, or whether it’s going to be a little bit like what we have now with the social web and search, whether it’s going to be certain AIs in China are shut down, our AIs, but we’re open to theirs.
[01:03:00] Steve: I mean, that’s going to be an interesting political play as well as a technological play. I don’t know which way it’ll go. You’re well versed in this area. What do
[01:03:07] Steve: you think will happen?
[01:03:08] Cameron: I mean, I don’t know. I don’t know. I think AI is going to change a lot. I think the CCP, and all Western countries too, are going to struggle with censorship. Like, let’s not pretend that we don’t have extreme levels of censorship in Western countries. We do, it just takes a different form to the censorship in places like China. The sort of censorship that we have in the West is around things that are politically sensitive, where the Overton window doesn’t allow conversation of them.
[01:03:47] Cameron: They’re highly limited and we have massive amounts of propaganda.
[01:03:52] Cameron: We spend massive amounts of money and time and effort on propaganda to sort of, um, get our citizens thinking one way. Like if you talk to most people in the West, I’m talking about the USA, the UK, Canada, Australia, right? And you ask them why America dropped nuclear weapons on, dropped atomic bombs on Hiroshima and Nagasaki in August 1945, most people will probably tell you they did it to end the Pacific war against the Japanese in World War II.
[01:04:26] Cameron: It’s not true. That’s not why the Japanese ended the war. Every scholar who studied this knows that, but most people in the West still believe that to be true. And if you try and tell them different, they freak out because they’ve got 70 years of propaganda that’s been drilled into them that makes them believe a certain version of the story, right?
[01:04:50] Cameron: So we have, we have censorship, but it takes a different form. But AIs are going to, I think. Just crush all of that. Um, you, you know, it, it, it’s going to be very hard for governments to limit this kind of information, whether it’s a Western government or a Chinese government. So I, I don’t know how that’s going to play out.
[01:05:09] Cameron: Moving right along, because we’re running out of time. I want to talk about my, uh, I want to talk about, uh, Cory Doctorow’s article
[01:05:16] Cameron: on crypto. Do you want
[01:05:18] Cameron: to, do you want to kick this one off or
[01:05:20] Cameron: do
[01:05:20] Steve: It was a really great piece, and the basic premise is that some of the richest people in Silicon Valley, uh, billionaires like Mark Andreessen and Ben Horowitz and others, are giving Trump endorsements for their own financial wellbeing at the cost of all of the other elements.
[01:05:41] Steve: It went pretty deep into crypto, uh, that crypto is, is really, I mean, I’m just trying to think of the way to describe it. Well, he obviously went through the stuff that says that crypto isn’t money, because it doesn’t satisfy the needs of what creates a currency, and that it’s just a giant manipulation so that they can fund Trump, so that they can get what they want and continue on their merry 1 percent ways.
[01:06:11] Steve: That was my
[01:06:12] Steve: key, that was my net outtake.
[01:06:14] Cameron: Yeah,
[01:06:14] Steve: was a pretty deep article, went on a lot of angles, but that was my
[01:06:17] Steve: overriding outtake.
[01:06:19] Cameron: You know, you know, I think he’s, he’s saying that basically crypto is a
[01:06:24] Steve: Yeah, there you go. I mean, there’s the words. Crypto’s a scam. Yeah, keep going. Tell me, Tell
[01:06:30] Steve: me what
[01:06:31] Steve: you think.
[01:06:33] Cameron: Well, you know, he talks about how, uh, crypto, you know, has been hyped up by, uh, a bunch of people, obviously, over the last ten years, and they’ve, you know, uh, uh, scammed a lot of people, getting them to invest in it,
[01:06:52] Steve: And the worst coins around it. Not
[01:06:54] Steve: so much it, but all of the shit coins that live around the fringes of it.
[01:06:58] Cameron: yeah. You know, there’s been a whole bunch of scams like NFTs that a lot of people got scammed on, different, um, ICOs, the different coin offerings that people invest in that go nowhere, and there’s been an influx of crypto money into elections. Election campaigns and it’s hard to track and it’s hard to monitor.
[01:07:21] Cameron: There’s a lot of dirty money flowing around in that sense. But, you know, the crypto, the major crypto players are trying to influence politicians in the US to get favorable treatment for crypto, to try and keep it out of regulation, uh, try and help legitimize it so they can profit from all of the investments that these, some of these venture capitalists have been making it for a very long time.
[01:07:52] Cameron: He talks about the bezzle concept, uh, which is sort of, uh, deceptive prosperity, uh, which is really based on a fraud. He calls it the bezzle, like embezzle, but just the bezzle, where you make something look really attractive. It’s kind of like, uh, another name for a pump and dump, really. You use your influence in the media.
[01:08:20] Cameron: And when it comes to these Silicon Valley billionaires, their followership on Twitter, their influence over tech media, their ability to get stories written about the things that they’re investing in and create hype cycles around it, when it’s all just based on sort of smoke and mirrors, usually. He’s talking about the need for regulatory clarity in
[01:08:44] Cameron: crypto and the challenges around that. Illegal contributions by companies like Coinbase, uh, towards campaign finance, um, and how that needs to be monitored a lot more closely. And just about the complex interplay between economic interests, political interests, the ethical considerations around crypto.
[01:09:04] Cameron: Anyway, it’s a really great article, uh, it’s called, what’s it called? Um,
[01:09:11] Steve: campaign finance
[01:09:12] Steve: violation in US history.
[01:09:15] Cameron: That’s it Yeah, check it out on Cory Doctorow. Really worth a read. I mean, crypto is, you know, I look, I say what I’ve always said. I think there’s a lot of potential in the, the technology behind crypto, but, um, I think all of the claims that are being made about it as being, you know, it’s going to be a legitimate
[01:09:38] Steve: Never will be. It’s been poisoned. The lake is poison. And my view on it is really simple. Technology is a currency, and all currency is a technology, and all currencies have represented the technological capabilities of the day and the overriding tools that we use. You go all the way back to ferrous coins and
[01:10:00] Steve: uh, cowry shells, to fiat currency, bills of exchange, all of those types of currencies, uh, precious metals, have all represented our technological capability. And they all have certain requirements for them to be effective. Uh, one of them is stability, which crypto doesn’t have. Uh, they need to have acceptance, which crypto doesn’t have.
[01:10:19] Steve: So it fails on about four of the six requirements, uh, to be a currency. It will never happen, because any country that is smart enough knows that it needs to have a sovereign currency, so that you can demand tax in that currency, which maintains the civility and your ability to fund government projects.
[01:10:42] Steve: But here’s the thing that was so interesting in that article: they’re using the crypto scam to get fiat currency in place of the crypto that they own. And if you ask any, uh, Bitcoin maxi, if you want to truly understand whether or not they believe in Bitcoin, here’s what you ask them.
[01:11:00] Steve: You ask them this one simple question, Cameron, you say, what’s one Bitcoin worth? And invariably, their answer is in US dollars. And that tells you what they believe in. They sure as hell don’t believe in crypto. Because if they did, and you ask them, what’s one Bitcoin worth? They’d say one Bitcoin is worth one Bitcoin
[01:11:19] Cameron: One Bitcoin. Yeah, yeah. Alright, well, let’s finish up, Steve, because I know you’ve got a hard out. Um, I want to talk about closed systems versus open systems, uh, and again, because I’ve believed this for a long time, and I’ve heard Zuckerberg talk about it a lot lately. Now, let me preface this by saying, I don’t trust Zuckerberg for one second, even though I like his rebranding.
[01:11:43] Steve: See? See? We always be buying into the story, don’t we? All of a sudden, Oh, you know, I don’t like you, I don’t like Zuckerberg, but he looks cool. You know, five episodes from now, you’re singing the praises of Uncle Zuck. That’s all I’m saying. Listeners, watch out for Cameron. He’s been caught off guard.
[01:12:02] Cameron: And to be fair, he’s pretty upfront in most of the interviews I’ve seen with him lately, saying, Listen, I’ve got selfish reasons for doing open source AI. Um, and his thing is, his experience has been that Facebook has really struggled, particularly in the mobile web, in the last 15 years, because the mobile web has largely been controlled by Apple, as we know.
[01:12:28] Cameron: And Apple has the Apple tax, and they will allow certain things to happen on their devices and they won’t allow other things. And he feels like they’ve been hampered by Apple’s control over a closed ecosystem. He doesn’t want that to happen again with AI. And he can see that, uh, you know, if ChatGPT, if OpenAI, is the future of AI, uh, he could end up in the same situation.
[01:13:00] Cameron: So he’s spending a lot of money. I mean, I saw a recent interview with him, uh, done by Jensen Huang, CEO of Nvidia, where I think Jensen was saying that Meta now have like 600,000 H100s, uh, Nvidia
[01:13:19] Steve: valued at 10 grand each, aren’t they?
[01:13:22] Cameron: I’m not sure. 10, something like that. 50, maybe a hundred. I’m not sure how much the H100s are worth, but it’s a lot.
[01:13:29] Cameron: So they’re spending billions, and they’re making an open source product. Now, one of the things that, you know, you often hear from people who are not paying close attention to AI is, Oh, all the rich people are going to have AI and it’ll be controlled by a handful of companies like, uh, you know, Microsoft, Google, OpenAI, whatever.
[01:13:49] Cameron: And it’ll just be the mobile web or the web all over again. You know, people like us that are old enough to remember the mid 90s, we did think that the web was going to be this democratic place where anyone would be able to do anything.
[01:14:03] Cameron: And that is true. No, well, it is true. I can build a website. I can write whatever I want on my website. Uh,
[01:14:12] Steve: You could have,
[01:14:13] Cameron: But you could, you could also stand on a soapbox on the corner and just
[01:14:16] Steve: scream to the top and you’d be just as effective as building your own website, right?
[01:14:20] Cameron: Well, that’s true. But I couldn’t, I couldn’t really print my own newspaper, um, and, you know, make it accessible to millions of people, or billions.
[01:14:28] Steve: The odds are still slim, but you’ve got a chance to break through in the system and be known.
[01:14:34] Cameron: But companies spent hundreds and hundreds of billions of dollars to basically make sure that audiences tended to go to their properties and, and not anywhere else. They, they sort of dominated the space. And people, uh, you know, are reasonably concerned that the same thing’s going to happen with AI. But I don’t think it is.
[01:14:58] Cameron: And I do think open source is going to be big. You’ve got companies like Mistral. You’ve got companies like Meta. We don’t really know how the Chinese AI companies are going to play out, but they’re going to be players in this space as well. You are going to have a proliferation. And Zuck’s vision is, it’s funny, because he uses Microsoft as an example of open source.
[01:15:20] Cameron: I never really thought of Microsoft as open source, but you know, when I was at Microsoft 20 years ago, the battle was between Microsoft and Linux, Linux being the open source alternative to Windows and the Microsoft ecosystem. But Zuck’s view is that, yes, but anyone could write an application that would run on Windows, and Microsoft wouldn’t stop you from having an application that ran on Windows.
[01:15:48] Cameron: I mean, Marc Andreessen might have something to say about that, because we made it difficult for Netscape to run on Windows in the late 90s. But for all intents and purposes, unless you were in Microsoft’s crosshairs like Netscape were, because it was an existential threat, you could write an application and you could sell that application and Microsoft wouldn’t put a tax on you for selling your Windows application.
[01:16:15] Cameron: It didn’t have a marketplace that you had to get permission to get your thing into. You could distribute it.
[01:16:22] Cameron: So it was open source in that it was an open system.
[01:16:26] Steve: You could get your, it was on a five inch disc, and load that piece of software to run.
[01:16:32] Cameron: Apple’s is a closed system, and for good reason. Apple did it to try and prevent hackers and malware and all those sorts of things, and they go, okay, well, we have to manage this, and if that comes with a fee, it comes with costs, and you have to share your revenue with us because we’re managing this system.
[01:16:50] Cameron: So it’s a safe and a secure environment for users. I’ve never had a virus on my iPhone or my iPad. Never had to worry about it. You know, I don’t really give it much thought. But he’s talking about a world where there are billions of AI agents that are built on top of open source AI models that
[01:17:15] Cameron: he is building, along with others. He’s gonna build them, he’s gonna make them available, anyone can take them, you know, check the security or improve upon them, make them bigger, make them smaller, dedicate them to subject X, Y, or Z. They can be hosted on AWS or on a Google data center or on your own data center, or you can run them with a small footprint on your phone or your iPad or one day your watch or whatever it is.
[01:17:47] Cameron: So Meta are putting God knows how much money, billions and billions, and probably hundreds of billions over the next few years, into a future where we have extremely powerful AI tools that are open source. And I think that is gonna really be a hugely
[01:18:11] Steve: How does he leverage that? Does he end up renting out infrastructure like an AWS? It becomes, you, you subscribe to the Meta AI engine to create whatever you need,
[01:18:22] Cameron: I think there’ll be revenue opportunities for them in doing that and probably providing hosted opportunities for businesses that don’t want to do it themselves. They’ll have revenue streams in there, but I don’t think he really knows. The sense that I get from the interviews with him right now, either he doesn’t know yet or he’s not revealing it yet.
[01:18:45] Cameron: He probably has ideas, but it’s just, right now, the battle is for, yeah, getting, having the tools available
[01:18:54] Steve: Have it, have the powerful tool, and then work out the revenue stream later. It’s a little bit like 1999, when everything was free, and it’s like, well, we’ve just got to build it, get people using it, and then we’ll get the revenue model. Which both Google did, Facebook did, all of them found their revenue models.
[01:19:10] Steve: Albeit they went to the horrible advertising model, which is, you know, the original sin of the internet, you know, don’t like it, but, uh,
[01:19:19] Cameron: but yeah, so imagine a world where we have, uh, open source AI, millions of people working on it, millions of AI agents working on improving it. And then when we figure out nanotech, we have millions of open source AI agents building the plans for open source nanoreplicators, nanofabricators, giving us the plans for open source robots.
[01:19:48] Cameron: I’m reading this book, and I’ll wrap up with this because I know you’ve got to go, um, Radical Abundance, uh, Eric Drexler’s book from about 10 years ago. He’s the father of nanotechnology. In 1986, he wrote,
[01:20:00] Steve: Radical Abundance. Got it.
[01:20:04] Cameron: Yeah, in 1986, he wrote
[01:20:05] Cameron: Engines of Creation, which was sort of the first book on nanotech, and, uh, his last one, which is 2013, Radical Abundance. He has this model, which I thought was fascinating. He said, I want you to imagine how a car is going to get built in the future. It’s being built in something the size of a household garage.
[01:20:24] Cameron: And if you look at the garage, you see some platforms that go up and down and a whole bunch of robotic arms that are assembling things, like you would see in any sort of car factory. But the back wall is made up of shelves that have devices, they’re boxes, some about the size of a household microwave, some bigger ones about the size of a washing machine, all up the wall.
[01:20:50] Cameron: And if you look inside of those, they have smaller versions of the garage with smaller robot arms that are assembling components. When they assemble their components, they get pushed out of a door in the machine, where the bigger arms pick them up to assemble them together. But inside the microwave and the washing machine, there’s a smaller box with smaller robot arms that are making smaller parts.
[01:21:12] Steve: It’s like the, the entire thing of fractals. I’ve been really watching a lot about fractals lately, and all manners of biology have certain patterns which are unavoidable. I mean, if you look at the image of a brain and a satellite map of America or anything, it looks like a brain, you know, where all the cities are and the neurons and the lights and where things are flashing, where there’s more activity, which is, uh, the same as in the brain where there’s nodes and neurons, going through to an MRI scan.
[01:21:41] Steve: It looks really similar. Large language models look really similar to a brain. And the idea of, you know, fractals, uh, the way we create stuff, very, very similar all the way through. And it’s almost like we have this inevitability that we will replicate nature in the way things are built, even at the nanoscale, and then nanoscale all the way up to manufacturing a car.
[01:22:02] Steve: It seems like there’s this fractal orientation. I don’t know if he has that in the book, but it, it seems really interesting.
[01:22:07] Cameron: He doesn’t, he doesn’t mention fractals, but that’s great. I was thinking of matryoshka dolls, but fractals is a better, more exotic explanation. He says, Picture yourself standing outside the final assembly chamber of a large product APM system and looking in through a window to view the machines at work in a space the size of a one car garage.
[01:22:27] Cameron: To the right, you see an exit door for products ready for delivery. To the left, you see what look like wall to wall, floor to ceiling shelves, with each shelf partitioned to make a row of box shaped chambers. In the middle of the garage sized chamber in front of you is a moveable lift, surrounded by a set of machines.
[01:22:48] Cameron: And so, basically, the little microwaves have little ones, have little ones, right down to molecular components that it’s building from scratch based on some inputs, which is, you know, bottles of nitrogen, oxygen, hydrogen, carbon, whatever it is, that it’s probably getting from deconstructing last week’s car model that you got bored with.
[01:23:12] Cameron: So it’s building you a new one for today. And it builds the molecular components and feeds them into the next box, which, you know, adds them together, which feeds into the next box, which adds them together, assembles them all the way up.
[01:23:22] Cameron: You’d just scoop up some dirt and get everything you need.
[01:23:27] Cameron: Oh, well, you need to sort the bits from the
[01:23:30] Steve: Robots are going to do that. Robots will do everything.
[01:23:32] Steve: yeah.
[01:23:34] Steve: because
[01:23:36] Cameron: So that’s his view of this. But, you know, I believe that we have a chance, I, I, I don’t know what probability I’d assign to this, but we have a chance that we will have open source AI, open source robots, open source nanotech.
[01:23:50] Cameron: And that will lead to sort of this, um, version of the world and a version of an economy that I don’t think we can really envision right now.
[01:24:02] Steve: It removes the scarcity model, which all of the, you know, whether it’s capitalism or communism, they’re all based on the fundamental principle that there’s scarce resources. And this kind of circumvents that to an extent.
[01:24:16] Steve: There will always be some form of scarcity, but there might be different types of scarcity, and I don’t know what they are. You know, scarcity of attention, scarcity of fame, scarcity of recognition. I don’t know.
[01:24:27] Cameron: Kurzweil, in The Singularity Is Nearer,
[01:24:29] Steve: The law of accelerating returns, yeah,
[01:24:33] Cameron: He talks about how the IT industry, the computing industry, has built on the law of accelerating returns over the last 100 years, because each generation of technology helps you build the next generation of technology, but most other fields don’t benefit from the law of accelerating returns.
[01:24:50] Cameron: You know, this, this year’s, um, crop of wheat isn’t going to make next year’s crop of wheat exponentially better. But he said that as these sorts of, this is the, I’ll read this one paragraph and I’ll let you go. What makes the LOAR so powerful for information technologies is that feedback loops keep the costs of innovation lower.
[01:25:12] Cameron: And as artificial intelligence gains applicability to more and more fields, the exponential trends that are now familiar in computing will start to become invisible in areas like medicine, where progress was previously very slow and expensive. With AI rapidly expanding its breadth and capability during the 2020s, this will radically transform areas we do not normally consider to be information technology, such as food, clothing, housing, and even land use.
[01:25:42] Cameron: We are now approaching the steep slope of these exponential curves. That, in short, is why most aspects of life will be getting exponentially better in the coming
[01:25:52] Steve: It’s the idea that everything becomes an information technology business. And once we started to use computation in many different industries, we got this horizontalization where things go across. And it actually is easier to see that that becomes possible, especially once you start to organize things at the nano and atomic level, then everything becomes organizing things at an atomic level.
[01:26:15] Steve: So then all of those industries have that positive feedback loop, and the AI just becomes the scaffold, to use a term that we’ve already used today, that creates that possibility.
[01:26:26] Cameron: That is Futuristic, Episode 29. Thank you, Steve. And thank you, Martin Murray, for coming on. That was fascinating. Really appreciate your time,
[01:26:33] Steve: Thanks so much, Cam.
In this episode of The Futuristic, hosts Steve Sammartino and Cameron Reilly explore cutting-edge topics in technology and culture. They dive into AI’s role in creativity, discussing AI-generated music compared to human-made tunes, and examine the rise of local micro-AI models like OpenAI’s GPT-4o mini. Steve talks about the transhumanist movement and ethical concerns over chip implants in humans, while Cameron shares his latest AI-assisted coding projects. They analyze SpaceX’s Starship launch and Elon’s vision for multi-planetary life, as well as nanotech breakthroughs in cancer treatment. The episode also covers the revolutionary potential of Apple’s Vision Pro in creating immersive entertainment experiences and its applications in various fields. The hosts end with an overview of technological advancements and the opportunities they present.
00:00 Introduction and Welcome
00:14 Corporate Critique and Personal Reflections
02:33 Transhumanism and Media Manipulation
07:26 Exploring Assembly Code and AI Agents
12:40 Coding Challenges and Subscription Creep
22:42 GPT-4o mini drops
29:06 RunwayML Gen 3 alpha
40:54 Orson Welles and the Future of Filmmaking
41:48 Starship Super Heavy launch into space
45:41 Debating Multi-Planetary Species and Space Exploration
50:27 Nanorobot with hidden weapon kills cancer cells; Self-assembling and disassembling swarm molecular robots via DNA molecular controller
01:01:25 AI in Music Production: A New Era; Rick Beato’s critique of modern music; Music producer Dan McClellan tests Suno and Udio
01:14:29 Apple Vision Pro: A Game Changer in Entertainment
FUTURISTIC #28
[00:00:00] Cameron: Just have to be different. Welcome to The Futuristic Episode 28, I believe. We’re recording this on the 19th of July, 2024. Mr. Sammartino, Mr. Steve Sammartino, won’t you ever quit? You’re back.
[00:00:20] Steve: Don’t you make me sick with your fraudulent behavior. Ooh, ooh, Mr. Sammartino, talking to corporates, tell them how to do it. Ooh, ooh, Mr. Sammartino, you gotta pretend to care when you’re selling them things. What you believe is what they selling, make you think that buying is rebelling. That was a little bit of Zack de la Rocha, Rage Against The Machine.
[00:00:46] Cameron: I love it, love it. With Del the Funky Homosapien mixed in.
[00:00:51] Steve: Yes.
[00:00:52] Cameron: How you doing,
[00:00:53] Steve: Dels! It was de la Rocha and Del the Funky Homosapien, we double Delled.
[00:00:59] Cameron: Double Del. Della, in Italian, of course, meaning of the, yeah, della Rocca, of the rock. Zack of the rock, as he was, uh, and is. I lost track of what he’s up to these days, but huge Rage fan.
[00:01:18] Steve: I’m a huge, I listen to them more than any other band. And, you know, I feel like I’m a fraud. Because basically, I sing those lyrics with such passion, and then take big checks from corporations. But I always can come back to say it’s not my fault. Cameron told me I’m just the accumulation of molecules temporarily and all of my thoughts, I have zero control over them.
[00:01:44] Steve: So it’s amazing. Thank you for that get out of jail free card.
[00:01:50] Cameron: I, one of the, um, one of the, uh, motivations for me leaving the corporate world, leaving Microsoft 20 years ago, holy shit, this month, this, like two weeks ago. Wow, I missed that anniversary. Uh, it was the beginning of July 2004 when I left the corporate world and, um, part of it was listening to Rage Against the Machine, quite honestly.
[00:02:12] Cameron: It was part of my conversion to being a lefty, uh, listening to Rage Against the Machine. Anywho, uh, we’re not here to talk about that, Steve. Uh, we’re here to talk about technology, the future. Uh, the future is now. What is happening in your life vis-à-vis the future, Steve?
[00:02:33] Steve: Had a spot this week on The Project. For anyone who doesn’t know, there’s this thing called free-to-air TV, which has 17 people watching it across Australia, yeah, over the waves, the airwaves. And they rang me that day and said, look, we need someone to give us a synopsis on what’s happening with, uh, the transhumanist movement, using technology to enhance, uh, physical and mental capabilities, and, uh, longevity escape velocity.
[00:03:11] Steve: And they asked me a number of questions. I did a talk to camera. They came to me, and they had someone coming on who had, which is a bit of an old trick now, had a few chips installed under his skin. And it’s so interesting. They did two things. First, they asked me a whole lot of stuff and questions about the topic.
[00:03:27] Steve: And then I was watching Waleed Aly, uh, talk about it on the, uh, on the preamble, where they had some visuals and then cuts to me doing some soundbites. He stole all my good stuff and put it as his words. They’ve been merging with the machines and a whole lot of nice verbiage I’ve written about and said. Anyway, I guess that’s the price of getting coverage.
[00:03:46] Steve: Uh, and then they wanted me to say, Can someone with chips installed in their body hack into someone’s phone or computer? I’m like, no, not at the moment. No one can. And they’re like, yeah, but if, and they were just trying to get this sound bite. They asked me the same question 10 times and I said, look, it’s not going to happen.
[00:04:07] Steve: I’m not going to tell you something that isn’t the case. And so you could see when they did the edit that it was, uh, and shout out to all the people from channel 10 that were tuning in, uh, you could see what they were trying to get. And then they had this guy on there. He was a classic charlatan, uh, you know, in my view.
[00:04:23] Steve: People are just doing this.
[00:04:24] Cameron: Do you remember?
[00:04:26] Steve: No, I can’t remember.
[00:04:27] Cameron: The reason I ask is because 19 years ago, uh, in the early days of my podcasting, I went and did a podcast with a guy in Melbourne that had a chip inserted in the webbing between his thumb and his forefinger that he was using to unlock his door. Just wondering if it’s the same guy.
[00:04:46] Steve: Yeah, I’ll bring him up. Here it is. Here. I’ll just find, I’ll say his name.
[00:04:51] Cameron: trying to find my archive podcast to see if I can
[00:04:54] Steve: His name’s Leno. L-E-N-O-N-O-E. And, I mean, Transhuman, he’s got that as his subtitle there. Uh, what was interesting was that they asked me in the interview, which is during the day, and then they had my spot with voiceovers and visuals, and then they interviewed him. And, you know, what I found was interesting was that he said he could hack into something with what’s in his body, and they said, can you do it?
[00:05:24] Steve: And he was, like, he turned on his phone. He had RFID and NFC chips in him, right? Which, you know, a lot of people have done. Yeah, you can open the door, congrats, whatever. Uh, and I said, well, he can’t hack into anything. And then he was trying to infer that he could, and I said, well, then how do you do it?
[00:05:40] Steve: He said, I could scan your card and then get your card and do a copy and, and a URL redirect. I’m like, well, yeah, you’re still going to need something to hack into it. And he said, well, I’d use my phone. So he was trying to make out as if he could hack in through something that was in his body, when he clearly couldn’t.
[00:05:59] Steve: And they kind of fell for it. Um, but the most interesting bit for me, and it circles back to the attention economy, where that is the most valuable resource in the global economy now. The biggest companies, you know, are based on selling attention. Uh, they asked him, so why do it? And he said, well, why not?
[00:06:17] Steve: But his real answer was, so I can be on TV. So he just didn’t want to say that.
[00:06:24] Cameron: Yeah.
[00:06:26] Steve: Like, you know what I mean? Like I get annoyed with someone who just does something for attention rather than creating some value, you know, anyway, that’s my view. That’s my whinge and that’s it. And that’s my week.
[00:06:37] Cameron: Uh, dude, sounds like one of my conversations with my, uh, TikToker son. Like, don’t you feel like you should be adding some value? He goes, well, entertaining people is value.
[00:06:49] Steve: That’s true. I don’t disagree with
[00:06:50] Cameron: can’t argue with that.
[00:06:52] Steve: Yeah. And, and, and this other guy is opening people’s minds potentially to future possibilities of, you know, it’s tongue in cheek when, when I say that. Um, but I think now, uh, the bar for what can generate attention is a lot lower. And that might be a function of the fragmentation of possibility and, uh, the desire to create attention, and the fact that everyone’s got a megaphone and everyone’s the CEO of their own personal media corporation now, that, that you get a lower bar.
[00:07:26] Steve: Um, two other really quick things I did this week. I was in Warrnambool doing a presentation, and it was the only time I’d ever had a room of 300 people where only one had used or heard of ChatGPT, which was interesting. I’ve been in other rural locations and it’s usually 50/50. That was a real surprise.
[00:07:42] Cameron: Yeah.
[00:07:43] Steve: So again, we do live in a bit of a bubble, just a bit of a reminder. And then the other thing I’ve been doing this week is trying to understand assembly code. Um, I want to learn how to make a video game where I can do the assembly code and the controls, like a, a simple video game, not, you know, a replication using Python or something, but in the traditional code. I just, I want to understand the deeper layers of it, because code is such an abstraction now that we’re up to the level of natural language processing.
[00:08:15] Steve: I want to really get down to the ones and zeros and off and on switches. I’ll just be mucking around with understanding assembly code, using ChatGPT to understand assembly code. There you go, full circle.
[00:08:26] Cameron: Wow. Well, speaking of which, what is assembly code exactly? Let’s go to ChatGPT and see what it has to tell us. Assembly code is a low-level programming language that is one step above machine code. It is specific to a computer architecture and is used to write programs that are directly executed by the CPU. Here’s a breakdown of what assembly code entails. One: low-level language. Assembly language is close to the hardware and allows programmers to write instructions that the CPU can directly execute.
[00:09:02] Cameron: Unlike high-level languages like Python or Java, assembly requires detailed knowledge of the computer’s architecture. Stop it there. So why would you want to do that, Steve?
[00:09:12] Steve: Oh, cause I’m, I’m curious to get more of an understanding of how things work at a deeper level. I’ve been thinking about that a lot. And I think the evolution of large language models has made me think more deeply about how things work, because I think we’re getting to increasing levels of abstraction when no one really understands anything.
[00:09:31] Steve: And I just thought it’s, it’s almost like growing your own vegetables, but in a computational sense, and going back to the seventies. And I’ve got the big arcade machine here, which runs on a CPU, so it just runs on a, on a simple laptop and screen. Um, but I was just interested in how much memory you need and how much RAM, and how it can work and how it can fit it all in there.
[00:09:56] Steve: Assembly code takes up far less space, so you can do a lot more with a lot less. So there’s just something interesting about it. I don’t know, I was just curious. It started off with asking what code a traditional 8-bit arcade game is written in, and then I just kept going down, you know. And then I wasted two days when I could have been generating income for my family.
[00:10:14] Steve: Sort of understanding that.
[00:10:17] Cameron: It reminds me, you’re talking about abstraction layers. I was listening, uh, watching actually, uh, an interview with Eric Schmidt recently, um, who was the CEO of Google for 20 years, give or take, and, um, they were talking about AI among other things. And, um, he was saying that, like, what he sees as the near term future for AI is we’re going to have, you know, super advanced AI very quickly, AGI, et cetera, et cetera.
[00:10:47] Cameron: Um, he, and he was talking about AI agents, and we’re going to have AI agents in everything, and then those, you know, AI agents are going to talk to other AI agents. He said then what’s going to happen is the AI agents will realize that using English, or even code, human created code, to talk to each other is inefficient.
[00:11:08] Cameron: So they’re going to create their own language to talk to each other, which humans won’t be able to understand. And he said, and that’s the point where we have to pull the plug.
[00:11:17] Steve: Right.
[00:11:18] Cameron: said, as soon as they start talking to each other in languages we can’t understand. understand, and there’s no transparency about what the agents are talking to each other about, pull the plug.
[00:11:28] Steve: It’s really interesting, my father and I, when we used to speak Italian sometimes, if we wanted to talk about someone, we would go to Italian. So I just feel like they’ve worked out that if you go to another language, you know, and you do it with your kids as well, when you, when they’re really young, you use complex language that they haven’t quite got the vocabulary for yet.
[00:11:47] Steve: Um, and I, and I agree with the sentiments there of Schmidt. Uh, and that’s a first principle. The first principle is be worried if you don’t understand. Now, it doesn’t mean necessarily that the computers have nefarious intentions,
[00:12:01] Cameron: Hmm.
[00:12:01] Steve: but if you can’t understand it, um, then,
[00:12:05] Cameron: Then maybe they
[00:12:06] Steve: then, well, well, then, well, then you won’t know.
[00:12:08] Steve: So, so the first principle is make the, the, uh, yeah, the, the reparations.
[00:12:17] Cameron: Sorry, I’m just changing my video. My video was flashing at me for some reason, so I’m turning it off and on and
[00:12:22] Steve: I’m getting a thousand text messages, which is fine.
[00:12:25] Cameron: now I’m all over the side here, which is weird. Um, yeah, turn your damn phone off, Steve. What are you, amateur? What are you doing?
[00:12:34] Steve: Yeah, I am. If there was anything that I am, amateur is certainly it.
[00:12:40] Cameron: Um, well, Steve, uh, I’ve been coding a lot, um, in the last couple of weeks. So, um, without boring people with the details, Spotify, uh, fucked me severely about a week ago. Basically all of the MP3 hosting for all of my podcasts, um, sits with Spotify now, one of their subsidiaries, Megaphone, and there was a whole bunch of legacy links there from a company they acquired, an Australian company called Whooshkaa, a couple of years ago. And they arbitrarily turned all of that off last week, and, uh, I had about 500 episodes across my many, many shows that I produce just go dark
[00:13:23] Steve: The Sammartino Method. The Sammartino Method podcast is gone.
[00:13:27] Cameron: yeah, I had to fix that one. That was the big one I had to fix. That was the first one I went to fix. But, um, so that, so, and, uh, you know, it’s, had to update 500 posts.
[00:13:38] Cameron: I’m still in the process of it, and each post has a bunch of different mp3 links, because a lot of my shows have lots of different feeds for different levels of subscribers and whatever, and it was taking me A minute and a half, I timed it to update every post, so multiply by a thousand, that’s a lot of time.
[00:13:58] Cameron: Uh, so I wrote some code to automate a large chunk of it. I still have to do 50 percent of it, but I wrote some code to automate 50 percent of it. And that was a big coding project late last week. Thank you to GPT. I tried using Claude 3.5 Sonnet. It was a disaster, partly because I’m not paying for it, so, you know, I ran out of, uh, free, um, chats very quickly.
[00:14:22] Steve: Well, you got to worry about subscription creep, Cam. Subscription creep is a big financial issue, I think.
[00:14:29] Cameron: I’ve been cutting off all my subscriptions in the last week or so, actually. All streaming,
[00:14:34] Steve: I did a subscription audit. Are
[00:14:36] Cameron: Yeah, that’s what I’m doing too, cutting stuff off. Um, but anyway, I, I got the code working and, um, oh, I was impressed. Like now I, I just, I run some code and it says, what’s the link you want to update?
[00:14:48] Cameron: And I give it to it and it does more than 50%, probably 90 percent of it is automated. I need to finish the last 10%, which just seemed too hard to automate. Um, but I, again, I was impressed that I could write code to do that, you know, pre AI, no fucking way I would have known how to do that. And in a less, um, complicated sense, I’ve been using it more and more for film and music recommendations.
[00:15:15] Cameron: Like, uh, give me a cult film that I might like. Or, um, you know, give me an artist like Lou Reed that I might like. This video is right. Um, and it’s been doing a pretty good job. I actually discovered via this, uh, a really great film. And a really great band that I’d never heard of just by getting GPT to give me some recommendations.
[00:15:45] Cameron: So that’s been fun.
[00:15:46] Steve: you going to keep them a secret or are you going to tell us all what they are?
[00:15:50] Cameron: Uh, well, now that you’ve insisted, I will happily tell you. The film is a 1976 film called Stingray, written and directed by a guy called Richard Taylor, who, um, I think did Fright Night after that, but he’s not known for much. Actually, 1978 according to IMDb. It’s a low budget car chase shootout film.
[00:16:24] Cameron: I think probably in the vein of, uh, you know, your, um, uh, Cannonball Run slash, uh, Smokey and the Bandit era comedy action. Premise is great, though. Starts off with a drug deal going down in an alleyway in the middle of the night. Two guys with the coke, two guys with the money. Goes awry. The guys with the coke kill the other guys, who were trying to,
[00:16:50] Cameron: you know, uh, betray them. They’d planted a tracker on them. Um, and then the, the two guys that are still alive, have the money, have the coke, they take off. They hear cops coming, because there’s been a big shootout. They drive into a car yard in the middle of the night and deposit the briefcase with the money and the drugs in a Stingray convertible, behind the seats, you know, the, the boot of these sports cars.
[00:17:15] Cameron: was like, you’d flip the seat down. And then they get arrested by the cops. Anyway, the cops find them, can’t pin them with anything because they’re not holding anything, and they get out a day later. They go to the car yard to see two young guys buying the car and driving out of the lot. So they chase these guys down, and the two young guys, uh, like college age guys, in the Stingray think these guys want to race, so they start fanging down the streets. A cop car pulls out to pull him over for speeding, does pull him over for speeding, uh, on a, on a country lane somewhere, and the gangsters come up behind him and mow the cops down.
[00:17:55] Cameron: But the cops had called in that they were chasing a Stingray. The rest of the cops think the kids are in the Stingray. So now the cops are after the kids in the Stingray, and the gangsters are after the kids in the Stingray. But the best thing is, the leader of the gangsters is a woman who’s dressed up as a nun to disguise herself from the cops.
[00:18:18] Cameron: And she’s this real hard ass, fast talking woman. Strong girl boss, as my wife called her, just a great premise. The acting’s sort of a little bit dodgy, very low budget, but real stunts, real car chases, real,
[00:18:36] Steve: and people dressed up as nuns. I think if you’ve got a stingray and someone pretending to be a nun, I feel like. These are the ingredients of greatness. I’ve always said that, Cameron. I’ve always said that.
[00:18:50] Cameron: It’s only got a 5.6 rating on IMDb, but I think
[00:18:53] Steve: Well, what would IMDB know? What would IMDB know? Let’s be honest about that.
[00:18:58] Cameron: I got to say, my wife and I enjoyed watching this more than we’ve enjoyed watching much lately. Cause it was, it’s just, there’s something about
[00:19:04] Steve: trying to tell me that this is better than some animated movie that’s been made 76 times? Is that what you’re trying to tell me? That real people in real situations is better than a, yeah, an Avengers film? Well, I mean, how dare, I mean, what were you thinking?
[00:19:19] Cameron: you get the sense of like, it’s made for driving kind of a movie from the late seventies, you know, and it’s, it’s just fun. You can tell that who the people making it were just having fun. Like it was just like, fuck you. We get to make a movie. This is awesome.
[00:19:32] Cameron: The music that I discovered was, um.
[00:19:34] Steve: that fun, having fun, right now everyone is thinking about what everyone else wants to see instead of what they want to make. Like that, that for me is what’s missing culturally right now. That’s all I’m saying. I mean, that’s what the Avengers is. It’s like, what can we sell rather than what’s worth making?
[00:19:53] Cameron: Yeah. Well, the music I discovered, uh, was the Jim Carroll band, you know, Jim Carroll, the basketball diaries. You ever seen that Leonardo DiCaprio film
[00:20:01] Steve: I haven’t seen it. I know of it, but I haven’t
[00:20:03] Cameron: mid nineties, based on an American poet, Jim Carroll, who was also a drug addict. Um, and, um, it, it turns out I never knew this.
[00:20:12] Cameron: I knew who Jim Carroll was. I’ve seen the film, uh, didn’t realize that he had a punk band. Uh, called the Jim Carroll band. ChatGPT got me onto them. Uh, listened to it. Fantastic. Sort of a cross between Iggy and the Stooges and Lou Reed. Cause again, like Lou, he’s a poet that started, like Leonard Cohen, a poet who started writing rock songs and putting his poetry into it and, you know, like Lou, a lot of drugs and, you know, sort of, you know, dirty, dodgy lifestyles and that kind of stuff, but with a Stooges y kind of feel to it, really, really good stuff.
[00:20:49] Cameron: Anyway, so. But yeah, I just go to ChatGPT and I go, Hey, you know, tell me, I like this cult film. I like these sorts of films. Give me some recommendations of things that I may not have seen because they’re not very well known and that are available on Tubi, which is where I watch a lot of these cult films now.
[00:21:07] Cameron: Do you stream Tubi? Use tubi.tv,
[00:21:10] Steve: No, I
[00:21:11] Cameron: T U B I, it’s free. It’s ad supported, they put in ads every 15 minutes or so, um, and it just has a great collection of, you know, B rated, D rated movies from the 20s onwards, my kind of stuff, you know, schlocky, B, C, D rated, crappy, straight to video movies from the 70s, 80s and 90s, which is my favorite kind of films usually, because they’re just, again, fun.
[00:21:41] Cameron: Anywho,
[00:21:42] Steve: I like, it’s funny because every recommendation engine that exists. Exists because you need that quantum of support for it to work financially, that kind of original sin of the internet where you need a lot of traffic. And there’s something beautiful about asking an AI for things that are quirky and nuanced, because even if you Googled that.
[00:22:04] Steve: You’ve still got to get someone who’s going to give you quirky and nuanced stuff that’s still in their top 10. So by default, it’s very, very difficult to find these types of things. Whereas an AI engine that can scour the entirety of the internet can find something that is, you know, a little island off to the side that the way search is structured just won’t find.
[00:22:26] Cameron: and you can say to all, well, tell me about it, what’s good about it. This director, Richard Taylor, what did he do? You know, who’s in it? What have they done? You can, you can have a conversation with it,
[00:22:36] Steve: You get a flavor profile for what culturally what you’re looking for. Yeah, that’s why
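The recommendation workflow Cameron describes can be sketched as a reusable prompt. A minimal Python sketch follows, assuming the OpenAI chat-message format; the helper name, prompt wording, and streaming-service filter are illustrative, and actually sending the request (model choice, API key) is left out:

```python
def build_recommendation_messages(liked_titles, service):
    """Build a chat payload asking for lesser-known picks in the vein
    of some liked titles, filtered to a given streaming service."""
    prompt = (
        "I like these cult films: " + ", ".join(liked_titles) + ". "
        "Recommend lesser-known films in a similar vein that I may not "
        f"have seen, and that are currently streaming on {service}. "
        "For each pick, briefly explain why it fits."
    )
    return [
        {"role": "system", "content": "You are a film recommendation assistant."},
        {"role": "user", "content": prompt},
    ]

# Example: the kind of query discussed in the episode
messages = build_recommendation_messages(
    ["Stingray (1978)", "Smokey and the Bandit"], "Tubi"
)
```

The same pattern works for music ("give me an artist like Lou Reed"); only the prompt text changes, which is what makes a conversational model more flexible here than a fixed recommendation engine.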
[00:22:42] Cameron: well, tech news, Steve, moving on to that, uh, just today. OpenAI, OpenAI, uh, announced, released GPT-4o Mini. Uh, according to their press release, the cost per token of GPT-4o Mini has dropped by 99 percent since text-davinci-003, a less capable model introduced in 2022.
[00:23:12] Cameron: So they’re talking about the massive, uh, efficiencies that they’re starting to get, uh, with some of their newer models. This is, they say, their most cost efficient small model. We expect GPT-4o Mini will significantly expand the range of applications built with AI by making intelligence much more affordable.
[00:23:35] Cameron: Then they go on to talk about its, uh, context windows and how it’s ranking on the MMLU and the LMSYS leaderboard. Uh, there’s a, there’s a big explosion of mini models. Have you played with any of the mini models out there, Steve?
[00:23:50] Steve: I haven’t played with it, but it’s kind of interesting. And to me, it’s analogous. To the idea of six cylinder cars. And we said, you know what? They use a lot of petrol and this idea of AI and energy really having this symbiosis and having similar patterns as well, um, comes to the fore where you need to say, how can we become more efficient and, and really give us the result we need.
[00:24:18] Steve: Do we really need to carry all of this weight and have all of this capability where large parts of the search, you’re just carrying all this extra weight that you don’t need. So I actually found it really interesting, uh, to align this thinking to the fossil fuel era and transportation generally and heating houses and looking for more efficient ways to do things.
[00:24:38] Steve: And we’ve read all of that, that stuff as well, not just on the cost. Cause I, in many ways, cost and efficiency and energy usage are kind of the same thing within this computational context. I mean, you’ve got to buy the hardware of course, but if you use less of it, then you lose less energy, which means it’ll cost you less.
[00:24:56] Steve: So for me, that was the first thing that kind of popped in my mind. I thought, here we are again, And here we are following a similar path in a different technology, and I haven’t really used them, so, but I’d be really interested to see, you know, if it’s dropped by 90 percent of the cost, then the question is, okay, do we Do we still get sort of 80 percent of the result?
[00:25:20] Steve: What is the ratio of cost reduction versus the output? And I haven’t played with it enough to understand that yet. Do you know?
[00:25:27] Cameron: I know from the benchmark reporting that they did in their press release today, they say GPT-4o Mini is better than other small models on reasoning tasks involving both text and vision, scoring 82 percent on MMLU, a textual intelligence and reasoning benchmark, as compared to 77.9 percent for Gemini Flash and 73.8 percent for Claude Haiku. So they’re comparing it
[00:25:54] Steve: They’re comparing four cylinder cars to four cylinder cars. I’m interested to know, how does it go against the big V8? That’s actually what I’m really interested to know.
[00:26:03] Cameron: it’s not going to be as good, or they would be telling you that it was, obviously,
[00:26:08] Steve: Yeah, right. Yeah. You choose your, you choose your measurement parameters based on your capability, right. And where you can win,
[00:26:14] Cameron: They’re saying it outperforms GPT-4 on chat preferences in the LMSYS leaderboard, but, you know. Anyway, the bottom, like,
[00:26:25] Steve: but that’s important. That’s a good point because you got to remember that the models get better and better if, if this small version is better than, you know, a couple of models previously that, that, that’s significant. And they should talk that up even more, I think.
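The "dropped by 99 percent" claim from the press release is easy to sanity-check with back-of-envelope arithmetic, assuming the published launch list prices (roughly USD 0.02 per 1K tokens for text-davinci-003, i.e. USD 20 per million, versus USD 0.15 per million input tokens for GPT-4o Mini):

```python
# Per-million-token input prices (assumed launch list prices, USD)
davinci_per_million = 0.02 * 1000   # text-davinci-003: $0.02 per 1K tokens -> $20.00
mini_per_million = 0.15             # gpt-4o-mini input tokens

# Percentage reduction in cost per input token
reduction_pct = (1 - mini_per_million / davinci_per_million) * 100
print(f"{reduction_pct:.1f}% cheaper per input token")  # -> 99.2% cheaper
```

That lands right on the "99 percent" figure quoted in the episode, at least for input tokens; output-token pricing differs but shows a similar order-of-magnitude drop.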
[00:26:39] Cameron: One of the things that we’re seeing happening is not only the explosion, of course, of, uh, open source models, which are performing really well, but the explosion of mini models that are faster and cheaper, don’t quite perform as well as the top end models, but ostensibly will be able to run locally on small devices eventually.
[00:27:05] Cameron: Um, you know, in theory that Microsoft launched one too, just in the last week or so, that apparently can run locally on phones. I was going to set aside some time to download it and install it on mine, but then, um, my Spotify crisis happened. But you know, it’s, it’s just, we, you know, we saw the Apple announcement.
[00:27:26] Cameron: Um, With Apple intelligence, this idea that in the next year or so, we’re going to have AIs running locally on some of our devices without having to go and talk to anything on the cloud in terms of stuff that it could do locally with local information, your information in particular, you know, is going to be a complete game changer.
[00:27:51] Cameron: I
[00:27:52] Steve: Well, and, and you would suggest that that’s why you need this small model so that we can run on edge computers and our own devices. I,
[00:28:01] Cameron: I think that’s part of it. And, but part of it is also as they’re understanding how LLMs work better, they’re able to get them to do more with less. I was watching a Gates interview just earlier today where he was talking about that, uh, as we understand them better, we’re going to be able to get them to function a lot more efficiently.
[00:28:23] Cameron: And they were talking about the energy requirements and he was saying, look, the breakthroughs we’re going to make in material sciences as a result of AI will far outweigh their short term energy cost to get us
[00:28:37] Steve: I think that too. I think that too. Um,
[00:28:40] Cameron: do too.
[00:28:41] Steve: I remember it might’ve been Nicholas Negroponte, there’s an old tech name, who, when he used to talk to some of his students, he said, I want you to assume that computation and microchips are free, because only when we do that will we push ourselves to make the things themselves cheaper.
[00:28:57] Steve: So you have this recursive efficiency effect where the technology solves its own technological problems and output problems.
[00:29:06] Cameron: Well, speaking of, uh, recent releases, Runway released RunwayML Gen 3 Alpha, an update of their text to video engine. I tried to play around with it, but they wanted me to sign up, and because I’m in a, uh, not-signing-up-to-new-things binge at the moment, I said no. But I’ve seen some
[00:29:30] Steve: Well, that’s because you’re a communist. That’s because you’re a communist and you believe that everything should be free to everyone, right?
[00:29:36] Cameron: I do. And everything will be free. I believe that we will reach communism at some point in the not too
[00:29:41] Steve: Technological communist abundance.
[00:29:44] Cameron: Exactly. As I heard somebody say recently, I don’t know who it was, but they were saying it’s not a universal basic income, it’s gonna be a universal, uh, wealthy income, where we’re headed.
[00:29:59] Cameron: Everyone’s going to have everything. I think it was Elon actually in a talk. He was saying it’s, it won’t be a universal basic income. It’ll be a universal, everything you need, um, is the
[00:30:10] Steve: I could, that, that, that I can buy into. Universal basic income, I, I, you and I, we debated it once on somewhere inside our 28 episodes, um, on it. And, uh, the, the word basic for me was the problem because it’s, it’s, yeah, it’s all, you know,
[00:30:26] Cameron: It’s just like what Elon said to Donald Trump last weekend. Anything you need, 45 million a month for your campaign, done. Not a problem. That’s what he’s putting in at the moment. Did you know that? Um, anyway, have you seen the runway three video?
[00:30:43] Steve: I, I didn’t sign up, but I tell you what, I watched a couple of demos and I actually watched a video demo of someone showing use cases live, because as you know, like we saw with Sora, they can show you incredible output. How they got there is, is the demos are not how you got there, but he went through the process of prompting a few different things to show its capability.
[00:31:06] Steve: And some of them were extraordinary and some of them weren’t so good. When he went through it, he gave it a few challenges and some of them were glitchy and terrible, but some were great to the point where I thought, I’m going to sign up to this. And it made me think with some of the TikTok videos that I do, if I’ve just got soundbite ideas, but I haven’t, I don’t want to talk to camera.
[00:31:24] Steve: I can just do a VO of me and have a video, which explains it in 10 seconds. And it could be incredibly compelling. I was really impressed. And I thought it kind of was at the Sora level. I thought from what I saw,
[00:31:37] Cameron: even better than Sora, I
[00:31:38] Steve: Yeah, so I was really impressed by it to
[00:31:41] Cameron: the human faces that it was doing, like, its ability to capture human faces is, like, stunning. It’s as good as, uh, Midjourney for faces, but as video. It was really impressive.
[00:31:58] Steve: I was super impressed by it to the point where I’m this weekend I’m going to muck around with it and maybe sign up for a month anyway and see how it goes. I did
[00:32:06] Cameron: had the same,
[00:32:08] Steve: on the economics of it all, how, All of the AI tools. And as you know, there’s, you know, every month there’s a number of tools that arrive.
[00:32:17] Steve: They’re all really pushing hard to give you incredible discounts where you almost get 12 months for the cost of three, because they know that you’re going to play with it this month, and next month you’re going to turn it off because a better one comes along. So it’s a real game of cat and mouse with the economics of getting the signups. The monthly versus 12 months pricing was also noticeable, but I’m definitely going to have a play with it for sure.
[00:32:41] Cameron: I had the same thing thinking of TikTok, you know, with the QAV TikToks that I try and do where we’re talking about investing and it’s dull and it’s boring, you know, my sons have often sent me examples of TikToks where people will be talking about something boring in the bottom half of the screen and the top half of the screen is just a Minecraft, it’s like a Minecraft walkthrough
[00:32:59] Steve: compelling.
[00:33:00] Cameron: do with the topic.
[00:33:01] Steve: Or a time lapse of someone cooking or cutting up watermelons or whatever. It’s, they come in my feed too. And it’s, it’s actually, it gives me hope in humanity that. Despite the tools, some people come up with creative ways to draw attention while you’re verbally giving.
[00:33:18] Steve: And it’s interesting to have this dichotomy of visual and verbal non matching to keep your attention, to transpose the important verbal information. That’s really important. It’s, and it’s a, it’s a real innovation. And just while you’re mentioning, you know, the QAV podcast, uh, I’ve got this big list of TikToks I want to do.
[00:33:42] Steve: One day, when I get two days, I’m going to do 50 TikToks in one day and just get all the videos done. But one of them I want to do is on CEO wages, which is a real bugbear of mine, that CEOs have continued to extract largess while everyone else gets less. And here’s the ironic thing, they are not any better.
[00:33:59] Steve: They’re actually worse than the CEOs in the seventies, because the ROI on the average S&P 500 company is a little bit lower than it was, you know, it’s been around about 10 percent for a hundred years. So they’re not better, but they get paid thousands of times more. So this was the prompt that I put into Runway.
[00:34:17] Steve: I put in, create a video of a greedy CEO which demonstrates an increase in wealth and their wages exponentially through the years while the average worker is getting poorer. Show it in a documentary style which would portray it over a timescale of decades, starting in the 70s up until now. It didn’t give me a very good result.
[00:34:34] Steve: But, but, um, but that was on Gen 2. So I’m going to try that on Gen 3 this afternoon. Uh,
[00:34:40] Cameron: yeah, let me know
[00:34:41] Steve: the other thing that I think is cool is how you can drop in a couple of images and get it to extrapolate on those, which I really like.
[00:34:50] Cameron: Well, just, you know, I think it’s just another demonstration of where we’re headed in terms of the ability for regular people, the democratization of the creation of high quality video content. We’re not very far away, I think, from regular Joes like us being able to produce really high
[00:35:21] Steve: Hold up, hold up on the, hold up on the regular. Okay, keep going.
[00:35:24] Cameron: I’m being, I’m being humble. Uh, yeah. Being able to produce really high quality video that, um, you know, can be used in a variety of ways. It can be used obviously for the business communication side of things, but also for entertainment. Um, it’s going to be an incredible world when we can all produce this kind of stuff to go along with our content.
[00:35:49] Cameron: Like for example, Uh, you know, in some of my history shows, you know, where if I’m talking about some historical battle, Julius Caesar or Alexander the Great or et cetera, et cetera, what I have done in the past, like if I’m talking about Alexander’s battle, great, uh, some of his great battles, um, and I’ll say, go watch the Oliver Stone film, uh, the film itself is pretty bad, but, uh, you know, but look at the battle scenes.
[00:36:18] Cameron: It’s pretty bad, but the battle scenes give you a sense of what this was like. For me to be able to create my own video of the battle scene by just explaining to the, uh, engine what I wanted to demonstrate, uh, you know, 100,000 to 150,000 Persians on elephants versus, uh, 20,000 Macedonians on horses, and Alexander’s battle tactics.
[00:36:45] Cameron: To be able to communicate that sort of like as an educational. Tool for, for teachers, for schools, for AI to teach a kid something by producing video. It’s, it’s going to be an incredible world a few years from now with these tools. Really amazing.
[00:37:01] Steve: The point you make there is so important. And I do this at the start of every speech. I said there’s two ingredients for a technology to create.
[00:37:07] Cameron: I think. Your mic just switched
[00:37:10] Steve: I was just playing around with it. Is that better?
[00:37:12] Cameron: Oh yeah. Yeah. Yeah. You’re
[00:37:13] Steve: I was just moving it around. Sorry. Um, I think the, uh, the process that you just mentioned, then democratization is really important.
[00:37:22] Steve: It’s one thing for technology to feel like magic, but when it’s available to everyone, that’s what changes everything. And just looking at Marketing the Messiah, I remember when you did the documentary film and you had some animations that you got made. And I remember, I was lucky enough to see some of the early ones.
[00:37:41] Steve: Imagine the capability now, if you were to remake that film on some of the animations and the way you’d be able to do those quickly, more detailed, um, really with, uh, text prompting and commands would be pretty extraordinary. And I, and I do like that idea that Um, you needed one of two things to communicate previously, even in the social media era, you had to have a good turn of phrase or voice or have access to resources to create video, if you were less compelling verbally, but now you can be a storyteller, uh, if you don’t have access to a whole lot of resources, but you’ve got a point of view, and I really like that.
[00:38:17] Steve: It has
[00:38:19] Cameron: Yeah. It’s going to be an amazing future with all of this. I mean, there’s going to be a lot of, like with all, with all of these things, there’s going to be a lot of crap that gets generated, but yes, there does. You’re right. You know, there will be people. In some cases, existing creative artists, in other cases, people that are brand new to creative arts that will do amazing things with this.
[00:38:45] Cameron: You know, and the example I always keep coming back to in my head is Orson Welles and Citizen Kane. I’m a huge Orson Welles fan, always have been, huge Orson Welles buff. And for people who don’t know the story, when Orson Welles made Citizen Kane in the early 40s, it was his first film, and he had no background in film, never acted on film, never directed, never written anything.
[00:39:13] Cameron: Uh, he had a theatre, theatrical and radio background, and then because he was a big success, particularly with the War of the Worlds on radio, Hollywood came to him and basically gave him a blank check, do whatever you want, full creative control, never done, never, never, never heard of before, never been done before. And he, and he went to Hollywood with a script that he sort of co wrote with, uh, Mankiewicz. And he, he, he got approached by a cinematographer, a guy called Greg Toland, who had just won the Academy Award the year before for cinematography. And he came to, he came to Orson and said, I want to, I want to work with you.
[00:39:48] Cameron: And Orson, who was 26 years old, was like, why? You just won the Academy Award. You could do anything you want. And Toland said, basically everyone in the industry thinks they know how to do this. I want to work with somebody who doesn’t know what they’re doing, so they bring a fresh eye.
[00:40:07] Steve: That’s really
[00:40:07] Cameron: And so, Orson just had these ideas. that he brought from his theatrical days, you know, could we do a shot like this? And Toland was like, we’ll figure it out. Yeah. Yeah, absolutely. We can figure out how to do that. Not like, well, no, no, no, you wouldn’t do that. And so the, one of the reasons why Citizen Kane was such a breakthrough film and is still revered today.
[00:40:29] Cameron: by film buffs today is because it invented a dozen new cinematic techniques, like zooming in through a window into a room, and some of the shots and the use of light and shadow and perspective that Orson and Greg Toland invented that changed cinema. But a large reason for that was Orson didn’t know what he didn’t know.
[00:40:54] Cameron: He came into it completely ignorant of what the rules of film were. Uh, as opposed to Tarantino, right, who takes the best of film and, and rehashes it, takes all the best things and, and throws them into a single film. Orson came at it absolutely ignorant about film, but just had a lot of ideas, and Greg Toland was his technology.
[00:41:19] Cameron: Greg Toland was like, yeah, yeah, we’ll figure out how to make it better. Get it, get it done, right? Imagine who, who, who is the Orson Welles of Runway going to be, of text to video going to be. That’s what I often wonder, right?
[00:41:30] Steve: That’s a, that’s a really good point. And especially when it’s like, because you have to imagine a year or two from now, you could do a 60 minute feature film through
[00:41:43] Cameron: Oh, that will, you know, we will get there, I’m quite sure, uh, with these tools. Anyway, moving right along, um, moving into a completely different space, um, our friend Elon Musk.
[00:41:53] Steve: Pun intended.
[00:41:56] Cameron: Starship! Now, this is actually, uh, happened, uh, I think about a month ago, but, um, Fox and I watched the, um, uh, successful launch, I think it was the fourth launch of one of SpaceX’s Starship Super Heavies went up into space.
[00:42:16] Cameron: And, uh, successfully landed, as did its, uh, booster rocket. Um, it’s, I mean, a stunning video to see, like, these things, so massive, I think like 33 Raptor engines on the bottom of it, all but one successfully ignited. It goes up, the booster breaks away, falls back down to earth, beautifully synchronized, landing, uh, on the water in this case.
[00:42:44] Cameron: And then the ship went up into space for a little while and then came back down to earth as well. It just, like, it’s an incredible piece of engineering to be able to send these rockets that large up into space and then have them return safely on the ground. Whatever you might think of Elon and his politics, Tony Kynaston said to me on QAV this week he thinks Musk is a Bond villain.
[00:43:08] Cameron: We were suggesting that in the next Bond movie they should just get Elon to play the Bond villain as himself. Just go, just, you
[00:43:15] Steve: You’d like that.
[00:43:16] Cameron: He’d love it. He’d be totally down for that, I’m sure. But I was saying to Tony, like, I watched, in the course of about a week, I watched not only the Starship launch, I watched a video where Elon took a guy through the Starship factory, I think they call it the Star Factory, um, for an hour, Showing him all the different parts of the manufacturing process and explaining what they’re doing for the next iteration and why they’re doing it this way, not that way.
[00:43:44] Cameron: Then I watched him on stage at the Cannes Lions conference, the advertising industry conference in Cannes, as the, as the, he’s not the CEO of Twitter, whatever he is of Twitter,
[00:43:53] Steve: He’s the CEO of Twitter.
[00:43:56] Cameron: no, some,
[00:43:57] Steve: Twitter. Yeah, I know. But that’s what I’m saying. He’s the CEO of Twitter. She’s there
[00:44:01] Cameron: the real CEO of Twitter. Yeah. Yeah. Yeah, he is talking about Twitter and I loved it. The guy who was uh, interviewing him, who was the CEO of WPP, the big agency, um, his opening line
[00:44:14] Steve: My old
[00:44:14] Cameron: six months ago.
[00:44:16] Cameron: Oh yeah. You were at WPP.
[00:44:18] Steve: Yeah. When, when I was, when that was when Sir Marty Sorrell was running it before he decided to put some, uh, Interesting transactions at 1am on the corporate credit card, which is insane, which is insane given that he’s, you know, worth 500 million or whatever he was worth anyway.
[00:44:36] Cameron: This guy’s opening question was, Elon, six months ago you told our industry we could go fuck ourselves. Uh, what was that all about?
[00:44:43] Steve: Yeah, it was,
[00:44:44] Cameron: so. Good, good opening question. Anyway, so I watched Elon, you know, have that conversation for an hour, and then a couple of days later, I watched him with a panel of his neuroscientists at Neuralink talk about the latest updates to Neuralink and what they’re doing there.
[00:44:57] Cameron: Like, whatever you think of Elon, to go from talking about super rockets to running Twitter to, you know, uh, putting chips in the brain over the course of a week, you’re like, Jesus Christ, what must it be like in this guy’s head, really? I mean, I juggle a lot of plates, but they’re not like that.
[00:45:17] Steve: Nah, and, and it’s pretty interesting if you think about what he’s done with, you know, media, transportation, space.
[00:45:25] Cameron: Yeah, it’s not even Tesla.
[00:45:27] Steve: Yeah, that’s right. Uh, so. Yeah, there’s nothing you can say other than I think the kind of innovations it makes are nation state style innovations and, and, and should be the bastion of that.
[00:45:39] Steve: They’re not. They should be. Um, that said, you know, the space one for me is super interesting, and I’ve always held this position, and Bill Maher did a great piece on it about three years ago. It’s like, I’m really interested in what they call the Low Earth Orbit economy, or the low orbit economy, for Starlink and what he’s doing there.
[00:45:59] Steve: I think that’s incredible. I’m really excited about how space gives us new technologies, you know, everything from digital watches to Teflon to, you know, the, uh, where we are with, um, the IT revolution that we’ve had since the 60s. I think all that’s great, but the idea of becoming a multi planetary species is an incredible misallocation of resources.
[00:46:21] Steve: Now I’m hopeful, but yeah, it’s insane. Look, for me, space exploration as it pertains to creating benefits on earth, I’m all for, right? But the idea of becoming a multi planetary species, A, is never, ever, ever going to happen, not in our lifetimes, unless we have a singularity in which all problems can be solved computationally.
[00:46:48] Steve: So that’s the possibility. But Elon’s idea that, by the way, you know, we’re already meant to be, have our first thing on Mars. You know that, like he’s already broken a zillion of his promises there. Um, he’s running late, running late. He needs the singularity for it to happen.
[00:47:03] Cameron: Jesus was supposed to return 2, 000 years ago. People are still hanging out for that, like, cut Elon some slack. He’s just the new Jesus.
[00:47:10] Steve: yeah, well, they’re more patient than I am. I’m really not into the multi planetary species. And I am, I actually know what he’s trying to do. He’s trying to populate Mars himself. He just had another baby in the last couple of months, and he’s really doing some solid work there. And I just love how he calls these babies XYZ or whatever he calls them.
[00:47:27] Steve: It’s my favorite thing. So that’s, that’s his getup. And when he says multi planetary species, I mean, His planet and his species. He just left those two bits out. Uh, but I’m really into the benefits that space and space exploration
[00:47:44] Cameron: why do you think we won’t be a multi planetary
[00:47:47] Steve: I listened to a podcast with an ex NASA scientist who talked about Mars and unless we have general AI or singularity level AI that helps us solve a whole heap of problems and potentially, you know,
[00:48:00] Cameron: Which we’re going to have in five years or less. So
[00:48:04] Steve: okay. All right.
[00:48:06] Cameron: AGI, we’re going to have, we’re going to have AGI, we’re going to have robots that we can send up there to build all the structures that we need for
[00:48:13] Steve: Well, you need to change the climate. You can’t just build the structures. It’s not enough. Anyway, there was a podcast that I listened to on Econ Talk with Russ Roberts, where he had an ex NASA scientist who’s one of the most revered, who’s worked in it for, and he just basically said, let me just start off by saying this, space wants to kill you.
[00:48:34] Steve: And he went through, um, the layers of redundancy that you would require to live on Mars. The potential for error is just so high and constantly a threat. It would be impossible to get all of the resources and the materials up there unless, like you say, you can send robots up there that have all the energy they need, that have all the resources, that can convert the dust or whatever with nano robots to build all of the materials we need, and change the climate and change the gravitational force. He went through it all in great detail, and the moment I listened to that, I’m like, yeah, nah, never gonna happen. The only way it could happen is exactly what you say: if we have some sort of singularity level revolution with robotics to change the climate, the gravitational force, put all the materials up there, have all the energy you need, and be able to entirely reconfigure the planet.
[00:49:34] Steve: Then it’s never gonna happen, and not worth happening. Now, here’s the counter. If we have all of that technology, and it has to be here before it can be there, then let’s fucking fix the problems here, and help the people here, and have abundance here, and we don’t fucking need to be a multi planetary species.
[00:49:55] Steve: The end, rant over, I’m Steve Sammartino, and I’m running for president!
[00:50:00] Cameron: I think, um, Musk’s point is that, uh, in case things go wrong here, we need a backup. We need redundancy.
[00:50:10] Steve: Well, he’s making them go wrong by giving 45 million dollars to Donald Trump. What do you mean in case? He’s fucking creating the problem he’s purporting to solve.
[00:50:19] Cameron: Well, he would disagree, but, uh,
[00:50:21] Steve: course he would. He’s Elon Musk. He’s God. He can have whatever opinion he wants.
[00:50:26] Cameron: Yeah. Well, moving right along, uh, nanorobots, Steve. Um, you know, one of the big prongs of the future, one of the pillars of the future. We’ve got AI, we’ve got robotics. One of the other ones is going to be nanotech. So I’m always keeping an eye on what’s happening in terms of nanotech progress.
[00:50:47] Cameron: This came out at the beginning of
[00:50:48] Steve: can I just, can I just point out where Cameron got this, which I love. So, Cameron, please.
[00:50:56] Cameron: What’s
[00:50:57] Steve: you’re just at the next level. I mean, you’re just going into science daily. You’re just going deep into the journals. I respect that. Everyone else is just fritzing around with TechCrunch and WiredMag, not Cameron Reilly. Cameron Reilly is deep in the journals.
[00:51:11] Cameron: One of the websites I look at almost every morning is Science Daily, particularly the nanotech thread, to see what the latest is. A couple of times a week they have some updates on nanotech research. Anyway, this one came out of the Karolinska Institutet, um, which is one of the foremost medical universities in the world, based in Sweden.
[00:51:34] Cameron: Um, Researchers at Karolinska Institutet in Sweden have developed nanorobots that kill cancer cells in mice. The robot’s weapon is hidden in a nanostructure and is exposed only in the tumor microenvironment, sparing healthy cells. The study is published in the journal Nature Nanotechnology. The research group has previously developed structures that can organize so called death receptors.
[00:52:04] Cameron: Uh, coincidentally the title of my next punk rock album, Death Receptors
[00:52:09] Steve: What was the
[00:52:09] Cameron: on the Surface,
[00:52:11] Steve: That’s great, that’s great.
[00:52:12] Cameron: it’s been so many, I can’t remember so many albums.
[00:52:14] Steve: one. Death Receptors, Punk
[00:52:16] Cameron: Death receptors,
[00:52:18] Steve: Playing at the Bridge Hotel this Friday, Get Undown.
[00:52:21] Cameron: Oh God, the Bridge, that takes me back. Uh, on the surface of cells, leading to cell death. The structures exhibit six peptides, amino acid chains, assembled in a hexagonal pattern. This hexagonal nanopattern of peptides becomes a lethal weapon, explains Professor Bjorn from ABBA. Um, when he’s not leading ABBA, he’s leading the Karolinska Institutet’s Department of Medical Biochemistry and Biophysics.
[00:52:51] Cameron: If you were to administer it as a drug, it would indiscriminately start killing cells in the body, which would not be good. Glad he pointed that out. Would not be good, that’s medical, that’s medical terminology.
[00:53:05] Steve: Would not be good. Highly medical, would not be good. And I love that they’ve, you know, they’ve pulled Mel Gibson into the scientific paper with a bit of Lethal Weapon work. I mean, this is everything we can believe in. Would not be good, Lethal Weapon. Who says
[00:53:20] Cameron: a,
[00:53:20] Steve: medical researchers are boring? Not me.
[00:53:24] Cameron: To get around this problem, we have hidden the weapon inside a nanostructure built from DNA. So, look, I could go on, but, you know, ever since I read Engines of Creation by K. Eric Drexler, sort of in the, I don’t know, early 90s, you know, one of the dreams is that we can build medical nanorobots that can go into our body and kill cancer, fix diseases, by targeting specific cells with specific molecular, um, solutions to whatever’s going wrong with that cell.
[00:54:03] Cameron: The fact that, um, they’ve come up with this and have tested it is really exciting to me, Steve.
[00:54:11] Steve: Well, this harkens back to our Futuristic 27, where we talked about, uh, longevity escape velocity, and the idea that we’re getting advances. And I think the next article you’ve got as well is, you know, self assembling and disassembling swarm molecular robots to, you know, control and, again, kill disease. And this, this is for me the really interesting stuff, the stuff happening at the nano level.
[00:54:40] Steve: And, you know, it gives us a question: what is the computational ability of the future? Is it things just operating at a molecular level instead of off and on switches? You know, there’s this real sense of, uh, biological mastery that’s happening, where we organize things at the molecular level.
[00:55:00] Steve: I don’t know. It’s, it’s, for me,
[00:55:02] Cameron: Want to know a secret?
[00:55:03] Steve: Yeah,
[00:55:04] Cameron: Off and on switches work at a molecular level too.
[00:55:06] Steve: of course. But that’s what I’m saying. Like, instead of us using chips the way that we do now, you know, it’s, I guess, biological machines. Is that how you frame it? Do you call them biological machines? Or, uh, yeah,
[00:55:24] Cameron: Well, you know, the whole field of nanotechnology comes from a paper that, um, Richard Feynman wrote, I think in 1959,
[00:55:33] Steve: paper?
[00:55:35] Cameron: Yeah, from 59, I think. Feynman gave a talk, or wrote a paper, one or the other, which was called There’s Plenty of Room at the Bottom. And basically he was explaining, look, the way that nature works is it builds stuff out of molecules. It builds machines.
[00:55:52] Cameron: All life is basically molecular machines. And there’s no reason why we can’t build molecular machines if we become adept enough at, A, understanding how molecular machines work, and then having the ability to create small enough tools and technologies that can build the molecular machines that we want to build, that will do the functions that we want them to do.
[00:56:18] Cameron: So that kicked off the whole thinking about nanotech. And, you know, we’ve seen lots of advancement. Things like CRISPR, which has been around for 15, 20 years now, are quite exciting. But this is just taking it to a new level. In the story you’re mentioning, self assembling and disassembling swarm molecular robots via DNA molecular controller.
[00:56:38] Cameron: This is from June 14. Researchers from Tohoku University and Kyoto University have successfully developed a DNA based molecular controller that autonomously directs the assembly and disassembly of molecular robots. This pioneering technology marks a significant step towards advanced autonomous molecular systems with potential applications in medicine and nanotechnology.
[00:57:08] Cameron: A newly developed molecular controller, composed of artificially designed DNA molecules and enzymes, coexists with molecular robots and controls them by outputting specific DNA molecules. This allows the molecular robots to self assemble and disassemble automatically, without the need for external manipulation.
[00:57:34] Steve: Yeah. I mean, when you hear this, and I don’t know if you can put an exponential paradigm around it, but if these small experiments can scale, then you have to believe that, let’s call it the problem of death, is something that can be worked around. I don’t know if Google still owns it, but they used to own a company called Calico.
[00:57:58] Steve: Whose, uh, objective as a company was to solve the problem of death. I mean, you talk about BHAGs, like, it’s pretty interesting, right? It’s part of that longevity space. But I think CRISPR, uh, molecular robotics, and, uh, nanobots are going to become more interesting, and I think tap into modern society, where people start to talk about it a little bit more.
[00:58:24] Steve: I mean, we need a couple of real world examples that go beyond mice, that kill a particular type of cancer. And then, and then it’s game on. I mean, I imagine it’s probably not that easy to get funding for these things. It’s certainly not like the AI race, where anyone buying GPUs can raise billions of dollars.
[00:58:43] Steve: Imagine if this had enough attention. And you could solve some real problems like cancer and the revenue sources around that. I mean, goodness me, you’ve got a great target audience. You’ve got all those boomers who own all the houses. They can just, just take a chip off the old block there and pay to cure their cancer and then just buy more property and keep more children out of housing, Cameron, is what they can do.
[00:59:08] Cameron: Around 2007, I interviewed a guy called Dr. Aubrey de Grey, who is a British biomedical gerontologist. He wrote a book in 2007 called Ending Aging. And I remember when I interviewed him, he talked about how the last great disease that we have to fix will be aging. Um, and, um, yeah, he’s still around. Got himself into a little bit of Me Too trouble a couple of years ago, but, uh,
[00:59:43] Steve: Just a little bit of, just a little bit, just a little bit of me too trouble the other day. Anyway, he’s done some great work. Let’s get back to his growing. Sorry. I just saw an opportunity for humor in my humorless life at the moment.
[00:59:56] Cameron: He, uh, made some inappropriate comments to some female colleagues, apparently. But, um, he,
[01:00:03] Steve: Musk did and just have babies with them.
[01:00:06] Cameron: Can I finish this fucking story, or, uh, can you, yeah,
[01:00:11] Steve: Sorry.
[01:00:12] Cameron: You’re the one with the hard out, and you’re not letting me finish the fucking story. And we haven’t talked about Apple Vision Pro yet. Anyway, he’s been working on this for 25, 30 years, um, and, you know, a lot of the stuff that he told me about nearly 20 years ago, we’re getting closer and closer to.
[01:00:27] Cameron: You know, this idea that aging is just a problem that we can solve, right? It’s the breakdown of cells. Um, it’s not something that’s written into the laws of the universe, that we have to get old and die. We should be able to fix that. And I remember he told me back then that he believed, I can’t remember the exact numbers, but he said something like, by 2050 we’ll have the ability to extend the healthy human lifespan to 125, and then within the next 20 years we’ll be able to push it out to 150, and then I think he thinks there is a hard stop around about 200 or something like that. But by then we’ll have, fuckin, who knows, robot bodies, and we’ll be uploading our brains, uploading ourselves into the Apple Cloud with Apple Vision Pro.
[01:01:16] Cameron: Uh, so let’s skip my next story then, Steve, and, uh, talk about our deep dive for this episode: music. Um, do you want to lead this or do you want me to kick it off?
[01:01:29] Steve: I think you kick it off. I’ve been interrupting a lot.
[01:01:36] Cameron: It’s like my wife, cannot let me finish a fucking sentence without interrupting. Um, I saw this video during the week, um, on TikTok, by a music producer, and he was doing an analysis of a couple of the text to music generators out there, Suno and Udio. Let me play a quick clip of him and I’ll just get to his summary.
[01:02:02] Cameron: So he made some tracks. This guy’s a professional music producer. His name’s Mike McClellan.
[01:02:08] Dan McClellan: Well, friends, we’ve arrived. AI tools can now generate music from text. But the question is, is it any good? So last week I posted a video where I looked at suno.ai, which is a website that anybody can use. You type in some text and it generates an instrumental track or a song. I told it to create a reggae song about a Soviet submarine captain who likes cheese.
[01:02:33] Dan McClellan: And boy, did it deliver.
[01:02:41] Dan McClellan: I got a lot of comments on this video. A lot of strong, mixed reactions. A lot of people said something to the effect of, Wow, this is so cool! Looks really fun to use, I’m gonna go make a bunch of stuff now. A lot of people said something more like, We’re doomed, this is the end of creativity, the end of humanity.
[01:02:56] Dan McClellan: Some people said something like, Well, this just means we all need to go see more live music now, you know, let’s bring it back to the old days. And this is the correct take. And a lot of people said, Go try Udio, it’s better than Suno. So that’s what we’re doing today. Today I’m going to give these two AI tools the same prompt and see how they compare and just how they fare in general.
[01:03:16] Dan McClellan: Cool mist falls on the green fern groves, damp and serene in the forest coves, drifting through pines as the owl hoots slow, yeah, yeah. Roaming past rivers where wild salmon play, I see the earth breathing in the fogless day, I say, uh, hey there old big woods, can you rain whisper too? I have to confess that hearing this for the first time sent me into kind of a funk, because I’m a music producer.
[01:03:52] Dan McClellan: I spent ten years of my life working full time in music, people hire me to create what has just been rendered in a matter of seconds. I spend hours recording guitars, setting up microphones, editing takes, recording vocals, arranging vocals, recording strings, recording drums, editing drums, mixing. There is so much that goes into this.
[01:04:14] Dan McClellan: And this just did it in a matter of seconds, and while it’s still not quite up to snuff, it will be very soon. My livelihood is about to change.
[01:04:24] Cameron: So basically his, his professional opinion is, look, the tools are really impressive already, um, not quite ready for, you know, mainstream, but they’re gonna get there.
[01:04:37] Cameron: And I think he’s terrified by what this means for him and other producers and musical artists. You know, I think all of us in any sort of creative industry are terrified at the moment.
[01:04:50] Steve: Well, I wrote about this, I think, about a month ago on my blog. It was called AI Killed the Radio Star, and I said this: artificial creativity, sorry humans, it turns out AI is creative too. Of course it’s derivative, but so is human learning. Uh, within a decade, someone, in parentheses, will create an album entirely with AI, which will be a global super hit.
[01:05:12] Steve: I’m not talking about a song, I’m talking about multiple songs, all of which will be among the most downloaded on Spotify for months. The album will have incredible music videos, the likes of which no one has ever seen creatively, and this album will create a media sensation, where pundits explore this new reality with amazement.
[01:05:28] Steve: But it shouldn’t surprise anyone, given where we already are. Yeah, I wrote a song too. The way I did it was, I used a few different AI tools, then put it in Udio, and I gave it samples of the type of music that I liked. I based it on, like, Nine Inch Nails, and I made it like a cyberpunk song, um, and I even gave it an album name.
[01:05:51] Steve: Uh, the album name was Files of Ideas That Will Never Happen. The band name was called Trading Electrons and the debut single was called Electrons in the Void. Do you want to hear Electrons in the Void?
Ideas flowed like cybersmog, None will happen, life’s a card, Stuffed with dreaming, it’s all a scam, Fade to black, slam the sham, This song’s a glitch, a digital ghost, Another post, another toast, The lawn’s in chaffs, not in code, Evaporates, crashes the node, Makers cry, there’s a machine, Don’t be that drone, stay unseen, Overpriced, never delivered, Just molecules, data slivers, We’re electrons in a void, It’s not our fault, we’re just annoyed, Life’s a glitch, dreams corrupt, Let it crash, don’t disrupt, We’re electrons in a void, It’s not our fault, we’re just annoyed,
[01:06:35] Cameron: Wow, that’s absolutely banging.
[01:06:40] Steve: Are you joking? I actually like it.
[01:06:43] Cameron: No, I like it, that’s banging. I want the whole track. That’s only 32 seconds, where’s the rest?
[01:06:47] Steve: It’s pretty good. Well, because I kind of thought through the pieces of the puzzle that will make it match. I wanted, like, a song that is kind of cyberpunk, anti tech, but you’re in tech, and the irony of it. And then I thought that industrial punk would suit that lyric base and sound.
[01:07:09] Steve: So I think if you. Again, it’s like a movie. It’s how you put the pieces of the puzzle together to create new forms of creativity.
[01:07:19] Cameron: Wow. So tell us, walk us through how you did that again.
[01:07:24] Steve: So I came up with a premise, uh, had an idea around where we’re going technologically. And I had a bunch of words, and then I put those words into ChatGPT and said, give me some lyrics that give it a cyberpunk sound and pace, and use these particular words in the lyrics. So I gave it a bunch of words, about 12 words, and said, give me the lyrics.
[01:07:56] Steve: I played with it and made it go back to the well about four or five times. And then I took those lyrics and said, give me some styles that would suit these lyrics, in music that already exists, and in band names. And then I pumped it into Udio, played with the prompts until I got a style that sounded good, then dumped the lyrics in and got it to make a song with those lyrics.
[01:08:17] Cameron: Brilliant.
[01:08:18] Steve: So just a few more little steps to the puzzle. Do you know what I mean? Yeah.
[01:08:22] Cameron: Yeah. Like this is, you
[01:08:25] Steve: And then those lyrics and then think about those lyrics to, to draw back to something that we spoke about earlier is. You could use RunwayML Gen 3 with the song and those lyrics. And then next thing you know, you’ve got a pretty interesting video.
[01:08:41] Cameron: Yeah. And people are doing that. I mean, I’m seeing examples of this on Reddit all the time. Music videos, ads, made with tools like Runway, um, or, uh, whatever the OpenAI one is, I can’t remember now. Suno? No. What’s the OpenAI text to video thing? I’ve...
[01:09:01] Cameron: Mental blank.
[01:09:02] Steve: Sora, Sora,
[01:09:06] Cameron: Sora,
[01:09:07] Steve: So many. Mate, a few months ago I used to keep this file of, um, amazing AI tools you can use. And it got to about 35 and I just went, yeah, just Google it. At the bottom I said, there’s too many now, just Google it.
[01:09:19] Cameron: Yeah. Yeah, look, you know, with all of this stuff, I’m absolutely convinced you’re right, that there will be hit albums created by AI, and that people won’t know are created by AI. It’ll be like Milli Vanilli. It won’t be until, you know, six months later that it will come out that it was actually completely created by AI and the artist doesn’t really exist.
[01:09:44] Cameron: It was just, uh, some dude in his basement who created the whole story and got it out there with the help of, um, some viral marketing. It’s funny. You know who Dick Cavett was, or is? Um, great American TV host in the seventies and eighties. He had a talk show, but it was, it was like the original podcast.
[01:10:09] Cameron: He would get people on, like celebrities, politicians, whatever, and just talk to them for an hour. Just a real deep, intelligent conversation. And, um, I was watching one of him interviewing Billy Joel in sort of the mid 80s the other day. I just have it on the TV when I’m working. Again, it’s on Tubi.
[01:10:27] Cameron: tv, the whole Dick Cavett archives. Um, and they were talking about Milli Vanilli, and the uproar about Milli Vanilli. And Billy Joel was like, I was really surprised, like, I thought everyone knew that they were just actors and just mimed it. He said, I was surprised. He said, but the people who heard the song liked it enough to go buy the album, and then are upset when they found out they’re fake.
[01:10:50] Cameron: I’m like, you liked the music. You bought the record because you liked the music. Why do you care who sang it? Really? I mean, you liked it when you heard it. So what does it matter if the guys in the video were fakes or not, right?
[01:11:04] Steve: But, well, I mean, then the question is, well, everything is set up in music, it’s just how far on the spectrum is it, but also, and, but it’s also, it’s the deception that people are upset about, the perceived deception, I mean, we should really be more understanding that anything in media is a quasi deception, it’s a, it’s a production, it’s just how much of it is produced and who produced what and how many people are behind the scenes and backup singers and all this kind of stuff.
[01:11:32] Steve: Um, well, I’ll come to that, I’ll come back to that. But it’s a little bit like, uh, simulation theory. People say, well, are we in a simulation or are we not? It’s like, if we are or if we aren’t, it doesn’t make any difference. It just is. If you’re in a simulation, you’ve got a
[01:11:52] Cameron: Well, it kind of does. Yeah.
[01:11:55] Steve: Yeah. Um, Rick Beato is a great, uh, I love his stuff. He’s a music YouTuber. He’s incredible. And he said, you know, is AI music, music? And that’s a real question. Like, what is music now? So you might have different genres, where you have music where you play instruments, and then you have music where you create it with AI tools. I think you’re just gonna get a split, where you’re gonna have maybe two sets of charts. You might even have AI charts, and you might have, uh, analog musician charts. It’s kinda like fast food and gourmet slow food, it’s, you know, canned food
[01:12:31] Cameron: even,
[01:12:32] Steve: Fresh Vegetables.
[01:12:32] Steve: Really?
[01:12:33] Cameron: analog musicians are using auto tuning and stuff like that. So where do you draw the line between using auto tuning and
[01:12:42] Steve: from Steven Commands to, um, the busker on the street with an acoustic guitar. It’s a very, very broad spectrum. And where are we on that spectrum? But the one thing AI music doesn’t do, it lacks human nuance. And the big thing is with quantized music, and Rick Beato has gone through this in great detail.
[01:13:03] Steve: When something is recorded on tape, it has the slight nuance where it’s on slightly on or off beat. The sounds are a little bit different where the microphones are set up. That guitar is the same as the other guitar, but it was from different timber from a different tree. And you had all of these sounds.
[01:13:18] Steve: that had difference. And our ears have had 200,000 years of evolution to be able to pick up slight differences, in your voice versus my voice, this sound versus that. And as soon as things were quantized, everything got down to a small sampled regime. Like, a snare drum is a snare drum, and in Pro Tools everyone uses that snare drum.
[01:13:40] Steve: Well, there’s seven snare drums instead of thousands of different snare drums, which all have these different sounds. And they’ve done studies that music sounded better then, because there was more variety, and the lack of variety is the homogenization of music. It’s not that there’s no creativity.
[01:13:54] Steve: It’s actually to do with sounds and timbre anyway. So that’s kind of where he came to. But my view is, if you create music via text prompt, is it really music? I don’t know if I know the answer to that question, but I know that music will live on a spectrum now. It’s a little bit like a restaurant.
[01:14:15] Steve: It’s like, okay, what is this? Oh, this is made by a human to this level, or this is grown in my backyard. No, this is supermarket shelf stuff. I don’t know. Trainspotting.
[01:14:29] Cameron: Alright, we’ve got four minutes and I wanted to talk about Apple Vision Pro. Yes, they’re in the country, they came into the country a week ago, uh, today, uh, I had a demo Saturday a week ago with one of my boys, uh, at the Apple Store in Chermside. Uh, Chrissy had one a couple of days later. I went to my demo, I was really going there because I dropped my laptop off and they were fixing it, and I went to pick up my laptop, and I was like, ahhhh, get a Vision Pro demo, had very, very low expectations, spent the entire 20 minute demo, going, holy shit, no fucking way, this is insane.
[01:15:07] Cameron: Absolutely blew my mind. Hunter, one of my sons, the TikToker, had one, same reaction. I got his on video. Chrissy, who’s been sort of anti the Vision Pro ever since it was announced, she’s like, that’s all the world needs, another fucking screen for people to get lost in. Took her for a demo on Tuesday. She was like, holy shit.
[01:15:25] Cameron: I had no idea what this was. This is incredible. Wow. I highly recommend,
[01:15:31] Steve: Trainspotting. How good does it feel? Well, take the best orgasm you’ve ever had, times a thousand. Get this skag in your life.
[01:15:42] Cameron: highly recommend, everybody who listens to this, if you live near an Apple store, uh, book a demo. You know, they’re not selling any of them, so they want to do demos, and they keep saying, come back, have another one. When I went with Chrissy, the guy goes, you want to do another one while you’re here?
[01:15:57] Cameron: And I’m like,
[01:15:58] Steve: just like the drug dealers down in the alleyways in Footscray. Come back, have another one. It’s fine.
[01:16:06] Cameron: It will blow your mind. And when you do it, I want to talk to you about how it’s going to,
[01:16:11] Steve: and, and we’ll talk about it next, next, um,
[01:16:14] Cameron: How it’s going to change TV and cinema. Um, particularly when you combine that with the things that we’ve been talking about, like RunwayML. Because with its fully immersive modes, you feel like you’re right there. Your entire frame of reference is the video that you’re in front of. And they have spatial photographs, spatial video, where you feel like you can reach out and touch things.
[01:16:41] Cameron: Um, and it’s insane. Like, you’re watching a basketball game from the top of the hoop and it’s all around you, like, so in your face. But, you know, one of my sons, Taylor, the talent manager, as we were leaving, he was saying, imagine when you’re watching a Fast and the Furious film, but you’re in the car and
[01:17:00] Cameron: everything is first person perspective. And I was even thinking, when is Francis Ford Coppola going to remix The Godfather for the seventh time, but you are in the room with Don Corleone. Or Jurassic Park, or Star Wars, you are there. The action is happening around you, because you’re sitting in the middle.
[01:17:24] Cameron: The camera perspective is not going to be, you know, we used to talk about lean forward and lean back media. If this kicks off, there’s going to be a gap between when it’s like four people have one and when it’s widespread enough for, you know, people to invest a lot of money updating their film, or how they make film.
[01:17:47] Cameron: But it won’t be that hard, I imagine, to use text to video production technologies to build new things where you’re in the middle of the action, that you can watch with these things. But it’s gonna be a complete revolution in video entertainment, when you feel like you are in the middle of whatever video you’re watching.
[01:18:07] Cameron: Uh, I was super pumped by it. I mean, I can’t afford to buy one, and I don’t think there’s really any point having one right now. It’s sort of a demo, proof of concept tool, I think, still. But, um, you know, I was thinking of it as a work tool, you know, oh, I can have my screens around me and that kind of stuff.
[01:18:29] Cameron: Again, the other thing that I didn’t really appreciate is the computer is in the goggles. The entire computer you’re wearing on your eyes, the entire thing. And I was thinking, like, at seven and a half grand. I remember the Windows 95 PC that I bought in 1995, which was a huge, big CRT with
[01:18:49] Cameron: a huge fucking, uh, box that sat on the floor. Cost me $5,000, 30 years ago. This thing is $7,000, but it’s all of that times a thousand in terms of performance, probably more than that, and you’re wearing it on your face. It’s insane what they’ve done with shrinking technology in the last 30 years. But forget about it as a work tool right now. Go and look at it as an entertainment device, and what it holds for the future of entertainment.
[01:19:21] Cameron: It blew me away, really did.
[01:19:23] Steve: the metaverse phase, and the idea of the Oculus goggles going way back, I always said that, look, there’s three really great use cases, right? I never thought we were going to live in there socially, or do work meetings like this, because I just don’t think you need it. It’s just overkill.
[01:19:38] Steve: But I always thought it would be an incredible entertainment device and movies. I thought it’d be great for industrial applications like fixing things and augmented displays on, you know, big screens. Repairing aircraft engines. And I thought it’d be great for training things like, you know, surgeons and all manner of things.
[01:19:55] Steve: So those were the three use cases, and it sounds like they’ve really nailed the entertainment case. But I’ll check it out this week and report back.
[01:20:02] Cameron: And I imagine FaceTiming too, like if you’ve got family on the other side of the world or even the other side of the country, being able to have a conversation with someone where you’re right there and they’re right in front of you will be cool too. But anyway, go check it out. That’s futuristic. Steve’s got to go.
[01:20:18] Cameron: I got to go to Kung Fu. Good to talk to you as always, buddy.
[01:20:22] Steve: mate. I really enjoyed it today.
This week on Futuristic, Cam, Steve, and Tony Kynaston of the QAV Podcast tackle the burning question: Will AI and humanoid robots bring about a utopian future or massive unemployment? They dive into potential economic shifts, the state of media, Apple’s antitrust issues, and whether competition will drive tech adoption. Tune in for a lively two-hour debate on the future job market and the possibility of a global AI Cold War.
00:00 Introduction and Nostalgic Sitcom References
01:06 Guest Introduction and AI Optimism
12:04 Ray Kurzweil’s Predictions and AGI Discussion
45:22 Australian Media vs. Meta
58:02 Big Tech and Antitrust Issues
01:01:25 The Impact of AI on Employment and Economy
01:20:24 Debating Utopian Futures
01:20:59 Economic Realities of Utopia
01:25:17 AI’s Role in Future Jobs
01:26:56 The Human Element in AI Integration
01:31:07 AI’s Impact on Professional Sectors
01:32:49 The Future of AI and Society
01:38:57 Regulatory and Economic Challenges
01:53:15 The AI Cold War
01:55:41 Final Thoughts and Reflections
FUTURISTIC 27
[00:00:00] Cameron: you guys. Welcome back to a very special edition of Futuristic. That’s what they used to say on sitcoms in the
[00:00:16] Cameron: 80s, wasn’t it? If they were going to talk about AIDS or suicide or something, it was a very special edition of, uh, Diff’rent Strokes this week, where somebody got touched by a Catholic priest and we’re going to talk about the ramifications.
[00:00:32] Steve: See, I thought when Diff’rent Strokes came out that, look, America’s not racist. It’s amazing. Look, a white rich guy stole two black children and put them in his
[00:00:40] Steve: penthouse apartment. Like, seriously? And I fell for that? What?
[00:00:46] Tony: I thought you were going to tell your joke about reverse exorcism then, Cameron.
[00:00:51] Cameron: Ha ha ha ha! My
[00:00:52] Cameron: joke? Why is it my joke?
[00:00:53] Tony: Oh, it was Taylor’s. It was one of the Reillys.
[00:00:55] Cameron: Yeah, yeah, yeah, it’s Taylor’s joke. Yeah, yeah. What’s a reverse exorcism, Steve?
[00:01:00] Steve: No idea.
[00:01:01] Cameron: When a demon pulls the priest out of a child. Yeah, uh, it’s episode 27 of Futuristic with myself, Cameron Reilly, Steve Sammartino and Tony Kynaston, the man from the QAV podcast, uh, who’s come on to, um, discuss, debate with Steve and me about our optimistic AI projections. How are you, Tony?
[00:01:31] Tony: Good, thanks, Cam. I don’t know about discuss. Discuss, yes.
[00:01:34] Tony: I don’t know about debate. I often agree with what you guys are saying and thanks for inviting me on. It’s a great show.
[00:01:39] Tony: It’s great to explore this area and you guys do a good job.
[00:01:43] Cameron: Thank you, Tony. How are you, Steve?
[00:01:45] Steve: Good.
[00:01:47] Cameron: What’s on your hat, Steve? I should have worn my Joe’s BBQ hat, but you’ve got a hat on. What’s that? Buck Fever. I thought that was, I thought you were going with the Pulp Fiction line there. My name is Buck.
[00:02:01] Steve: It’s a, it’s a local hat business called The Farm
[00:02:06] Steve: and all of them have some sort of an animal and a statement and I just like them and this is one and I get one every year for Christmas.
[00:02:14] Cameron: That’s nice. Well, let’s, uh, let’s talk about AI, um, before we do, Tony, the way we
[00:02:22] Cameron: normally start the show is we talk about one thing of note that we’ve done with AI or futuristic technology since our last episode. Uh, do you want to start?
[00:02:34] Tony: I’ll be very short, I’ve done nothing, but I’m just looking at Steve’s cap there, you sure it’s not a spoonerism, Steve? Buck fever?
[00:02:42] Steve: Could be.
[00:02:44] Cameron: Spoonerism.
[00:02:45] Tony: Yeah, swap the initials.
[00:02:47] Steve: Oh, no, it’s definitely not that.
[00:02:51] Cameron: Oh, there you go. You’ve, you’ve taken the show right, immediately, straight out of the gate, down to another level, Tony. Well done. Well done.
[00:02:59] Tony: not me, I’m not wearing the hat. It’s just where my mind, it’s just where my mind goes.
[00:03:04] Cameron: Lone
[00:03:05] Steve: If they’re all spoonerisms, that one doesn’t work, so.
[00:03:08] Tony: okay. It’s not the
[00:03:09] Steve: That botches that theory, doesn’t it?
[00:03:12] Tony: Well, possibly, yeah. Uh, no, I’ve done absolutely nothing with technology this week,
[00:03:17] Tony: Cam. I mean, apart from the fact that we live in an age where we’re talking to each other across vast distances and, uh, in real time with video, it’s pretty amazing, but otherwise, no.
[00:03:28] Cameron: What about you, Steve? Done anything impressive?
[00:03:30] Steve: Well, I have something disappointing to announce. Weekly, almost daily, I put up what I believe are thought-leading posts and ideas of things I’ve written and said on places like LinkedIn to generate work, and I get a few views. If I do something incredibly poignant about the economic impact of AI,
[00:03:49] Steve: five people read it. But this week, I went MrBeast style and went into the, let’s call it the game show side of AI. I put up a quiz. Guess which picture is AI generated and which one is an actual photo? Bam, 20,000 views, which is the decline of modern society. Put up a pop quiz. People are in. Put up something thought-leading that might change their economic or social circumstances.
[00:04:12] Steve: Crickets!
[00:04:13] Steve: Put up something that goes to the lowest common denominator of I’m smart, I can guess which one’s AI, and bam, 20,000 views. That’s all I got to say.
[00:04:21] Steve: That was less than a minute. Check me later.
[00:04:24] Cameron: you gotta entertain, I guess is the lesson there, Steve. People wanna be entertained,
[00:04:28] Steve: Seems like
[00:04:29] Cameron: not educated, entertained, which is why we’ll all be getting our clothes off
[00:04:34] Steve: Entertainment is greater than education in 2024. Is that where we’re at? Wasn’t the internet the promise of all knowledge to
[00:04:40] Steve: all people?
[00:04:41] Cameron: Yeah. Well, I’ll tell you what was entertaining was the, uh, US presidential debate, um, talking about the decline of civilization. Uh, that was entertaining for all the wrong reasons. Um, well, I, you know, I continued to do a lot of code. Um, the AI tool that everyone’s raving about from a coding perspective is Claude, again, Anthropic’s AI tool. Claude
[00:05:06] Cameron: 3.5 Sonnet is the current iteration of it. Um, a lot of people have had a lot of success; a lot of professional coders are saying they’re using Claude for their coding work. I’ve been playing with it a lot in the last week, trying to continue to automate our checklist process, Tony, uh, for QAV. And I have not had a lot of success with it.
[00:05:30] Cameron: It, uh, gets very confused when trying to write Python code to manage Excel spreadsheets. Leads me around in circles. So, you know, it’s this continuing thing of coding with AI: on one hand, it’s incredible and amazing because I can code shit and, you know, two years ago I couldn’t code to save my life. But it’s also really frustrating at times, going around in circles.
[00:05:55] Cameron: So I continue to push through and part of me goes, dude, you’re wasting so much time doing this. A year from now, you’ll have ChatGPT 5 and it’ll just do it in a heartbeat. But you know, what am I going to do? I gotta, I gotta do something. So anyway, it’s, um, it’s, it’s, it’s one of those things where you have enough success that it sort of keeps drawing you in.
[00:06:19] Cameron: Just when I thought I was out, they drag me back in
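For readers curious what this kind of AI-assisted automation looks like in practice, here is a minimal, hypothetical sketch of a checklist script. The metrics, thresholds, and data are invented for illustration, and Python’s standard-library csv module stands in for the real Excel spreadsheet:

```python
# A hypothetical sketch of checklist automation like the QAV process Cam
# describes. Rules, thresholds, and data are invented for illustration;
# csv stands in here for the real Excel spreadsheet.
import csv
import io

# Toy checklist rules: metric name -> pass test.
RULES = {
    "pe_ratio": lambda v: v < 15,
    "dividend_yield": lambda v: v > 0.03,
}

def score_stock(row):
    """Count how many checklist rules a stock passes."""
    return sum(1 for name, test in RULES.items() if test(float(row[name])))

# In-memory stand-in for a spreadsheet with one row per stock.
data = io.StringIO("code,pe_ratio,dividend_yield\nABC,12.5,0.04\nXYZ,22.0,0.01\n")
scores = {row["code"]: score_stock(row) for row in csv.DictReader(data)}
print(scores)  # {'ABC': 2, 'XYZ': 0}
```

A real version would read the workbook with a library such as openpyxl, but the scoring logic is the part an AI assistant typically generates first.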
[00:06:22] Tony: conditioning. Random reward.
[00:06:25] Steve: It
[00:06:26] Tony: keeps you there.
[00:06:27] Cameron: Yeah, it is. It’s like golf, right,
[00:06:29] Tony: Yeah. Oh, like any poker machines and gambling,
[00:06:32] Steve: social media.
[00:06:33] Tony: social media. Yep.
[00:06:34] Steve: Yeah. I mean,
[00:06:35] Tony: like watching sitcoms. So, you know, you get one good episode every three or four. It keeps you, keeps you tuning back in again.
[00:06:41] Steve: Although I don’t think that was by design, but I love that.
[00:06:46] Tony: that’s an interesting point you make, Cam. I think, you know, when I was putting together my notes for the show, I actually made a note to say that the democratize, democratization of coding is the, is probably the biggest thing I’ve noticed about AI in, My Lives or Society.
[00:07:02] Steve: And that’s the,
[00:07:03] Tony: And seeing you do it. I mean, going, as you say, over the last couple of years, going from very little knowledge of coding up to where you are now, just because you can ask ChatGPT to write some script for you.
[00:07:13] Tony: It’s actually, it’s been amazing to watch that. And, and democratization of coding and access to data I think are two things which will be big going forward.
[00:07:22] Steve: I do that on
[00:07:23] Cameron: When we started.
[00:07:24] Steve: I talk a lot about the idea that everyone’s an AI developer now with natural language processing. And most people are actually a little bit surprised and people will come up after and go, I just did that thing that you asked. And it’s like, wow. I still think. And it’s easy for us to get in this bubble of like, everyone knows this, but they don’t, they just don’t know that how emancipating the ability for everyone to create code and shape the technological world, because now we’ve got natural language processing is probably the heart of this revolution.
[00:07:53] Steve: I think.
[00:07:54] Tony: Yeah. And can you, you think about all the people when we started QAV who said, I don’t like Excel spreadsheets. Don’t give me Excel
[00:08:01] Tony: spreadsheets. Right. can sort of jump over that barrier with, with a kind of coding that you’re doing and just produce an end result. It doesn’t matter if it came out of a spreadsheet or whatever.
[00:08:11] Cameron: Yeah. Well, I was going to say, when we started QAV
[00:08:14] Cameron: four or five years ago, I could barely work a spreadsheet. Uh, and
[00:08:18] Cameron: now, you know, I can build
[00:08:20] Cameron: spreadsheet, you know,
[00:08:21] Cameron: um, masterpieces, because GPT tells me what to do. Put this code in here, do that there. It’s, um, been incredibly liberating. I did remember, though, one thing I did yesterday with tech, with AI, which impressed me. So I’m staying at my mum’s place up in Bundaberg helping her do some yard work.
[00:08:39] Cameron: I’ve spent the last few days chopping down trees and taking loads of green waste to the dump in her trailer. Feeling very manly as I do it. It’s like the only manual labour I’ve done in the last year. Uh, but she doesn’t have, my mum’s got a TV that’s smaller than my iPad screen, and it’s not connected to any of the digital services.
[00:09:01] Cameron: She’s got a whole bunch of DVDs that have been sitting on her shelf for like 25 years, um, that are mostly crap. Uh, so Chrissy and I were in town the other day, uh, yesterday, and I was in, like, a Silly Solly’s, like a $2 discount store, and they’ve got a bunch of DVDs there for a couple of bucks. And I was flipping through ’em and most of them were crap, but there were some that looked like arthouse Palace Cinemas kind of stuff that Chrissy and I enjoy watching.
[00:09:27] Cameron: In the past, I would’ve pulled up IMDb or Rotten Tomatoes to look at reviews for these things. What I ended up doing was just holding my phone up, taking a photo of the cover into GPT and saying, is this worth watching or not? And it would give me a précis on the film, tell me what the critics said, how it did at the box office or whatever.
[00:09:48] Cameron: what awards it’d won at Cannes or Sundance and places like that. Just boom, one second, I’ve got a review. And I did that like four or five times. Just hold up the cover, take a photo. What about this one? What about that one? What about this one? The other thing I did with it last night: Chrissy and I like to drink tonic water.
[00:10:05] Cameron: You know, we’re not alcohol drinkers. Chrissy’s sober for 12 or 13 years. I barely drink. Although my mum has poured me a couple of scotches since I’ve been here, which is nice. But we drink a lot of tonic water, and I just had the label, it was like a Schweppes tonic water, and I took a GPT photo of the ingredients on the back and said, how would I make this at home?
[00:10:26] Cameron: And it gave me the entire recipe and the process and what to do. And it involves quinine powder. I went to look up quinine powder: hard and expensive to get. So I said, well, quinine powder’s a pain in the arse to get, what else could I use? And it said, what about Angostura bitters to put the bitter flavor in?
[00:10:44] Cameron: I go, I got some of that in my fridge. Yeah, okay, well, just, you know, substitute the Angostura bitters and you’ll have something similar. So boom, got a homemade recipe for making my own tonic water now. I have heard people talk about this. You take a photo of the ingredients on the back of a thing into GPT and it’ll give you a recipe to replicate it at home.
[00:11:03] Cameron: So if you can be bothered, there you go.
[00:11:05] Steve: I mean, I don’t think people realize how much GPTs and AIs are going to eat the internet. Things that we would go to to get information, you know, the IMDb example or Rotten Tomatoes of a movie, is a great example. Take a photo of anything, and image recognition leads you to a commercial answer that someone otherwise had a valid business running behind, on some sort of internet or app play. It’s really going to eat up apps, because a GPT can do anything that an app can do.
[00:11:37] Steve: And you know, one of the topics that we want to discuss today is, what are the competition implications of that? When classic power laws emerge and you’ve got three AIs that literally can do
[00:11:50] Steve: anything on the internet, what happens to the entire, uh, commercial system underneath it at the moment?
[00:11:57] Cameron: Yeah, that’s the thing that’s been mostly front of mind for me for the last few weeks. But before we get into that, I sent you guys a link to Ray
[00:12:04] Cameron: Kurzweil’s latest TED Talk. I’ve been reading his latest book, The Singularity Is Nearer.
[00:12:11] Tony: I thought it was coming out next year. You’ve already read it, have you?
[00:12:15] Cameron: Uh, yeah, no, it’s
[00:12:16] Steve: It became so near, Tony, that it’s already arrived. See what happened? It
[00:12:21] Steve: just went back, it went forward in time, and that’s, that’s what
[00:12:24] Steve: Ray can do, because have you seen his hair? With hair like
[00:12:27] Steve: that, he’s capable of anything.
[00:12:31] Cameron: That hair. Uh, yeah. So, um, I’ve been reading the book and it’s great. And it prompted me to pull out his older books too. So I went back over the last couple of days and I pulled out The Singularity Is Near from 2005, The Age of Spiritual Machines from 1999, I think, and The Age of Intelligent Machines from 1990, and had a look at some of his predictions from those books, which I’ll talk about a little bit later on. But in terms of the TED Talk, did you guys get a chance to watch that?
[00:13:03] Steve: I did watch it,
[00:13:06] Cameron: any takeaways, any thoughts on Ray’s latest TED Talk, apart from his hair and his braces?
[00:13:11] Steve: But look, I liked it, and I think the thing that I like about his TED Talks, which is the antithesis of most of them, is that it’s not really a keynote speech with this story arc and this whole thing. It’s just like, here’s the data, and here’s where it’s going, and I really like that. The one thing, uh, a phrase from the last one that really stood out to me, and you’ve written it down here, is Longevity Escape Velocity.
[00:13:38] Steve: I mean, that’s a really, really interesting idea where, uh, you know, because everything’s just really organized at the atomic level, I guess,
[00:13:45] Steve: to some extent. And, uh, I thought that that was a, really poignant moment,
[00:13:50] Cameron: to some extent, to,
[00:13:52] Steve: to all extents. There you go. There you go. Thank you, Cameron. Yeah. So that,
[00:13:57] Cameron: you know me, I’m a, I’m an atomic, I’m an atomic dogmatist. Everything is atoms, don’t tell me anything else. It’s all atoms,
[00:14:04] Steve: Including my thoughts, which I have no
[00:14:06] Cameron: including your thoughts,
[00:14:07] Steve: that I just made then wasn’t my fault, according to your thesis.
[00:14:11] Tony: Hehehe. Hehehe.
[00:14:12] Cameron: Not according to my thesis, according to physics. Alright, move along.
[00:14:17] Steve: Yeah.
[00:14:17] Steve: I, I really liked
[00:14:18] Steve: it. And I like, I love that he just went, here’s a whole heap of charts, bam, bam, bam. You can’t look at them.
[00:14:22] Cameron: Yeah, no presentation level, whatever with
[00:14:26] Steve: Zero. I don’t care. They’re in there. I just want, just read
[00:14:28] Cameron: He reminds me of, he reminds me of Noam Chomsky, like, like, it’s like, just, here’s the
[00:14:32] Cameron: data, take it, don’t take it, I don’t care, you know, let’s move on. Um, explain the longevity
[00:14:37] Cameron: escape velocity for the audience at home, Steve.
[00:14:41] Steve: Yeah. So the idea is that as we move closer to having artificial general intelligence, we will eventually merge with the machines and have enough knowledge to cure disease, either with, uh, nanobots inside our body as well, so that life expectancy will increase at a faster rate than one year per year.
[00:15:06] Steve: Is that, is that a fair way to describe it? So we get to a point where everything is solvable, because the technology is moving so fast that even though you’re aging on a year-by-year basis, the technology
[00:15:20] Steve: than that, that will enable us to solve all disease and cure things
[00:15:24] Steve: so that we don’t ever die, is
[00:15:25] Steve: basically the idea.
[00:15:26] Steve: Unless you get hit by a bus, which he poignantly points out in the presentation.
[00:15:30] Steve: It’s, it’s actually not that easy to explain. How did I go
[00:15:33] Steve: Cam? Maybe you can give your version
[00:15:35] Cameron: Well, I think the way I explained it to my mum and Chrissy after I watched it was essentially that, at the moment, with medical science, for every chronological year that you age, you’re actually only aging eight months, because medical technology is able to stave off, you know, sort of four months of what, a hundred years ago, you would have been experiencing in terms of the breakdown of your body and illnesses and the impact of those illnesses and
[00:16:05] Steve: injury was a, was a big one as well. That
[00:16:07] Steve: you die from a broken leg or something, or you have appendicitis. I had that when I was 10. I’d be dead if I was born 200 years ago,
[00:16:14] Steve: the end. Thanks for coming. I hope you enjoyed your stay, Sammartino.
[00:16:18] Cameron: And he was saying that as, as, um, AI continues to progress, uh, over the next decade, we will be buying back more and more time for every chronological year that you age. So eventually you’ll get to a point, he’s saying in the next decade, where for every chronological year that you age, medical science will actually save you a year, so it becomes neutral.
[00:16:43] Cameron: You don’t actually
[00:16:44] Tony: It adds a year to your life
[00:16:45] Cameron: the aging process. Well then, and then he said it goes backwards, then for every chronological year that you will age, you’ll actually be saving 18 months,
[00:16:56] Cameron: um, so you’re actually getting younger, as it’s repairing the damage that aging has done to your body. And so you will get to a point, and he’s saying in the next two decades, basically by 2040, essentially, we will have reached a point where you will get younger and younger as you get older, until a point where you reach some sort of stasis, and then you’ll just be held in that stasis.
[00:17:25] Cameron: Uh, you’ll, you’ll get to basically 25, the physical health of a 25 year old, and you’ll just basically hold there until you get bored and you want to terminate things, you know, or you get hit by a bus.
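Cam’s back-of-the-envelope version of longevity escape velocity can be captured in a toy calculation. The buy-back figures below just echo the numbers used in the discussion, not Kurzweil’s actual data:

```python
# Toy model of "longevity escape velocity" as described above: medicine
# offsets some months of aging for every chronological year that passes.
def effective_aging(years, months_bought_back_per_year):
    """Net biological aging, in years, over `years` of chronological time."""
    return years * (12 - months_bought_back_per_year) / 12

# Today, per the discussion: ~4 months bought back -> you age 8 months/year.
print(effective_aging(3, 4))    # 2.0
# Escape velocity: a full 12 months bought back -> net aging stops.
print(effective_aging(10, 12))  # 0.0
# Beyond that: 18 months bought back -> you get biologically younger.
print(effective_aging(2, 18))   # -1.0
```

The "escape velocity" framing is just the point where the offset rate crosses 12 months per year, so net aging goes to zero and then negative.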
[00:17:37] Tony: Or you
[00:17:38] Steve: it was,
[00:17:38] Steve: it was peak oil. And now in the Kurzweil era, it’s peak humanity. Welcome to 27 forever, club 27, Kurt Cobain, we’re coming for you.
[00:17:47] Cameron: And I’m sure we all hope that Ray is the first person to get access to these
[00:17:52] Steve: Well, he needs to improve his hair. And, and that is the first thing on Ray’s list. Let me tell you.
[00:17:58] Cameron: You know, it’s sad to me. Like, you mentioned Marvin Minsky a couple of times, and for those people who don’t know Marvin
[00:18:03] Cameron: Minsky or his role in AI, he was, like, one of the original guys in the 50s and 60s who was at the forefront of AI research. He was Ray’s professor and mentor and friend for many, many decades. I remember reading Marvin Minsky’s The Society of Mind, um, geez, in the early 90s, maybe, when I first started getting really interested in this stuff.
[00:18:27] Cameron: And he passed away, uh, a couple of years ago, sadly. Like, he didn’t even get to see ChatGPT land, and 3.5, or all of this kind of stuff. So, you know, I find that kind of really tragic. I’m just looking up when he actually died here. Hold on. 2016. Yeah. Okay. Eight years ago. Like, um, really, just, you know, the tragedy and irony of one of the guys who spent decades.
[00:19:00] Cameron: But here’s the thing that’s interesting that Ray talks about in his latest book: he and Marvin had a huge debate. You know, Ray talks in the book about the two basic approaches to AI, to thinking about AI, over the last 70 years. And, um, one is what he refers to, I think, as, um, symbolic logic or some variation of that.
[00:19:27] Cameron: Basically, and this has been the dominant approach for the last 70 years, is where you had to write down every rule about how the world works and give it to the AI. The sky is blue, cats like to drink milk, roses are usually red. Unless they’re not, etc, etc. And you had to write all of these rules, capture all of these rules, and then the AI, if you asked it a question, would have to look up all of these rules, and work out how to answer your question based on all of these rules.
[00:19:59] Cameron: And then he talks about the problem with, um, complexity, where every time you add another rule it gets more and more complex, and if one rule is wrong or breaks, it just breaks everything. It’s a very fragile system. The other idea was basically neural nets. There’s another term for it in the book, I can’t remember off the top of my head what it is, but essentially what we now call neural nets, which is where you basically give it tons of information and tons of compute.
[00:20:33] Cameron: And tell it to, you know, build little nodes or tokens of information. And he says that Minsky and another guy, uh, wrote a book decades ago that basically shit on the neural net idea as not being valid. And he said it stopped research in neural nets for like 40 years. And before he died, Minsky said that one of his biggest regrets was co-writing this book, because it basically shut down a whole field of research, which of course today is exactly how transformers, which underlie the LLMs, actually work.
[00:21:12] Cameron: The problem that they had, they used to try and do this stuff in the 60s and 70s, he talks about a guy that was experimenting with it early on, and when Minsky shit on it, um, you know, with Papert, I think was the name of the other author, everyone just veered away from it, his experiments shut down, and no one took it any further.
[00:21:32] Cameron: But we didn’t have the compute to do that back then, and, you know, all of a sudden now, thanks largely to NVIDIA, we have the compute to make this kind of stuff work. So anyway, Minsky... I just hope that Ray survives long enough to take advantage of the stuff that he’s been predicting for the last 50 years, you know.
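The two approaches Cam contrasts can be sketched in miniature. This is an illustrative toy only, not Kurzweil’s or Minsky’s actual formulation: the "symbolic" side hand-codes facts, while the "neural" side (here a single perceptron, the kind of unit Minsky and Papert analysed) learns its answer purely from examples:

```python
# Symbolic approach: a human writes every fact down, and answers are looked
# up, never learned. (Example rules invented for illustration.)
rules = {"sky": "blue", "cats like": "milk", "roses are": "usually red"}
assert rules["sky"] == "blue"

# Neural approach, in its tiniest form: a single perceptron whose weight and
# bias start at zero and are nudged toward the training examples.
def train(samples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (target - pred) * x  # nudge toward the right answer
            b += lr * (target - pred)
    return w, b

# Nothing about "the threshold sits between 2 and 3" is coded in;
# the boundary is inferred entirely from the data.
w, b = train([(1, 0), (2, 0), (3, 1), (4, 1)])
predict = lambda x: 1 if w * x + b > 0 else 0
assert predict(3) == 1 and predict(1) == 0
```

Scale the second idea up by many orders of magnitude of data, parameters, and compute, and you have the rough shape of the approach that eventually won.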
[00:21:54] Tony: that’s, that’d be the real tragedy for me. I mean, certainly Minsky was important in my life, but, but yeah, if I gets like dies a year before the upload. Yeah. I think you’re talking about Markov chains when you were talking about neural networks before. Expert systems and Markov chains. And that’s really
[00:22:11] Tony: what I grew up experiencing in business and how they were going
[00:22:14] Tony: to be revolutionary and, and just really ran out of computing space to, to be successful.
[00:22:20] Cameron: yeah. Anyway, um, sticking with, uh, Ray, so, um, He’s still predicting, he’s predicting AGI by 2029, and again for new listeners, what is AGI? I get asked this a lot by people, um, it stands for Artificial General Intelligence, the basic Like, there’s a lot of people with different definitions. It’s a bit fluffy, but the one that I always come back to that seems to make sense is AGI is when an AI system is better at most things than most people.
[00:23:01] Cameron: So it’s going to be better. Yeah. Why? You have a different definition?
[00:23:07] Tony: Sorry, I shouldn’t interrupt. I do, yes.
[00:23:10] Cameron: That’s fine. What’s your definition?
[00:23:12] Tony: Well, I thought in my mind an AGI is when you, uh, whatever you program can learn. So you didn’t, as you said before, you don’t program in the nodes and then have the system work out the logic to present the answer based on the data.
[00:23:27] Tony: It’s, you’ve written a program and
[00:23:29] Tony: it can think for itself and teach itself and then, um, solve problems that weren’t coded originally into,
[00:23:35] Tony: into the program.
[00:23:37] Steve: You can already
[00:23:37] Cameron: Well, we already have that. That’s what AI is now.
[00:23:41] Tony: Yeah. Okay. All right.
[00:23:43] Cameron: I mean, the way an LLM works is nothing’s coded into it. There are no answers coded into an LLM,
[00:23:51] Steve: Yep. it
[00:23:52] Cameron: it is
[00:23:53] Steve: uncover an answer based on the data that it has and
[00:23:56] Steve: the inference it can make through the probability in the large data sets. So it sort of does that now.
[00:24:02] Tony: Okay.
[00:24:04] Cameron: no one’s, no one’s coded, like, and this is a thing I again explain to people that are new to this a lot, because I think this is the thing that people don’t appreciate, like when people, people bring up hallucinations. Oh, it’s not perfect. It makes mistakes. You can’t trust it. See, the thing you have to realize is two years ago, three years ago, no one even knew this would work, this approach.
[00:24:32] Cameron: I saw when, uh, Jensen Huang, the CEO of NVIDIA, interviewed Ilya Sutskever, the chief scientist at the time for OpenAI, who then unfortunately left recently, um, after the whole kerfuffle with Sam’s removal, etc. But he was interviewing Ilya, who’s basically the brains behind ChatGPT, one of them, but one of the dominant ones, and Jensen Huang asked him, what surprised you the most about ChatGPT
[00:25:00] Cameron: 3.5? And he said, the fact that it worked.
[00:25:04] Tony: Hmm.
[00:25:04] Cameron: So you’ve got the chief designer of the AI who was surprised that it worked because the theory was just, you know, they started to play around with this approach of transformer models, which are only themselves five or six years old. Well, at the time they were five or six years old when they started putting it together and it seemed to be exhibiting emergent behavior.
[00:25:28] Cameron: And so they basically just thought, what happens if we throw way more data at this? And to do that, we need to throw way more compute at this. And so they scaled up their data and their compute. That’s all they did and continue to train it like they would normally train it, which is just, you know, come up with an answer to this question.
[00:25:49] Cameron: Yeah, that’s the best answer. Come up with an answer to this question. Yep, that’s the best answer. Come up with an answer to this question. Yeah, that’s the best answer. And let it, you know, figure everything out for itself based on how well its answers scored. As they threw more and more compute at it, it all of a sudden became intelligent.
[00:26:09] Cameron: And they were like, oh shit, like, this works! Like, they didn’t know, and we still don’t really know how it works. The people who run these things still don’t, really. It’s a bit of a black box. That’s the problem with it. We know if you put information in, intelligence comes out. We don’t know exactly why or how, but it just does.
[00:26:34] Cameron: So the fact that it’s, makes mistakes and makes errors is kind of not the key takeaway. The key takeaway should be, holy shit, this is magic. We just invented magic. You, you stick stuff in and something comes out. Now we have to figure out how, why it works so we can compress it and compress the data set and compress the level of compute that we’re using, et cetera, et cetera.
[00:26:57] Cameron: But the AGI,
[00:26:59] Tony: sorry. So I’m just going to read from the first thing that popped up when I asked for an AGI meaning. It comes from Amazon Web Services. AGI is a field of theoretical AI research that attempts to create software with human like intelligence and the ability to self teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.
[00:27:21] Steve: sounds like your definition, sounds exactly, looks like you and AWS agree. Seems like you’re the man, Tony.
[00:27:26] Tony: Yeah. So, and definitions matter.
[00:27:29] Steve: there’s a lot of different definitions around this and often I think AGI gets confused with ASI and uh, you know, cause you’ve got Artificial Intelligence, Artificial General Intelligence and then Artificial Superintelligence, which is kind of what I think most people think of.
[00:27:44] Steve: When they, when they, um, talk about AGI, where it’s an artificial intelligence that far exceeds human intelligence in all manner of, uh, human intellectual endeavor, whether it’s creative, intellectual, economic, whatever it happens to be. I don’t know.
[00:27:59] Tony: I agree, Steve. I, when I,
[00:28:00] Steve: can’t even agree on the definition tells us where we are.
[00:28:04] Steve: We’re in a moment of significant change, right? That, that kind of is, I think, part of this whole
[00:28:08] Tony: we’re making it up as we go along. I agree. But, but when I think about this subject area, I think we already have AGI, but we don’t have ASI. AI is a sort of
[00:28:18] Steve: Cam and I have talked about that too, at various points.
[00:28:21] Tony: Yeah. So that’s, I guess, just the term straight. That’s what I think of when I think of those two things. And I think it’s important for like discussions about what happens with AI.
[00:28:32] Tony: I think ASI is really hard to predict what will happen and it’s way out there into the future and well, maybe not too far into the
[00:28:40] Steve: 2045, according to
[00:28:41] Tony: yeah, but it’s coming.
[00:28:43] Steve: which is even more radical,
[00:28:45] Tony: But I think AGI is a bit easier to predict. It’s Kurzweil’s curves. It’s, we know what an increase in computing power will do. You know, it allows us to code and all that kind of stuff.
[00:28:59] Tony: I think
[00:29:00] Steve: we have, we have been here before in the industrial era, when we used those curves before, and we assumed that they would exponentially increase in perpetuity. And then we got to a point where they didn’t, for two reasons. One was we decided that, um, faster and stronger wasn’t necessarily the answer; more efficient was.
[00:29:20] Steve: And we may decide that as well with compute. We might decide, well, wait a minute, we’re just using too much energy, and go, this is intelligent enough. Like airplanes haven’t gone any faster since, you know, the early seventies. Actually, since 1969.
[00:29:34] Tony: Yeah,
[00:29:35] Steve: And we made them safer.
[00:29:36] Tony: it. Yep.
[00:29:37] Steve: And more comfortable.
[00:29:38] Steve: And maybe we make AI safer, more usable, you know, more economic in terms of energy. So there is some historical precedent, and this might be different, because AI and computational intelligence is different to industrial power, but we were on that trajectory at one point, where we were talking about, oh, you’ll fly to London in half an hour, and it just never happened.
[00:30:00] Steve: So, I mean, I don’t know, but it’s worth, you know, Remembering
[00:30:04] Tony: Yeah, well, there are constraints, right? In preparing for this show, I was reading articles, and, I forget now who it was, uh, it may have been Sam Altman or someone like that, but they came out and said the constraint they’re facing now is the data set, right? They’ve trained AGI on the internet, um, but they’re finding that’s limiting.
[00:30:25] Tony: The learning is now being limited by the data set. So it’s possible that we don’t get any smarter or faster than the computing
[00:30:32] Steve: I think
[00:30:33] Tony: Now,
[00:30:34] Steve: is kind of interesting, because it’s what you put in. There’s two parts to that equation. One, and Cameron and I have discussed it, is the idea of dead internet theory, where the internet is already starting to be heavily populated by AI, and you get into a recursive decline spiral, where the resolution of the data and the nuance of new information gets lost, because AI is populating the internet, which is what the AI learns on, so it can decline.
[00:31:00] Steve: The opposite side to this is the idea that self drive cars and closed circuit and open circuit cameras start to train the internet with inferences from the real and the physical world and satellite data and all of the information that is just absorbed rather than published, let’s say.
[00:31:17] Tony: Really interesting. That’s where I was going to go as well.
[00:31:19] Steve: Yeah,
[00:31:19] Steve: so we absorb
[00:31:20] Tony: If that is the constraint, then it needs to have an interface with the physical world. So it doesn’t just look up every picture of a rose. It actually looks at all the roses that you can see.
[00:31:31] Steve: And if everything is on camera and everything is recorded audibly, and we have enough camera and computation taking in inferences from the real world, the AI could maybe train itself without relying on human publishing of language and words and video, and then populate itself with a second generation of data points to train the AI.
[00:31:54] Tony: Yeah, and I go one step further. AI, I think, will hit the next quantum, so we’ve kind of reached the quantum level now, goes up again with synthetic eyeballs and synthetic eardrums. I don’t think cameras will work, because it’s a flat 2D image, but I think if you can grow
[00:32:14] Tony: an eyeball in a Petri dish that links into
[00:32:16] Tony: ChatGPT, then it starts to experience the real world and the data set expands again. So I think that’s probably,
[00:32:23] Steve: And that goes into the whole Kurzweil bit too of, you know, synthetic, um, you know,
[00:32:30] Steve: symbiosis with, with computation and yeah.
[00:32:34] Cameron: Your iPhone camera has LiDAR built into it. So we now have cheap, deployable LiDAR that can be used in cameras that are plugged into AI. So you get full depth perception, full recognition of shading, of colors and movement, that kind of stuff. I think that’s almost there. Anyway,
[00:32:57] Tony: But yeah, just before we leave that, like, I think that’s good, but it’s not going to be enough to increase the data set enough to make AGI take that next leap. And I use self-driving cars as evidence for that. Um, there was an article in the AFR recently about, um, is it Waymo in San Francisco now offering autonomous taxi services?
[00:33:18] Tony: And if you look at the car, it’s full of LiDAR units and spinning cameras and all the rest, but it still does have some problems actually driving. It’s supposed to have fewer accidents than a human at the wheel, but it still has accidents. So, yeah, you can only take spinning cameras and LiDAR so far.
[00:33:36] Tony: What’s the difference between us and that? We have eyeballs and ears and a head that can swivel. So that’s probably, I think, gonna be the next game changer for AI, when it merges with
[00:33:47] Tony: synthetic tech,
[00:33:49] Steve: Tony, we need to have a rock band called Swivel Heads and I just feel like
[00:33:52] Steve: I can have a punk band called the
[00:33:54] Tony: you can supply the hats
[00:33:57] Cameron: Tony, Tony’s a legendary guitarist and vocalist, man.
[00:34:01] Steve: I didn’t know that.
[00:34:01] Cameron: He does the QAV theme song at the beginning of every episode.
[00:34:05] Cameron: That’s Tony,
[00:34:06] Steve: Well, I better get on that. Alright.
[00:34:08] Cameron: Legendary. Um, well, the point I wanted to make though, getting back to Ray, is in his TED talk, he said some people are predicting AGI
[00:34:17] Cameron: within two years. Elon and others are saying that, and he said that’s possible.
[00:34:22] Tony: AGI or, sorry, ASI or
[00:34:26] Tony: AGI?
[00:34:27] Cameron: AGI,
[00:34:28] Tony: And is that the equivalent of ASI? Is that the artificial superintelligence?
[00:34:32] Cameron: no, just AGI, which again, I’m going to get back to Sam Altman’s definition, when the AI system is generally smarter than most humans at
[00:34:41] Cameron: most things.
[00:34:42] Tony: so that, why isn’t that, why isn’t that ASI,
[00:34:46] Cameron: well, ASI is when it’s like 10,000 times smarter than humans.
[00:34:51] Tony: Okay,
[00:34:52] Tony: Mm-Hmm.
[00:34:53] Cameron: So the,
[00:34:54] Cameron: Um, the AI that we have today, ChatGPT 4, Claude
[00:35:02] Cameron: 3.5, Gemini 1.5, whatever it is, are really, really good at lots of things and really bad at other things. Like, I’ll give you an example, and I did this the other day to demonstrate this to somebody. You open up any one of those tools, and add Perplexity to this, um, and you ask it to multiply two four-digit numbers.
[00:35:25] Cameron: You will get wrong answers over and over again. ChatGPT 4 is the worst. It gave me like five wrong answers. I had the calculator in front of me and I kept going, Nope, Nope, Nope, Nope, till it finally got it right. The other apps I used took two goes to get it right.
[00:35:44] Cameron: The first answer was wrong. And then I’d say, Nope. And it’d give me the right answer. So they’re bad at that. They can’t play chess. Um, you know, the coding that they do sometimes is terrific. Sometimes it’s shitty. Um, so they’re really, really good at lots of things some of the time, and then really, really bad at other things all of the time or some of the time.
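[Editor’s note: the four-digit multiplication test above is easy to reproduce, with Python as the referee, since its integer arithmetic is exact. The chatbot answers shown here are hypothetical examples, not real transcripts.]

```python
# Python integers are arbitrary precision, so a * b is always exact and makes
# a trustworthy referee for any chatbot's arithmetic claims.
def check_answer(a: int, b: int, model_answer: int) -> bool:
    """Return True if the model's claimed product matches exact arithmetic."""
    return model_answer == a * b

# Hypothetical chatbot answers for 4273 x 8191 (true product: 35,000,143):
print(check_answer(4273, 8191, 35000143))  # True  -- the correct answer
print(check_answer(4273, 8191, 35004416))  # False -- a plausible-looking miss
```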
[00:36:05] Cameron: AGI, in my mind, is when they’re right about nearly everything, nearly all of the time, and highly, highly reliable. And the predictions for that, from everyone who’s in the field, um, all the guys who run all the AI labs, and Kurzweil, and Elon, and Gates, and, you know, Wolfram, is two to five years from now. That’s when we get AI systems that are massively more competent than most humans at most things, and, uh, extremely reliable.
[00:36:43] Cameron: Sam’s saying GPT-5 is going to be extremely reliable. We don’t know when that’s coming out, but the assumption is in the next year or so. And this is important when we get to the next part of the debate, the discussion, around what happens to the economy when we have these systems readily available at relatively low cost.
[00:37:03] Cameron: That’s where I want to get to, um, with the show. Before we go further than that, I want to touch on some of Kurzweil’s past predictions. In The Age of Intelligent Machines, again, which he wrote in 1990, he said, I placed the achievement of level four intelligence sometime between 2020 and 2070. Now, level four in that book was passing the Turing test.
[00:37:27] Cameron: And he still is saying in his latest book that it hasn’t done that yet. Steve and I disagree. We think it has blown away the Turing test, because it can answer pretty much any question as well as a human can, and in fact, the problem now, of course, is that it answers it too well. The way that you know it’s an AI is it’s able to answer it far better than most humans could.
[00:37:50] Cameron: But anyway, so, in 1990, he predicted that, and I’d say 2020 was pretty much right on the money. Like, ChatGPT 4, let’s say, came out 2023. Um, so, you know, he was a couple of years out there. He really said between 2020 and 2070, so it was on the lower end of that scale. This next one, not so good.
[00:38:11] Cameron: So in The Age of Spiritual Machines, 1999, he wrote, if we apply the same analysis to an ordinary personal computer, we get the year 2025 to achieve human brain capacity in a $1,000 device. Now, Apple is coming out with Apple Intelligence in an iPhone, which is a little bit more than a $1,000 device, it’s like a $2,000 device, depending on what model you get, and it is going to have some level of AI built into it. We don’t really know how much it’ll be able to do until we see it, but he may not be far off; that’s a year from now.
[00:38:48] Cameron: But then he went on to say, The memory capacity of the human brain is about 100 trillion synapse strengths, neurotransmitter concentrations at interneuronal connections, which we can estimate at about a million billion bits. In 1998, a billion bits of RAM, 128 megabytes, cost about $200. The capacity of memory circuits has been doubling every 18 months; thus, by the year 2023, a million billion bits will cost about $1,000.
[00:39:17] Cameron: However, the silicon equivalent will run more than a billion times faster than the human brain. There are techniques for trading off memory for speed, so we can effectively match human memory for $1,000 sooner than 2023. So I asked ChatGPT about this, and it said he was slightly out on this one. Um, a million billion bits of RAM is about 1 petabit, which is about 125,000 gigabytes.
[00:39:51] Cameron: Uh, current RAM prices for DDR5 are about $3 per gigabyte in the US. So that would be about $350,000, basically, for a petabit of RAM. But as it then goes on to say, that’s not really Ray’s fault, because it was market-driven dynamics. That’s one of the problems with projecting into the future, right?
[00:40:16] Cameron: You don’t really know what the market will do. We probably could build that amount of RAM, but there’s just no market for that amount of RAM in consumer devices, because we’ve had nothing to do with it. You don’t need that much RAM to run a super fast iPhone or a super fast PC. We do need it to run a super fast AI.
[00:40:36] Cameron: So we may catch up, uh, with that. Um,
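[Editor’s note: Kurzweil’s “million billion bits” figure can be sanity-checked in a couple of lines. The $3-per-gigabyte DDR5 price is the episode’s assumption, not an authoritative figure; at that price the sketch lands at about $375,000, the same ballpark as the roughly $350,000 quoted above.]

```python
# Sanity-checking the RAM estimate discussed above: a million billion bits
# priced at roughly $3/GB (the episode's assumed DDR5 street price).
bits = 1_000_000 * 1_000_000_000        # a million billion bits = 1 petabit
gigabytes = bits / 8 / 1_000_000_000    # 8 bits per byte, 1e9 bytes per GB
cost = gigabytes * 3.0                  # ~$3 per gigabyte

print(f"{gigabytes:,.0f} GB at $3/GB -> ${cost:,.0f}")  # 125,000 GB at $3/GB -> $375,000
```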
[00:40:40] Tony: If I can just throw two cents in here, Cam. I think the beauty of Kurzweil’s predictions, and, you know, you and I talk about the inability of people to predict on our QAV show all the time, when we talk about the stock market and what companies will do and where the economy’s going, et cetera.
[00:40:55] Tony: But the thing I’ve always admired about Kurzweil is it’s rooted in this curve. The idea of the curve. Here’s what’s happening. Here’s what I’ve observed. I’m simply extending that out using the simple maths of plotting it on a graph. That’s such a powerful framework for anchoring his predictions in.
[00:41:13] Tony: That’s why he’s got longevity in the space, I think,
[00:41:15] Steve: I think too, though, Tony, and you used that word, anchored. It’s anchored in the economic incentives, which ties into the QAV thing: people building computation capacity and getting the price cheaper and all of that. So without that, I mean, it’s not just a law of nature. It’s how we interact in the market, which makes these things more available at better prices. But, like the share market, and I remember in one of his presentations, he did a presentation at Google Talks, it was around about, I want to say 2010, or 2013, whenever his last book came out, and he went through the logarithmic charts that show it. Like the share market, they don’t move in straight lines.
[00:42:00] Steve: So you might have a period of flatness or decline, and then it goes up when there’s a curve and a bit of a jump in, you know, technology, which allowed those curves, over the long term, to look like they’re predictable and straight-line. But they bubble around before they hit those points.
[00:42:18] Steve: And sometimes you’re a little bit behind schedule and then there’s a curve jump in the tech, bam. And GPUs might be one exemplar of that as well.
[00:42:25] Tony: Yeah, and that’s why I think Kurzweil’s still pretty accurate, even though the price of that memory chip is $350,000 to replicate the human brain.
[00:42:33] Steve: Yeah, it’s like, you know, the All Ordinaries and the S&P 500 are averaging, you know, 10, 11 percent or whatever it is over the longer term. Yeah, you have your ups and your downs in that, but over the long run, it’s kind of what you
[00:42:44] Steve: get. So
[00:42:45] Tony: It’s like taking a dot plot and attempting to put a curve through the middle of it. But the dot plot never follows the curve exactly.
[00:42:53] Cameron: That’s the name of Swivelhead’s first album is the Dot Plot, by the way. Um, this,
[00:42:58] Steve: I’m going to make that song this afternoon.
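[Editor’s note: Tony’s “curve through the dot plot” can be sketched in a few lines: fit a straight line to the logarithm of noisy exponential data, and the doubling time falls out of the slope. The data points below are invented for illustration, not real compute figures.]

```python
import math

# Fit an exponential trend ("the curve") through a bumpy dot plot by doing
# ordinary least squares on log2 of the values, then read off the doubling time.
years = [0, 2, 4, 6, 8]
capacity = [1.0, 2.7, 3.9, 8.5, 15.2]   # noisy, but roughly doubling every 2 years

logs = [math.log2(c) for c in capacity]
mean_x = sum(years) / len(years)
mean_y = sum(logs) / len(logs)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

def predict(year: float) -> float:
    """Extrapolate the fitted exponential to a future year."""
    return 2 ** (intercept + slope * year)

# The individual dots wobble, but the fitted doubling time comes out close
# to 2 years, which is the point: the curve smooths over the noise.
print(f"doubling time ~ {1 / slope:.1f} years, year-10 forecast ~ {predict(10):.1f}")
```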
[00:43:01] Cameron: is from The Singularity Is Near, 2005, where he said, based on the above analyses, it is reasonable to expect the hardware that can emulate human brain functionality to be available for approximately $1,000 by around 2020. As we will discuss in Chapter 4, the software that will replicate that functionality will take about a decade longer.
[00:43:22] Cameron: However, the exponential growth of the price-performance, capacity, and speed of our hardware technology will continue during that period, so by 2030 it will take a village of human brains, around 1,000, to match $1,000 worth of computing. By 2050, $1,000 of computing will exceed the processing power of all human brains on Earth.
[00:43:44] Cameron: Of course, this figure includes only those brains still using only biological neurons. Anyway, my point is that Kurzweil’s been saying 2020 since at least 1990, and, like, you can debate what we get for $1,000 worth of computing, but the overarching idea, I think, has always been that by 2020 we would have the ability to start to create forms of artificial intelligence.
[00:44:15] Cameron: And he was right on the money. Like, with ChatGPT, it’s almost like they were using Kurzweil’s timeline as a guide for when to launch this stuff. But, uh, it’s kind of astounding. So, he says now AGI by 2029, and ASI, uh, well, he says the singularity by 2045, I think, in his latest book. Uh, let me just scroll down, yeah, 2045, and he defines the singularity as when we increase intelligence a million fold.
[00:44:49] Cameron: Now, you know, I’ve been arguing on this show that we’re already in the singularity, because I don’t think I can accurately even begin to predict what the world’s going to look like ten years from now. And to me, that’s always been the definition of the singularity, when change is so rapid, And so massive that we don’t really know what it means to be human 10 years from now.
[00:45:12] Cameron: We can’t really imagine what that would look like, but, um, he’s, he’s pushing it out a little, you know, another sort of couple of decades. Um, moving right along guys, uh, Steve, uh, I know you’ve wanted to talk a bit about, um, The Australian media versus Meta, and I see, um, your good mate Albo came out over the weekend and stuck the boot into Meta.
[00:45:39] Cameron: Um, I find this all pretty hilarious, but, uh, why don’t you throw your two cents in?
[00:45:44] Steve: well, I’m just interested, and we don’t have to labor on it, but I think it’s important, especially for the Australian listeners. Um, there’s been a media bargaining code where, uh, tech companies like Google and Meta have to pay for the content that they absorb into their systems, or into their formats where people can read the news and so on.
[00:46:12] Steve: And they reluctantly pay, who knows, you know, tiny amounts. Um, And we can see why there would be a protest against it, both from Meta, who are like, Hey, we’re just distributing stuff, we shouldn’t have to pay this. And then you’ve got News Limited and Murdoch and all the others saying, You’re just sucking up our content, and we should be paid for what we’re creating.
[00:46:31] Steve: I’m just really interested to hear what you two think about it, because I don’t know who’s worse. I don’t know if the Murdoch press is worse, or if, uh, Meta and Google and the others. I just don’t know. But, well, the one thing that I do know is that the Fourth Estate is important. As Cameron pointed out on a text message to me, he said, Fourth Estate?
[00:46:49] Steve: What’s Fourth Estate? We haven’t had one for a long time. But I’m really interested because It’s really, really important. We’ve already talked about the elections today. So,
[00:47:00] Tony: Yeah, I’m surprised Meta hasn’t offered some kind of revenue split for the ads that go with the news, as a way of paying back, rather than have the government impose a solution on it. Because, you know, I think you’re right, Steve. I think, um, certainly in the last 10 or so years, the level of journalism has declined as social media has taken off.
[00:47:22] Tony: Um, and where we’re seeing evidence of that is in the US candidates who are standing for election, really. Um, there’s no, yeah, you go back many years, there’s no, um, who was the guy from Thank You and Good Night? Anyway, you know, the old sort
[00:47:38] Steve: Clooney.
[00:47:38] Tony: yeah, well, the character George Clooney played, you know, there’s no sort of,
[00:47:42] Tony: um, independent journalist holding people to account, telling truth to power, these days.
[00:47:50] Tony: The journalists we have these days are so
[00:47:53] Cameron: Well, there are. Julian Assange, they
[00:47:54] Tony: it’s true. Yeah, they are there. True. They are
[00:47:57] Steve: the other guy that they just locked up in Australia who let out some military secrets, whatever his
[00:48:02] Cameron: McBride,
[00:48:03] Tony: McBride, yeah. Yeah, no, that’s a good point. But, um, but, you know, gone are the days where journalists, there was enough money in journalism that
[00:48:11] Tony: they could stand up to government.
[00:48:13] Tony: Now they’re pretty much beholden to government, right? So they become friends of government and it’s a bit of a dance. So it’s a problem.
[00:48:20] Cameron: So, like, it just amused the hell out of me when it was
[00:48:24] Cameron: News Limited, uh, coming out and saying that Facebook was a toxic influence on society. Like, really? Really?
[00:48:35] Steve: No, but it absolutely is. This is the point, Cam, is that yes, it is. Right? They
[00:48:41] Cameron: Yeah. Both things can be true.
[00:48:43] Steve: That said, we too are toxic. Like, the point is, there’s a lot of toxicity here. Shout out to System of a Down.
[00:48:50] Tony: Hey, we know Toxic. We know toxic. We invented
[00:48:52] Cameron: Yeah,
[00:48:53] Steve: toxic. Don’t question us at Toxics. A lot of people, they don’t understand how toxic the mainstream media is.
[00:49:01] Steve: Very good friends of mine. Very good friends.
[00:49:04] Cameron: It’s pretty good, Steve. That’s probably the best amateur Trump impersonation I’ve seen. Um, yeah, look, the media has, you know, I’ve been banging on about this for a long
[00:49:17] Cameron: time now, since I started podcasting, really. Like, the media has deliberately run itself into the ground over the last couple of decades by, you know, cutting costs, getting rid of journalists, getting rid of fact checkers, cutting, cutting, cutting, uh, partly because of economics, because the economics of media attention fragmented when the internet happened, and ad spend fragmented, and they didn’t adapt
[00:49:52] Cameron: fast enough, so they got replaced by Seek, and Google Ads, and all of these other platforms that took their revenue away. But they keep trying to maintain their relevance in society, whilst at the same time laying off most of their journalists, and changing their content from any sort of hard investigative journalism to just clickbaity bullshit.
[00:50:20] Cameron: And, you know, they’ve been losing money hand over fist. Um, I did some quick research over the last couple of days. News Corp in Australia recorded a 75 percent drop in full-year profit last year. Nine Entertainment saw a 38 percent decrease in net profit. So they’re just basically floundering.
[00:50:44] Cameron: And if Meta stops paying them the millions that they’ve been forced to pay… By the way, I love this, like, the media narrative is Meta, uh, trying to, um, you know, bribe the media companies, the heavy-handed tactics of Meta. It’s like, really? You’re the ones who went to the government and got the government to pass laws to force Meta to pay you money. But they’re the ones who are being the bullies and the heavy-handed ones in the marketplace.
[00:51:18] Cameron: I mean, I think Meta’s playbook here is like, you know what, we don’t need you, but you need us. And if you don’t have our revenue, you’re fucked. You’re already fucked, but you’re even more fucked if you don’t have our money. So they’re in the power position. And these media companies know it.
[00:51:34] Tony: Yeah, except Meta does need the content, so it does need the,
[00:51:37] Steve: it does need it a bit. Uh, look, it
[00:51:39] Cameron: Does it though?
[00:51:40] Steve: It doesn’t need it. But here’s what they should do. If they don’t pay, they should definitely shut it off, because then at least what you get is something needs to break for something better to arrive. Okay.
[00:51:53] Steve: The system is already broken, so
[00:51:54] Steve: let it really, really break. If Meta doesn’t need it, you’re right.
[00:52:00] Steve: Good. Shut it off because then
[00:52:01] Steve: something has a chance of happening, right? Something that’s better.
[00:52:05] Cameron: They’ve already done this in Canada. They’ve deleted the news from Canada, uh, when the Trudeau government tried to implement some similar sort of, uh, bully-boy tactics. They just went, okay, we’re not carrying Canadian news anymore, so there’s no news on Facebook in Canada, as I
[00:52:24] Steve: Yeah. And I think one of the interesting things is the definition of news, right? Because the lines have been
[00:52:32] Steve: blurred. You could argue that the podcasts we make and the things we write are news, to an extent. And last year Meta, you know, really went dirty and blocked off local sporting clubs and all of that kind of thing.
[00:52:42] Steve: So that,
[00:52:43] Cameron: Yeah, I got blocked. All of my podcasts got blocked when they did that. We were considered news. I couldn’t post a link to any of our podcasts.
[00:52:50] Steve: It’d be interesting to see what happens, but I would like to see it break so that there’s a chance for something else to emerge. I don’t know what needs to emerge or where we go, but it’s really, really important. And it couldn’t be more important right now in the era of AI building the ability to create fake anything.
[00:53:07] Steve: It’s like, where is the trusted source, and how do you get access to it? And distribution is the most powerful thing in business, I think, more powerful than product. So often companies do well because they have distribution, whether it’s Meta or whether it’s Coca-Cola or whatever it happens to be.
[00:53:25] Steve: And if we can’t distribute valuable
[00:53:27] Steve: ideas and news, you know, we’re going to be in some,
[00:53:31] Steve: we’re going to be in a singularity again, where the future is unpredictable.
[00:53:38] Cameron: I mean, there’s this hearing going on at the moment. I’m guessing this parliamentary inquiry into Meta has been carefully timed to coincide with the, uh, re-signing attempts around this media bargaining code. But, um, Albo, our Prime Minister, came out over the weekend and was criticising Meta over the harm that social media is doing to the children of Australia, and how Meta aren’t taking responsibility for that.
[00:54:14] Cameron: As a parent
[00:54:15] Steve: It’s not Meta’s responsibility. It’s the government’s.
[00:54:18] Cameron: I never thought it was. No, it’s the parents’ responsibility. I’ve always thought it’s not Meta’s job to keep Fox off of social media, it’s my job
[00:54:27] Cameron: and Chrissy’s job?
[00:54:27] Cameron: to keep Fox off of social media if we don’t
[00:54:29] Steve: No, and the government’s. We pay them,
[00:54:31] Cameron: Why is it the government’s
[00:54:32] Steve: up until Tuesday afternoon to pay them to legislate against things for our own benefit. I know what you think of that, but I think there’s a responsibility for the government to take steps to protect their populace, whether it’s all the way up to the military, down to local bylaws on roads and the safety of children. Absolutely, and the parents, more so the parents. But the last entity that has
[00:54:58] Steve: responsibility for anything is a for-profit corporation, right? And that’s my pet hate in all of this: oh, corporations have got to be more responsible. No, they
[00:55:12] Cameron: Well, it depends on what they’re doing. If they’re putting pollution into the rivers,
[00:55:17] Cameron: or into the land, or into the sky,
[00:55:20] Steve: into our minds,
[00:55:22] Cameron: Well, I have more control over what goes into my mind than I do over what goes into the river in, in my suburb, right?
[00:55:31] Tony: This is an age old debate, though. There are some children who don’t have two parents, there are some children who don’t have parents who care about social media or know the issues. So you need both. You need government legislation or frameworks and you need good parenting. You can’t always rely on good parenting, unfortunately.
[00:55:47] Steve: You have some people who drive responsibly and some that don’t. And the rules are there to protect us against those who don’t
[00:55:54] Steve: do the responsible thing. Just like the parents who don’t do the responsible thing with their
[00:55:57] Steve: children.
[00:55:59] Cameron: That’s a reasonable argument, but I don’t
[00:56:01] Cameron: think it’s Meta’s responsibility
[00:56:03] Steve: I agree with that.
[00:56:04] Cameron: to protect the minds of our children. It would be nice if they cooperated, but,
[00:56:08] Steve: Well, this is the thing we’ve got to start, we’ve got to start doing regardless of what it is. You don’t ask for cooperation, you demand it. Right, and it’s the same when it comes to taxation and multinational tax avoidance, which just shits
[00:56:20] Tony: I agree, Steve.
[00:56:21] Steve: It shits me to tears. Guess what,
[00:56:23] Steve: government? It’s really easy. You’re sovereign. Guess what? You’re my new
[00:56:27] Steve: taxpayer. Right, here’s a revenue assessment tax the same as land tax. Congratulations. You’re my newest taxpayer. You don’t like it? Fuck off. You’re
[00:56:34] Cameron: Well, Steve, they’ll all leave the country if we do that. If we’re mean to them, they’ll all leave,
[00:56:41] Steve: Good, fuck off. They
[00:56:42] Tony: We’ll, we’ll end up like Tasmania and be great. Well,
[00:56:47] Cameron: It’s the same argument with, uh, you know, taxing mining interests. People will say, well, if we tax them, they’ll all leave. I’m like, good. We’ll nationalise it. Fuck off. I don’t care. We’ll just nationalise it.
[00:56:58] Steve: with them. Mate, we should totally nationalise all of the minerals in our ground and or, and or, have significant royalties on all of it. It is so easy, and it is such a simple decision
[00:57:11] Steve: to make. You don’t need a dollar. from any corporate donor because for once you’ll be making a decision for the fucking people you’re meant to represent.
[00:57:20] Steve: Easy. I’m fucking running for politics. I’ve had
[00:57:22] Cameron: They’re like, oh, the
[00:57:22] Steve: God.
[00:57:24] Tony: They are nationalised, though. They’re owned by the states. They just, they just
[00:57:29] Steve: Yeah. And they give them away and they give them away, Tony.
[00:57:33] Tony: they charge a royalty on their extraction, but sometimes it looks like a giveaway, like in WA compared to Queensland, where they charge a lot more on the royalty for coal.
[00:57:42] Steve: We, we, we have given net proceeds to companies like Santos
[00:57:46] Tony: Mm. No, I agree,
[00:57:48] Steve: What we get in royalties versus what we give the fossil fuel industry: they actually get more from the government. It’s incomprehensible.
[00:57:56] Tony: No, I agree.
[00:57:59] Cameron: Steve, you wanted to talk about, uh, the European commission and Apple.
[00:58:02] Steve: Well, I just wanted to touch on it briefly, especially because Tony is in the house. Apple’s in breach of the EU’s anti-competition laws, and the EU is putting up a case against them. I mean, I think Cam and I have discussed that these big tech companies aren’t corporations. They’re basically modern-era utilities, which are unavoidable and have unassailable competitive moats, where I don’t think anyone can catch up to them.
[00:58:30] Steve: We’ve discussed that in great detail. And I just would like to understand Tony’s viewpoint on this. What sort of an investment is big tech? In the face of potential antitrust, whether it’s fines or splitting up, does it still make them a good investment? Does it tell you how powerful they really are and they’re still a good investment?
[00:58:52] Steve: If they get split up? Where, historically, when monopolistic firms, whether it was Standard Oil or whether it was Bell, AT&T, got split up, they actually created more value post-split. I'm just really interested from an investment viewpoint, given the power of big tech, and what is it? The top seven tech companies have a market cap of what?
[00:59:11] Steve: It’s something over 15 trillion now. I think it’s crazy. Might even be 17 trillion.
[00:59:16] Tony: Yeah, well, I’m a value investor, so I don’t really follow big tech as investments, because I think it’s a bubble, um, and you look at NVIDIA, I saw
[00:59:24] Steve: answering the question then. This is, this is good. This is what I want to learn.
[00:59:27] Tony: I saw, I saw something on, um, I think it was in today's paper, uh, NVIDIA lost $430 billion last week in market cap, so it's,
[00:59:37] Steve: It’s just insane. It’s such a big number. You know,
[00:59:40] Tony: it is, isn’t
[00:59:40] Cameron: Yeah.
[00:59:41] Steve: companies only have a market cap of 30 or 40 billion, and then they lose 10 X that in a week. And it’s like, I mean, it tells you something, right?
[00:59:49] Tony: Yeah, and I've got a chart I was going to use when we record QAV tomorrow, Cam. Someone produced a graph I saw on the weekend, which said that when, uh, I think it was the top 10 companies in the U.S. had such a concentration of market cap, it was a precursor to a crash, and it happened at this level in 1929 and in 2000.
[01:00:11] Tony: Um, so when you get this kind of, Money following the leaders, um, it, it tends to crash and revert back to the mean. So, uh, you know, I, I think that’s coming. I think it’s a bubble. I think whenever anyone asks if it’s a bubble, it probably is a bubble by, by definition. Um, I don’t know if that answers your question though.
[01:00:31] Tony: Like they do, Apple does have an unassailable moat, although there’s Google, Android, et cetera. Duopolies actually, duopolies can be as, as profitable as monopolies. You know, we see that in Australia all the time. Coles and Woolworths are very profitable. So yes, they’re great investments, but at the moment I think they’re in a bubble.
[01:00:53] Tony: Should they be broken up? I have some sympathy for the EU argument, because Apple controls, via the App Store, most of the developer market in the world, probably, and charges a 30 percent commission on that, and there's very little competition. And yeah, that's a monopoly, so there is an argument to say it should be broken up.
[01:01:12] Tony: And as you say, though, that may create more value, um, for investors, so you have to watch that space. (laughs)
[01:01:23] Cameron: Right. Moving right along then. Um, I want to, let's skip over the next couple of stories, because I really want to get into the deep dive and we've already been talking for an hour. Um, the deep dive. So let me, I'll restate my thesis and then we can get into it. So the thesis I've been trying to figure out for the last month or so, on my blog and on the show, is what's gonna happen
[01:01:50] Cameron: in the next five to ten years in our world, but particularly in our economy. And, stated simply, my thesis is this: um, AI is going to continue to improve rapidly, to the point where, in the next five years, we will have a form of AGI that is going to be widely available to businesses as well as consumers and governments.
[01:02:22] Cameron: And it'll be able to do most things better than most people in terms of knowledge work, information work, and will also be integrated into a wide variety of robotic devices, including humanoid robotic devices, which, according to Goldman Sachs and McKinsey and organizations like that, will be available for purchase in the vicinity of a starting price of $10,000 to $20,000 in the next decade.
[01:02:58] Cameron: So a humanoid, a humanoid robot with an AI built into its brain. Maybe commercial versions of that might be a little bit more expensive. But my theory is that when a business can buy a robot with an AI brain for less than the cost of the annual salary of a human, uh, with all of the attendant issues that come with humans, like HR costs and health and safety costs and all of that kind of stuff, they will start to replace manual labor with robots as well.
[01:03:32] Cameron: And so over the course of the next five to ten years, I expect to see a lot of jobs being taken, first by software, AI, then by robots. And I think this is going to lead to large-scale unemployment. And when I think about the traditional, what's happened historically, is when technology has replaced a sector, um, those people, or the generation that comes after them in some cases, have found jobs in other industries that may not have even existed before.
[01:04:08] Cameron: But when you have AI that's smarter than all humans, and robots that can do anything a human can do better, more cost-effectively, more efficiently, I can't imagine what kind of jobs could be created that AI and robots won't immediately be able to do better than any human. So then, if, uh, you know, there is large-scale unemployment, and by large-scale I'm talking 20 to 30 percent unemployment, you know, the last time we had 20 to 30 percent unemployment we had the Great Depression.
[01:04:42] Cameron: And what happens when you have that level of unemployment, obviously, is economies melt down. There's not enough cash being spent, uh, so all wealth decreases. You know, there's this idea that, oh, AI and robots will be available to the rich and the poor people will get screwed. I keep saying, if the economy melts down, everyone's screwed.
[01:05:03] Cameron: There are no rich people when people can’t afford to buy goods and services. Businesses fail, small businesses fail, that causes bigger businesses to fail because the small businesses can’t pay their bills to the bigger businesses and there’s the trickle up effect, right? And governments don’t have enough income tax to come in to pay for services.
[01:05:24] Cameron: So you end up with austerity measures like they’ve had in the UK for the last 15 years and we’ve seen what’s happened to the UK. My hairdresser, believe it or not, I just got a haircut a couple of days ago. Um,
[01:05:42] Cameron: Steve couldn’t cope with that, he had to walk away from the mic. Uh,
[01:05:45] Steve: you didn’t! Listen, we’re trying to
[01:05:48] Steve: make this
[01:05:49] Tony: Which hair did you have
[01:05:49] Steve: truth, about the future, and then you just pull a Kurzweil on me and tell me about a haircut?
[01:05:54] Steve: You and Ray need to have a cup of coffee and sit down and discuss things.
[01:05:58] Cameron: it was a good inch longer last week and he put layers, layers, Jerry, layers. Um, he just got back from London, he’s visiting his grandmother, he’s a
[01:06:08] Cameron: Pom, uh, who’s lived here for 25 years, and he was like, dude, it’s so fucked over there, like, she’s 90 and she had to go to a hospital to get a brain scan done.
[01:06:17] Cameron: He said there were, like, staff fighting over the two available wheelchairs they had, old people lying in corridors just drooling without getting any attention for five or six hours. He said plaster falling off the walls in the hallways of the hospital. He said it looked like the Soviet Union in the 60s.
[01:06:35] Cameron: Like it was just that decrepit. He was shocked. It was his first time back in quite a while. And he was like, dude, that country is fucked. Fucked. And this was in London. Like, we’re not even talking about, was it, um, Manchester or Birmingham that’s just gone bankrupt? Uh, like eight cities in the UK now have just, have gone bankrupt because they can’t afford to pay their bills for their services.
[01:06:58] Cameron: Anyway, getting back to my theory: I am predicting and forecasting massive economic and social upheaval in the next five to ten years. Tony thinks I'm bonkers. Discuss.
[01:07:15] Steve: I’ll let Tony go first, I’ve got a whole flowchart on.
[01:07:18] Tony: I think you, I think, well, inherently predicting is difficult. I'm not seeing a curve for your prediction, a Kurzweil curve for your prediction. But let me just, to me it defies the laws of commerce. So if you can buy a synthetic human for $20,000, a starving human will work for $19,000. So there's a race to the bottom that goes on in commerce in terms of cost cutting.
[01:07:43] Tony: So I don’t think you’re going to see widespread unemployment. We’ve seen this happen before, right? Mexicans cross the border and do gardening jobs cheaper than what, um, the locals do it. We outsource things to China because the manufacturing is cheaper. The, the production of a robot that can do A human’s job is always going to be tested against the cost of being done by a human.
[01:08:08] Tony: And if humans are
[01:08:09] Tony: being displaced en masse, like you're saying, they'll work for food, they'll lower their price, if it's an issue. But you also said, when we, okay, I'll stop there. I'll stop there.
[01:08:20] Cameron: humans working for food still doesn’t sound like a healthy functioning economy.
[01:08:26] Tony: Yeah, but the economy will reset as well. I mean, if, when there’s less purchasing power in the economy, prices come down. So food will get cheaper. It may not get better. The quality of the food might decline dramatically. Um, and we’ve seen this as well. I mean, why is McDonald’s successful? It’s because it can feed a family for 20 bucks.
[01:08:47] Tony: It may not be a good feed, but it can. And. Yeah, it’s calories, exactly. You know, how does the, how does the Chinese person survive in the factory? Well, they eat rice and they get their calories cheaply. So, it’s kind of this law of commerce or the economy that, that, um, the robot’s got to compete for its job, even though it’s maybe fantastic and can do it better.
[01:09:11] Tony: It’s still, it’s still got to pass the economic test.
[01:09:14] Steve: Well, I’ll just, I mean, that’s true. I just want to point out that
[01:09:18] Steve: the name of our second album of the Swivel Heads is going to be called Working For Food. I just had to point that out.
[01:09:24] Tony: We’ll have playing for food. We’ll play
[01:09:26] Steve: going to be our follow up album. Let’s hope it’s as successful
[01:09:29] Cameron: After the,
[01:09:30] Steve: of the dot plot. So yeah. I think it’s going to really have a nice flow on in terms of album names.
[01:09:38] Steve: It's right up there. Um, it's right up there with Garbage's Version 2.0, which I really loved as an album, uh, circa 99. Um, I think I agree with you, Tony, in that the tech adoption, there's an economic paradigm that is required. Like, what is the cost of a robot? That said, let's assume that the robot or the AI, or the combination of those two things, is better than a human, can work 24 hours a day, doesn't need annual leave, all of those things. If they're going to be, you know, the cost of AI online, which is kind of free, or a humanoid robot, which is, you know, 10 to 20 grand, the cost of a small car, geez, I tell you what, the economics of that, it doesn't seem like it's going to be hard
[01:10:21] Steve: for it to exceed human capability. But for me, there's a fundamental flaw in this idea. Um, and, and the number one thing that I think really matters, and it circles back to the Apple thing, is competition. Competition is the most important thing when you have an economic upheaval. You need large numbers of competitive players.
[01:10:42] Steve: And I'm going to explain why. My economic thinking on this is: you have technology adoption, and it doesn't matter whether that's a combine harvester or AI robotics. The reason a company adopts the technology is to reduce its costs, right? Or to have greater output. One of those two things. Reduce cost, increase output, which reduces costs anyway.
[01:11:04] Steve: Um, so that's why you would do it. And why would you do that? To make things cheaper. And you make things cheaper for one of two reasons, and this is a fork in the road. One is to sell more to more people, because you've reduced the cost and increased your market scope. That's reason A. Or reason B: don't increase your market scope necessarily, but increase your margin on what you sell, because it costs you less but you don't put your price down.
[01:11:29] Steve: Now, if option A happens, right, that’s great, because you reduce, you increase the market scope, you reduce the cost, more people can afford it, and that frees up capital of the purchasers, the people buying things, to invest in emergent industries of which we don’t know what they are.
[01:11:46] Tony: Yeah.
[01:11:47] Steve: But, if we go to that second fork in the road, which is, no, they don't increase their market penetration, all they do is increase their margin, then we've got a problem. Because what happens is, they become more profitable, and money isn't freed up to go into new industries. And the only reason path B can happen, companies choosing the profit-and-margin path instead of market penetration, is if there's no competition. Because, and this is microeconomic theory 101, competitors with access to the market compete away profits, because they all land on the same new price.
[01:12:21] Steve: If you don't put your price down when you've got a lower cost of production, your competitors will, and you lose market share and people will substitute across. If we don't have competition, then path B can happen. If you do have competition, it's impossible for companies to increase their margin at the expense of consumers. That's actually the most important thing in your thesis right now, Cameron: whether or not we have competition. If we have competition, it's impossible for that margin-grab economics to work, because the margin gets competed away, which frees up capital with humans, who can spend money on new and emerging industries and then populate those places with jobs.
[01:13:05] Steve: It's the same as, like, what happened with the music industry. Classic example, right? I spend $30 on a CD for 12 songs. Where does the $30 go? As soon as I can download or steal music, that $30 goes into data, an iPhone, all of those other things, emergent new industries. But now those emergent industries for the music industry are all big tech, and big tech is an invasive species sucking up everything.
[01:13:30] Steve: If big tech choose the profit
[01:13:31] Steve: path with automation and robotics, the whole
[01:13:34] Steve: system collapses. So for me, the number one issue on this is competition. It’s actually not the technology. That’s my theory.
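Steve's fork-in-the-road argument can be sketched as a toy model. Every number below is a made-up illustration, not a figure from the discussion: a cost-reducing technology either passes the saving through as a lower price (path A, under competition) or keeps it as margin (path B, without competition), and only path A frees up buyers' capital for emergent industries.

```python
# Toy model of the fork in the road Steve describes.
# All prices and costs are hypothetical illustrations.

def price_after_adoption(old_price, old_cost, new_cost, competitive):
    """Post-adoption price of a good.

    Path A (competitive market): the cost saving is passed through,
    so the price falls and buyers keep the difference.
    Path B (no competition): the price stays put and the saving
    becomes retained margin for the producer.
    """
    saving = old_cost - new_cost
    return old_price - saving if competitive else old_price

old_price, old_cost, new_cost = 100.0, 70.0, 40.0

path_a = price_after_adoption(old_price, old_cost, new_cost, competitive=True)
path_b = price_after_adoption(old_price, old_cost, new_cost, competitive=False)

# Capital freed per unit for consumers to redeploy into new industries:
freed_a = old_price - path_a  # 30.0 under competition
freed_b = old_price - path_b  # 0.0 without it: the saving stays as margin

print(path_a, path_b, freed_a, freed_b)
```

The point of the sketch is just that `freed_b` is zero: when there is no competitor to undercut the old price, none of the cost saving reaches buyers, which is the collapse scenario Steve is worried about.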
[01:13:43] Cameron: The problem with that, Steve, is you're saying prices get cheaper so people can redeploy
[01:13:47] Cameron: their capital in other places, but I'm saying that people are unemployed, so they don't have an income to redeploy anywhere. And with Tony's point that the humans will drop their price, you know, if you can buy a robot for $20,000, a human will do it for $19,000.
[01:14:07] Cameron: That may be true when the quality of the output of the robot is roughly equivalent to the quality of the output of the human. But if the robot, slash the AI system that you're using in a knowledge worker sense, is twice as good for $20,000 as a human would be for $19,000, or three times, or five times as good with its output for speed, reliability, etc.
[01:14:37] Cameron: There has to be a breaking point where, as an employer, you go, humans are just difficult. They need sick days. They need managing. Um, if I have, let's create a hypothetical company that has a department with 10 knowledge workers, let's say a marketing department. So you've got 10 people in the department.
[01:15:01] Cameron: You've got a manager overseeing those people, um, that has a salary. Um, if I can get rid of all of those people and have an AI do all of that work, I can probably have a single AI replace all of those people. And if I get rid of those people, I don't need a manager, so I can get rid of the manager.
[01:15:22] Cameron: If I have, uh, an executive manager overlooking all of the managers, and I've gotten rid of all of the managers, I don't need the executive manager overlooking the managers anymore. There comes a point where the AI systems are just too good for the economics of keeping humans, when you need an HR department, and you need a building with light and power, and you need a car park, and you need, you know, to pay holiday leave and all this kind of stuff.
[01:15:47] Cameron: Surely there has to come a point where businesses go, you know what? Humans just don't cut it anymore. I'm going to replace them all with AI and/or robots.
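Cameron's break-even argument can be put as back-of-envelope arithmetic. Only the $20,000 robot price and $19,000 salary come from the conversation; the lifespan, overheads, and output multiplier below are hypothetical placeholders: once you compare fully loaded cost per unit of quality-adjusted output, the wage race to the bottom Tony describes hits a floor it can't get under.

```python
# Back-of-envelope robot-vs-human comparison from the discussion.
# Lifespan, overhead, and output figures are made-up illustrations.

def annual_cost_per_unit_output(annual_cost, relative_output):
    """Annual cost divided by quality-adjusted output (human = 1.0)."""
    return annual_cost / relative_output

# Human: $19,000 salary plus hypothetical overheads Cameron lists
# (HR, office space, holiday leave, share of a manager's time).
human = annual_cost_per_unit_output(19_000 + 12_000, relative_output=1.0)

# Robot: $20,000 purchase amortised over an assumed 5-year life,
# plus power/maintenance, producing (say) 3x a human's output.
robot = annual_cost_per_unit_output(20_000 / 5 + 2_000, relative_output=3.0)

print(f"human: ${human:,.0f} per output-unit-year")
print(f"robot: ${robot:,.0f} per output-unit-year")
# Under these assumptions, even a human working for free can't win:
# the $12,000 of overheads alone exceeds the robot's total cost.
```

Tony's counter is that the inputs are doing all the work here: if competition never drives the robot below $100,000, or the human sheds the overheads by going freelance, the comparison flips.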
[01:15:56] Tony: And then.
[01:15:56] Steve: We've been on that path for a long time, Cameron, though, of systems replacing multitudes of humans, whether it's paperwork, and the software's been eating that for a really long time. In terms of the idea
[01:16:11] Steve: that they adapt and move to other areas: if they've lost their jobs, they can't pay.
[01:16:16] Steve: That always happens, but it doesn't happen rapidly, I don't think. It might do this time. It could be different this
[01:16:21] Cameron: Yeah, but
[01:16:22] Steve: But it happens like this, Cam. It happens like that, and a few people lose their jobs, but as that product gets cheaper, as people lose their jobs, that money goes across to this burgeoning industry, and then the people
[01:16:33] Steve: move across to that.
[01:16:33] Steve: You've always got that structural shift. It doesn't go, boom, like that, where this one comes down and all these people are out. It hasn't so far. It might do. It might do. But so far, it
[01:16:42] Cameron: It never, yeah, but in the past, we've built, okay, we've built, let's say, an ERP system,
[01:16:49] Cameron: which has replaced people who were doing resource planning on pads with pens, and those people could be redeployed. We built a system here, but we didn't build a system that could do everything all at once.
[01:17:02] Cameron: That's the thing with AI: we're building a system that can do everything, all at once, at the same time, better than any human. We've never seen that. It's like the, uh, the ultimate machine.
[01:17:15] Steve: But you've got to remember, the people who are being displaced also have access to this ultimate machine. And that thing that they had in their head that they were going to invent and create, they couldn't create, but now they can, and now they're going to go
[01:17:27] Steve: out and go, you know what? I've got this AI.
[01:17:29] Steve: I can just talk to it with natural language and it's amazing. And I'm going to create this new revenue stream in this new industry or job or company.
[01:17:38] Cameron: but I don’t, but I don’t have an income, so I can’t pay my rent. I can’t buy
[01:17:41] Cameron: food while I’m doing that. And no one’s going
[01:17:44] Cameron: to
[01:17:44] Steve: But you got a redundancy and you got like six or 12 months' money, because it's a big corporation, and, and then you, but you do, you
[01:17:51] Cameron: Yeah. Okay.
[01:17:52] Steve: And then you go
[01:17:53] Cameron: pay them all out. And so some people take that money and use it to become entrepreneurs.
[01:17:59] Steve: Some do.
[01:18:00] Cameron: Some will,
[01:18:02] Steve: Maybe, maybe, many, many of them do it. And we end up with like a, a
[01:18:09] Steve: system where we have
[01:18:14] Steve: lots of, let's just call them freelancers and independent workers. Like, like we had in the pre-industrial era, where most people, sorry?
[01:18:23] Cameron: Isn’t that the gig economy?
[01:18:24] Steve: Well, yeah, yeah.
[01:18:26] Tony: a, that’s a curve
[01:18:26] Tony: that
[01:18:27] Steve: that does happen. I don’t know.
[01:18:29] Steve: Maybe,
[01:18:29] Tony: I think the trends,
[01:18:30] Steve: gig workers pre industrial.
[01:18:32] Tony: See, I, I still, you've jumped to the endgame. I still think about what happens between now and then, as AI increases in computational power. It's still got to be sold to somebody. So, you know, who pays for ChatGPT? Who pays for it? Because what's going to happen, I think, and it's already happening, is that, um, you know, Silicon Valley loves to have the frat boys throw money at these things even though they're not profitable, like Tesla, for example. And someday it's got to make money for the serious people to keep putting money in.
[01:19:10] Tony: So, you probably see that. Companies like, uh, oh, I see. I guess the difference in this is that you've got, say, Apple developing in AI, so they've got a cash flow to support it. Um, so people will still keep investing in AI, but I guess what I'm trying to say is AI will need to support itself economically.
[01:19:30] Tony: And that's gotta happen on the road to this perfect device that can do all jobs cheaply. Um, and there's going to be so many secondary effects to that, that you may never get to the utopian world of a $20,000 robot doing all jobs, because, um, it might break down along the way.
[01:19:51] Tony: It might not get to $20,000, because, as Steve says, there might be a lack of competition in the marketplace, and it never gets below $100,000, because Apple want to keep
[01:20:02] Tony: making a profit margin. Um, there's all sorts of things that have
[01:20:05] Tony: to happen economically, or not happen economically, to get to that utopia.
[01:20:10] Tony: And I, I find,
[01:20:13] Tony: you know,
[01:20:13] Cameron: Actually, I think it’s a, I think it’s a
[01:20:15] Tony: dystopia. Yeah. Well,
[01:20:16] Steve: thought the exact same thing to get to that
[01:20:18] Steve: dystopia. I love it. Cam will never let you get away with that one, Tony. Cam was on that.
[01:20:24] Cameron: Well, look, it’s going to be, I think there is a utopian version of the future. And that’s what Kurzweil is pitching. He’s a utopianist, that everyone’s going to be great and we’re going to be immortal and every, all the work’s going to be done for us. And I’m trying to figure out what does the economy look like in that scenario?
[01:20:45] Tony: my, my starting question is who pays
[01:20:48] Cameron: for what?
[01:20:48] Tony: for your, for Kurzweil’s utopia? Who pays? Right? Because,
[01:20:53] Cameron: question. yeah,
[01:20:55] Tony: that's, and that's why I'm saying I don't know if we actually get there. It's like, if you think about where we are now, someone's still got to spend $2,000 on the Apple iPhone, um, if that's where technology is, you know, along the curve now.
[01:21:09] Tony: So, which means they need jobs, and they need to be able to, you know, save and buy that. So, it's a bit like the old Henry Ford thing. He paid his workers enough so they could buy the product, the car. So, there is a bit of a circularity to this argument. You know, I know that if we get to utopia, then you don't need money, but how do we get there?
[01:21:32] Tony: I just can’t see, I mean, the current laws of commerce and the current laws of human nature suggest that we may not get there because we’re, there’s always someone trying to get an advantage and make some money out of it.
[01:21:44] Steve: I mean, the question that Cam raised last time we chatted about this is Do the current laws of commerce apply? Cause maybe we’re going into uncharted territory.
[01:21:52] Tony: Yeah, and that’s where I come, there’s, in terms of extrapolating the curve, we’ve never had a situation where they haven’t applied. So, it’s, yes, they may not apply in the future. Um, I give that a relatively low probability because they’ve always applied up to then. And having an AI, think about the power that goes with having an AI or an ASI if you get there.
[01:22:12] Tony: It’s, it’s really valuable and you can charge a lot for it rather than give it away for free and make this utopia on Earth.
[01:22:20] Steve: Well, there's also the emancipation. If you make it available to the large populace, then you'd have to assume that there'll be a whole lot of new roles and industries that are invented, because people have superpowers, where things that they were incapable of creating, they can now create, whether that's an industry or a company or just working for themselves.
[01:22:43] Steve: If it is, uh, as emancipating as we think, it doesn't just emancipate the organizations that create it, it's those that they sell it to, which then gets distribution through the marketplace, which redistributes some form of the funds, so long as there's access and competition. They're my two things: access and competition.
[01:23:02] Steve: If we have that, we’ll be fine. If we don’t, then we could be stuffed.
[01:23:05] Tony: I think we’ll extrapolate where we are now, where if you compare our lives to what they were like 100 years ago or 200 years ago, we’re miles ahead. But do we feel like we’re in utopia? And I think it’ll be the same in 10 years time.
[01:23:18] Steve: You think we’re in Utopia, Cam?
[01:23:21] Cameron: Yeah,
[01:23:21] Tony: Well, Cam gets to sit
[01:23:22] Steve: Really?
[01:23:24] Tony: all day and
[01:23:24] Steve: Because after you
[01:23:25] Tony: playing with Chrissy and Fox. That’s utopia.
[01:23:27] Steve: but listen, after you eat and sleep and do a little bit of recreation, like, I feel like our minds have this heavy toll to carry. We’re carrying around this cognitive burden of uncertainty and, you know, too much information.
[01:23:41] Steve: It’s just too much of everything. I, I really feel a burden on, on carrying. All of this. I don’t feel, and I’m more well off than I’ve ever been in my life. I’ve got everything I’ve ever hoped for. I’ve got my arcade machine there, my BMX, my surfboards. It’s all, I’ve got everything I ever dreamed of. And I feel, I don’t feel as calm or as relaxed or, or
[01:24:02] Tony: never produces that result.
[01:24:04] Steve: Well, I'm not saying that I just go out and chase money. Coming from Tony, mate, Mr. Investor Extraordinaire. Like, and I go surfing, and I do think. Like, the point is, is that, you know, I can afford things, but I'm not saying that I was chasing, like, material things. But I feel like the cognitive load in the modern era, and the uncertainty and all of that, is a real burden that reduces happiness.
[01:24:31] Tony: It's always been there though. Like, I was brought up in the era of the Cold War, where every day you woke up was a good day you weren't bombed by nuclear war. So I think that underlying anxiety might be a condition of human nature. You've just got to learn how to control it.
[01:24:46] Steve: Okay. Thank you. Well, now you've just ruined everything for me. I just want to point out that I was having a really good time. And now what you're telling me is to just roll with the anxiety and the unhappiness, that it's a natural human condition and gets worse as I get older. Thank
[01:25:00] Tony: ignore the noise. Ignore the noise.
[01:25:03] Steve: Very easy to say, hard
[01:25:04] Steve: to do. It's like anything in life. Well, how do you get fit? You go to the gym and you eat less. Easy to know, hard to do.
[01:25:09] Tony: Hard work. Yeah.
[01:25:12] Cameron: Okay, so back to the future. So let’s think about the next couple of years. So. AI is going to continue to improve, dramatically, I believe, in the next few years. It will start to take jobs. Do we agree on that?
[01:25:29] Tony: Uh, yes, but, but that may not be a bad thing. They might, as Steve says, displace people into other
[01:25:34] Tony: jobs.
[01:25:36] Cameron: Okay, maybe. So,
[01:25:38] Steve: Take jobs and invent jobs, as it always has,
[01:25:40] Tony: Yeah. That’s a good way to say it. Yeah.
[01:25:43] Cameron: Right. So, initially, the sort of jobs I think are going to go in the first
[01:25:48] Cameron: tranche, and I'm talking the next couple of years. At first, slowly, while we're testing the capabilities, the reliability of AI, and assuming that it stands up to that test, I think they'll go quite quickly: low-level knowledge worker jobs and artistic jobs.
[01:26:07] Cameron: Graphic design, gone when you can just create something, um, using an AI in a second. Um, you know, low level sort of data analysis jobs, uh, gone. Um, writing. Anything that involves writing marketing copy or anything like that is gone. Secretarial work gone when you just talk to your computer and it can rewrite it in any fashion that you want.
[01:26:36] Cameron: I mean, I already dictate most of my podcast notes to my computer. Um, and if I want to sharpen it up, I’ll just then, it already is translated via an AI from voice to text, but then if I want it cleaned up, I’ll just dump it all into something and say clean this up for me and it will. Um,
[01:26:56] Tony: Can I just stop you there? I think that’s very simplistic. I think that there will still be some humans involved in each of those industries for different reasons. Um, partly because in the next five years I don’t think the AI will be perfect enough to replace holus bolus everyone. But just take the secretarial support.
[01:27:14] Tony: So you’ve got an AI controlling your calendar. I’ve got an AI controlling my calendar. Um, we work for large organizations and something fucks up and the meetings clash and you know, it’s human nature to want to open your door and yell at the secretary to say, sort it out for me. There’s still, there’ll still be someone sorting out problems with the tech.
[01:27:34] Tony: So it won't supplant every role, every job,
[01:27:38] Tony: in the secretarial field.
[01:27:39] Steve: Well, so Tony has solved it here. Basically, AI won’t take over because humans want to yell at humans and a lot of people don’t know that. That is the savior! To
[01:27:49] Cameron: I think that tells us, tells us more about Tony than it does about, uh,
[01:27:54] Cameron: Humans in the future.
[01:27:56] Cameron: Last thing I want to do is yell at someone and you’ll just yell at your AI. I mean, you’ll go, Hey, I mean, who hasn’t yelled at their AI before?
[01:28:05] Steve: And, and of all the things Siri isn't good at, given the woke organization running it... well, I'm not one of those people, but, but the idea that it says, sorry, I won't answer that if you use, uh, abusive language.
[01:28:20] Cameron: racist
[01:28:21] Tony: Mine says, don't be like that. If you went through my Siri searches, it would just be, fuck off, Siri.
[01:28:28] Steve: I say that too, and it says, I won’t answer such profane language. It actually does that to
[01:28:32] Steve: me. And, and, and as Cam, Cam has said, you got to be nice to the
[01:28:35] Steve: robots. Cam’s getting in early. Cam is getting in early.
[01:28:39] Cameron: People living across the road from my mom have got, like, a robot Roomba mowing their lawn, which I've never seen in the wild before. Just all day, it's just pottering around mowing their lawn. Then it docks on the side of the house to recharge. Super crazy. Anyway, uh, coding, obviously, in terms of the early jobs. And look, I'm not saying, Tony, that all the jobs are going to go immediately in these sectors.
[01:29:01] Cameron: I’m saying, okay, if you’ve got a team of five coders and a manager, you’ll, you’ll lose one, then you’ll lose two,
[01:29:08] Tony: Yeah. So there’s an, no, you’re right. There’s an efficiency there, but that’s my point is, is what does it cost to replace those people? How much is Sam Altman going to charge
[01:29:17] Tony: for the coding AI to replace the people who are now coders?
[01:29:22] Tony: And then compare that cost to people in India.
[01:29:25] Tony: Who do it for like two bucks an hour.
[01:29:27] Steve: So this is the competition
[01:29:28] Cameron: but they won’t be anywhere near as good as the AI.
[01:29:31] Tony: Oh, says you.
[01:29:33] Cameron: I’ve hired people in India and I’ve used AI and already the AI is better than
[01:29:38] Cameron: the people
[01:29:39] Steve: Cam, that’s so racist. Cam is so
[01:29:41] Steve: racist. just saying, brother. No, well,
[01:29:44] Steve: but this is the competition thing. How much will he charge? How much will Sam Altman charge? It depends on how much competition he’s got.
[01:29:52] Cameron: Yeah.
[01:29:53] Steve: humans and from other AIs.
[01:29:55] Cameron: but we’ve, they’ve already established sort of the price point, right? We’re already paying
[01:29:59] Cameron: 20 to 30 bucks a month for AI,
[01:30:01] Steve: Pretty good value.
[01:30:03] Cameron: premium level tools, uh, you know, whether you’re looking at U.S. or Australian, um, which will do all of that for you. They may have a higher level of access, but we’re talking,
[01:30:15] Cameron: you know, not a lot. I mean, they’re not going to start charging all of a sudden $10,000 a month for it to do what they’ve already shown you it can do for $20 a month.
[01:30:26] Tony: Well, they might because, I mean,
[01:30:28] Tony: we’re getting into the area of prediction, so this is a bit of what if,
[01:30:30] Cameron: Yeah, yeah,
[01:30:31] Tony: like, what if the general ChatGPT is $20 a month,
[01:30:36] Cameron: Tony, the show is called Futuristic!
[01:30:39] Tony: but they actually,
[01:30:39] Cameron: all we do on this show is predict the future, Tony.
[01:30:43] Tony: some little gremlin in Sam Altman’s corporate structure says, hey, you realize if we actually package up the AI to say it’s a coding specialist, we can charge $1,000 a month for it to replace coding people.
[01:30:57] Tony: Um, so there could be
[01:30:59] Tony: specialization in AI where
[01:31:00] Tony: they charge a premium, and then that’s got to be, you know, competed against other providers in the
[01:31:05] Tony: economy.
[01:31:07] Cameron: Anyway, my point is that I think the first layers that are gonna go will be low risk, high cost, high benefit jobs, right? So you can take out a coder here, you can take out a customer service person, replace them with an AI, you can take out an analyst, you can take out a writer, graphic design, industrial design.
[01:31:30] Tony: Look, I agree. Call centers are toast. They’re gone.
[01:31:33] Cameron: So still initially it’ll be incremental, you’ll take out one, then you’ll take out two, then you’ll, you’ll still have managers overseeing the work and
[01:31:41] Cameron: that kind of stuff until
[01:31:43] Steve: big, big end too. Lawyers, investment bankers, a whole lot of
[01:31:46] Steve: analysts, like, we’re not just talking low end call center, you’re talking a whole cohort of, you know,
[01:31:52] Cameron: I think
[01:31:53] Steve: in a big law firm, where they’re doing discovery and all of that. Bam, AI, it’s done it.
[01:31:57] Cameron: Well, I think that’s the next level. So I say the first level is low risk. If I get rid of a graphic design person and then the
[01:32:04] Cameron: AI produces a bad graphic design thing, I can just say, do it again, do it again, do it again. If I have them, if, you know, if I replace a lawyer and they come up with a bad contract, um, that could be a bigger issue.
[01:32:16] Cameron: But I think the first iteration, we will be testing it in the workforce, and they already are, but I mean at a higher level, a more visible level, and when we have the ChatGPT 5 level of reliability that we believe is going to be there, they will start to go, you know what, we’ve been testing it, replacing a human, for six months, for a year, it’s been terrific.
[01:32:41] Cameron: Okay, now let’s replace 10%, 20%, 30 percent of the team until it just replaces nearly everyone. Then around 2027 to 2030, I sort of predict, is the next layer, the second layer, when we’ll have AGI sometime around this timeframe as well. Higher level jobs will start to be replaced, middle management, because there are going to be fewer people to manage.
[01:33:04] Cameron: So, you know, I can get rid of every second middle manager and just, you know, group all the people that we still have under this one manager. I think it’ll start to impact legal, accounting, HR, again, because there are fewer people to hire and manage, recruitment for the same reason, psychologists. I think more people will start to use AI as a psychologist.
[01:33:27] Cameron: And that may, I mean, we already have too few psychologists in the marketplace, so that may not have a short term impact on the, uh, incomes of psychologists in Australia. But eventually, uh, people won’t study to become psychologists if everyone they know is using an AI as a psychologist. Um, medical: everyone’s going to have a free GP.
[01:33:49] Cameron: Again, this is going to be, it’d be a good thing because it’ll take strain off the existing medical infrastructure initially, but are you really going to study to be, are you going to go into medical school today and study to be a doctor? If you’re looking at where AI is going in the next 10 years, do you really think it’s worth 10 years of study to become a doctor when we’ll have AI in 10 years that can answer any medical question better than a human can?
[01:34:14] Cameron: I’m not sure that people are going to, that it’s going to be a logical step. Animation will go, business strategy and then acting and all sort of work in TV film production, I think is going to go.
[01:34:28] Steve: Well, Sora, I mean, just extrapolate out what Sora can do. But also with acting, that’s interesting though. And I’ll tell you, acting, you might say, will go away. And we’ve seen that there are virtual influencers that already exist. This is where, do we want to see human creativity because a human is doing it?
[01:34:51] Steve: Like that’s actually an interesting question. I don’t know the answer. Do we just want to see creativity, and I don’t care if that
[01:34:59] Steve: actor doesn’t exist in the real world? If I like their
[01:35:02] Steve: persona, and it’s not Tom Cruise, it’s Billy Bloggs, who is an AI actor, who’s starring in the next movie.
[01:35:09] Steve: I don’t know.
[01:35:10] Cameron: films have already answered that question, Steve. And
[01:35:13] Steve: Yeah, good, good point. Although, although, we don’t just have, uh, Avengers movies. We have movies with real humans. The other contention
[01:35:24] Tony: that’s a note I’ve made, Steve. Sorry to interrupt, but, um, there’s an artisanal element. So, again, extending the curve, um, if AI gets so smart at feeding us, for example, that we get to drink a bowl of gloop
[01:35:39] Tony: every day, because it’s the cheapest, best way of getting our nutrients, then there’s going to be artisanal cafes pop up where they bake the bread by hand.
[01:35:47] Tony: Um, so I think,
[01:35:49] Cameron: no one can buy the bread because no one has any money
[01:35:51] Tony: well you say that, but, but who, AI, AI has got to be paid for, so people have to have jobs to pay for AI, otherwise it doesn’t exist. You know, Microsoft own half of ChatGPT, they’re not going to give it away. And that’s the other, other problem I’ve got with getting to where you want to get to, is that there’s a growth imperative in these big companies.
[01:36:12] Tony: Companies, they’ve got to keep growing. So if everyone subscribes to ChatGPT for 20 bucks a month, where does the growth come from? There’s got to be, there’s got to be add-ons and other services, etc. Like the doctors, etc. They’re going to be charged for, to allow Microsoft to keep increasing its revenues every year.
[01:36:33] Tony: It’s got to come up with new products and charge more for
[01:36:35] Steve: You tend to get bundling and unbundling. Like, this happens again and again and again. And we’ve seen this in computing. Like, at the moment, we’re moving to a bundling phase in tech where everything comes under the one banner. And then it fragments out because people do better versions of this, and that’s the doctor’s AI and this is the architect’s AI and this is, it’ll be interesting to see if that happens.
[01:36:56] Steve: I think that’s inevitable because it’s a greater way to make money by refragmenting the market than re-bundling the market. And it’s just basically the breathing effect of how markets respond to maximize revenue over the longer term by changing their product portfolio.
[01:37:09] Tony: Yeah, and that maximizing revenue is really
[01:37:10] Tony: important in this discussion. It’s not going to, AI is not going to be given away. You’ve got to pay for the psychologist on
[01:37:16] Tony: your phone. It’s not going to be free, and therefore not everyone will use it.
[01:37:21] Cameron: 60 percent of the AI models that have been released in the last year were open source. And a lot of those are very, very
[01:37:30] Cameron: close in terms of their performance capabilities to the top end models, the ChatGPTs, the Geminis, and that kind of stuff. There is a theory that we will have a lot of freely available open source models.
[01:37:45] Cameron: At the moment, you can’t really run them on your desktop PC easily. They require a lot of, uh, technical know-how to get them up and running and trained and all of that kind of stuff, but all of that will go away at some point where it’ll be click-install and you’ll be able to run your own AI locally.
[01:38:00] Tony: Yeah, but that’s like, we’ve seen that curve before as well. It’s like Linux, right? It’s, it’s an open AI operating system, but
[01:38:08] Tony: Microsoft was so
[01:38:09] Tony: much better at giving you
[01:38:11] Tony: one that worked a little bit better and it was free, it was cheap and came with your laptop, that it won the day.
[01:38:18] Cameron: Yeah, because Linux didn’t have an AI built in. When you have an AI built into Linux that can do, you know, everything that Microsoft can do.
[01:38:26] Tony: It still may not win the day. It probably
[01:38:28] Cameron: It may
[01:38:29] Steve: gonna, who’s gonna own the data center? I mean, you might end up with someone like Amazon Cloud and others. You know, basically this idea that computing really becomes the energy business, and the energy they provide is the energy to run the data centers and the information processing of the data, which then has AIs, which are open source and cheap, but any commercial AI fails.
[01:38:50] Steve: I don’t know. You could get a whole shift in the computation, uh, infrastructure,
[01:38:57] Tony: The other thing that I wanted to add to this debate is you’re talking about professional jobs being replaced by AI, which could potentially happen, but the professional indemnity insurance industry has to get comfortable with that before it will allow it, and it won’t allow it, by charging unreasonable premiums, until it gets comfortable with it.
[01:39:17] Tony: So it takes a long time for the insurance industry to see that AI is going to be effective as a doctor and not kill people before it will bring the premiums down to a low enough level to let AI operate at a cheaper rate than the current medical staff. So it’s going to take longer than five years, I
[01:39:34] Tony: think, just from the insurance point of view.
[01:39:36] Steve: but,
[01:39:37] Steve: but, but
[01:39:37] Cameron: who’s buying the professional indemnity
[01:39:40] Tony: The Doctors. Doctors currently, one of their biggest costs currently is to get insurance, so if they make a
[01:39:45] Cameron: Yeah. But if, if I’m asking ChatGPT, my medical question, where does professional indemnity come into it?
[01:39:53] Tony: Oh, well, good question, but the provider of ChatGPT could be sued by the family of the person who was injured by the medical advice.
[01:40:03] Steve: They’ll just take what, they’ll just do exactly what big tech has done up until now, which is yeah, well, we’ve got 5 billion customers and we’ll go to war with whoever because
[01:40:11] Steve: our pockets are full and we might lose a few cases here and there, just like Mark Zuckerberg, you know, yeah, let’s go for live streaming, shoot up some people live, it’s all good, whatever.
[01:40:19] Steve: That’s, I think, what will happen there. But, but that
[01:40:22] Tony: But that’s the Wild West of
[01:40:24] Cameron: ever shot any people live on camera, which is what you just inferred there. I,
[01:40:28] Tony: I don’t
[01:40:28] Steve: didn’t infer that he should, I said he let people video it.
[01:40:33] Tony: Wild West of medicine though, right? So medicine has
[01:40:36] Steve: You know I love Mark.
[01:40:40] Tony: So that when I, so that when I go to the doctor, I know the doctor has been taught, has been qualified, has the right insurance, um, will likely provide the right advice ’cause I’ve gone through all these gates along the way.
[01:40:53] Tony: The wild west of, of
[01:40:54] Tony: medical advice is gonna be, no, I’ll just ask my phone. I don’t care if it, you know, may give me the right answer. It
[01:41:01] Steve: doing it. We’re already doing it though, aren’t we? Pretty quickly.
[01:41:05] Cameron: yeah,
[01:41:05] Tony: I think it’s more likely that, I think it’s more likely that the phone will be able to provide me with prescriptions up to a point. You know, so there’ll be parts of the medical industry that
[01:41:16] Steve: I don’t think the medical profession goes away. It just shrinks. So, the first gate you go through is the AI gate, and then you take what you
[01:41:24] Steve: and the AI have agreed or understood or communicated to the doctor, who then goes, okay, yeah, I like what the AI has done, and it just truncates and expedites the entire process, I think.
[01:41:34] Tony: yeah,
[01:41:34] Cameron: it’s the shrinking that I’m.
[01:41:36] Cameron: It’s the shrinking that I’m talking about. All of these sectors are going to
[01:41:40] Cameron: shrink and we have to figure out how we’re going to redeploy the shrinkage. Shrinkage, Jerry, shrinkage. Um, we have to,
[01:41:51] Cameron: like a turtle. We have to, we have to figure out how we deploy the shrinkage.
[01:41:55] Cameron: And my point of all of this is I think this is going to happen way faster. than governments are prepared for. I think it’s going to happen way faster than industry is prepared for. And I think it’s going to happen way faster than people are prepared for. I think it’s going to happen in this decade, massive shrinkage.
[01:42:15] Cameron: And we’re not talking about it, we’re not prepared for it. It’s gonna be, it’s gonna be like when the pandemic hit, and we were like, oh shit, we have not prepared for this, and the world went into chaos mode for a year. I think this is gonna be a similar sort of thing when it hits, cause no one’s, no one’s really taking it seriously.
[01:42:38] Cameron: And I’m starting to feel like St. Paul running around saying the world’s about to end and no one’s listening, except the difference being I’m right and he was wrong.
[01:42:49] Tony: I don’t know if you are right. I don’t think it’s going to hit us like the pandemic hit us. It’s not all going to happen at once for a start. Not every industry will be disintermediated
[01:42:57] Tony: at once. Like you said, it’ll start off with graphic designers, etc, etc. And I take Steve’s point, if we extrapolate again from the curve, there’ll be a bigger gig economy when that starts to happen.
[01:43:08] Cameron: But when I say it’s not going to happen at once, I’m not saying there’s going to be a five year gap between level one and
[01:43:12] Cameron: level two. I’m saying a year between level one and level two. 2025 will be level one. 2026 to 27 will be level two.
[01:43:23] Tony: Yeah, I don’t see it happening that quickly, Cam. Level two. So are you saying that judges are going to accept me defending myself with a phone, so I can go into court and say, hey judge, listen to this, here’s my argument, and Siri reads out the case precedents and the judge goes, yeah, okay? That’s just going to take a lot longer than two years, even five years, for that to
[01:43:48] Steve: You’re saying that the human gatekeeper is more poignant than we think and will stay longer within commercial settings, legal settings, governmental settings.
[01:43:59] Tony: I’m saying two things. I’m saying, yes, that there’s a human nature to this, but there’s also an economic component to this. And I don’t think it’s going to be such a quick evolution. I think it’s going to be a lot of competition and
[01:44:12] Tony: what’s more efficient and who’s paying, what price do you pay for this and how do
[01:44:17] Tony: they make money and who are they charging?
[01:44:19] Tony: All the stuff that goes on in the regular economy has got to play out as well,
[01:44:22] Steve: Well, certainly areas
[01:44:24] Cameron: I’m not suggesting that Sorry, I’m not suggesting trial lawyers are going to disappear. Again, I’m not saying that all jobs in all
[01:44:31] Cameron: sectors are going to disappear on day
[01:44:33] Cameron: one. I’m saying shrinkage. So, lawyers, low level lawyers, legal clerks, you know, the, the, your local guy that does a real estate contract or a conveyancing contract or a divorce agreement or an employment contract. All of those people will go first, not all of them, but increasingly they will all be replaced by an AI which can write a watertight contract faster and cheaper than any human can.
[01:45:06] Cameron: And then there’ll be a trickle-on effect as, uh, as people and businesses become increasingly confident that, oh yeah, this is robust. Yeah, we can’t pick any holes in this. We’ve had all of our, you know, top legal experts try and find flaws in the cases that the AI is providing and we can’t find one.
[01:45:27] Cameron: That’ll hit the media. People will go, oh, did you hear that? Yeah, the AI is now better than any lawyer on the planet. Um, why are we still paying lawyers $800 an hour to represent us? It’s going to happen pretty quickly, but it won’t happen all at once. It’ll be like, it’ll be an avalanche, but it’s going to be a very fast avalanche, is my point.
[01:45:49] Steve: I think it’s going to
[01:45:50] Cameron: years, not decades.
[01:45:52] Steve: if there’s an avalanche, there’s going to be two sides to this mountain. You’re going to have the commercial, let’s call it unregulated avalanche, which organizations get to make the choices on who they deploy to get the work done. And then you’re going to have things which have a higher regulatory hurdle.
[01:46:12] Steve: And the doctors is a good one that’ll take, I think, a lot longer. And this is a classic of Amara’s law, which states that the impact of a technology is usually overstated in the short run, but understated in the long run. Uh, and the example that I will cite there is that it took us 12 years before the government in Australia allowed us to see a doctor on the phone just to get a script renewed or something.
[01:46:35] Steve: It wasn’t until COVID that they did that, but the technological capability was there from about 2010. And it wasn’t until 2021 that we deployed seeing a doctor on your smartphone.
[01:46:46] Tony: Yeah, and that’s how I think this will all roll out, Steve. That’s exactly, really good analogy. It’s, um, the tech will get there before the economy does, I guess is what I’m saying. And that gives a bit of breathing space for the economy to work out what it wants to do with disruption and displacement.
[01:47:02] Steve: But I think what should happen is they should pay people like the three of us to give advice while we don’t know what’s going on to just come. And so just sing it, just sing out and we’ll just, we’ll
[01:47:12] Tony: yeah, and
[01:47:13] Steve: and just pay us an inordinate amount of money to pontificate on these points.
[01:47:17] Tony: and in the background Cam can ask ChatGPT and we can sell that one off as well. We can feed
[01:47:21] Steve: We’ll just have an earpiece in and Cam will just be asking it the question. We’ll just be saying, here’s what we think, because no one knows what we think anymore. We’re just asking the AIs in secret,
[01:47:31] Tony: But extrapolating the curve, right, we’ve seen, we just spoke before about the newspaper industry and how it’s disrupted. It’s just like, there wasn’t mass unemployment, people moved into other jobs, you know, because a lot of times these jobs like conveyancing or contract writing, they’re done by people who don’t really want to do those jobs.
[01:47:50] Tony: They don’t want to be treated like a robot just sitting
[01:47:52] Tony: there doing nothing. You know, turning out the same boilerplate every day. They, they see themselves as being better than that. So
[01:47:59] Cameron: but why are they doing it?
[01:48:02] Tony: eat,
[01:48:03] Steve: because they’re not better.
[01:48:04] Cameron: can’t,
[01:48:04] Tony: or they want to
[01:48:05] Tony: get,
[01:48:05] Cameron: not better, they can’t get a better job.
[01:48:06] Tony: it’s the first step along a promotion curve for them is often the reason.
[01:48:10] Tony: Oh!
[01:48:14] Cameron: I hope you’re right and I’m wrong, genuinely. But that’s the same thing I say to
[01:48:17] Cameron: Christians, uh, when they say, we’re gonna spend eternity in paradise with Jesus. I hope you’re right and I’m wrong, but so far, I’m never wrong, so.
[01:48:28] Steve: Cam, “I’m never wrong.” I love “I’m never wrong.” But what I, what I love though, is I, I think that you’re keeping your hair like that, so when Jesus sees you, he knows that he can be in a rock band with you, because you have a similar haircut.
[01:48:40] Tony: I think you went to the wrong
[01:48:41] Steve: Alright, and that’s what you’re doing in
[01:48:43] Tony: where you’re wrong.
[01:48:44] Steve: You’re aligning.
[01:48:44] Steve: You’re aligning physically so that he may accept you despite the documentary films you’ve made about him not existing. That’s all I’m
[01:48:53] Cameron: When I first came into this, this is my old bedroom I grew up in,
[01:48:56] Cameron: by the way, to record this morning, this light was on above
[01:48:58] Cameron: me, and I thought that was too much of a halo effect, so I actually, I turned it off, so I didn’t, uh, accentuate the, uh, prophet-esque, uh, look of my hair. Thank you guys! Look, we’ve been talking for two hours.
[01:49:12] Steve: Wow.
[01:49:12] Cameron: Um, I appreciate both of you coming in and talking me out of my, uh, state of panic. Uh, Tony, thank you for coming on and, um, bringing your, uh, vast intelligence to, and experience to this. Steve, as always, your vast intelligence.
[01:49:29] Tony: You know, that’s the other point I wanted to make before we go is we’ve seen super intelligence before, but it’s still subject to commerce and human nature. Look at Einstein. He discovered the laws of physics which led to the atomic bomb, but he couldn’t stop it from being used.
[01:49:43] Tony: It’s a similar sort of thing when we get an ASI, I think,
[01:49:47] Tony: it’s going to have,
[01:49:49] Tony: it’s going to butt up against governments, it’s going to butt up against commerce before it
[01:49:53] Tony: eventually does what it
[01:49:54] Cameron: Well, my final note is, my, my problem with that is, I think the idea that a superintelligence will be beholden to the desires of humans is like saying humans will be beholden to the desires of ants.
[01:50:13] Tony: No, I know, but what happens if the Chinese superintelligence gets built first and the Americans feel the need to build one, and there’s a superintelligence
[01:50:20] Tony: war?
[01:50:21] Cameron: I think that’s a very likely outcome
[01:50:25] Tony: it may mean that they cancel each other out.
[01:50:28] Steve: Well,
[01:50:29] Cameron: Well, cancel everything out if that happens.
[01:50:32] Tony: Possibly. yeah,
[01:50:37] Cameron: a superintelligence, once we have a machine intelligence that’s a million times more
[01:50:42] Cameron: intelligent. And the thing that Kurzweil, sorry, points out in this latest book, uh, maybe even in earlier books, um, the turning point will be when the AIs are able to code themselves better than humans can code them.
[01:50:59] Cameron: When the AIs are in control of their own coding, that’s when they go from being smarter than us to a million times smarter than us, really, really quickly. There’s this recursive loop that happens with each generation, and the generations are days apart, not years apart. And that’s what they call FOOM in the AI industry.
[01:51:23] Cameron: It’s that hard take off. FOOM! Like in a superhero comic when a jet takes off. FOOM! And this is the sort of stuff that Eliezer Yudkowsky is scared of: when the AI goes from being just a little bit smarter than us to 10,000 times, to a million times, in a month, just because of the coding optimization that it does for itself,
[01:51:49] Tony: that’s the dark, that’s the dark forest problem, isn’t it? Because really, what that, what that, what that is, what that ASI is,
[01:51:56] Tony: is an alien. And, you know, again, we have a paradigm for dealing with aliens. We, we destroy them
[01:52:04] Tony: before they have a chance to grow up and destroy us.
[01:52:07] Cameron: but we’re not going to do that.
[01:52:09] Tony: Why not?
[01:52:10] Steve: Because they’re our children. And,
[01:52:12] Tony: To our commercial interests.
[01:52:14] Cameron: Yeah, the same re, I mean, this is the question. Like, Eliezer Yudkowsky and the like have been saying this for the last year or
[01:52:21] Cameron: two now: We need to stop this right now! Stop
[01:52:24] Cameron: building AI! Stop spending hundreds of billions of dollars a year to build something that’s gonna kill us!
[01:52:31] Cameron: And everyone’s like, Nah, nah man, we can’t.
[01:52:35] Steve: Eliezer, have you seen how good that last email was to my manager? I don’t think you understand the implications here, brother.
[01:52:42] Cameron: Yeah, the love poem that I just wrote to this girl that I like. Come on, man. It was like, pentameter. It was a
[01:52:50] Steve: seriously,
[01:52:51] Cameron: I think you know, if we know anything about the laws of industry and commerce is that if there’s a buck to be made, we will push ourselves to the brink of extinction to make that buck. Um, hence, uh, the nuclear bomb that you said before is exactly the point. We built that to stop the Russians. Why? Because they were threatening the capitalist economy.
[01:53:14] Tony: Mm hmm.
[01:53:15] Steve: On that capitalism thing, with the AI war across nations, one interesting thought, and Cam, you’re probably the best person to ask this, given your work on the Cold War podcast and so on. The idea that different nations are building out AIs, I do wonder if the AIs from nation states have some sort of a systemic competitive advantage.
[01:53:40] Steve: Evolution, where it’s commercial AI, which has all of the world’s knowledge and data in it, versus the Chinese AI, which is segmented behind the great firewall of the internet. Do they train their AIs on the type that they deploy in their own economy? Or do they stealthily bring in everything from the world and the West to train their AIs?
[01:54:00] Steve: Because you might end up with two typologies. A communist typology AI, which sort of has a very draconian viewpoint of the world, versus the open source AI model from the U.S., which has everything in it from fake news to whatever to train it. Like, do you end up with two typologies of
[01:54:17] Tony: has a draconian worldview of the world as well.
[01:54:20] Steve: Well, yeah, but do you end up with two different
[01:54:22] Steve: AIs? And if you do, is one more intelligent and defeats the other? Is it kind of like a
[01:54:27] Tony: that’s Elon’s. That’s Elon’s.
[01:54:28] Steve: competing
[01:54:30] Tony: He wants, he wants to build an AI because ChatGPT is too liberal.
[01:54:35] Steve: Well, I’m just interested in that, if you end up with this AI Cold War with them
[01:54:39] Tony: Yeah, it’s an extension of human
[01:54:41] Tony: nature. But of course, ASI may just, just go off on a different, might go off and swim
[01:54:46] Tony: with the whales and forget about us. Who knows?
[01:54:48] Steve: Yeah. Yeah. Like in Her.
[01:54:50] Cameron: well, I, I want the communist AI to win, because I want Star Trek communism, um, in place, where everyone is given everything that they need, and no
[01:55:00] Tony: China doesn’t have Star Trek communism.
[01:55:04] Cameron: Not yet, that’s why they’re building the AI, to get us there.
[01:55:08] Tony: Oh, and, and the,
[01:55:09] Tony: the head of the CCP will just hand over power to the AI, will he?
[01:55:14] Cameron: That’s what Xi Jinping’s lifelong mission has been, is to give the world Star Trek communism, that’s his,
[01:55:20] Steve: Did he really say that?
[01:55:21] Cameron: and, no, I said that, but, you know, I’m telling you. Xi Jinping,
[01:55:25] Tony: Hmm.
[01:55:25] Steve: I said it.
[01:55:26] Cameron: Xi Jinping is a true believer in, uh, the, the
[01:55:31] Cameron: utopian communist, uh, ideal, as am I.
[01:55:35] Tony: Yeah, but he’s wearing a red shirt and we all know that red shirts never survive. Star Trek episodes.
[01:55:41] Cameron: As for the ASI just leaving us all behind, Tony, I, I, I, I’ve thought a lot
[01:55:46] Cameron: about that argument and I don’t think it’s necessarily an either or. The ASI can go off and explore the universe and turn all of the planets into computronium, but they can stay here as well and keep running things here. I don’t think it has to be an either or.
[01:56:02] Cameron: Anyway, fun chat. I, I really enjoyed talking to both of you guys. Um, nice for Tony and I to be able to spend two hours talking about this instead of the half hour we normally do at the end of QAV these days. I’ll talk to you on QAV tomorrow, Tony and Steve. Um, I’ll talk to you whenever I talk to you. Have a good week, both of you.
[01:56:20] Tony: Thanks guys. Well done.
This week Steve and Cam debate Apple Intelligence – the biggest flop ever? Or the beginning of huge things for AI? Also – Cam talks about his timeline for the next five years, and how the world’s economy is going to require complete re-engineering.
Futuristic 26
[00:00:00] Cameron: Testing, testing, loop back, yeah, I think everything’s running. Welcome, welcome back to The Futuristic. Uh, this is, um, I’m not sure what episode this is, cause I haven’t, do you know what episode this is? 26 says Steve, thank you Steve. Uh, it’s been a few weeks between episodes, Steve, because we were talking like, hey, there’s not much happening, not much going on, then all of a sudden, everything’s happening, everything’s going on.
[00:00:37] Cameron: Uh, Apple, Apple. This week, a couple of days ago, did their WWDC event where they announced
[00:00:45] Cameron: Dun, dun, dun, dun, Apple Intelligence.
[00:00:51] Steve: And I’m glad they called it that, Cameron. I’m glad they called it that because there was nothing artificial about it, my friend. There was a few little just maneuverings. And I feel like there’s a big heated discussion on the relative merits of Apple intelligence because let me tell you, that’s all it was, man.
[00:01:08] Steve: It was deck chairs inside the Apple ecosystem.
[00:01:13] Cameron: Steve sent me a text basically saying
[00:01:15] Cameron: he thought it was a bunch of bullshit. And I was like, Oh, I disagree, man. I think it’s huge. So we had to do a show, but we were just talking off air
[00:01:21] Cameron: about,
[00:01:22] Steve: glad, I’m glad that we disagree because no one wants to tune into the Mutual
[00:01:26] Steve: Agreement Society. How boring is
[00:01:27] Steve: that
[00:01:28] Cameron: Well, we normally, we normally agree. Um, I, I wanted to have my QAV co-host Tony Kynaston on because he and I had a debate at the end of our QAV show this week for about an hour about AI. He’s still very cynical and skeptical that AI is gonna have any impact on the world. And, uh, I was like, dude, I need to get you on with Steve and we can have this debate.
[00:01:51] Cameron: Cause I respect Tony. He’s one of the smartest guys I know. One of the most successful guys I know. And so I respect anything he has to say, but I think he’s missing the point on AI. But anyway, um, we were off air, you and I were talking about David Lee Roth, the wisdom of David Lee Roth, and then you brought up Motley Crue, which brings us back to Apple, because the opening of the WWDC, Craig Federighi and his team, uh, Paratrooping, skydiving out of an Apple plane to the Motley Crue track Kickstart My Heart.
[00:02:30] Cameron: I thought it was all quite funny. Federighi pulling on his helmet that was basically a version of his perfectly coiffed silver hair. Jumping out, uh, and then they, they land and Tim Cook’s waiting for them, standing on the
[00:02:46] Cameron: roof of, uh, the Apple headquarters at Number One Infinite Loop. Um,
[00:02:53] Steve: yeah, I’m not jumping out. I’ll just stand here,
[00:02:56] Steve: kids. Well,
[00:02:58] Cameron: Uh, so the first hour of the two hour show, I was sort of going, when are they going to talk about AI? Because they’re talking about all of the updates to iOS and Apple, the watchOS and the macOS, no mention of the AI stuff, and I’m like, come on, we know that the OS stuff, the AI stuff is coming, we know that it’s going to impact all of these things, why aren’t you leading with that?
[00:03:26] Cameron: They’re trying to do a Steve Jobs one more thing. AI. It should have been woven into the whole presentation, I thought. My complaint was, stop wasting my time talking about all of the updates to the OS if you’re not talking about the thing that actually we all came here for, which is AI. But anyway, that was my biggest complaint about the whole thing.
[00:03:54] Cameron: I don’t really care about being able to change the color of the widgets in the iOS, and I don’t really care about emojis, and I don’t really care about being able to time text
[00:04:08] Cameron: messages. I don’t really care about anything that they had to talk about, quite honestly, until it got to the AI.
[00:04:14] Steve: it seems like, it seems like you agree with me because everything
[00:04:18] Steve: you’ve mentioned, you don’t care about, which is 99 percent of what was announced. There was, there was nothing, nothing. It was one out of 10. It was a massive flop, it was a super fail, and the fact that their share price has gone up 10 percent since
[00:04:34] Steve: then is astounding me.
[00:04:37] Steve: But no, no, it doesn’t prove me wrong, because we know that in the short term,
[00:04:41] Steve: the share market doesn’t know what it’s talking about, but in the long term it does. It’s a voting machine, and it just proves that the world are morons. That’s all that proves, and that fund managers don’t understand AI.
[00:04:51] Cameron: I said, Tony should have been on this and here you are quoting Benjamin Graham, Tony’s, uh, God. Um, in the short term.
[00:05:00] Cameron: It’s, uh, no, in the short term it’s a voting machine, in the long
[00:05:03] Cameron: term it’s a weighing machine.
[00:05:05] Steve: machine, exactly.
[00:05:06] Cameron: Benjamin Graham’s,
[00:05:07] Steve: to say, Mr. Benji Graham, Intelligent Investor. Well, they had a 10 percent, um, upside on a $3.2 trillion market cap, which is 300 billion in value, and it turns out that the current market valuation of OpenAI, I think, is 98 billion. Tell me how, what Apple announced, this is the irrationality of markets.
[00:05:28] Steve: Tell me how what Apple announced is worth three times what OpenAI is.
[00:05:33] Cameron: Okay,
[00:05:34] Steve: is, this is where people fail to get
[00:05:36] Steve: Yeah,
[00:05:37] Cameron: No, this is where, in all due respect, cause you know I love you and I respect
[00:05:42] Steve: you know what all due respect means? Hey dickface.
[00:05:45] Cameron: you
[00:05:47] Steve: all due respect,
[00:05:49] Steve: I’m really saying hey dickface.
[00:05:52] Cameron: should take that,
[00:05:53] Cameron: um, uh, gracefully because I don’t respect many, I don’t respect many people. And
[00:06:00] Steve: you don’t respect anyone, and I’m pretty
[00:06:01] Steve: suspicious you
[00:06:02] Cameron: for,
[00:06:02] Steve: respect me, which is fine. The
[00:06:04] Steve: longer some
[00:06:05] Cameron: For me to say that, for me to say that, it means a lot,
[00:06:10] Steve: Thank you. Thank
[00:06:11] Steve: you, mate.
[00:06:12] Cameron: Um, no, look, Apple, um, uh, Apple’s reach is enormous. The fact that Apple are now putting AI in, and they’ve always had machine learning. I don’t know if you saw, um, Tim Cook’s interview on, um, Marques’s podcast. But, you know, they were talking about the fact is Apple will say, look, they’ve always been using AI. There’s AI in your watch, fall detection, you know, when it knows that you’ve fallen, the fall alerts, the way it does it, they’ve got machine learning, and AI has been part of Apple’s thing for a long time, but they haven’t talked about it as AI since the whole LLM generative AI thing took off.
[00:06:55] Cameron: And that’s what we’ve been waiting for is them to say, Hey, here’s how we’re going to use this kind of AI. in our devices. There’s how many iPhones out there? One and a half billion iPhones that people have got. Here’s the thing from my perspective. What the, the, the way that they have announced they’re deploying Apple intelligence.
[00:07:19] Cameron: Okay. So for people who didn’t watch it, there were two parts of the AI thing in, in the keynote. Apple, uh, building AI. And again, to be clear, we’re talking about the sort of LLM based approach to AI, language based AI. People call it generative AI still. And, and I think that’s, I was thinking about this, listening to Tim Cook this morning, last night.
[00:07:46] Cameron: I, I, I don’t like the term generative AI because to me, it’s not about the generative component of it, even though that is a lot of what Apple’s doing. It’s not the fact that it’s generative that I think is important. It’s the fact that it understands language.
[00:08:03] Cameron: That, we need a better term than generative AI.
[00:08:06] Cameron: I
[00:08:07] Steve: and I agree and, and, and I use this when I do my keynotes on stage, you know, the phrase that I use is language is the fabric of all human knowledge because it, it crosses the chasm from maths to engineering, to marketing, to legal. If you understand language, you can create context, which ends up in schematics, visuals, every type of thing.
[00:08:27] Steve: And, and it turns out that the training mechanisms, when they train the databases on what the visuals are, yes, it looks for patterns, but it’s the words underneath the patterns that helps the machine understand that that’s a palm tree, because it’s seen that described, you know, millions and billions of times, and then it knows what that pattern is and it matches the pattern to the word, and that’s actually all language based.
[00:08:47] Steve: And a lot of people miss that.
[00:08:48] Cameron: don’t know who I got it from, whether it was either Kurzweil or Wolfram a year or so ago, but the language user interface, linguistic user interface, I think is, is more
[00:08:58] Cameron: important than the generative aspect. But anyway, so just to be clear, that’s what the, this is what
[00:09:03] Cameron: Apple, so the Apple intelligence,
[00:09:05] Steve: GUI to Louie is, is the thing. Graphical User Interface to language, GUI to Louie. No, it is Language User Interface. I use that on stage as well. It always gets a laugh because it rhymes and everything that rhymes is 10
[00:09:15] Steve: percent better. I’ve always said that.
[00:09:20] Cameron: Yeah, you are a marketing guy. You came out of marketing and yeah, you know, you’re, you’re
[00:09:25] Steve: Still there.
[00:09:25] Cameron: rhymes, it’s 10 percent better. That’s perfect. Gooey DeLooey. So the Looey aspect of it. So there’s two components of, of, of what they talked about. The first is Apple Intelligence. So this is their own implementation that’s on device to a large extent, and it will enable you to use language to do a lot of the basic stuff that people who use LLMs already do with it.
[00:09:54] Cameron: Um, low, low level stuff. Write me an email, generate me a picture, create, you know, they were big into create your own emojis and all that kind of stuff. They didn’t say create your own icons, which I found interesting because one of the things I’ve been doing a lot lately, you know, I use iOS shortcuts a lot, increasingly because now I can write code thanks to GPT and I can use it to do, you know, relatively sophisticated things with iOS shortcuts.
[00:10:24] Cameron: Then, if you create a shortcut in iOS and you want to save it to your desktop to run as an app, they don’t let you, they’ve got a bunch of pre-built icons that you can use and none of them are very descriptive. So what I’ll do is I’ll go into GPT and I’ll say, create me an icon for an iOS shortcut for adding notes to my Kung Fu notes.
[00:10:50] Cameron: And it’ll create a little icon for me that says Kung Fu notes with a little picture of a guy doing Kung Fu that I can then save as the icon. So I have custom built, custom designed icons for all these shortcuts now, but they didn’t, they should have
[00:11:03] Cameron: got me involved. I would have said, this is the, this
[00:11:06] Cameron: is the way you can customize iPhone now.
[00:11:09] Cameron: Anyway,
[00:11:10] Steve: stop. Just really quickly. You’ve given me an idea. I’m gonna see if GPT this afternoon can create me an 80s style 8 bit video game that kind of is a mashup of some of my favorite karate games. So that’s, I’m not getting any work done this
[00:11:23] Steve: afternoon and it’s
[00:11:24] Cameron: that is work. No, that is work. That’s, that’s, you know, that’s where, that’s where the rubber meets the road. You may not get paid for it, but it’s still work. Um, so Apple Intelligence on device creating emails and texts and, and images and all that kind of stuff. On device, it’s only going to be available if you have a 15, uh, an iPhone 15 or later, an iPhone 15 Pro or later.
[00:11:53] Cameron: You’ve got to have the silicon chips, you’ve got to have the latest Apple chips, same with the iPads and the Macs, uh, the MacBooks. You’ve got to have a fairly recent one that’s running their chips, because it’s all running, uh, on device. Then there is a second aspect to Apple Intelligence, which they’re calling Private Cloud Computing.
[00:12:11] Cameron: If you need to do something that can’t be handled by the chip on device, and they made a big deal out of that because they’re saying it’s all about security, it’s all about confidentiality of your data. They’re really driving that whole aspect home. If you want to still do it in that contained privacy security, uh, system, but it needs cloud computing power, it, the phone, the device, the iPad, the Mac will reach out to this Apple thing they’re calling PCC, private cloud computing, still completely private.
[00:12:48] Cameron: All of your data is locked down, secured, encrypted, but it’s in Apple’s server farm, basically. So that’s part one. It’s a fairly boring implementation of it, but it’s going to be available on all of your Apple devices, if you have a relatively modern device, and, um, it’s going to be secure. The second component of what they talked about was integrating OpenAI, ChatGPT, into Siri.
[00:13:19] Cameron: And by the way, all of that’s driven by Siri as well. So, hey Siri, Make me a picture. Hey Siri.
[00:13:26] Cameron: Sorry, my phone is like
[00:13:29] Steve: Look, same, see what you did, see what you did, creating
[00:13:35] Steve: complexity,
[00:13:36] Cameron: Our phones just went nuts. Um, all integrated. They’re not rebranding Siri. It’s all happening in Siri. But then you can say, hey Siri, uh, show me what movies are showing at my local cinema. And it will say, would you like me to use? Yeah,
[00:13:52] Cameron: sorry. My phone, even though I put it on airplane
[00:13:54] Cameron: mode, how do you stop it?
[00:13:56] Cameron: Do you have to like turn it literally off to
[00:13:59] Steve: put it in the next room, put it in the lock box,
[00:14:02] Steve: Cameron, lock
[00:14:03] Cameron: Oh, I’m turning it, turning it completely off. Yeah. Okay. I need to turn you off. Uh, it will say, do you want me to use ChatGPT for that? And you, or you have to apparently say yes every time, and then it will use ChatGPT to handle your thing.
[00:14:22] Cameron: Now, here’s why this is massive. There are. Uh, the vast majority. I mean, I don’t know how many people are using ChatGPT on a daily basis now, but you know, let’s say it’s a hundred, maybe 200 million people, um, that have used it at all. I don’t know how many use it on a daily basis, probably a lot less than that. Um, within the next year or so, you’re going to have all of Apple’s user base, billion plus people, whether they know it or not, using ChatGPT, using AI, on their phones to do stuff. It’s going to, it’s, it’s a massive, we’re going to look back five years from now and say, this is when it all changed. This is when AI, the, the, the daily use of AI went from early adopters, which is the people that are using ChatGPT or Claude or Gemini or whatever now, still a lot of fucking people, but it’s still early adopters, right?
[00:15:21] Cameron: To
[00:15:22] Cameron: everyone who has an iPhone or iPad or a Mac will be using AI. And the implementation that they’re doing is boring. It’s low level, it’s boring, but it, it’s the beginnings of great things, again to quote David Lee Roth, The
[00:15:40] Cameron: beginnings of great things cannot be seen with the naked eye. That’s from Skyscraper, 19,
[00:15:48] Cameron: Skyscraper, 1988,
[00:15:51] Steve: I’m going to call 86, 88 could be, but let’s just make a promise to
[00:15:56] Steve: ourselves.
[00:15:57] Cameron: solo album.
[00:15:59] Steve: Great. And let’s make a promise to our listeners. That we try and mention David Lee Roth in every episode going forward, because that’s the kind of podcast I want to
[00:16:07] Steve: be involved in.
[00:16:08] Cameron: Dude, David Lee Roth changed my life. When I was 12 or 13 and I discovered Van Halen for the first time and I saw David Lee Roth, I was like, yeah, that’s who I want to be. That’s, that was my, my first role model in
[00:16:24] Cameron: life. Was, uh, David Lee Roth.
[00:16:30] Steve: Now, I’m ready to tee off on quite a few of these things that you’ve
[00:16:34] Steve: mentioned. And the first one I will say, I totally get your proposition of why this matters. I get the bigness of it, because what you’re really talking about is a market dynamic rather than a technological dynamic. And I agree with you entirely on
[00:16:52] Cameron: step change in terms of market access.
[00:16:55] Steve: for consumers. It’s actually not a step change at all in technology. And it’s not even a step up to where the steps are up to. Like, if it’s a staircase, they, they went onto step number two of the 10 steps on the staircase. So in terms of technological innovation, anything useful, anything that’s innovative, you know, being a fast follower who comes back with a better product, it’s none of those things in my view, but I totally get your sentiment.
[00:17:22] Steve: On this is an introduction to a wider marketplace of the capabilities of AI, which in many ways helps others understand why there’s a bit of a boom now with everything from NVIDIA to AI being the topic du jour in business. So I get that. I get that. I was more thinking about the machinations of what they launched and how underwhelming they were, but the bundling of these and handing them over to, you know, whatever it is, 1.5 billion iPhone
owners or what have you. I get the gravity of that.
[00:17:52] Cameron: And, you know, Siri, as they pointed out at WWDC, Siri’s been around for 13 years now, came out 2011, just before Steve died. And we all know, I mean, it’s been a running meme joke for many, many years, how terrible Siri has been. I still use it, as you know, all day.
[00:18:13] Steve: I still use
[00:18:14] Steve: it.
[00:18:14] Cameron: Every day, but it has been stuck as a technology for many, many, it has had incremental improvements, but pretty much it’s been a massive disappointment, uh, to me and for most of us for many, many years.
[00:18:29] Cameron: And there’s been lots of sort of reasons and explanations from Apple over the years about why You know, the code base was terrible. It was hard to upgrade. It was hard to maintain. Apple have finally gone. I don’t know how they’re doing it. They didn’t really explain
[00:18:45] Cameron: it. I haven’t heard Tim talk about it, um, in any of the post WWDC interviews either, but somehow
[00:18:52] Steve: You’re talking about Tim Apple because I just love it. I just love it when, when Trump said, we got to Tim here, Tim Apple,
[00:18:59] Cameron: Tim Apple,
[00:19:01] Cameron: Tim
[00:19:01] Steve: that was the greatest moment in culture, Tim Apple,
[00:19:04] Cameron: Trump is a branding genius. He’s a marketing genius. Oh, by the way, did you see, there was an article I read in The Fin last week, one of the original producers of The Apprentice? has just started
[00:19:18] Cameron: talking publicly about making The Apprentice because his NDA has expired
[00:19:24] Steve: please send me that.
[00:19:26] Cameron: oh, it’s gold man.
[00:19:27] Cameron: Like him talking about that first season of The Apprentice and what they went through. Apparently the original idea was they were going to have a different billionaire each season to be the host of The Apprentice. They, you know, they wanted to get Spielberg and Katzenberg and all the other bergs. Um, And
[00:19:49] Steve: on fire
[00:19:50] Steve: today mate,
[00:19:50] Cameron: no one
[00:19:51] Cameron: else. I just did two hours of the Renaissance talking about the foundations of anti-Semitism, man. It was great.
[00:19:57] Cameron: I’m on fire. About Christians and the first crusade and roasting babies on spits and it’s been fun.
[00:20:04] Cameron: Um, yeah. But, um, none of the other billionaires would do it. No, and everyone else was like, no, actually we have real businesses that we need to run.
[00:20:13] Cameron: We have real jobs. We’ve got things to do. More fool them, because Trump used it to become president. Um, anywho, where was I going with this? Tim Apple. Yes. They haven’t talked about it, but they’ve had to, they’re having to rewrite Siri from the ground up, I imagine.
[00:20:31] Steve: even, you can call it Siri, it’s just ground
[00:20:33] Steve: zero.
[00:20:34] Cameron: well, but no, but it, it, you know, it has to be Siri. But it also has to now, they have to increase Siri’s intelligence by a thousand and they’re integrating it into all of their apps. You’ll be, you know, it’s, again, it’s early days, but within a year, within two years, once they have rewritten the code base that enables all of this across all of their devices and the OS, we finally get to that point that we’ve all, you know, when I say we, you and me, and people like us, early adopters, nerds, have been talking about where you can say, hey, Phone, hey Siri, um, find me this and integrate it into that and, you know, tell Steve that I, you know, you know, find the next available time in my calendar and Steve’s calendar when we can get together and do an episode.
[00:21:26] Cameron: And by the way, go through my notes and look at all of the things that I’ve saved in the Futuristic folder of the last week and create a document and send it to Steve and, you know, where you just. Talk to your devices and it does all of the shit. You don’t have to go in there and fucking tap, tap, tap, tap, tap.
[00:21:44] Cameron: Where you just tell your device what you want to do and it does it for you because it has inherent intelligence
[00:21:50] Cameron: enough to do these things in its own iOS, its own ecosystem to integrate everything. This is, this is the beginnings of that. It’s huge.
[00:22:02] Steve: Okay. So it’s not the beginning of that. What you want is, is huge.
[00:22:08] Steve: but this is not the beginning of that for a whole lot of reasons. All
[00:22:10] Steve: right. It’s
[00:22:11] Cameron: Why? How do you, how, why?
[00:22:13] Steve: this? This is just some, this is just a bit of packaging and wrapping because none of that, none of what you described is possible.
[00:22:22] Steve: So it’s not the beginning of it because none of it’s possible. How can it be the beginning of it if none of this that you’ve just spoken about is possible based on what they’ve announced? A couple of tiny little, you can ask this app to do
[00:22:34] Steve: that. I mean,
[00:22:35] Cameron: because what they announced is indicating that they are now rewriting
[00:22:42] Steve: it, but they haven’t
[00:22:43] Steve: got it. It’s
[00:22:43] Steve: not there. That’s like saying we’re going
[00:22:45] Cameron: yeah, but they are gonna do it.
[00:22:46] Cameron: I mean, this is Apple, they’re not fucking
[00:22:48] Steve: Well, then next year, well, next year when it comes, I’ll be excited and I’ll be as excited
[00:22:53] Steve: as you are, but this is not it.
[00:22:56] Steve: So,
[00:22:56] Cameron: is not it, but this is, this is the, the,
[00:22:59] Steve: on, on, on,
[00:23:00] Cameron: but this
[00:23:01] Steve: that will be.
[00:23:02] Steve: And that would be the most important technology for a human ever invented.
[00:23:07] Steve: If we have exactly what you’ve mentioned, two things.
[00:23:10] Steve: First one is ChatGPT like capabilities. And I’m going to use it because it’s still, I think, better than all the others which I’ve been
[00:23:19] Steve: testing. It’s
[00:23:21] Cameron: said that in his interview with Marques Brownlee.
[00:23:24] Steve: it really better than all of them by a long
[00:23:27] Steve: shot.
[00:23:27] Cameron: that’s the best, yes.
[00:23:29] Steve: Uh, so ChatGPT, like, three things. There’s three parts to the equation. First one is ChatGPT-like capabilities, which it obviously doesn’t have, because it has to refer to ChatGPT whenever it can’t do something.
[00:23:41] Steve: The second one is agent-like capabilities, which is what you said. Cause at the moment, even when you go into ChatGPT, we’ve got to do all those things in a singular fashion. You can’t get it to do a number of things and coordinate, you know, a little bit like the God Mode and AgentGPT, and everyone’s starting to talk a lot about agents and how they’re going to come together.
[00:24:00] Steve: I think Apple is the best place to do an agent-like service, and Siri could be that concierge, which goes through all of your things. And the other one is having an ecosystem of a personal database of your photos, knowledge, people, communications, data, all of that. If it pulls those three things together, that is Star Trek mode.
[00:24:20] Steve: You know, that is, you know, STM, that’s Star Trek mode, if we can get that. An AI with all of those capabilities like ChatGPT, agent-like integration where Siri becomes your agent, and then a walled ecosystem where you have private database information that’s relevant to you and those you’re connected with, as well as access to wider web information and being able to go in and out of your ecosystem with those privacy, uh, barriers.
[00:24:45] Steve: That is absolute utopia. And I would be even more excited than you if that was what we have, but we don’t have that in my
[00:24:52] Steve: humble opinion
[00:24:53] Cameron: No, we don’t have that, but for the last 18 months, since GPT dropped 3.5, uh, end of 2022,
[00:25:03] Cameron: you and me and, and, and people like us have been prognosticating about what the future was going to look like when we had Star Trek, um, tech. But up until this week, Apple has never indicated that they were on board to build that.
[00:25:23] Cameron: We all assumed, I think. That Apple and Microsoft and Google and Facebook behind the scenes were all going to be playing around getting us there. They had the same vision, but it hadn’t been confirmed. You know, Apple was doing their secret car and their Vision Pro and fucking around the edges with all of this kind of nice toys.
[00:25:50] Cameron: This week, Apple went, Tim Apple stood up and said, effectively, yes, we are on board with your vision. We, we, it’s baby steps, but it is going to, the fucking, the wheels are turning now. We have,
[00:26:16] Steve: And I think on, on, on the Apple AI wagon, it’s a wagon, right? And the Apple AI wagon has got some new bumper bars. It’s got a nice new fluffy steering hubcaps, but I think, I think, and I hope,
[00:26:30] Cameron: off the rear vision mirror.
[00:26:31] Steve: Puffy
[00:26:31] Steve: Dice,
[00:26:32] Cameron: Hey,
[00:26:32] Cameron: I’m
[00:26:33] Steve: a little box out front, and they put some Chrome, they put some Chrome
[00:26:36] Steve: on the
[00:26:37] Cameron: decals, some
[00:26:38] Steve: sure
[00:26:38] Cameron: Fiery, flamey decals. Peace.
[00:26:41] Steve: They have, they’ve tinted the windows
[00:26:43] Steve: and let’s hope Eddie Murphy’s not around to put some bananas in the tailpipe because they’re going to fall from the banana in the tailpipe, man, right? And then I actually think What they’ve done is they’ve, they’ve pimped the wagon, the AI wagon a little bit, and I think while this is doing this in concert, they’re building what we’re speaking about.
[00:27:02] Steve: And I actually think they’re gonna come next year with all of this. But I think this is a, yeah, we’re on board. Here’s some, uh, jazz hands, jazz hands to get us through to next year. That’s what I think, and I hope, because I actually want them to come back with what they’ve always done, which is a, and by the way, here’s how you do it.
[00:27:23] Cameron: look, uh, you know, there’s this interesting, um, vision of Apple that I hear talked about online and you’ve indicated this in the past too. Um, Apple don’t innovate, they take innovation and they just make it better than everyone else and they do the best implementation of it. You know, you’ve said before the iPhone wasn’t the first smartphone, um, obviously AirPods weren’t the first Bluetooth headphones, the Apple Watch wasn’t the first digital watch, all of these things are true, um, the iPad wasn’t the first tablet. Um, Apple just came along and
[00:28:14] Cameron: made the best version. I mean, debatable, obviously, but I think, generally speaking, most people will agree that Apple made the best or one of the best implementations. They took stuff that was already existing and said, hold my beer, we’re going to do this properly. Let me show you what this looks like if you actually implement this well. There’s no reason, I mean, okay, a lot of that stuff was done during Steve’s
[00:28:46] Cameron: day. I mean, the watch, not. Obviously, the AirPods, not. And I think they’ve done well with those. I
[00:28:53] Cameron: mean, I
[00:28:53] Steve: two really great products. Two really, really good
[00:28:56] Steve: products.
[00:28:57] Cameron: love my watch, I love my AirPods. Um,
[00:29:00] Steve: love my AirPods. I don’t use my watch much, but I’ve got one and I just never really I just use it for
[00:29:04] Steve: sport when I’m doing surfing or running or whatever.
[00:29:08] Cameron: Um, I, I believe that Apple will do great things with AI because that’s what Apple does, that’s what the culture of Apple does. Say what you want about Tim Apple, but I think the culture there is still all about,
[00:29:23] Cameron: you know, taking ideas and doing the best implementation of them. Um, I, you know, it wasn’t hugely
[00:29:31] Cameron: innovative in terms of cool shit that you can do, but
[00:29:35] Cameron: it’s,
[00:29:37] Steve: I can’t even think, I didn’t see one.
[00:29:38] Steve: I would love to know one, there’s none, there’s zero. It’s actually, it’s not even a one out of ten, I was too generous, it’s a zero out of
[00:29:45] Steve: ten.
[00:29:46] Cameron: but it’s going to enable a billion people who don’t use AI every day today, a year from now, assuming they have a
[00:29:57] Cameron: relatively new phone, uh, will be using AI.
[00:30:02] Cameron: Every
[00:30:03] Steve: here’s the impact I actually think it’ll have on society based on what you’ve just said. I actually think it’s going to change the context of work. Even quicker than ChatGPT has, because I think you’re going to get people who are not necessarily office bound, who have an iPhone, starting to use it far more frequently.
[00:30:22] Steve: I go around and I do surveys every time I do a speech. I do at least one a week. In corporate, it’s about 50 to 60%, sometimes 80, but I would say on average 50 percent are using it or have used it. When I’m doing it with trades and factories and mining. The numbers are so low. We’re talking about Australia with as high a technology penetration as any market in the world.
[00:30:43] Steve: It’s tens and twenties. So that is going to, I think, change the way people work and the way people view work, because all of a sudden they’ve got this tool which changes their mindset, which changes the type of jobs we do, and has a big impact probably on the future of work.
[00:30:57] Cameron: day. I think, yeah, I think it’s gonna have huge trickle effects, uh, that we won’t see overnight. But, um, I, I do think now that Apple is retooling their OSes to have AI at the core, I imagine that as we have big improvements in AI, um, capabilities, they will very quickly make their way into the device OSes as well.
[00:31:35] Cameron: And there’ll be an upgrade path for it, and we’re gonna quickly go from AI being, still as it is today, a fringe part of daily life to becoming core to the way that people think about how they interact with their devices in the next year or two.
[00:32:00] Steve: So a couple of things that I think would be, I’d like to get your view on with this, from a business point of view and almost a, you know, tech oligarchy, you know, oligopolies and monopolies that we’ve discussed sometime. One of the things that was really evident from my view was that there was a lot of app killers in, in this business.
[00:32:23] Steve: So it was really, um, significant to see. There was a lot of people who would have went, well, there goes my business. You know, watching, uh, the conference and the announcements, uh, simple things. There’s apps out there that do like the calculator on the iPad and take a photo of something and it’ll give you the written word.
[00:32:43] Steve: And there were so many clear app killers out there, uh, which again further ensconces the power, and it makes you think about, uh, the desire to keep people in the ecosystem. It gets harder and harder, the bigger a company gets, achieving the growth that, uh, investors have become accustomed to with tech companies, that they’re just double-digiting every year.
[00:33:07] Steve: It seems as though they’re stealthily ensconcing their monopoly powers within that ecosystem and shutting out innovators within it. I was a bit surprised, especially given. Some of the antitrust cases that have gone against Apple for taking 30 percent on purchases within apps. Uh, we’ve seen that in the gaming industry.
[00:33:25] Steve: So I was a bit surprised at that, but there’s a hell of a lot of app killers. What are your thoughts on that?
[00:33:29] Cameron: Yeah.
[00:33:30] Cameron: A lot of the apps that I use — 1Password, which I've used for years and years —
[00:33:35] Steve: Yeah. Another great one.
[00:33:36] Steve: Yeah, yeah. 1Password.
[00:33:37] Cameron: Apple's going to have a standalone password app now. And Rectangle Pro and Moom — apps that enable you to set up your desktop by dragging windows into predefined spaces so you can manage lots of simultaneous apps.
[00:33:54] Cameron: They're going to have their own version of that, so there go those subscriptions. And the calculator stuff — their iPad calculator app is actually really cool, with the Pencil functionality in it. But all of that aside, it's really the AI stuff that matters here. One of the questions my boys had about the ChatGPT integration: who paid for it?
[00:34:26] Cameron: Is Apple paying OpenAI to have ChatGPT on the phone, or is OpenAI paying Apple to have ChatGPT there? We know Google pays Apple 20 billion a year to have Google search as the default on the phone. Is OpenAI going to be paying Apple, or is Apple going to be paying OpenAI? What do you think?
[00:34:45] Steve: Interesting. Well, I think Apple will be paying OpenAI, and I'll tell you why: I imagine the integration would otherwise sit at the paid level of functionality, and Apple wouldn't want their users to have to pay. Generally, if you want to know who paid, it comes down to who's got the most power.
[00:35:10] Steve: And I would have thought this is one of those rare events where the small company has the thing the big company doesn't have. OpenAI doesn't need more demand. If anything, they have problems serving the demand they've got right now — that's why they have different tiers of access.
[00:35:32] Steve: Think about Google as a search engine: just being made the default is really profitable for Google — super profitable. So Apple can say, whatever, do you want access to our wealthy consumers who all have iPhones or not?
[00:35:53] Steve: Whereas OpenAI is actually providing a needed service. Apple's way behind; they need something to integrate there. We've got to get to the bottom of this, but I think OpenAI has the power, and OpenAI would have been the recipient of the funding. But that's a great question from your boys.
[00:36:11] Cameron: So the rumors going around for the last month were that Apple was in negotiations with both OpenAI and Google — so it was Gemini versus GPT as the default. Now, if you're OpenAI, and the announcement comes out that Apple is making Google Gemini the default AI on the phone — the one that integrates with Siri, the way Google is the default search engine on your phone — that's a potential killer.
[00:36:55] Steve: Yeah, that's a good point, because it could get people used to using it, so that when they're off-device they revert to Gemini as well. So it becomes a defense mechanism to make sure your brand is ensconced. But the flip side is that, through these levels of integration and usage, there'd be lessons coming in by stealth.
[00:37:20] Steve: It's almost like a visit to Xerox PARC — seeing how things work before you come back with your own version. Fifty years on from 1974 or 1977, you've got people learning what's next from a little tour. Without knowing the levels of integration in the back end when it hands off to ChatGPT, what sort of access Apple gets to their knowledge, how it all works, and their ability to clone it — that's another question.
[00:37:54] Cameron: The question Taylor asked was, why would OpenAI give Apple ChatGPT when Apple is probably just going to build its own fully implemented LLM in the next year or two and then cut GPT straight out?
[00:38:15] Cameron: And I was like, well, I'm assuming this is basically Microsoft doing its deal with IBM in the early 80s: a version of MS-DOS on every IBM PC for a dollar a computer shipped.
[00:38:33] Cameron: My guess is that Apple is paying OpenAI, despite the Gemini threat I mentioned. We all know — you've said it, Tim Apple's said it, and the rankings out there pretty much agree — that ChatGPT is superior to Gemini. So if I'm Sam Altman negotiating with Tim Apple, I'm like: sure, go ahead, put Gemini on your devices, but Gemini sucks.
[00:39:09] Cameron: Do you want people to have the best experience when they're using your new supercharged version of Siri, or do you want them to have a shitty experience? If you want the best, you come to us, and you're gonna have to pay us a dollar a phone that ships with this on it. If you don't, then fuck you, go put Gemini on it.
[00:39:31] Cameron: Cause Gemini sucks. I don't know. It would have been interesting negotiations to be part of, you know.
[00:39:36] Steve: That, to me, is where it's at. That, to me, is real clear.
[00:39:41] Cameron: You talked about who has the power. I mean, one and a half —
[00:39:44] Steve: I think OpenAI has the power. No, I think OpenAI has the power in this situation. Yes, Apple's got a billion-plus people — and remember, a billion people is a quantum, right? That's scale, and usually scale has this advantage.
[00:39:57] Steve: But scale is a double-edged sword, because scale also means one and a half billion people severely disappointed. So it kind of works both ways, right?
[00:40:10] Cameron: Well, Steve, the other — I know you've got a hard out. What time's your hard out today?
[00:40:17] Steve: Oh, 12:30, it's okay. But look, there are a couple of other little bits. I think it's interesting to talk about this private cloud thing, because there was a lot of hoo-ha about that. And Mr. Mars got real upset — which, again, shows he obviously doesn't really care about the privacy. Clearly that's a ruse. We've got Tim Apple and Mr. Mars.
[00:40:42] Cameron: Elon Mars, that's what you call him.
[00:40:44] Steve: Yeah. Elon Mars. Elon Musk. Elon Musk is really upset.
[00:40:49] Cameron: He said if Apple announce they're putting ChatGPT on the phones, he won't allow any iPhones in any of his businesses and factories. And I was like, and I care about that why? Unless you're a Tesla employee or a SpaceX employee or a Twitter employee, why do you give a fuck?
[00:41:15] Cameron: Who gives a shit?
[00:41:18] Steve: And yeah, Apple's really going to change its strategy for 10,000 customers.
[00:41:24] Cameron: Um,
[00:41:26] Steve: He has to be in the news. That's his whole thing.
[00:41:28] Cameron: Yeah, yeah. He also dropped his lawsuit against OpenAI this week, too. And by dropped I don't mean filed it — he dropped it, dropped it. Pulled out of it. He was suing OpenAI for, you know, not being open, and he dropped that lawsuit. He's just fucking around, getting attention.
[00:41:52] Steve: The other one is the 45 billion payout that he wants.
[00:41:56] Cameron: He got that approved by the shareholders today.
[00:41:59] Steve: Got it. Yeah. Anyway, look, the private cloud thing was interesting to me from one perspective. What Apple promised was kind of funny, in my view. They said, it's going to go into this place where it's not on device, but it's in a cloud, and we're going to protect it.
[00:42:15] Steve: They basically just described what cloud computing is — "but ours is private and better." It's not like putting something in the cloud means it's an open door and everyone can just walk into your data warehouse. They basically just described cloud computing. Come on now. Seriously.
[00:42:34] Cameron: Unless it's Facebook. I mean, they are trying to position themselves against, you know, Facebook and Google —
[00:42:40] Steve: There was no fact in that. That was just positioning: we're going to be really protective, our cloud's different, it's more private. It's like —
[00:42:47] Cameron: Well, no, but there is a point here. Again, I was having this conversation with Taylor and Hunter the other day: Apple doesn't make money out of advertising.
[00:43:00] Steve: Okay, so I was going to —
[00:43:01] Cameron: They don't make money by selling your data to advertisers.
[00:43:06] Steve: I get it. I —
[00:43:07] Cameron: Although Apple does get paid 20 billion dollars a year by Google, who make their money from advertising, so they kind of —
[00:43:16] Steve: Google's on your phone, and they're learning from it. So it's not like your phone is clean. Exactly.
[00:43:24] Cameron: They don't make money directly from advertising.
[00:43:28] Steve: Indirectly they make a lot of money from advertising — advertising with apps and advertising apps, all of that kind of stuff. So they're not in it directly, but indirectly they extract revenue.
[00:43:40] Steve: But they're not in, let's call it, the surveillance capitalism business, right? They're not particularly in that like the others are. The mindset they're trying to overcome with this private cloud idea is: we've got on-device information, and that's all encrypted in there, and then you've got cloud — but don't worry, we're going to use the cloud and no one will get into it.
[00:44:03] Steve: That's genuinely different from the idea of: whatever we see on a screen, we're going to use to surveil you and sell advertising to you, which is the Facebook, Google, whatever model. But the idea that their cloud is somehow more private than Dropbox or Azure or any other cloud is kind of, you know —
[00:44:22] Cameron: But their —
[00:44:23] Steve: — trading on people being uninformed about what a cloud is.
[00:44:26] Cameron: I do think this is going to be an important thing. And, you know, I wrote a blog post I wanted to talk to you about if we have time.
[00:44:32] Cameron: Actually, I wrote it for myself, and then I sent it to you, and you didn't reply. So then I did it as a blog post.
[00:44:38] Steve: No, no, no. I did have a look at it, but I was on my phone when I looked at it, and it had this really weird rendering thing.
[00:44:45] Steve: So I've got it on my list of things to look at — it was the flow thing. I did look at it, but I need to look at it more closely.
[00:44:51] Cameron: I basically sat down over the last week — I called it Mind Mapping the Future. I was just trying to think through the next five years: what I think's going to happen, how it's going to play out. And I applied some probabilities to some of it. But I do think we are at a critical point with all of this.
[00:45:15] Cameron: So think about it from this perspective. We're going to end up with AI at the core of all of our devices — not just Apple's devices, but Google's devices and Samsung's devices. Is there anyone else I'm forgetting? Is it Google and Samsung? Who's left? That's pretty much it, right? That's all the devices really out there today.
[00:45:40] Steve: Microsoft are making a comeback when they invest in Sam Altman and Jony Ive's new piece of hardware, and they're going to win.
[00:45:47] Cameron: Well, I don't know if that's going to happen now that they've done a deal with Apple, I think.
[00:45:53] Steve: What about the Facebook phone? You're forgetting about our boy Facebook, and the Fire Phone from Uncle Jeff. Come on now.
[00:45:58] Cameron: Yeah. Uncle Jeff. He doesn't get to be Jeff Amazon.
[00:46:04] Steve: He's Uncle Jeff. We've got Tim Apple, Uncle Jeff and Elon Mars.
[00:46:08] Cameron: And what's, um, Facebook Mark? He's just Zuckerface.
[00:46:18] Steve: Zuckerface. He is a bit of a Zuckerface.
[00:46:22] Cameron: So, okay, we're going to end up with all these devices with AI at the core. Now, AI is going to know everything about you.
[00:46:31] Cameron: Our devices already know a lot about us, but it's in silos — your cookies and your web history and your message history and all your email history. But the AI at the core is going to literally understand you.
[00:46:48] Cameron: It's going to have —
[00:46:49] Steve: Crawl. It's gonna crawl and train on your full digital life experience.
[00:46:57] Cameron: Not just your digital life. It will be —
[00:47:00] Steve: Your entire —
[00:47:01] Cameron: One of the most jaw-dropping parts of the WWDC presentation, I think, was that you will be able to record your phone calls.
[00:47:17] Steve: That was the best innovation in the entire thing. I cannot tell you how many phone calls I've been on where I've said, I wish this was recorded. A mate of mine, Scotty Kilmartin — we used to say, this could be a good podcast, because we'd just shoot the shit and it would come out real nice. I actually like that as a feature.
[00:47:35] Steve: I think it's a great feature.
[00:47:37] Cameron: You'll be able to record it, and it will transcribe it, and then summarize the transcription. Now, it won't be long before it's not only recording your phone calls — it's recording every conversation you have in the real world, via your watch, your phone —
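The record, transcribe, summarize pipeline Cameron describes here is easy to sketch once a call has already been turned into text. A toy, purely illustrative extractive summarizer — a real assistant would use an LLM for this step, and the sample call text is invented:

```python
# Toy sketch of the "summarize the transcription" step.
# Naive extractive approach: score each sentence by the frequency of its
# words across the whole transcript, keep the highest-scoring sentence(s).
from collections import Counter
import re

def summarize(transcript: str, keep: int = 1) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))
    # Rank sentences by total word frequency, then keep the top `keep`.
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    top = scored[:keep]
    # Emit kept sentences in their original order.
    return " ".join(s for s in sentences if s in top)

call = ("We agreed the invoice is overdue. Steve will email the invoice today. "
        "Cameron will confirm payment of the invoice by Friday.")
print(summarize(call))
```

The on-device version would feed the transcript to a language model instead of counting words, but the shape of the pipeline — audio to text to summary — is the same.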
[00:47:57] Steve: Well, it is. We know that, remember?
[00:47:59] Cameron: Openly, I mean.
[00:48:00] Steve: We know that it is. So we're going to formally announce that we're backfilling the reality —
[00:48:08] Cameron: It'll be retrospective, by the way.
[00:48:10] Steve: We're just conditioning. We're just conditioning society.
[00:48:15] Cameron: By the way, we've also got recordings of the last 10 years of your conversations, which will also be archived.
[00:48:22] Steve: How did it get so good so quick, Cameron? How did it know so much about what happened? Did I talk about what happened in 2013?
[00:48:28] Cameron: So the AI at the core of all your devices will be tracking not only your digital life, it'll be tracking your IRL life, and it will understand you infinitely better than anyone — your spouse, your parents, even you yourself. Because we forget shit; it won't forget shit. It'll remember everything you've ever said, or written, or that has ever been said to you. Now —
[00:49:03] Steve: Well, my AI now — my Steve Sammartino AI — is better than me, because it remembers everything I've written, because I've directed it to, and I can't.
[00:49:10] Cameron: Now, in that world, there will be privacy concerns, there will be security concerns. One of the things I explored in my blog post is what happens when the police say to Apple, we want access to Steve's entire AI core, because we want to know about every conversation he's had, every message he's written.
[00:49:33] Cameron: You know: Cameron made a joke about Christians murdering Jews in 1096 and roasting their babies on spits. We want access to everything else he's said. We think he might be antisemitic.
[00:49:46] Steve: You're pretty public, but you're really —
[00:49:47] Cameron: If you want to know anything about me, just go listen to my podcasts. I don't filter anything. It's all out there.
[00:49:54] Steve: Yeah, you're not hiding it. You're fine, because ever since I've known you, you've been calling it as you see it. Whereas most other people, they have these phone calls —
[00:50:04] Cameron: My actual, real life is boring compared to what I'm like on my podcasts, because no one cares in my real life. I only get to talk about what I really think with my podcast co-hosts. So — it's taken me a long time to get here — my point is: in that world where your AI is at the core and it knows everything, who are you going to trust?
[00:50:27] Cameron: Google, who has spent 20 years selling your data? Facebook, who has spent 20 years selling your data? Microsoft? Or Apple, who has spent years and years telling the FBI to go fuck themselves? Remember, the FBI has been trying to get into alleged terrorists' phones for years, and Apple has said, no, we're not going to give you access to their phones.
[00:51:01] Cameron: Who are you going to trust? I think Apple positioning themselves as the security-first company is going to be incredibly important to their brand five years from now.
[00:51:17] Steve: 100 percent. Look, I agree with that. I just don't think the private cloud is part of that. It's just whether or not —
[00:51:21] Cameron: It is, though. They're saying it has the same level of encryption as your device — an extension of the same encryption technology. So even they can't see what you're doing and talking about in the cloud, because it's totally encrypted.
[00:51:42] Steve: I'll leave it at that. For me, one of the big ones — I did the calculation once in Australia, can't remember exactly, when I wrote the manifesto on saving technology. Oh no, that was about eight years ago. A 10,000-word manifesto that no one read. You know those — they're always fun.
[00:52:00] Cameron: Keeps your thinking straight.
[00:52:02] Steve: Yeah.
[00:52:03] Steve: The numbers were astoundingly small on what the companies get in order to invade our privacy. It might've been 20 or a hundred bucks per person, per year. What we are giving up for that small amount of revenue is insane.
[00:52:21] Cameron: Yeah.
[00:52:23] Steve: I would pay that amount of money for the privacy. Do you know what I mean? I would happily go: here's the hundred. Don't surveil anything. Have the hundred, give me access to all the services, but don't surveil me at all. I'd do it in a heartbeat — with Google, with Facebook, with all of them. I mean, I use Instagram and WhatsApp; I don't use Facebook at all.
[00:52:41] Steve: Google I use a lot, and it knows everything I've searched, and heaven help me that that stays private. Please let that stay private. Because, you know, sometimes I just get itchy and play. No, I'm joking.
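Steve's back-of-the-envelope number is easy to reproduce. A rough sketch — the inputs are approximate public figures from Meta's reported full-year 2023 results (about $131.9B in ad revenue across roughly 3.1B daily users of its apps), used purely as an illustration:

```python
# Rough per-user value of surveillance advertising — the number Steve recalls.
# Inputs are approximate public figures (Meta, full-year 2023), not exact data.
ad_revenue_usd = 131.9e9   # ~$131.9B advertising revenue in 2023
daily_users = 3.1e9        # ~3.1B daily users across Facebook/Instagram/WhatsApp

revenue_per_user = ad_revenue_usd / daily_users
print(f"~${revenue_per_user:.0f} per user per year")
```

That lands inside Steve's "20 or a hundred bucks per person, per year" range; per-user figures are higher in wealthy markets, but the order of magnitude holds.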
[00:52:57] Cameron: Yeah. Well, I do think that's going to be important. So, just touching briefly on the Mind Mapping the Future thing that I wrote — people can go read it on my blog, cameronreilly.com, if you haven't already seen me talk about it on Facebook. Most of the feedback I've got, Steve, is: oh, it's just going to be the rich that have everything. One of the things I've been exploring in it is what happens to our jobs — how many jobs get taken by AI? And I was having this conversation with ChatGPT as I was writing it. I sort of map it out in the blog post. I think the earliest
[00:53:53] Cameron: iterations of jobs to go are the ones we've already seen starting to go — low-level graphic designer jobs and the like. This is what I wrote: if AI gets massively smarter and more capable in the next five years, like most people involved in the industry think it will, and it's available to everyone and every business for 20 bucks a month, what happens?
[00:54:27] Cameron: And then I say this: the answer is we don't know. We cannot predict. And when we've arrived at a place where we honestly can't predict what life will look like in five years, that, by definition, is the technological singularity. I'm arguing that we are in the singularity now, because we cannot predict what life will look like five years from now.
[00:54:48] Steve: Isn't the singularity, as defined by Kurzweil and cohort, the moment where technology improves itself recursively, so quickly that it effectively becomes infinite in scale?
[00:54:58] Cameron: I think — here's what Wikipedia says: the technological singularity is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.
[00:55:17] Steve: But I don't think it's uncontrollable yet.
[00:55:18] Cameron: Oh, I think it is. I think it's uncontrollable. I don't think anyone will, or can, stop it.
[00:55:26] Steve: Could stop it, yeah. But it hasn't irreversibly changed —
[00:55:30] Cameron: I think it has, and this is what I'm arguing. It's unforeseeable —
[00:55:37] Steve: It hasn't changed our day-to-day living yet, though. It's not like we're in the middle of this singularity and everything's changed forever. We're still going to eat and go places and do work. Jobs have changed, but it hasn't really hit that part of the curve yet.
[00:55:53] Cameron: Look, you're a futurist. I'm not.
[00:55:58] Steve: But we've always been on that curve.
[00:56:00] Cameron: Taylor asked me the other day —
[00:56:03] Steve: No, no one is, you know. No one is, right?
[00:56:06] Cameron: Taylor asked me, why haven't you made a career as a futurist? I said, because I'm not cocky enough.
[00:56:14] Steve: You know why I became a futurist — and I'm really just an economist who likes technology? It was my agent, Barry Markoff. "G'day mate, Barry Markoff here. That's a great little speech you did there. Now, listen, have you got an agent? Agents — see, what we do is we get you gigs for money. We'll take 30 percent, but you make a lot of money, mate. Now, what do you call yourself? What? Startups and tech? Nah, nah, nah. Futurist, mate. You get 30 percent more that way. All right, mate, I'll send you a contract. You're a star. Bye."
[00:56:41] Cameron: That's your agent?
[00:56:45] Steve: Sort of.
[00:56:47] Cameron: My point —
[00:56:51] Steve: So no, no one's a —
[00:56:52] Cameron: My point is this. I've spent 30 years thinking about the technological singularity and AI, and reading everything I could about it, et cetera. I cannot begin to predict what the world is going to look like five years from now.
[00:57:11] Steve: No, no one can. You're right. Well, you're probably right about five years. I don't think it's going to be much different between now and four years —
[00:57:20] Cameron: Okay. So anyway,
[00:57:21] Steve: don't —
[00:57:21] Cameron: here's my take on it. The first layer of jobs that'll be replaced will be the low-risk, high-cost, high-benefit jobs: coding, customer service, analysts, writers. I dunno about legal being first. I've got graphic —
[00:57:45] Steve: Nah, I think a lot of legal work will go away, because I'm doing a lot of it now — some property stuff we would have reverted to our lawyers for. And there's going to be a whole bunch of legal work that is new — you know, biometric copyright, likeness and copyright. Yes, there are further legal jobs where it becomes more interesting and complex, but there's a whole lot of stuff that's like, yeah, well, now —
[00:58:07] Cameron: I don't think legal jobs are gonna be lost in the first iteration. You might use AI to do legal work, but I don't think jobs are gonna be literally replaced. I don't think legal firms are gonna say, we're gonna fire some legal staff, paralegals.
[00:58:29] Steve: Yeah, but you might only need one instead of four. That's the —
[00:58:33] Cameron: Maybe. But anyway, we're just talking about timing; I do believe that's going to happen. So in the timeline in this post, I'm saying what's going to happen in 2025 and 2026. This assumes GPT-5 and equivalent technologies come out in the next year and are as good as Sam Altman says. And I don't know if you saw this, but Kevin Scott, the CTO of Microsoft, came out in an interview in the last week.
[00:59:05] Cameron: He said he's been playing with it. He said, if GPT-4, in terms of its overall intelligence — overall, not specialty — is about the equivalent of a high school graduate, 5 is a PhD in everything. Like PhD quality in everything. If it comes out and it is PhD-level in nearly everything — not AGI, but PhD level — I think in 2025 to 2026 we're going to see jobs go in coding, customer service, analysts, writers, graphic design, industrial design.
[00:59:43] Cameron: But then what happens is people get confident. People start to go: you know what, it's good enough to do these things. It's good enough to replace people in these low-risk but high-cost roles. You're paying $50,000 a year for these jobs; I can get an AI for 20, 50, 100 bucks a month that'll do that.
[01:00:07] Cameron: Easy, right? Easy replacement. And humans are overseeing these jobs initially: I get rid of a writer and use an AI instead, but I still have a human overseeing it; in graphic design, I have a human who's using AI and approving the work that's done. Once you get more confident — 2027 to 2030 — we're more comfortable using AI, and it's had another couple of years of iterative improvement, maybe thousand-fold improvement.
[01:00:39] Cameron: A lot of people are forecasting we hit AGI by 2027, 2028. Then higher-level jobs start to get replaced. Middle management gets replaced, because there are fewer people to manage. I don't need as many middle managers when I've got fewer people doing the work that middle managers look after, right?
[01:00:58] Cameron: Legal, I say, starts to go here. Accounting. HR — again, because there are fewer people to hire and fewer people to manage, we don't need as many HR people. Recruitment, because we're not hiring as many people. Psychologists — everyone has a free AI therapist that's better than a human. Medical —
[01:01:15] Cameron: everyone has a free GP.
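The replacement arithmetic behind "easy, right?" is stark when written down. A toy calculation — the salary and subscription figures are the illustrative numbers quoted above, not market data:

```python
# Toy version of Cameron's human-versus-AI cost comparison.
# Both inputs are illustrative assumptions taken from the figures he quotes.
human_cost_per_year = 50_000   # one knowledge-worker salary, USD
ai_cost_per_year = 100 * 12    # $100/month, the top of the range quoted

ratio = human_cost_per_year / ai_cost_per_year
print(f"The AI subscription costs about 1/{ratio:.0f} of the human salary")
```

Even at the most expensive subscription tier quoted, the gap is more than an order of magnitude, which is why the "good enough" threshold, not price, is the gating factor in this forecast.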
[01:01:17] Steve: Let me just — okay, so you're right. What I wrote down before was that you basically get a reversal. In the first iteration of robotics and industrial machinery, blue-collar work was at risk, and a lot of blue-collar work got replaced. Now you've got a reversal where white collar is at risk — there's almost not a white-collar job that isn't going to be impacted radically.
[01:01:40] Steve: And white collar goes first, because I think it'll take longer for the intelligence systems to enter robotics, like the Figure 01 we've discussed before. So you're right.
[01:01:49] Cameron: So what I've been trying to work out in this is what happens when people don't have jobs. I was asking GPT, and it was saying, well, they'll have to be re-skilled. And I was like, re-skilled as what? What is safe? Now, there is a fallback — as we've talked about before, the jobs that you and I have
[01:02:15] Cameron: didn't exist 30 years ago. So it's possible there will be jobs five years from now that don't exist today. But when I try to think about what those jobs might look like — jobs that won't be able to be done better by AI by then than by a human —
[01:02:34] Cameron: I cannot work it out.
[01:02:39] Steve: Because before, a job got replaced by something that could do one thing. This thing can do everything. So even if you think of something new, even if it doesn't exist yet, the AI can do that too. That's actually the difference.
[01:02:54] Cameron: I was asking —
[01:02:55] Steve: I'll come back —
[01:02:56] Cameron: I was asking GPT the question, and it was saying, well, some jobs will be safe, like upper management. I was like, what, you're gonna take a fucking secretary and train her to be a CEO? What the fuck are you talking about? And then it's going, oh, what about boutique craftsmen?
[01:03:13] Cameron: Like, what? I'm gonna start making fucking bespoke wooden furniture? That takes years of training and talent. Even when I got GPT to try to pick holes in my arguments, it came up with a bunch of facile stuff, and when I pushed back it went, oh no, you're right, none of that actually makes sense.
[01:03:37] Cameron: And not to mention — okay, you go make custom furniture.
[01:03:42] Cameron: Who has the fucking money to buy custom furniture when no one has a job?
[01:03:48] Steve: you got onto my point. Okay, so this is my point here. Everyone worries it’s going to be a real big problem if there’s no jobs. And I’ll tell you who has the biggest problem, the people who own the technology. Because if there’s no jobs, no one can buy anything. And if no one can buy anything, then you have economic failure.
[01:04:03] Steve: And, and I don’t know what happens with that. Typically, the one thing that is super, super important with any technological advancement is that it’s not monopolized and that you have a spread of the technology, because the more of a spread that you have of an emergent technology, whether it’s robotics or electricity or machinery, what happens is they have to compete away their price point because they’ve got automation, and that frees up capital, which can go to new jobs, new industries, new whatever it is.
[01:04:35] Steve: You know, maybe, like some people have said historically, if everything can be done and maintained, the physical work, labor, housing, fooding, uh, feeding, feeding people, Cameron, then maybe, Tim Apple is fooding people, it’s funny, we, um, end up with this kind of quasi-leisure society where I’m trying to get as good at surfing as I
[01:04:58] Steve: can.
[01:04:58] Cameron: how are you paying your rent, your mortgage,
[01:05:01] Steve: I don’t know. Well, I, I,
[01:05:05] Steve: well, everything should be really, really super low cost because we end up in an abundance society. You
[01:05:10] Steve: know, um, Kurzweil story. I don’t
[01:05:12] Steve: know.
[01:05:13] Cameron: oh, well, the
[01:05:14] Cameron: rich are going to get richer and the, you know, everyone else will be poor. I, I keep trying to point.
[01:05:19] Cameron: Yes. That’s the point that I’m trying to get to with all of this. People are rich because their wealth is
[01:05:25] Cameron: tied up in assets, usually businesses or shares or
[01:05:29] Cameron: property. But if there were.
[01:05:31] Cameron: Yes, if
[01:05:32] Steve: are spending
[01:05:33] Cameron: they’re, locked,
[01:05:35] Cameron: there’s no money in the fucking economy, so the entire economy collapses, we are going to need to re engineer how the economy,
[01:05:46] Cameron: from the gr Now this is why we’re in the singularity, because I don’t know what this fucking looks like, Steve. I’ve been trying to work it out, and again, I’m not the smartest guy on the planet, but I’ve been thinking about this for a long time.
[01:06:00] Cameron: And the harder I think about it, I just don’t have a, I don’t have an answer. I cannot see how this
[01:06:07] Cameron: works.
[01:06:10] Steve: done some books on this. Doctorow’s done some sci-fi books on this and what happens when you have replicators
[01:06:16] Steve: that could, uh,
[01:06:17] Steve: Cory
[01:06:18] Cameron: Oh, Dr. Rose. I was
[01:06:20] Cameron: like, Dr. Rose? Who’s Dr. Rose?
[01:06:23] Steve: Yeah, Dr. Rose?
[01:06:24] Steve: Haven’t you heard of Dr. Rose? He’s a technologist who’s really studied the singularity. Cory Doctorow has done some stuff where uh, everyone has access to things via replicators and
[01:06:37] Cameron: Yeah. Yeah.
[01:06:37] Steve: There is no money and no, yeah, nanofabricators. And there is no money because any, because everyone can create
[01:06:44] Steve: everything. And so what you end up doing is
[01:06:48] Cameron: yeah, but we’re not going to
[01:06:50] Cameron: have.
[01:06:51] Steve: know. It’s
[01:06:52] Cameron: Nanofabricators in the next five years, unless something happens dramatically, but I think we will get them at some point, maybe even a
[01:07:02] Cameron: decade, but we’re not going to have them before the
[01:07:05] Cameron: economy collapses, which I say 2030. That’s
[01:07:09] Steve: Well, I think that I think for a whole lot of reasons, we’re headed towards a
[01:07:13] Steve: significant revolution. And I think the disparity and inequality in incomes is a big part of it. Five families control 50 percent of all wealth in America. It’s the most extreme it has ever been and it’s getting worse.
[01:07:29] Steve: And we have, you know, you and I discussed it, a plutocratic society where a few really large corporations control everything. But what happens when they get so much that no one has anything left to give? Then the whole thing, the edifice that’s holding them up, collapses. Gee, we’re going to end on a note of an unknown, really, because, and I’m going to call it, yes, it’s driven by technology, but it’s kind of like an economic singularity.
[01:07:56] Steve: We actually don’t know what happens with the economic models and it’s not communism. It’s not capitalism. It’s none of those things because you get to a point where technology can do everything. And I don’t know what happens to that monetary system.
[01:08:08] Cameron: my point. This has been The Futuristic. We’ll be back next time. Thank you, Steve. Fun, fun chatting as always.
[01:08:16] Steve: Thanks, Cam. I really like how we ended with something bigger there. That was Really cool,
This week on Futuristic we’re talking about the new ChatGPT-4o model, GPT officially passes the Turing Test, the OAI founder who thinks AGI is only 2-3 years away, Ilya has left OAI, Sam Altman doesn’t think we are worried enough about how AI will impact the economy, Google’s medical AI destroys GPT’s benchmark and outperforms doctors and ChatGPT-4 beat 100% of all psychologists in a study of Social Intelligence.
00:00 Kicking Off with Technical Glitches and AI Intrigues
00:31 Parenting in the Age of AI: A GPT-3 Experiment
01:45 Exploring Global Hypocrisy with GPT Conversations
03:55 Innovative Uses of GPT in Analyzing Budgets and Historical Figures
09:59 The Evolution of AI and Personal Computing: A 50-Year Journey
13:27 Tackling the Turing Test with GPT-4: Human or AI?
20:47 The Departure of OpenAI’s Chief Scientist: A Turning Point
23:12 The AI Timeline: Predictions and Implications
24:30 Exploring AGI Supremacy and Its Societal Impact
39:54 The Future of AI in Healthcare: A New Hope
43:31 AI’s Role in Psychology and Personal Well-being
54:17 Envisioning a Future with Personal Services AI
Futuristic 25
[00:00:00] Cameron: Welcome to the Futuristic Episode 25. We’re back. It’s the
[00:00:09] Cameron: 17th of May. Steve Sammartino got some technical issues this morning.
[00:00:13] Cameron: Sorry about that. Don’t know what’s going on. I think the AI is trying
[00:00:17] Cameron: to stop us from having this conversation. This is the conversation it doesn’t want you to hear.
[00:00:23] Steve: Sentient.
[00:00:24] Steve: Sentient.
[00:00:25] Cameron: Steve, um, tell me what’s been
[00:00:26] Cameron: going on in your corner of the world
[00:00:29] Cameron: since we last spoke.
[00:00:30] Steve: Yeah. Well, I, uh, I’ve been
[00:00:33] Steve: using GPT
[00:00:36] Steve: to influence my son. He won’t
[00:00:38] Steve: listen to me. So what I did the other day, we’re lying in bed and I
[00:00:41] Steve: was trying to tell him how social media isn’t as good as he
[00:00:45] Steve: thinks it is, certainly the wormholes that he is down, wasn’t interested in what I had to say. he
[00:00:51] Steve: said, what would you know?
[00:00:51] Steve: I said, well, why don’t we ask someone who knows
[00:00:53] Steve: everything, has a PhD in every single subject. And although he’s
[00:00:56] Steve: across ChatGPT and everything, he’s never really had a live conversation with
[00:01:00] Steve: the voice activated version. So I did that and I gave it to him. And, you know, I said, oh, my son really likes TikTok, but he goes down some terrible wormholes with his algorithms, and it’s, I’m not sure it’s making him smarter, and I asked him to ask a question back and forth, and he actually
[00:01:17] Steve: stopped and listened.
[00:01:18] Steve: It was almost like it had this
[00:01:20] Steve: Separation or a sense of authority, and he paid more
[00:01:23] Steve: attention to it. A little bit like, you know, when you, your clever uncle
[00:01:27] Steve: or someone that is not your dad, because who wants to listen to their dad? And it was a really
[00:01:32] Steve: interesting way to get some information across to him that he just wasn’t prepared to hear from me.
[00:01:39] Steve: And it was a really cool little
[00:01:41] Steve: thing, you know.
[00:01:42] Cameron: That is really cool. I like that. Well, the only thing that I’ve been doing any
[00:01:47] Cameron: different from what I normally do, which is coding and all that kind of crap, is, you know, I go into GPT often, and I have conversations with
[00:01:53] Cameron: it about subjects that I’m trying to unpick. What I’ve started doing is blogging those.
[00:01:59] Cameron: So if you go over to CameronReilly.com and have a look at some of my recent conversations, I just did one on American hypocrisy over
[00:02:06] Cameron: tariffs. How they’re always pushing
[00:02:08] Cameron: for free trade and they’re using the IMF and the World Bank to enforce free trade
[00:02:12] Cameron: and uh, no tariffs, no duties on other countries to open up their markets, but then they will apply tariffs
[00:02:19] Cameron: As and when they see fit to protect their domestic industries to punish, uh, China as they’ve been doing again recently under the Biden administration. I did a
[00:02:29] Cameron: post, uh, before that on, um, the conversation with
[00:02:34] Cameron: it about, uh, American hypocrisy over China, um, again, looking at world trade
[00:02:41] Cameron: disputes and, uh, complaining about China producing too much stuff! Like the U.S.
[00:02:48] Cameron: hasn’t been flooding the world with stuff for decades. Now China’s flooding the world with stuff. They’re like, you gotta stop it!
[00:02:54] Cameron: You’re producing too much
[00:02:55] Cameron: stuff! Too much clean energy! Stop, stop producing so many solar panels! You might make, you might clean up the world! Might do something about climate change! But anyway, the point is just, I’m just blogging these conversations because I think, okay, well I’m having
[00:03:09] Cameron: these conversations and uh, I should blog
[00:03:12] Cameron: them and share them.
[00:03:13] Cameron: I don’t know if anyone cares, probably not, but,
[00:03:15] Steve: know if anyone’s even done that. I mean, I’m just reading it now, and I agree, GPT had some very
[00:03:21] Steve: good insights on why they do, uh, have trade, uh, tariffs, um, embargoes, all those kind of things.
[00:03:31] Steve: It’s always been that way. And every economy
[00:03:33] Steve: is filled with contradictions, right? Uh, vested
[00:03:36] Steve: interests. And, and, and that’s the whole idea that, You know, pure capitalism has never existed, pure socialism, pure anything
[00:03:41] Steve: has never really existed. Um, but I like that idea that
[00:03:45] Steve: instead of having the conversation with yourself or talking about an interview, you’re just taking the screenshot And throwing it down there.
[00:03:54] Steve: It’s interesting. One interesting thing that I saw this week, a good friend of mine, I think you might know him now, Josh Rowe
[00:03:59] Steve: from Melbourne,
[00:04:00] Cameron: hmm,
[00:04:01] Steve: part of the early Twitter cohort back in
[00:04:04] Steve: And he, this week for the budget, he took all of the
[00:04:07] Steve: papers and the PDFs, and it might have been, you know, 3, 000 words of PDF documents that have been prepared
[00:04:13] Steve: for, for a time, and he put it on his,
[00:04:16] Steve: um, he’s got a a substack called AI at Every
[00:04:19] Steve: Scale, and
[00:04:21] Steve: what he did was upload all of the documents, give some parameters for GPT on what it was, what the documents are about, Um, uploaded to the brain, the
[00:04:30] Steve: documents that wouldn’t be in GPT
[00:04:32] Steve: given uh, you know, cut off
[00:04:35] Steve: time and said, okay, now you’re in a room
[00:04:37] Steve: discussing the various
[00:04:39] Steve: implications of the budget. In the room is Milton Friedman, in the room is John Maynard Keynes, in the room is
[00:04:46] Steve: Chalmers, uh, in the room
[00:04:48] Steve: is
[00:04:49] Steve: someone who’s paying a mortgage with three kids,
[00:04:52] Steve: and it was really brilliant, it was a long post, but it had the discussion points
[00:04:56] Steve: that each of them would say, and he created a mock round
[00:04:59] Steve: table using GPT, and I thought, you know, you’ve reminded me of this, what a clever way to interpret
[00:05:06] Steve: the tool for a new use case which hasn’t happened before, combining modern, uh, new recent, uh, data that’s
[00:05:13] Steve: come out with the budget, current people in society and their challenges, historical economic figures, and it kind of, not only
[00:05:20] Steve: was it brilliant and insightful, it just reminded me again, that everyone that
[00:05:25] Steve: says that oh, Every job is going to go away.
[00:05:28] Steve: it’s
[00:05:28] Steve: like, yeah, of course they are. And there’s not that many bison hunters anymore either, right? Whenever a job goes
[00:05:33] Steve: away, new ones come and take their place. And I know that we’ve got some stuff with Sam Altman that we’re going to talk about with
[00:05:39] Steve: that And I’ve got some views on that as well, but I love what
[00:05:42] Steve: you’ve done here, Cameron, it’s really smart.
[00:05:44] Cameron: hmm. Thanks, mate. Yeah, I just want to encourage people to use the tools in intelligent ways
[00:05:50] Cameron: and, um,
[00:05:51] Cameron: you know, have intelligent conversations
[00:05:53] Cameron: with them. Don’t just use them for, you know, checking the spelling of your emails or rewriting your emails. Use them to have intelligent conversations.
[00:06:01] Steve: Yeah.
[00:06:01] Cameron: Well, Steve, it’s been a huge
[00:06:03] Cameron: week.
[00:06:04] Cameron: I mean, for the last couple of weeks, you and I have been pinging each other late in the week. Are we going to do a show? There’s not much happened, but this week’s one of those weeks where a big thing happened,
[00:06:12] Cameron: so we had to The big thing, of course, is OpenAI
[00:06:15] Cameron: released version 4o of ChatGPT.
[00:06:21] Cameron: 4o, the o stands for, oh my god, I think we’ve arrived at
[00:06:26] Cameron: 4o.
[00:06:26] Steve: Does it? Cause I did watch, I’m sure someone has said what it stands for, but I actually don’t know what
[00:06:31] Steve: it
[00:06:31] Cameron: Stands for Omni officially, but I prefer to think of it as, Oh my God, I can’t believe how cool those demos were. Um, I, uh, was
[00:06:42] Cameron: driving my son Tyler to the airport. He’s back in LA
[00:06:45] Cameron: hanging out with Will Smith today. And, um, He and I,
[00:06:50] Steve: over. Sorry. No go. We are out. I refuse to
[00:06:53] Steve: watch or anyway,
[00:06:55] Cameron: and Martin Lawrence,
[00:06:56] Cameron: he’s at a bad boys thing. Anyway, um, he and I were talking about it
[00:07:00] Cameron: and going over the demos. So for people that haven’t seen or caught up with the news
[00:07:04] Cameron: yet, basically this new version that they’re rolling out, a couple of
[00:07:09] Cameron: big things. Number one is that they’re making it available
[00:07:12] Cameron: for free to everybody.
[00:07:13] Cameron: You don’t have to have a premium subscription to get ChatGPT 4o.
[00:07:19] Cameron: But if you’re a paid subscriber, you get more messages. You get
[00:07:23] Cameron: higher limits on how much you can use it over the course of a day for the premium users, the
[00:07:30] Cameron: super users. Secondly, the
[00:07:32] Cameron: other big thing was they’re introducing
[00:07:35] Cameron: a video model into it.
[00:07:37] Cameron: So not only, uh, Omni
[00:07:39] Cameron: being multimodal Omni, so not only can you interact with it using text and voice
[00:07:46] Cameron: And pictures, uh, images, now you can
[00:07:49] Cameron: interact with it with video, it’ll use the video camera on your phone, it can see around you, it can see you, and you can point it at things and go, tell me more
[00:07:59] Cameron: about this, what am I looking at, what’s going on. and
[00:08:02] Cameron: it will give you real time feedback. They’ve optimized it and
[00:08:07] Cameron: cut down the lag between question and answer to what
[00:08:09] Cameron: they say are human like response
[00:08:12] Cameron: levels. Um, and the demos
[00:08:16] Cameron: looked absolutely super cool. What did you think of the launch,
[00:08:20] Steve: that, yeah, I really liked the demo, like the idea I’m going on a date or I’ve got a job
[00:08:24] Steve: interview. How do I look? I mean, some of that stuff
[00:08:27] Steve: is, it’s, you know, it’s, I think it’s, It crosses one of the chasms that’s been lacking with the
[00:08:33] Steve: GPTs, and that is the lag on the
[00:08:36] Steve: data that they’ve been trained on, and having that real
[00:08:39] Steve: time, we had it with images, but having it with video where it can create interactions based on what
[00:08:43] Steve: it’s been trained on previously, it creates a new layer of value, it
[00:08:47] Steve: really does.
[00:08:49] Cameron: Yeah, I think there’s going to be a whole bunch of applications for
[00:08:51] Cameron: this that we haven’t even thought of yet. I mean, one of the demo videos, I
[00:08:56] Cameron: saw somebody was, I don’t know, in Europe or something, and they’re pointing it at an
[00:09:01] Cameron: old palace or something, or a flag. What’s this flag mean? What’s this palace?
[00:09:05] Cameron: Who built this? When was it built? You could have done that with photos. Videos aren’t really
[00:09:09] Cameron: necessary. But, you know, I saw Sal
[00:09:11] Cameron: Khan from the Khan Academy with his son doing a demo where they had
[00:09:16] Cameron: GPT running on an iPad alongside a drawing app, not sure if it was Khan Academy or some other drawing
[00:09:22] Cameron: app, and Sal said to GPT,
[00:09:25] Cameron: my son’s doing geometry, he’s trying to figure out the angles in this triangle, Don’t give him the
[00:09:32] Cameron: answers, but tell him how to work out the answers, and then they could, they were, they must have been sharing the screen with GPT, and they would draw, they were touching
[00:09:41] Cameron: the, you know, the hypotenuse, and it was saying, okay, so that’s the hypotenuse, and here’s how, you know, all three angles add up to 180 degrees, and you can calculate this one by that. It was, um, it was an interesting use case, um, but,
[00:09:56] Cameron: like, you know, it’s just another evolution.
[00:09:59] Cameron: You know, I’ve been saying to people recently,
[00:10:02] Cameron: you know, I think it was 1976
[00:10:04] Cameron: that the first personal computer came out by Apple, 75, 76?
[00:10:11] Cameron: We are almost at the 50th anniversary of the
[00:10:13] Cameron: first personal computer. We’ve gone, in 50 years, we’ve gone from the invention of the personal
[00:10:20] Cameron: computer to having
[00:10:22] Cameron: an AI device in our back pockets.
[00:10:24] Cameron: That’s mind blowing to me. 50 years. In our lifetime.
[00:10:30] Cameron: A lifespan. We’ve gone from, hey, let’s invent a computer,
[00:10:33] Cameron: to
[00:10:34] Cameron: let’s put AI in your back pocket.
[00:10:36] Steve: can do, it could count a few things, and maybe, yeah, I mean, we just, you’ve got to remember
[00:10:41] Steve: how rudimentary the early
[00:10:43] Cameron: Didn’t even have a monitor
[00:10:45] Cameron: at
[00:10:45] Steve: Yeah, right. The first one is X, right? They didn’t have monitors. You’re right about that. And
[00:10:48] Steve: punch cards and Well, the other thing too,
[00:10:52] Steve: and I did a presentation to Mercedes Benz trucks, Daimler, um, for the launch of their electric vehicles last week.
[00:11:00] Steve: I did three in a
[00:11:00] Steve: row. as part of it, I studied a
[00:11:03] Steve: fair bit of The industrialization of, uh,
[00:11:08] Steve: you know, what happened in the industrial era with, with transport and how incremental the
[00:11:13] Steve: improvements were. That was so incremental. Yes, the internal combustion engine
[00:11:18] Steve: has become more efficient, but but really in real terms,
[00:11:23] Steve: not that much more efficient in a hundred years, like almost compared to what computers do, almost
[00:11:29] Steve: nothing.
[00:11:31] Steve: Almost like no advance if you compare the 50 years of computational technology and how much it’s improved exponentially compared to industrial technology. And I guess it just comes down to simple things like this, and this is what I said to everyone on stage. I said a barrel of oil has the same number
[00:11:46] Steve: of kilojoules it’s always had.
[00:11:48] Steve: That’s
[00:11:49] Steve: it. The joules of energy just does not change. A lump of coal, a lump of coal is the same. And computational
[00:11:55] Steve: technology is just fundamentally in the way that it works. It’s just not the same species. And so, we see, and we see that now because energy is now an exponential technology as well
[00:12:06] Steve: because it’s based on the same stuff.
[00:12:07] Steve: And if you have enough
[00:12:08] Steve: information, you can create energy from information because energy and mass are the same thing, right? And it, it comes back to that whole
[00:12:14] Steve: full circle.
[00:12:16] Cameron: yeah. Yeah, it’s been an incredible
[00:12:20] Cameron: half century of progress. I was even thinking about it in terms of electricity. Edison and Tesla figuring out how we could
[00:12:27] Cameron: use electricity, uh, a little over a hundred years
[00:12:31] Cameron: ago. We’ve gone from
[00:12:35] Cameron: rolling out electric lights
[00:12:35] Cameron: in the first cities a hundred years ago to an AI in your back pocket. it’s like in terms of the time span of human history, it’s just an insane level of progress. Anyway.
[00:12:51] Cameron: Speaking of progress, oh, I did use ChatGPT to help me diagnose what was wrong with my washing machine,
[00:12:57] Cameron: um, the other day, and took it apart and was showing it photos of what I
[00:13:02] Cameron: could see inside, and it was telling me what to look at, and I realized I needed a new washing machine, so that. helped.
[00:13:07] Cameron: There’s a new preprint out, Cameron Jones, uh, no relation, from the Department of Cognitive Science at UC San
[00:13:14] Cameron: Diego,
[00:13:14] Steve: People with the same first name are usually related to each other. A lot of people don’t realize
[00:13:18] Steve: that
[00:13:19] Steve: Cameron.
[00:13:19] Cameron: yeah, it’s in the DNA, um,
[00:13:21] Cameron: he, he, he tweeted this the other day, he and a colleague, Benjamin Bergen, We
[00:13:27] Cameron: have done a study on GPT-4 and the Turing test. We recruited 500 participants who were assigned to one of five conditions: human witnesses, or interrogators
[00:13:42] Cameron: for one of four AI witness types: GPT-4, human, or ELIZA.
[00:13:51] Cameron: and basically they gave them a selection of questions And tried to figure, you know, ask the humans
[00:13:57] Cameron: to determine if the responses were coming from a human or from one of the AIs.
[00:14:05] Cameron: People judged GPT 4 to be human 54 percent of the time,
[00:14:11] Cameron: compared to 22 percent for ELIZA and 67 percent for humans. The implication is that people are
[00:14:18] Cameron: at chance in determining that GPT 4 is an AI, even though the study is
[00:14:23] Cameron: powerful enough to detect differences from 50 percent accuracy. So, I mean, we’ve talked a bit on the show about, um, The Turing test
[00:14:36] Cameron: And uh, how we’ve kind of blown past it. I think we all sort of assumed
[00:14:41] Cameron: that we had blown past the Turing test. Um,
[00:14:45] Cameron: one day it seemed
[00:14:46] Cameron: relevant. And, uh, the next day we were like, Oh, remember that? Wasn’t that
[00:14:50] Cameron: cute?
[00:14:51] Steve: Yeah, well, again, doing a chatbot Turing test, I don’t know how much
[00:14:57] Steve: relevance there is to that. And one of the things that I noticed, and I actually wasn’t that impressed with the study, to be honest, when I had a look at, yeah, I was
[00:15:04] Steve: really unimpressed, um, but, but the limited amounts that I saw on what the conversations were, and,
[00:15:09] Steve: and, and this is why, this is not based on the researchers, this is actually more
[00:15:14] Steve: based on the reality of human behavior.
[00:15:17] Steve: Because what I noticed was the
[00:15:19] Steve: questions that the humans asked. weren’t really valuable to understand whether or not, uh, it
[00:15:27] Steve: was a computer. Like you got to be careful that you don’t ask
[00:15:30] Steve: questions that are too intelligent. There’s a real, there’s a real balance in trying to decipher whether or not the AI is a human.
[00:15:39] Steve: And
[00:15:39] Steve: I just feel like the average person doesn’t have the awareness of computational systems to ask valuable questions to determine
[00:15:47] Steve: whether or not it’s an AI or a human. That was my takeaway on it. I was really
[00:15:54] Steve: unimpressed by it. And 54 percent isn’t really a huge ratio. Yes, it’s hit or miss,
[00:15:59] Steve: you might get it.
[00:16:00] Steve: I didn’t think that was huge. For me, for anything to be significant, it’s got to be a
[00:16:04] Steve: 95 percent in my view. Like, you’ve got to get it right 95 percent of the time. Otherwise, it’s kind of hit or miss, there’s a bit of luck. I wasn’t that impressed, I really
[00:16:14] Steve: wasn’t.
[00:16:16] Cameron: Oh, wow.
[00:16:17] Steve: Like, like, like, for example, okay, what is the, one of the questions was, what is the
[00:16:22] Steve: capital of Ethiopia?
[00:16:23] Steve: You know, 90 percent of people don’t know the answer to right? Is that a good or a bad question?
[00:16:28] Steve: the questions that anyone ought to be asking to
[00:16:31] Steve: determine whether or not it’s an AI is questions That would be
[00:16:36] Steve: physical and
[00:16:37] Steve: personal, not general. Like we already know that these AIs have a far greater general
[00:16:43] Steve: intelligence than, um,
[00:16:46] Steve: phone going there,
[00:16:47] Steve: a far greater
[00:16:47] Steve: intelligence than the average person.
[00:16:49] Steve: They’ve got a PhD in every
[00:16:50] Steve: single subject. So the questions that really matter are the mundane things.
[00:16:53] Steve: What did you have for breakfast? What was the weather like? How did you get to work? It’s all of that stuff. The really mundane, humane
[00:17:00] Steve: stuff. That gives us an indication of the nuance rather than the intellectual stuff.
[00:17:05] Steve: Anyway, that’s just, you know, look, I am
[00:17:08] Steve: falling down. I’ve got my Falling Down Michael Douglas glasses on today. So there’s a lot of
[00:17:11] Steve: that attitude that you’re going to get,
[00:17:15] Cameron: Well, okay, so let’s go back a few steps. What’s, uh, for people who don’t know ELIZA, ELIZA’s an early
[00:17:19] Cameron: natural language processing engine that was built in the 60s, uh, so that’s why it performed quite
[00:17:26] Cameron: badly. But, you know, I think that there’s a couple of important things here. Number one is that for
[00:17:32] Cameron: 50 years we’ve talked about the Turing test as being one of the defining metrics that we would have to determine whether or
[00:17:40] Cameron: not we had an artificial intelligence that could pass as a human.
[00:17:44] Cameron: The fact that it was fooling in a way, in a manner of speaking, the
[00:17:50] Cameron: human interrogators more than 50 percent of the time means it passes the test.
[00:17:55] Cameron: Uh,
[00:17:55] Steve: agree on, right? We agree on that.
[00:17:57] Steve: Yeah. I mean, we agree that it can do that. Anyway, keep going.
[00:18:01] Cameron: so the second thing in terms of the sort of questions that it’s asking, you, you don’t want to make it
[00:18:06] Cameron: too hard. Like, what did I have for breakfast? What did you have
[00:18:09] Cameron: for breakfast this morning? I think you’re, you’re setting a, um,
[00:18:14] Cameron: a hurdle there that, for the current state of AIs, is probably unrealistic, um, because it’s gonna say, I didn’t eat breakfast, I’m an
[00:18:23] Cameron: AI, you know, generally speaking, unless you’re telling it
[00:18:26] Steve: That’s the point. the point of the Turing test
[00:18:28] Steve: is that the, the AI should
[00:18:31] Steve: be able to adapt to the
[00:18:32] Steve: situation, to be able to fool the other person at the other end. That’s the point of the test.
[00:18:40] Cameron: Um, yeah, I’m not sure,
[00:18:43] Steve: being a general intelligence test.
[00:18:44] Steve: ’cause we already know that AI is
[00:18:46] Steve: A GPT knows
[00:18:48] Steve: more on a broad range of topics than anyone in the world. Not
[00:18:53] Steve: even close
[00:18:55] Cameron: yes. But I
[00:18:57] Cameron: don’t think that’s necessarily what we’re trying to do with the Turing test. And I don’t think you want to bias it either
[00:19:02] Cameron: way. You don’t want to bias it in a way that it’s going to be too hard for an AI to pass it. You don’t want to bias it in a way it’s going to be too easy for it to You need some sort of a test that’s reasonable, has a reasonable list of questions that
[00:19:18] Cameron: both a machine and a human should be able to answer. And then it’s up to the
[00:19:22] Cameron: human interrogator to determine whether or not
[00:19:24] Steve: the direction I
[00:19:25] Cameron: It’s, it’s a
[00:19:26] Cameron: human or a machine.
[00:19:28] Cameron: Anyway, I don’t want to
[00:19:29] Cameron: spend too much time on this, but I think the fact that somebody has actually sat down and tried to do
[00:19:35] Cameron: a scientific experiment to determine whether or not it
[00:19:40] Cameron: actually can pass the Turing test, and in their
[00:19:44] Cameron: uh, conclusions, it has. And it’s interesting, and there’s a whole bunch of implications
[00:19:50] Cameron: for that like as
[00:19:51] Cameron: they point out in the
[00:19:52] Cameron: study, um, can humans tell now when you’re chatting to someone online, whether or
[00:19:59] Cameron: not you’re chatting to a human or an AI.
[00:20:02] Cameron: At least 54 percent of the time it seems that humans won’t be able to tell that they’re chatting to an AI.
[00:20:07] Steve: get. Even some of the bots on
[00:20:08] Steve: websites that you talk to when you’re trying to solve a little problem these days are pretty damn good. But I’d always ask to say, is
[00:20:12] Steve: this a person? And, and I would want that, not because I care
[00:20:15] Steve: if it’s an ai, it’s just to give me a context of where to lead the conversation to get the result that I’m after.
[00:20:20] Cameron: Yeah, right, sure. And I would expect a higher level of service from an AI
[00:20:25] Cameron: than I would from a
[00:20:25] Steve: You always exactly like ai. Thank God for that. Right. Now
[00:20:28] Cameron: Yeah.
[00:20:30] Cameron: Yeah. Uh, look, I know you’ve got to go in five minutes because you’ve got a radio show to
[00:20:36] Cameron: do.
[00:20:37] Steve: doing a radio show with Old World Media, Cameron,
[00:20:41] Steve: 3AW,
[00:20:42] Cameron: Good old 3AW.
[00:20:43] Steve: all our global listeners.
[00:20:44] Cameron: they’re still
[00:20:44] Cameron: around,
[00:20:45] Steve: can’t either.
[00:20:47] Cameron: Couple of quick things in the news, um, Ilya Sutskever, the Chief Scientist at OpenAI, officially left this
[00:20:54] Cameron: week. Uh, Sam and Ilya independently announced his
[00:20:58] Cameron: leaving. Not much surprise
[00:21:00] Cameron: in that, um, after Ilya sided against Sam
[00:21:04] Cameron: in the board kerfuffle late
[00:21:06] Cameron: last year, he’s sort of been, um, hiding for the last six months, and
[00:21:12] Cameron: he’s finally, uh, popped his head up to say that he’s leaving to
[00:21:14] Cameron: do some sort of project that’s personal and special, hasn’t said what, but, um, that’s
[00:21:20] Cameron: a sad moment, uh, I think for a lot of
[00:21:23] Cameron: us that have been following this story for
[00:21:24] Cameron: a while, that, the guy who is probably one of the key people behind the AI revolution
[00:21:32] Cameron: has had to leave, uh, the place where he worked for 10 years to build it.
[00:21:37] Steve: So, just trying to think back to when we had the kerfuffle last year when
[00:21:42] Steve: Altman was
[00:21:43] Steve: abruptly sacked, um, where Ilya
[00:21:47] Steve: was
[00:21:48] Steve: on that, and now him leaving
[00:21:51] Steve: about six months later. Um,
[00:21:54] Cameron: Ilya was a, Ilya was a board member, and he voted for
[00:21:58] Cameron: Sam’s, uh, execution.
[00:22:02] Steve: yeah, right. Okay. And, and,
[00:22:04] Steve: and do you think that he was pushed? There was no indication of why, how
[00:22:06] Steve: other than the mutual love that they had for each other on, on their tweets
[00:22:10] Steve: announcing, uh, his exit, he’s going to work
[00:22:13] Steve: on something personal apparently. So, um,
[00:22:18] Cameron: Yeah. Like I think,
[00:22:19] Steve: bring other interests
[00:22:21] Cameron: a major, a major
[00:22:22] Cameron: breach of, um, trust obviously between the two of them, you know, the internet is still full of what did Ilya see? You know, the, the conclusion that a lot of
[00:22:31] Cameron: conspiracy theorists on Reddit have had in the last six months is that
[00:22:35] Cameron: for Ilya to do that he must have
[00:22:37] Cameron: been seriously concerned about what OpenAI was capable of.
[00:22:42] Cameron: Uh, what ChatGPT or whatever the
[00:22:43] Cameron: underlying technology is, what it was capable of. And
[00:22:48] Cameron: he was concerned that Sam wasn’t doing enough to take it seriously. We don’t know if there’s
[00:22:54] Cameron: any truth to that but it was a
[00:22:56] Cameron: very strange time. And, um, we, neither
[00:22:59] Cameron: of them
[00:22:59] Cameron: have talked about what really happened. We still don’t know, none of the
[00:23:03] Cameron: board members at the time have really spoken frankly about it.
[00:23:06] Cameron: Sam hasn’t, it’s been six months. I wonder if we’ll ever know. Anyway, he’s
[00:23:12] Cameron: left. Um, one of the other OpenAI founders, a guy by the name of, um, uh, John Schulman, I
[00:23:22] Cameron: think he’s in charge of, uh, post-training ChatGPT these days, was interviewed on
[00:23:30] Cameron: Dwarkesh Patel’s podcast this week. And at one point said he thinks
[00:23:35] Cameron: we’re two to three years
[00:23:36] Cameron: away. Two to three years. I saw Sam on another podcast I was watching this morning where he said, um, it won’t be the next model,
[00:23:51] Cameron: but it could be one or two models after that,
[00:23:55] Cameron: So again, that’s sort
[00:23:56] Cameron: of putting it in the two-to-three-year timeframe, uh, depending on the, the time gap between models, the iteration
[00:24:02] Cameron: time gap.
[00:24:03] Cameron: But like we’ve got like a last time we did a show, we said a lot of experts were saying five years. Now they’re starting to say two to three years. And I know we haven’t done a show for a couple of weeks, but that’s a big time compression in expectations.
[00:24:18] Steve: a month while the Kurzweil number
[00:24:20] Steve: of
[00:24:25] Steve: 2029 seems to be within most people’s scope now.
[00:24:25] Steve: Yeah, the various
[00:24:27] Steve: definitions of AGI
[00:24:28] Steve: are,
[00:24:30] Steve: I mean, two things came up for me that I thought about. The first one is, do we have such a thing as AGI supremacy? I know that with
[00:24:38] Steve: quantum computing, we’ve spoken
[00:24:40] Steve: about the first person to truly nail a functional quantum
[00:24:44] Steve: computer kind of wins because there can only be one which supplants everything else.
[00:24:48] Steve: And I just wonder if AGI
[00:24:51] Steve: supremacy is a thing. Because if it becomes that intelligent, then it can whitewash over all the other competitors and not give them a chance to catch up. I don’t know.
[00:25:01] Steve: And the other one that was interesting for me as well
[00:25:04] Steve: is, you know, they’re very capital and energy
[00:25:06] Steve: intensive, uh, so what happens
[00:25:10] Steve: with that?
[00:25:11] Steve: Do the AGIs
[00:25:12] Steve: stay within the hands of a conglomerate, a tech conglomerate, or big tech, or a couple of firms? Or is it
[00:25:19] Steve: something that gets released to the general public? Like if this AGI happens, do we get open
[00:25:25] Steve: source versions of this software where we all get access to it?
[00:25:29] Steve: Or do we end up,
[00:25:31] Steve: you know, as serfs to AGI
[00:25:33] Steve: supremacy? Over to you, Mr. Reilly, you geopolitical warrior.
[00:25:38] Cameron: Yeah, I’ve been thinking a lot about that Steve because obviously if we have AGI but it’s in the hands of
[00:25:46] Cameron: one or even two or more capitalist
[00:25:50] Cameron: corporations that are going to enslave it and have it serve you an ad every time it answers a question, that is not going to be the optimal result that we’re looking for here.
[00:26:02] Cameron: That’s not the utopian result that we want. The opportunity for them to exist, being as resource intensive as they
[00:26:11] Cameron: seem to be in an open source or some sort of a freeware
[00:26:16] Cameron: model really depends, I think, a lot
[00:26:19] Cameron: on where we end up with optimization like this. I still get the sense listening to Sam and,
[00:26:26] Cameron: um, Hassabis and Elon and these guys, that the sort of levels of compute that they are throwing at them at the moment is partly because we don’t really
[00:26:40] Cameron: understand how these things work.
[00:26:43] Cameron: We just have learned that if you throw way more compute at
[00:26:46] Cameron: it, they can do way more amazing
[00:26:48] Cameron: things. But, oh, there’s my alarm telling me that I
[00:26:53] Cameron: have to let you go in a minute. Uh, but if we, if we figure out how to, you know, like the 80/20 rule, if we figure out that actually you can get most of the functioning that you need from an AI with 20 percent of the compute resources, um, we might end up with a situation where you can run one locally on your iPhone that will do nearly everything that you need it to do. But if you want something to cure cancer, you may need to throw a lot more compute at it, right?
[00:27:21] Steve: But do you really need that much compute too? Because most of these models are so general and trained on
[00:27:26] Steve: so many things. Can you have a thinned up version of what it’s trained
[00:27:30] Steve: on? You know, so,
[00:27:32] Steve: for example, the, the iPhone, you know, to have a personal AI that can really navigate your phone and everything that
[00:27:38] Steve: you know, and give you what you need, and only tap in to the wider, um, large language models, surely wouldn’t require
[00:27:45] Steve: that much compute, or even, like you say, a research, medical research one could be thinner.
[00:27:50] Steve: Um, everything at the moment is so
[00:27:52] Steve: general, and I know
[00:27:54] Steve: that we’ve been in a general purpose computing paradigm for a long time, but what are the, I don’t know, the GPTs, the smaller versions that are like the apps of the smartphone era, that are thinner and just have a set functionality, which doesn’t require the same level of compute?
[00:28:11] Steve: Because, you know, one of the interesting ideas is that in some ways, big tech has now become big energy, and Scott Galloway, Professor Scott Galloway, did a really interesting post on that last week in his No Mercy, No Malice newsletter. And he’s a marketing professor with a tech bent. I don’t know if you followed his work much.
[00:28:29] Steve: Um, yeah, pretty smart guy. And his kind of idea was, well, it was all about access to the resource. The
[00:28:36] Steve: resource in the industrial era was
[00:28:39] Steve: energy and oil and electricity and, and, and then now it’s kind of, it’s compute. And,
[00:28:45] Steve: and they become quasi energy companies and compute and access to compute
[00:28:51] Steve: is really what changes things.
[00:28:53] Steve: And he drew a really interesting
[00:28:54] Steve: analogy between those two and that that’s the energy of
[00:28:58] Steve: the modern era. And he again referenced the, the Einstein idea that, you know, mass and
[00:29:03] Steve: energy are the same thing as well. And,
[00:29:05] Cameron: so, Steve’s back from his 3AW interview, let’s get on with the rest of the show. So, Steve, um, couple of other quick things in the news. Sam Altman’s done a couple of interviews recently, um, he’s agreeing with us that we’re not worried enough about how AI will impact the economy. He keeps saying that.
[00:29:27] Cameron: People aren’t talking about it enough and, and I agree with him. I’m not worried about it taking people’s jobs and that kind of thing that you often hear talked about, although that will happen. It’ll take jobs, it’ll create jobs. You know, as I’ve said before, the thing that, that boggles my mind is I don’t think people realize what’s about to happen.
[00:29:51] Cameron: And they’re still, like, businesses and, and governments are just going about their knitting. Yeah, they’re looking at AI and they’re thinking about it and they’re playing with it around the edges, but they’re not really thinking about what is going to be the impact of a massive explosion of available intelligence in the next couple of years. What’s going to happen to their business, to their industry, to society in general? We’re just not talking about it enough. All of the conversation I hear in the public spaces about AI is all about either p(doom)-related stuff, or people are going, oh, they’re going to take jobs, and we need a UBI, and that kind of stuff.
[00:30:34] Cameron: But I don’t think these are the issues that we need to be talking about. How do we start to gear up our society for an explosion of intelligence?
[00:30:44] Steve: Intelligence explosion, a Cambrian explosion of intelligence. We’re going to have to come up with a name for that. Well, I agree with you. Uh, I’m not worried about p(doom), uh, we’ve discussed that. I’m, I’m certainly not worried about it taking jobs. Yes, AI will supplant certain jobs, but as we’ve always said, you know, there’s not that many bison hunters left. Jobs will go out and jobs will come in.
[00:31:07] Steve: And there’s a zillion examples of jobs that didn’t exist 20 years ago, and a lot that are gone, right? And that happens slower than we think. A company just doesn’t turn up on a Monday and go, AI solved every problem, see you later, pink slips for everyone. It just doesn’t happen that quick. You know, one of the reasons it doesn’t happen is, guess what managers love?
[00:31:26] Steve: As many staff under their control as they can have. That’s what they like. I like that. I’m more interested in the societal implications, in what happens if people only have relationships with AIs. What is the algorithmic social media equivalent of people going into wormholes with AIs? No, I’m not so worried about the computational intelligence.
[00:31:49] Steve: I’m worried about the wormholes we get into in a world where we can just interact with AIs. Like, do we become like a borg, where we’re all just tapped into this system? We’re almost just plugged into it and oblivious to what’s happening in the world around us. They’re the kind of implications that I think about.
[00:32:08] Steve: The economic
[00:32:09] Steve: and job ones don’t concern me at all.
[00:32:13] Steve: Not even one bit.
[00:32:15] Cameron: I mean, I’m concerned for my own job. I’m concerned for the jobs of my children. I’m not concerned for my wife. I don’t think it’s going to replace violin teachers quickly, but, um, you know, I do think that people will probably listen to fewer podcasts. They’ll probably, um, you know, pay less attention to social media influencers, which is what my boys, my older boys are involved in.
[00:32:41] Cameron: But, um, you know, I’ve, you know, like everything else, man, I’m like, okay, I’m just going to have to, I’ve had like 27 career changes already, I’m just going to have to have another career change, like that doesn’t
[00:32:51] Steve: that’s actually the point.
[00:32:52] Cameron: much.
[00:32:53] Steve: we need to teach kids the ability to adapt. Not teach them for a job. Like, you know, in the past, your, your world used to be one where it’s, you have these stone foundations in your life is this stone building that you build.
[00:33:05] Steve: And yeah, your name is Carpenter or Banks or Smith, uh, you know, whatever job you do becomes your name. Mm. Well, in the future, we’re not going to have these stone foundations; our life becomes like a tent that’s mobile, and you have to wrap it up and move on to the next place and then put it down. You know, we have this mobility in our life, and the ability to adapt is really important again, and the speed with which you do it. And yeah, my job might change, or the way that I do work. Maybe humanoid robots will be on stage, giving more corporate advice than a human can, I don’t know. Why, why are people gonna pay a human to predict the future when they can ask an AI to predict it? Well, I mean, the one reason that you would do it is those edge cases, because they’re nuance and probability based engines. I mean, you can ask them to go, give me something that’s a low probability event and let’s scenario
[00:33:58] Steve: plan around it. You can do that. You can do that right away. Um, may not be as entertaining as you are on stage. And that’s the only thing, I think. It may be more entertaining than I am on stage, actually, maybe more, who knows? Yeah. But anyway,
[00:34:15] Cameron: Sam says GPT 4 didn’t
[00:34:17] Cameron: have this huge detectable impact on the economy and so people were kind of like, oh well we were too worried about that and that’s not a problem.
[00:34:25] Cameron: I have a fear that we just won’t take that one seriously enough going forward, and it’s a massive, massive issue. Obviously, he is expecting and predicting that GPT 5, or whatever they call it, the next model, is going to be a massive step change in terms of reliability and intelligence compared to GPT 4, um, and, you know, he, he is concerned.
[00:34:56] Cameron: And he is talking about it eliminating many jobs, and that we’re not, uh, thinking hard enough about, uh, how to put things into place, not to prevent that, but to prevent the societal collapse that could come from that. But, you know, I think it’s just one of many factors that we need to think about, and we’re going to talk about one of those a little bit later on in the show.
[00:35:19] Steve: Let’s talk about the economic impact. The only reason that you would employ, um, AIs or AI agents, which it feels like is where it’s going to go, agents who can work towards objectives rather than, uh, discrete tasks.
[00:35:35] Steve: Um, the only reason you would employ those, and you use the word employ in inverted commas, is if it creates efficiency, which in a corporate setting is reduced costs. If it reduces cost, unless you have monopolies, reduced costs create lower cost products. That frees up money, which goes to other places where there’s more margin, and then the people follow that margin.
[00:36:00] Steve: That’s kind of the economic doctrine: it sort of moves sideways into other places. You know, and the example I use often on stage is, what happened to the $30 that got spent on a CD for 10 songs? What happened to that $30? Well, the economy didn’t decline by that. Yes, the music industry declined, but that moved, that went into data, it went into smartphones, it went into Spotify, it went into YouTube Premium.
[00:36:23] Steve: It just changed places. And then the staff that were in that industry changed places too, into those other support realms. This happens again with AI. It always
[00:36:36] Steve: happens. Remembering again that 90% of people worked in agriculture before the Industrial Revolution.
[00:36:43] Cameron: But I think Sam’s concern is that
[00:36:44] Cameron: there will be a very fast
[00:36:49] Cameron: tectonic shift. In the economy, where we have AI that is good enough to quickly, and it’s trusted enough, to quickly replace 20, 30, 40 percent of the jobs of knowledge workers. But of course, as people always have to realize, if people don’t have jobs, then they’re not earning money.
[00:37:13] Cameron: If they’re not earning money, they’re not spending money. If they’re not spending money, yeah, businesses don’t survive. If people aren’t earning money, they’re not paying taxes. If they’re not paying taxes, governments can’t run infrastructure, pay for infrastructure. So it’s not just about people losing jobs.
[00:37:32] Cameron: People having jobs and having spendable cash is what keeps the economy and governments functioning.
[00:37:40] Steve: All of it disappears if you don’t have it. It’s all a function of that spending spiral. I mean, how does the economy get here? A lot of it’s based on debt and expectations. Yeah, the way a house or a building or an investment gets made is the promise that you will make more money later, or you wouldn’t invest in it.
[00:37:58] Steve: Most of it’s just imagined, like money is just this imagined, uh, thing that we all just buy into, that there’ll be more tomorrow than there is today. Um, Yuval Harari talks about this a lot. He talks about the idea that money is the greatest invention of all time because we just all collectively buy into it.
[00:38:19] Steve: And the idea that technology will just take away money, they’re not the same thing. Like, we have this viewpoint that technology replaces jobs, and because it replaces jobs, it replaces money. It doesn’t. Yes, they work in concert with each other, but money is a separate organism that works on its own.
[00:38:40] Steve: And yes, technology leads to money and tasks, whether they’re automated or done by humans, but the money evolves and moves across to where the highest value thing is. And because technology reduces things and makes them lower value, they still create value, but they cost less, so that money changes as well.
[00:38:57] Steve: And people really need to understand that. I think, you know, if, if anyone really truly wanted to understand how money works and the collective myth that we buy into, I would highly recommend reading Harari’s bit
[00:39:08] Steve: in Sapiens where he talks about the myth of money.
[00:39:13] Cameron: The bottom line is, uh, technology, I mean, the AI stuff is going to change work. It’s going to change society, and it’s going to be way bigger and happen way faster than I think most people realize. You know, I feel like, uh, people still seem to think this either isn’t going to have a huge impact or that it’s still decades away, as opposed to two to three years away, uh, which isn’t a lot of time.
[00:39:45] Cameron: I want to just, uh, move on to the Futurist Forecast, Steve. Oh, one last story I wanted to talk about, uh, before we do that. Google’s medical AI destroys GPT’s benchmark and outperforms doctors. That was in newatlas.com. Google Research and Google’s AI research lab DeepMind have detailed the impressive reach of Med-Gemini, a family of advanced AI models specialized in medicine.
[00:40:12] Cameron: It’s a huge advancement in clinical diagnostics with massive real-world potential. So, just talking about how they’ve been building dedicated medical AIs. They’ve put out a 58-page paper, I haven’t read the whole thing, but I’ve read the synopsis of it. And it’s basically helping doctors to quickly assess
[00:40:39] Cameron: uh, a patient’s, uh, symptoms and come up with, uh, you know, recommendations for things to look at. And the results have been pretty impressive. Basically the doctors have a conversation with Med-Gemini. Here’s an example. It says, in a test of real-world usefulness, Med-Gemini was asked about an itchy skin lump by a patient user. After asking for an image,
[00:41:07] Cameron: the model asked appropriate follow-up questions and correctly diagnosed the rare lesion, recommending what the user would do next. Med-Gemini was also asked to interpret a chest x-ray for a physician while they were waiting for a formal radiologist’s report, and to formulate a plain-English version of the report that could be provided to the patient.
[00:41:28] Cameron: The multimodal conversation capabilities of Med-Gemini-M 1.5 are promising, given they are attained without any specific medical dialogue fine-tuning, the researchers said. Such capabilities allow for seamless and natural interactions between people, clinicians, and AI. What do you think about all this?
[00:41:49] Steve: you can feed it so much information and it can recall that. And of course, it’s going to do it better than a human’s memory and capacity to learn.
[00:41:59] Steve: And the breadth of medical knowledge is so vast. That I’m not surprised, but you see, this is one of the things that gives me hope. I mean, one of the things that we don’t have, and we’ve been struggling, you know, my son’s had some illnesses that you and I have spoken about off air that getting a pediatrician is really difficult.
[00:42:16] Steve: There’s like six-month waiting lists to get on waiting lists, right? For something like that. This gives me hope. And I wonder if the future of medical science, or doctors, is that you have someone who is an AI medical expert, not a doctor per se, but who works with you with the AI to help diagnose you and put you in the right direction and circumvent some of that process.
[00:42:40] Steve: And we all use, you know, Dr. Google as well now when we get ill. But this, this gives me hope that it can give us resources, because there just isn’t enough time and people out there to study medicine. This is the exciting stuff. Okay, we’re going to have a serious conversation within the medical fraternity and healthcare:
[00:42:59] Steve: How do we work with the AIs to have better medical, medical outcomes for more people, more frequently around the globe?
[00:43:05] Steve: This is, this is such good news. This is close to my heart, right? Yeah. And, you know, this is one of the things that I think we’re not thinking enough about: what happens when we have a million more doctors available, a million more scientists, a million more psychologists? And that’s the last story I wanted to drill down on in the Futurist Forecast.
[00:43:31] Cameron: I read this study a week or two ago. Um, it’s based on a study that was done in Saudi Arabia. where they took about 150, 180 students of counselling psychology.
[00:43:45] Steve: And cut them all up with
[00:43:46] Steve: bone saws. They got the journalist, they put them into a room, and they, they really... Is that?
[00:43:52] Cameron: to the Saudi regime test. They did that?
[00:43:55] Cameron: They’re the ones that passed the chainsaw test.
[00:43:59] Steve: Once it made it through.
[00:44:01] Cameron: Oh, we should not joke. It’s
[00:44:03] Cameron: brutal stuff. Um, 180 students of counseling psychology from the bachelor’s and doctoral stages at King Khalid University were selected for this study. So basically what they did, they were testing, they were testing social intelligence, um, so basically the ability to understand the feelings, emotions, and needs of people during the counseling process.
[00:44:27] Cameron: And, um, you know, this is, this is important and I think one of the things, when I get into conversations at parties about AI, people always say, well, you know, AI,
[00:44:38] Steve: Must be a real hoot at these parties, Cam. Everyone’s like, oh, I need the bathroom, I’ll be back
[00:44:43] Cameron: usually it’s
[00:44:44] Cameron: my wife coming up and, like, shutting me down: okay, I think it’s time for us to go now. You know, people think, oh, well, they’ll never, an AI will never be able to really understand the emotions or the feelings of a human, because an AI doesn’t have emotions, it doesn’t have feelings. So, anyway, these researchers constructed this test, which was, they created a bunch of questions that a new patient might ask, or things that they might say, and then they got the AI and they got the human psychology students to respond to those things that the patients were saying. Bottom line is that the AI won, particularly GPT 4.
[00:45:35] Cameron: There was a number of AIs and they performed at different levels, but GPT 4 beat 100 percent of the, of the psychology students. And they’re not just bachelor students, but also, as I said, PhD students. Now, these are students, they’re not people that have been practicing for 10, 20, 30 years, but, um, these are the results.
[00:45:57] Cameron: For ChatGPT 4, the score on the SI scale was 59, exceeding 100 percent of specialists, whether at the doctoral or the bachelor’s level. Bing, whose score on the SI scale was 48, outperformed 50 percent of doctoral specialists. Which is interesting, because I thought Bing was based on GPT 4, so you’d expect them to have the same result.
[00:46:21] Cameron: Google Bard, uh, was superior to only 10 percent of the doctoral holders. So, anyway, like, that’s interesting in and of itself, uh, and that’s where we are circa 2024, early 2024. So, I, you know, I’ve been, I’ve been talking to people about this for the last couple of weeks. I’m saying, imagine a year, two years from now, where we all have a 2005 or 2000, shit, 2025, 2026 model AI.
[00:46:55] Cameron: It’s on our phones. I mean, it looks like the rumors are that, um, Apple has done a deal for GPT to be integrated into the next iOS. It’s on your phone and it’s obviously the 2025 or 26 model, two years, one or two years better than the one we have today. So if Sam’s right, and the next version of GPT is going to be, you know, a major step forward, then we can expect it to be even better than the current results.
[00:47:22] Cameron: And the current results are beating a hundred percent of these psychology students. So this is going to be better. But then it’s going to be freely available on your phone, unless the business model changes. And not only will it be as good or better than nearly every human psychologist, it will have read all of your emails, listened to all of your phone calls, read all of your text messages, it’ll have listened to,
[00:47:51] Steve: Let’s see my internet history,
[00:47:52] Cameron: yes, it’ll see your internet history, it’ll have listened to all of your in person
[00:47:57] Cameron: conversations, it’ll be tracking your steps, your diet, your exercise, your media habits, your sleep habits, your social media habits.
[00:48:07] Steve: watch your weight, I guess, diet,
[00:48:10] Cameron: it’s going to, like, when you go to see a therapist, uh, or a psychologist, whatever, first of all,
[00:48:17] Steve: If it knew all
[00:48:18] Steve: of that about me, I would definitely need a
[00:48:19] Steve: therapist, Cam. Let’s just
[00:48:22] Cameron: look, my starting point is that everyone
[00:48:24] Cameron: needs a therapist. Everyone should have a therapist, you know, life is complicated
[00:48:29] Steve: we’ve always had that, they used to be parents and uncles and whatever, and now maybe
[00:48:33] Steve: we’re, you know, more disparate, but yes, we, we need consigliere, right,
[00:48:38] Steve: consigliere.
[00:48:39] Cameron: but these are professionals that study. Your parents and grandparents and uncles didn’t study human psychology, you know, they were amateurs, they could share their experiences with you. We’re talking about professionals. Like if you, if you have a pain in your stomach, you don’t go to see your uncle and say, I’ve got a pain in my stomach, can you
[00:48:56] Steve: I do need more Italian relatives, I tell you what.
[00:49:01] Cameron: Certo, certo. Yeah, mio zio è molto professionale come psicologia. Um, so you, if you have a pain in your brain, you need to go see a professional, but there just aren’t enough of them. And, and, you know, if you try to get in to see one, again, it’s like six months to get an appointment. They’re hella expensive when you get them.
[00:49:27] Cameron: And then you get a 45, 50 minute appointment. It takes them six months to get to know you, really, what’s going
[00:49:36] Steve: They do. And, and I’ve, look, over periods of time, I had a divorce where I was, you know, crying in my boots, um, you know, 20 years ago. And seeing some people was really helpful. It made me, it made me not feel like a loser. I had this thing, I felt like I was a loser, which is weird, you know. They just help you see things, cause they’ve got experience, and you realize that you’re not the only one, and that this happens, and this is normal, and this is fine. But, you know, you go through all of those things, and the Apple one is really interesting, because that’s what the Apple AI would be able to do, absorb all of this data.
[00:50:13] Steve: So maybe it is that, and I hope they change its name from Siri, because Siri is really the Clippy of the AI era, let’s be honest. Um, this Apple AI could potentially be far more than just your personal assistant or your digital twin. It could in many ways be the psycho-, I mean, surely it would, it would have that capability as well. Like, what capability wouldn’t it have? Not just psychotherapy; it’d be able to help you with everything from challenges in the workplace, marriage challenges, parenting. I mean, I was using it with my son, uh, earlier,
[00:50:52] Steve: you know, that was the first thing we spoke about
[00:50:54] Steve: on today’s podcast. Just
[00:50:56] Cameron: we’re talking, I mean, there’s a lot of talk in the media about
[00:50:59] Cameron: an epidemic of loneliness. I know you saw the same video in the New York Times that I did, where a guy was testing out AI friends. One of my adult sons, uh, talks to Pi, he uses Pi a lot just to, you know, workshop girlfriend issues and life challenges and that kind of stuff.
[00:51:16] Cameron: You know, every middle-aged woman I know is self-diagnosed with ADHD. I mean, there’s, there’s an explosion of recognition today of neurodiversity that we didn’t have 10, 20, 50 years ago, right? People are starting to realize, oh, it’s not just that I’m bad at this, or I’m, you know, I’m a loser, or, like, I’m incapable of getting out of the house on time.
[00:51:40] Cameron: I actually, there’s something with my brain that works a little bit differently to the norm and you know, I don’t need to feel bad about myself. It’s just that my brain works differently, but trying to get into, again, to get diagnosed with ADHD and to see a therapist, to give you some tools, some coping tools, some life hacks is difficult and expensive.
[00:51:59] Cameron: So I am thinking about what happens to society when everyone has a free therapist on their phone that knows everything about you. What happens to marriages? What happens in the workplace? What happens to parenting? What happens to suicide rates? What happens to spousal abuse rates? Because we know we have an epidemic of that as well.
[00:52:22] Cameron: What happens when your AI is monitoring your psychological and emotional state and well-being? Not only could it say, Hey, Steve, um, I noticed you haven’t gotten out of bed for a couple of days. Um, I’ve already made an appointment for you with your GP. Um, I’ve explained to the GP what your, you know, conditions, your symptoms are, uh, she’s got it.
[00:52:50] Cameron: She’s going to see you on Tuesday and she’s going to give you this new anxiety medication, or depression medication. Or it could be, Hey, I noticed things aren’t going well for you lately. Um, I’ve already contacted two of your mates and suggested that they take you to the pub, or you go to the footy on the weekend, or you go fishing, or you catch up with them, you know. Or, uh, you know, if you’re an older person, I’ve already contacted your kids and let them know that I think they need to spend a bit more face time with you. And it’s not just in terms of it acting as a therapist. One of these things that Chrissy and I talk about all the time is how, you know, people’s, uh, social skills, particularly Gen Z and Millennials, their social skills have disappeared because they’re stuck in a phone, they don’t know how to make eye contact. Um, and this, this again, then leads to the loneliness epidemic and all these sorts of issues. What if it’s not only acting as your therapist and talking to you, but it’s also acting as your social coordinator? Hey, uh, I’ve identified that you need to get around some people, and I’ve, I’ve taken the liberty of speaking to your sibling’s AI, or your best friend’s AI.
[00:54:12] Cameron: And we’ve, we’ve booked a time in for you two to go bowling or whatever it is.
[00:54:17] Steve: To me, what you’re describing is the ultimate personal services AI, which is beyond psychology. Obviously, psychology would be there, and I think it could definitely improve things. Well, certainly give you somewhere to go with relationships, workplace, all that kind of stuff, parenting, definitely be helpful. But it feels like it’s more than that, Cam.
[00:54:36] Steve: I think what you’re describing is a future where you have a personal services AI, which is an expert in quite a few things, based on the data that it’s fed. So, uh, a medical expert can have, you know, preventative recommendations with your health, not just your mental health, but also your physical health.
[00:54:51] Steve: Understand, um, what sort of social interactions you’ve had, the number of social interactions. Have you seen a friend a lot? Haven’t you? You know, understands, you know, your well being and holidays. Looks at your finances. Say, hey, Steve, you’re starting
[00:55:02] Steve: to spend far too much money gambling on the GGs on weekends.
[00:55:05] Steve: I’ve noticed it’s gone up 30 percent in the last, whatever, I’ve, I’ve stopped your account. I mean, I would
[00:55:10] Cameron: only fan’s account,
[00:55:12] Steve: Oh, come on now, keep it clean, PG, PG. But, you know, financial advice, medical advice, psychology advice, um, social, social calendar, um, you know, organise things for the family. It feels like we’re going to have our personal AI, and I like where you’ve started with the psychology, and all of that makes sense, but the breadth of knowledge that something like Apple could bring, if they integrate it really well, could solve an inordinate number of problems and stop many of them, circumvent
[00:55:45] Steve: them from happening in the
[00:55:50] Cameron: I think that’s where we’ll leave it today, Steve. I want people just to think about the implications of that one tiny little snapshot of what’s coming. And again, I think it’s going to happen faster than people realize. I’d say a year, two years, for these things to be integrated with our devices properly, and then all of these possibilities start to emerge.
[00:56:09] Steve: Yeah. I think it’s going to be an extraordinary time. I just can’t wait to see what happens.
[00:56:15] Cameron: Me too. Thanks, Steve.
This episode explores the transformative potential of artificial intelligence (AI) and nanotechnology across various aspects of society and the economy. The hosts share insights on how AI-assisted coding in languages like Python is revolutionizing fields such as education, comparing these developments to past technological advancements and contemplating their future impact. A significant focus is placed on the theoretical integration of nanofabricators in homes, which could enable an age of abundance by addressing key challenges such as energy, health, food scarcity, and waste management. The conversation also touches on the evolution of internet usage, with an emphasis on the trend towards re-bundling services to create comprehensive ecosystems. Moreover, the hosts conceptualize ‘P Utopia’, a visionary society where technology resolves fundamental human problems, prompting a re-evaluation of capitalism and societal values with scarce resources like space and land gaining prominence. The discussion ultimately raises critical questions about the societal implications of such advancements and the role of AI in shaping a future oriented towards abundance and technological innovation.
00:00 Welcome to The Futuristic: Catching Up
00:40 Diving into the World of AI: A Crazy Few Weeks
02:25 Coding Adventures and Audio Leveling Breakthroughs
05:36 Exploring AI’s Impact on Education and Personal Projects
09:49 Meta AI: Facebook’s Game-Changing Tool
22:40 The Future of AI Content and Its Implications
25:37 Microsoft’s VASA 1: A Leap Towards Lifelike Virtual Characters
37:41 Exploring Groundbreaking Tech: The Carbon Negative Nanogenerator
39:12 The Power of AI in Conversations and Research
39:58 The Future of Nanotechnology and AI’s Role
41:17 Debunking Net Zero: A Critical Look at Carbon Credits
43:45 The Potential of AGI: Transforming Research and Society
51:32 Navigating the Complexities of Global Media and Censorship
54:44 The Degenerative Potential of Generative AI
56:19 Reimagining the Internet: The Future of Content Curation
01:03:34 The Return of the Tech Time Warp: Bundling vs. Unbundling
01:06:42 Nanofabricators and the Future of Production
01:14:25 Envisioning a Post-Capitalist Society with Nanofabrication
FUT 24
[00:00:00] Cameron: Welcome back to The
[00:00:09] Cameron: Futuristic. It’s been a long time between drinks, Steve
[00:00:14] Cameron: Sammartino. How are you, buddy?
[00:00:17] Steve: Hi, I’m well. Hearing your voice just puts a smile on my face. You’re so erudite and articulate that I feel like every time I’m going to learn something.
[00:00:29] Cameron: All my podcast hosts feel that way, man. Talking to me is the best, uh, part of their week, they all tell me.
[00:00:35] Steve: This is definitely the best part of my day so far. Not even close.
[00:00:38] Cameron: I’m just that kind of guy. Uh, Hey, uh, it’s been, uh, I know we say this every episode, Steve, but it’s just been a crazy few weeks. I mean, I’ve got so many stories that we could talk about. We don’t have time. We’re going to pick the top few, but it’s been
[00:00:58] Cameron: a crazy couple of weeks in the world of AI, uh, in particular.
[00:01:04] Cameron: Uh, at least that’s how I felt. How about you? Have
[00:01:06] Steve: Yeah, same again. Yeah, same again. It just doesn’t
[00:01:09] Steve: stop. And I think
[00:01:12] Steve: it feels like each of those little
[00:01:16] Steve: periods of time when you get a technology shift, like a little fork in the road, this is one of those, you know, I felt that way when the smartphone first came out and different gadgets and things you could do.
[00:01:26] Steve: And when social first started coming out, it’d be like, oh, there’s this website Reddit, and there’s, um, yeah, Delicious, get all your sharing, your links, and these different sites, everything web 2.0. Yeah. You know, I felt like that when the internet, we were just talking offline about when people first started getting online with their dial-up, you’d go out and buy a magazine, the top sites to visit. When was yours from, Cam?
[00:01:50] Steve: You just said. A book!
[00:01:51] Cameron: Mine’s not a magazine. It’s a book, like a published
[00:01:54] Cameron: book. I think it’s the top websites.
[00:01:57] Steve: to research, publish, do a book, and these sites will be fine. No books for 404s there.
[00:02:06] Cameron: O’Reilly or one of those publishers back from the
[00:02:09] Steve: Oh yeah, yeah, yeah,
[00:02:10] Cameron: think it’s like 1994, the top 100 websites of 1994 or something like that.
[00:02:16] Steve: That’s great.
[00:02:18] Steve: Yeah, so it definitely feels like that. Yeah, it feels like it.
[00:02:23] Cameron: I’ve
[00:02:24] Steve: Continues.
[00:02:25] Cameron: I have been, uh, let’s talk about what we’ve done. I have been, um, you know, coding and scripting my ass off for the last couple of weeks. Quite often in the wee hours of the morning because I’ve also had a bout of insomnia. And I’m like, okay, screw it. I’m just going to code, get up at 2am and just code.
[00:02:45] Cameron: Do the stuff that I don’t, I can’t really justify spending too many hours on during the day because of, you know, there’s a, there’s a sunk time cost aspect to it. But what else am I going to do at 2am? So a lot of coding. Um, I won’t, uh, just one thing, which is a really stupid, simple thing, but. It was kind of a revelation to me.
[00:03:08] Cameron: So, um, when I’m doing podcasts, I have a sound library of things like this. I don’t believe those figures. Please explain. You know, and for my history podcast, I’ve got hundreds of these. Some of them are clips from shows that I’ve done that I’ve pulled out with my other co-host Ray, and others are movie soundbites and that kind of stuff. But the audio levels on them are all over the place, and it’s been bugging me, because, you know, when I capture them off of YouTube or whatever, the levels are all over the shop. Now, I’ve never had the
[00:03:39] Cameron: wherewithal, well, couldn’t be bothered, to go and fix all the audio levels. I suddenly thought, I wonder if I can just write some code that’ll fix all the audio levels. Boom! Jumped into one of the tools. Um, said, hey, can you write me some Python code that’ll do this? It’s like, piece of cake, man.
[00:03:57] Cameron: We write this Python code, just pointed it at a directory of hundreds of MP3 files, and 60 seconds later, they’re all audio balanced. Um, done. Boom. Oh, there you go. What else? After that, one of the websites, I’ve got about 50 websites I read every morning. I have a little iOS shortcut that, um, I click and it opens up 50 web pages in Chrome, the same 50 pages.
[00:04:31] Cameron: And then I just,
[00:04:32] Cameron: you know, I just, some of them, I’m just seeing if they like, they’re like Russian news sites and Chinese news sites and blogs and different things,
[00:04:40] Steve: Barak, you are a communist sleeper. You, I mean, you are, I mean, we know. Red is under the bed and there’s a bed in the background.
[00:04:46] Steve: Red’s under the bed.
[00:04:48] Cameron: There’s nothing sleeping about it. I’m very openly, very openly a communist. Um, but yeah, I’m reading all these things, and you know, sometimes I’ll just glance at the headlines and then click, click, click, click, click. Anyway, on one of the blogs that I read, a guy mentioned a guy, who mentioned a guy,
[00:05:06] Cameron: and it led me to a guy who’s written a bunch of books, with titles like Automate the Boring Stuff with Python. Now, these are things that, you know, coders have been doing for years, but it’s new to me. So anyway, that was a thing where I was like, oh, what other
[00:05:19] Cameron: boring things can I code? I haven’t found anything that had that big an impact for me yet, but
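The batch audio-leveling script Cam describes a moment earlier could look something like the minimal sketch below. It assumes the third-party pydub library (which wraps ffmpeg) is installed; the -16 dBFS target and the folder name are illustrative choices, not anything from the show.

```python
# Sketch: batch-normalize the loudness of every MP3 in a folder.
# Assumes pydub (third-party, needs ffmpeg on the PATH) is installed;
# TARGET_DBFS and the folder name are illustrative.
from pathlib import Path

TARGET_DBFS = -16.0  # rough loudness target, an assumption

def gain_needed(current_dbfs: float, target_dbfs: float = TARGET_DBFS) -> float:
    """Gain in dB that brings a clip from its current average level to the target."""
    return target_dbfs - current_dbfs

def normalise_folder(folder: str) -> None:
    # Import lazily so the pure arithmetic above works without pydub installed.
    from pydub import AudioSegment
    for mp3 in Path(folder).glob("*.mp3"):
        clip = AudioSegment.from_mp3(mp3)
        # Boost or cut each clip to the common target, then overwrite it.
        clip.apply_gain(gain_needed(clip.dBFS)).export(mp3, format="mp3")

if __name__ == "__main__":
    normalise_folder("sound_library")
```

pydub’s dBFS is an average level, so this is closer to simple normalization than true loudness matching; a LUFS-based tool would do better, but for a library of short sound effects this is usually enough.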
[00:05:26] Steve: You’re not
[00:05:26] Steve: another code world, hey? You can code us out of our global problems. That’s
[00:05:31] Cameron: that’ll, that’ll, that’ll get there. What about you? What have you done? That’s cool. Steve?
[00:05:36] Steve: Before I tell you what I’ve done that’s cool, it’s interesting because I want to do some
[00:05:40] Steve: fixes on things that are live in
[00:05:42] Steve: codebases I don’t understand, and I’m nervous that halfway through using an AI to help me, it buggers it up and I don’t get it. That’s the only thing. If you’re creating something new, it feels okay, when you’re clean-slating and it’s for you, but if it’s something that’s live and is functional, it has this extra layer.
[00:05:59] Steve: And I’ve been thinking about doing that on a couple of my own sites, but I haven’t got the courage to do it yet. Cause it’s like, I used to be a marathon runner, but I don’t run marathons anymore. So it’s the same kind of idea. It’s like, I’m a marathon runner. When’d you last run? It was 2004. Um, well, what I’ve noticed that was interesting, my daughter’s come up to me twice in the last week.
[00:06:20] Steve: First one was, can you explain large language models to me? I’m like, okay. I thought she just wanted to know, and I just explained it to her. And then she rang me and she
[00:06:30] Cameron: How old? How old’s your daughter?
[00:06:32] Steve: She’s in year 8, she’s 14, so it was in a science class. And then the next day she
[00:06:38] Steve: came to me and said, so what’s the difference between that and machine learning? And so I explained, you know, machine learning to her. And I said, well, what’s going on here? And she said, we’re studying them in science class. And I thought that, That was interesting because they’re studying computation and various forms of code and computation and artificial intelligence in science.
[00:07:03] Steve: Historically, that would have lived within a separate computer science course, or maybe a derivative of maths, and it’s branched out from, you know, the three core sciences, chemistry, physics, biology. And here we have artificial intelligence sneaking its way into science class in grade eight.
[00:07:20] Steve: What I thought was interesting was that they’ve gone beyond the, oh, should we be using AI in classes? Yes, no, maybe, the argument about how that affects education, to just actually learning about it, and probably learning about it then informs how it should be used in class. I just thought that was pretty quick evolution for that to be in there.
[00:07:43] Steve: and her asking the questions and it really piqued her curiosity into some of the stuff that I do and talk about and it enabled us to get a connection when you have that rare moment when you work with your mum or your dad or you see what they do and you and you say hey my mum or dad aren’t that stupid after all they actually know since no wonder they can pay bills so it was a it was actually a nice moment as well and interesting from an educational perspective
[00:08:07] Cameron: Mmm. Yeah, that’s, that’s so great. But even just the fact that you’ve got kids of that age that are starting to touch the perimeter of this as a learning tool, as an educational tool, um, you know, it’s interesting. I sat down with Fox, uh, just a day or so ago and I was playing, I’ve been playing around a lot with Meta AI, which we’ll talk about in a
[00:08:32] Steve: yes
[00:08:32] Cameron: but. And he’s like, can I ask it something? And I’m like, yeah, what do you want to ask it? Cause usually when I give him an AI to play with, it’s like, Hoopy Poopy Bum Bum, you know, that kind of stuff. But this time he was into it. And so he actually started coming up with some great questions about Pokemon cards.
[00:08:49] Cameron: That’s his big obsession is trading Pokemon cards. He’s like, ask it what the top 10 most expensive Pokemon cards are. And then we started talking about, you know, the designs behind them and he started asking it these sorts of questions, which is, you know, it could be a Google question, but you know, obviously the great thing is you’re able to interact with it and have a conversation.
[00:09:11] Cameron: But you know, it’s always interesting for me. And this is funny. My mom has been staying with us for a few weeks again. She’s a 66, sorry, 76, 77. He’s nine turning 10. And with both of them, I’m trying to get them to use AI. I mean, you know, both generations, the generation before me and the generation after me, I’m like, you got, I said, Mum, you got Meta AI now.
[00:09:36] Cameron: I mean, I put GPT on her phone six months ago or more and she hasn’t used it. I’m like, you’ve got Meta AI now, it’s in Facebook Messenger, which you’re on all day, every day. It’s in Facebook now. You’re on it all day, every day. You’ve got a free AI. So let’s just talk about that. So Meta AI came out, uh, Facebook’s AI tool built on Llama 3,
[00:09:57] Cameron: and, uh, it’s free, and it’s pretty much as good as GPT-4.
[00:10:07] Steve: Felt similar to me. I tested it a bit, felt pretty good.
[00:10:10] Cameron: And it’s freely available now, they’re rolling it out in Facebook Messenger. As I said, in Facebook, in Instagram, uh, they’re rolling it right across their stuff. It’s just, boom, AI tools everywhere. And you know, they do the usual things. Oh, it can make you a picture, you know, and help you write an email, help you write a message.
[00:10:32] Cameron: I actually, the first thing I used it for in Messenger, I’ve got one of my podcast listeners of many years. He’s a guy who lives in Germany. He’s dying of, uh, brain cancer and, um, you know, we’ve been talking about it over the last year or two as he’s been going and he sent me an email the other day saying, I have to unsubscribe finally from your podcast.
[00:10:59] Cameron: I just can’t listen. I can’t follow anything anymore. I can’t follow a story. I can’t concentrate long enough to listen to a podcast. You know, his brain is basically shutting down, and I was like, ah. I spent like five minutes trying to figure out what do I write, and I just went into Meta AI and I said, here’s the situation, uh, long-time listener, um, don’t know him that well, but you know, what do I say?
[00:11:27] Cameron: You know, help me, help me draft an email here. And it did, it really helped me come up with something that was way more, um, I don’t know, articulate than I felt I could be in that situation. But this is, like, I think this is probably a tectonic shift, Steve. Getting AI into the hands of Facebook’s 3 billion users, or whatever it
[00:11:58] Steve: I think it’s a big shift for a couple of
[00:11:59] Steve: reasons, and just to go back to something I did two weeks ago. There’s an electrical company that, um, I consult to, and that involves me going around the country to a lot of regional areas to do a speech on the latest in technology. And the rooms are usually filled with electricians and people from the electrical industry. I did one a month ago in North Melbourne. Put your hand up if you know what artificial intelligence is, you’ve heard a bit about it. Yep, all the hands go up. Put your hand up if you’ve used a tool like ChatGPT or Gemini. 60 percent of the hands go up, alright, in North Melbourne. And this is tradies, but they’re pretty smart tradies as well. Electricians, you’ve got to be pretty clever, and they run their own businesses and they’re using tools, and their phone’s one of their big tools.
[00:12:45] Steve: Uh, but then I was in Horsham, and this is not in any way to be dismissive, it’s just a geographic observation. Um, room filled with, I want to say, 200 people, less than five hands went up. And so, we circle this back to what Meta have done with free AI for all, and certainly what you’ve seen, it was noticeable.
[00:13:06] Steve: In fact, I noticed it on the app before I’d seen any of the news on the release. of their AI. I noticed in Instagram, because I go in there a fair bit just to look at some tech news and stuff, and in the, uh, where you share things with your friends, was Meta. ai asked me anything, right? And I just thought, well, it’s interesting, that’s going to get to the 3 billion people that these, that this company has contact with, like that.
[00:13:31] Steve: I mean, you know, if we thought ChatGPT was the fastest-adopted app in the world, this might be the fastest-adopted AI in the world, because basically there’s going to be probably 3 billion people using it within the first month, I would imagine. I don’t see why you wouldn’t type something in there.
[00:13:47] Steve: And that shows, you know, the power of owning your own platform. So, that was what was noticeable to me. And I would think that the people in that second talk, where very few had used AI, and I showed them some demos, it blew them away. They just fell on the floor, as we all did. Um, but you know, this is 18 months later,
[00:14:05] Steve: uh, after ChatGPT became omnipresent. And you’d think that Meta AI will really, really widen the use of AI to pretty much anyone who’s on the web, I would say.
[00:14:19] Cameron: I think you’re right. And I think that I’m interested to see the progression in how people use it and also how it prompts people to use it. Like, if you go to it, if you go to it now, I’ll just bring it up, um, and, uh, you, it has the prompts that are pre populated and there are obviously a range of them and they’re rotating, but ask MetaAI anything.
[00:14:48] Cameron: Create a packing list for a trip. Make my email sound more professional. Play 90s music trivia. Help me with an assignment. Simulate a mock interview. Paint New York City in watercolor. Okay, not bad. Now, one of the things that’s really interesting
[00:15:04] Cameron: about this is it, uh, it is up to date with the web. Um, it’s using Bing behind the scenes, I think,
[00:15:12] Steve: And Google,
[00:15:13] Cameron: Oh, and Google, right.
[00:15:16] Cameron: I, um, I’m sure you’ll be fully aware
[00:15:20] Cameron: of this, but the, um, 2024 Chess Candidates Tournament, uh, finished on, uh, uh, yesterday, I
[00:15:29] Steve: I mean, that’s, I was, I barely slept. I mean, I was
[00:15:31] Cameron: don’t know. Barely see.
[00:15:32] Steve: tuning right in. I barely, I really was. I mean, I nearly had to cancel because I had to just come down from the excitement and the endorphin rush that I’ve passed for 24 hours has been
[00:15:42] Cameron: Now I know you’re being facetious, but my son
[00:15:45] Cameron: Taylor.
[00:15:45] Cameron: was texting me. Yesterday morning while I was trying to work going, holy shit, are you watching this? And I had the live stream on,
[00:15:54] Cameron: but I
[00:15:54] Steve: wasn’t with Taylor.
[00:15:55] Cameron: it. I had to keep muting it because I was trying to work and I couldn’t have it both going, but I was flipping backwards and forwards.
[00:16:01] Cameron: It was a big thing, huge result. Anyway, I won’t go into it, but not long after the result, I was in Meta.ai and I asked
[00:16:08] Cameron: it, Who
[00:16:09] Cameron: won the 2024 Candidates Tournament, and it told me, it was like within half an hour of the result. No,
[00:16:16] Steve: Yeah.
[00:16:17] Cameron: no, I’m not, I’m, I’m not,
[00:16:19] Steve: And, I only go up to September 2023. Yeah.
[00:16:21] Cameron: yeah, I, you know, my, my, my database isn’t
[00:16:24] Cameron: up
[00:16:24] Steve: brain turned off six months ago.
[00:16:27] Cameron: And I still get that from GPT, I still have times when GPT says I’m, I can’t look up stuff on the web, and I’m going, Yes, you fucking can! Stop gaslighting me, bitch! Look it up! And it’s like, oh, okay,
[00:16:39] Cameron: okay. So, so far with Meta, I haven’t had that. I’ve only
[00:16:43] Cameron: tested a couple of times of that, but it’s like real time.
[00:16:45] Steve: right? Because Google hasn’t seemed to have done well in reverse. I mean, if Meta is doing well now by using an AI and then plugging it into search, they’re getting a twofer with this straight away. They’re kind of doing two things. They’re getting a search angle, and they’re getting an AI angle, which is something you typically didn’t get within any of their forums.
[00:17:07] Steve: Google hasn’t done really that well in getting you to use Gemini when you would otherwise search. OpenAI hasn’t done really well. Uh, Bing’s done pretty well with having Bing as an engine which has an AI plugged in, I guess you could say, but OpenAI on its own, and ChatGPT hasn’t done very well, as you say, converting to search.
[00:17:26] Steve: So that hybridization of search and AI, um, being combined, something that is a learned model and can generate things, versus something that just gives the answer and is recent and current. So that’s a really interesting point, Cameron. It
[00:17:47] Cameron: What I was going to get to, um, with the prompting thing, is how long before they start to prompt people on, uh, a higher level of AI utilization. Like, um, fact-check a news story. Uh, you know, uh, research that, um, crazy, um,
[00:18:11] Cameron: fake science, uh, article that your, uh, conspiracy theory friend just sent you.
[00:18:18] Cameron: Um,
[00:18:19] Steve: was real and put in the podcast. Well, here I asked
[00:18:22] Steve: a couple of things similar to that today. When I went in, I, I asked it just to do some basic stuff and it did pretty easily. I said, create a picture of Donald Trump. It said, oops, I can’t generate that image. Can I help you with something? I said, generate a picture of a political rally.
[00:18:35] Steve: And it gave me one, which was just a mashup of all different colors, ethnicities and flags, none of them real, and words. And it wouldn’t do any political things. I mean, clearly with all the elections coming up, it’s got some parameters. But it’s interesting. I get the sense that they’ll just go, all bets are off, rather than going, no, we can clarify what’s real and what isn’t.
[00:18:56] Steve: But I get the sense that they’re going to go, oh, politics, not touching it, you know, generate a nice image for Instagram instead.
[00:19:04] Cameron: I just jumped into it and said, what are the big headlines around the world today? And it said, I apologize, I’m having trouble accessing today’s headlines. You can try searching online for the latest news updates. Alternatively, I can provide you with some of the recent notable news stories. For example, the situation in the Middle East continues to unfold, and there have been developments in the fight against climate change.
[00:19:24] Cameron: Let me know if you’d like more details on these stories. So it’s not just saying, no, I can’t tell you the news. It’s actually saying, actually, I
[00:19:32] Cameron: can give you some news stories that we can talk about. Um, I, I,
[00:19:37] Steve: I just asked here, I said, can you tell me if an
[00:19:39] Steve: Instagram post is fake or real? It said, I’ll be happy to help, however I need more information from you. Can you tell me the
[00:19:44] Steve: username or link to the profile that posted it. I mean, at least it’s asking you that. A description or screenshot of the post, and any additional context or information.
[00:19:53] Steve: I can guide you on how to fact-check, but keep in mind, this can be challenging. It’s difficult to know for certain if the post is fake or real. However, we can work together to evaluate the post’s credibility and identify potential red flags. I actually don’t mind that. It’s reasonable.
[00:20:09] Cameron: now, by contrast, I just jumped into GPT and asked it the same question. Um, what are the big headlines around the world today? And, um, well, I tell you what, let’s just get it to, uh, tell us.
[00:20:29] ChatGPT: Today’s major global headlines include a variety of pressing issues and developments. 1. Continued conflict in Ukraine.
[00:20:36] ChatGPT: The conflict between Russia and Ukraine persists, marking its 789th day with ongoing military operations and significant impacts on civilians (source). 2. Middle East tensions. In Israel, there is heightened tension with new developments in the conflict with Hamas, including Israeli military actions in Gaza and discussions around the safety and rights of Palestinian civilians (eighth source). 3. Plastic pollution concerns. Globally, there’s an urgent call for action against the surge in plastic pollution, threatening environmental sustainability and health (sixth source). 4. Global health. The World Health Organization is actively monitoring various disease outbreaks and health emergencies around the world, emphasizing the need for rapid and coordinated international response
[00:21:26] ChatGPT: (ninth source). 5. Financial systems reform. Discussions are underway regarding the need for a significant reform of the global financial architecture to better handle contemporary challenges like climate change, inequality, and economic instability (sixth source). 6. Political developments in South Africa. In South Africa, there are ongoing trials and political maneuvers, including the Meyiwa murder trial and debates over energy policies amidst electoral contexts
[00:21:54] ChatGPT: (tenth source). These headlines paint a picture of a world grappling with complex conflicts, environmental issues,
[00:22:01] ChatGPT: and significant political movements, each requiring informed and decisive responses from global leaders and communities.
[00:22:08] Steve: That was pretty good. You take that.
[00:22:12] Cameron: Yeah. So, number one, uh, Meta can’t talk to me yet. And GPT, uh, that’s just in the browser, too. That’s just straight out of the browser talking to me. And it, you know, did a pretty good job of giving me some news stories. I mean, I’m sure I could drill down. Obviously it’s given me sources against each of those that I could drill down into.
[00:22:33] Cameron: Um, so, but anyway, my point with the prompts is, I still think the prompts that we get, and it’s the same with GPT, the pro-forma prompts, are still dummy AI stuff. Like, basic, you know, help me write an email, right? As opposed to, like, a higher level: help me write code to automate boring stuff.
[00:22:55] Cameron: Give me some ideas of ways I could code the automation of tasks that I do every day that I shouldn’t have to do. That’s where we’ll be in the next year or so, I believe, where when you open up your AI, it’ll say, hey, um, how about we, uh, save you some time and effort and automate some things today?
[00:23:20] Cameron: Of course, eventually, when Apple gets its shit into gear
[00:23:23] Cameron: and comes out with it integrated into your devices, and Microsoft and Google with their own devices, it’ll say,
[00:23:31] Steve: it’s suggesting based on everything it observes, and so it
[00:23:34] Steve: straightaway
[00:23:35] Cameron: Or it’ll say,
[00:23:36] Cameron: by the way, I’ve seen you do this thing for the last couple of weeks. Just so you know, I just automated that for you, and you
[00:23:42] Steve: not even asking you, just do it. Go next time instead of doing this, Hey, hey, hey dickhead. Listen,
[00:23:48] Cameron: Yeah. I’ve written, I’ve written an app for you that just does that,
[00:23:53] Steve: It is such a great reminder. I mean, we need to think back to the Amstrad and the Commodore 16
[00:23:58] Steve: ads where it said, buy a computer and
[00:24:00] Steve: you can do a budget. It is trying to teach you the use cases. They are going to that lowest common denominator of, remember, like we just said, there’s 3 billion people who have no idea how or what to use this for, so let’s show them the most basic thing to get their hair wet, right, get some dirt under their fingernails, and then the job becomes their imagination.
[00:24:21] Steve: It’s a little bit like social media. We didn’t know what that would evolve into. When it first comes around, you know, Twitter and, and some of the, you know, internet relay chat lines, like, what do they become? Once we become more familiar, we, the audience invents use cases. And
[00:24:37] Cameron: fires.
[00:24:38] Steve: yeah, yeah, yeah, that, we didn’t know it was a great way to burn down civilization.
[00:24:42] Steve: But now that we know that, it’s amazing how quickly we have burnt it down. But it is a little bit like those early computer ads, where it’s just trying to show you, like, some simple ideas. You can write a Christmas list and have a database of family budgets, and still no one’s done it in 2024, no one’s ever done it.
[00:25:06] Steve: But it’s the same thing again, right, where we need to teach use cases, but the best use cases will come from those who go, well, my knowledge bank of the things that I’m working on that I care about, I attach that to the power of this AI, and then I create something new. And that’s where the new realm of creativity belongs.
[00:25:22] Steve: It’s your personal database. Plugging into this new brain that you have access to, to do all of these things that are relevant to you. You know, it’s one size fits one, which is really cool.
[00:25:34] Cameron: like that line, one size fits one.
[00:25:37] Cameron: Well, moving right along, uh, Microsoft, uh, came out with a demo.
[00:25:45] Steve: just one more thing before we do move on.
[00:25:47] Steve: I do wonder if, with the ability to generate images and captions in something like Instagram or Facebook while you’re in there, if that whole thing just becomes this whole circular reference of AI generating AI and just spiraling into sort of who knows what. Just, I, sorry to interrupt, but I forgot to
[00:26:09] Cameron: Yeah, no, you’re right. And I did see a stat somewhere in the last couple of days about
[00:26:17] Cameron: just the amount of AI generated content that is out there that AI is going to be reading and training on. I know they’re trying to prevent that, but we’re, we’re quickly reaching a point where AI generated content is
[00:26:31] Cameron: going to outpace human generated content, and I’m not necessarily sure that’s a bad thing, but it’s going to be a, an interesting
[00:26:39] Steve: extricate us from the web. It almost becomes this new sphere of, like, AIs talking to AIs, and we can just operate down in
[00:26:45] Steve: this human level underneath and just let them do that AI stuff up there. No one does anything, just plug them in. Alright. You
[00:26:55] Cameron: moving right along to plugging things in, Microsoft announced VASA 1, V-A-S-A 1. VASA stands for Visual Affective Skills, V-A-S, uh, something. Um, this is probably the most shocking thing that I’ve seen, uh, recently, and I think every, every commentary I’ve seen on this, people are just like, Fuck off.
[00:27:23] Cameron: This is insane. How can we be here already? Really? This is, I mean, crazy. We’ve seen a lot of crazy things in the last 18 months. So here’s the abstract from Microsoft Research: We introduce VASA, a framework for generating lifelike talking faces of virtual characters with appealing visual affective skills, VAS, given a single static image and a speech audio clip.
[00:27:52] Cameron: Our premier model, VASA 1, is capable of not only producing lip movements that are exquisitely synchronized with the audio. But also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness. The core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space.
[00:28:21] Cameron: And the development of such an expressive and disentangled face latent space using videos. But below that they say, note: all portrait images on this page are of virtual, non-existing identities generated by StyleGAN2 or DALL-E 3. So there’s a bunch of, uh, demo videos. There’s, like, a ton of them on this, uh, Microsoft site.
[00:28:43] Cameron: Usually somewhere between 10 and 60 seconds long. They are of talking heads. Um.
[00:28:53] Cameron: That are insanely realistic. Really, you would not, I think, be able to tell, unless you knew to look for it, you wouldn’t know that these aren’t
[00:29:02] Steve: you weren’t looking for it, you wouldn’t know.
[00:29:04] Cameron: Not only are the, the talking heads virtual, but as they said, the reference profile photo that the talking heads are based on was generated by AI as well.
[00:29:17] Cameron: So they’re not even real photos of people that are being turned into talking heads; they’re fake profile shots. And they are all sorts of faces: men, women, young, old, dark, Asian, Anglo. And, uh, you know, I won’t even play a clip, because the clips are sort of nonsense as just audio, you wouldn’t be able to tell. But you have to go and look at these things, like the, the intonation.
[00:29:48] Cameron: The expressions on their faces
[00:29:51] Cameron: when they’re doing this stuff is absolutely, um, insanely, insanely realistic.
[00:29:59] Steve: Yeah, incredibly so. And when I saw it,
[00:30:03] Steve: I read the comments in, in, there was a Reddit that, that you pinged me where there was some comments. And you know, one thing I keep reading again and again, there’ll be some saying, you know, buckle up, Kansas is going bye bye. But there’s always about half the comments saying, yeah, but if you look at the lips, you can see this.
[00:30:21] Steve: They’re just not getting that we are in exponential times. And if the lips are a tiny bit off, if you’re looking for it, you can find it. Yeah, maybe you can. Well, guess what? Two weeks later, it’s not like that. And four weeks later. People are just not understanding where this goes. Because, you know, if I expand my mind on this, if we can do this visually with an AI to create the facial expressions, eventually that visual capability of moving the face goes onto a soft robotics version that gets put on top of a Figure 01 robot, where you can get soft facial expressions that don’t look like the weird Sophia robot that’s trying to get expressions that are wrong.
[00:31:00] Steve: You’re going to be able to transpose this visual element of the AI moving the face to eventually move the actuators and the muscles on a soft robotics version. And I’m not mincing my words when I say within 10 years it is possible, like, within a decade, that we’ll have soft robotics walking around which have an NND, a no-noticeable-difference to humanity, unless someone told you.
[00:31:21] Steve: I’m telling you, we’re getting very close to the replicant.
[00:31:25] Cameron: Speaking of which,
[00:31:26] Cameron: um, Taylor’s mate, Kevin, who got laid
[00:31:29] Cameron: off from the Apple Car Project a month ago when they crashed that,
[00:31:34] Cameron: Taylor spoke to him the other day, he said, how are you going?
[00:31:37] Cameron: Huh?
[00:31:38] Steve: Pun not intended, Sorry, I couldn’t resist.
[00:31:41] Cameron: What did you say? I missed it. Sorry.
[00:31:42] Steve: Well, you said when they crashed that and I just said pun not intended, but then,
[00:31:46] Cameron: crash that. Yeah. Good one. Um, uh, Taylor spoke to him the other day. He said, have you got another job? He said, I did get another offer at Apple, but
[00:31:55] Cameron: I got a better offer outside of
[00:31:56] Cameron: Apple. He’s going to Figure.
[00:31:59] Steve: oh wow.
[00:32:00] Cameron: Yeah. working on the head, apparently.
[00:32:04] Steve: the head.
[00:32:04] Cameron: on the head.
[00:32:05] Steve: do? He gets to his mate, he says, I’m working on head. I’m just saying, a very interesting way to start a discussion in a bar, maybe, young guy, I don’t know. I
[00:32:18] Cameron: Anyway, um, super cool, uh, and crazy, scary stuff. And the implications of this are, we can all imagine: when you get a FaceTime call, you’re talking to tech support or customer support at a bank, it’s a person who looks like a person, who sounds like a person, that’s all AI.
[00:32:43] Cameron: And it’s doing your support call. And of course there’ll be, you know, news on the television that’ll look like somebody who you know, who’s saying something. Commentators, expert commentators on what’s happening in Iran right now. Um, is it a real person? Is it not a real person? We’ll have AI tools that’ll
[00:33:05] Cameron: help us know that hopefully, but it’s, it’s a crazy world we’re moving
[00:33:13] Steve: it on LinkedIn, and most of the people that read my stuff on LinkedIn are pretty across what’s
[00:33:17] Steve: happening with AI, you know, the same kind of ideas that we share, and it blew their minds. It went crazy. I put one of those videos up and people were really like, whoa. I mean, obviously in an election year, you say to yourself, again, how do we know?
[00:33:32] Steve: Like, post truth, post truth era, anything you see on a screen, I don’t know, did it happen?
[00:33:41] Cameron: I like the way that we say in an election year, not here,
[00:33:46] Cameron: not in,
[00:33:47] Steve: Yeah, not in
[00:33:48] Cameron: know, China. We just know where you mean when you say it’s an election year. Could be Russia. Putin just got elected.
[00:33:54] Steve: Well, he, he really, he had a great, didn’t he have an astounding victory, Cameron?
[00:33:58] Cameron: Very good victory. You know, he’s, look, all bullshit aside, he’s very popular. He is very
[00:34:05] Steve: I don’t know. I actually don’t know.
[00:34:07] Cameron: He is very popular. Xi Jinping is very popular in China. These guys are, despite what in the West we might think of them based on
[00:34:15] Cameron: the Western propaganda that we get, these guys are, you know,
[00:34:19] Steve: Well, I know
[00:34:19] Steve: Xi Jinping
[00:34:20] Cameron: The home team is very popular.
[00:34:22] Steve: well, I know one thing for sure, if you’re in the 70s in
[00:34:26] Steve: China versus now, you’re much happier with your
[00:34:28] Steve: life if you live in one of The cities, that’s for sure. I mean, you may well, you may well be making iPhones and like jumping out to just bounce in the net and be ricocheted back in to make another iPhone.
[00:34:38] Steve: We don’t know. But if you’re not doing that, your life is better
[00:34:43] Cameron: I’m about two thirds of the way through a biography of Deng Xiaoping at the moment, which is absolutely mind-bendingly interesting. Like, that guy, I think, probably in the annals of human history, there’s probably not many people I can think of who’ve had as big an impact in as short a period of time as Deng Xiaoping had. Like, in terms of humans that have effected change, absolutely insane what he got done.
[00:35:21] Cameron: Um, and, and, you know, forget about the period from the twenties through to the seventies when he was a revolutionary. Um, when
[00:35:31] Cameron: Mao died and he got his hands on the wheel, and he’s like, we’re completely reinventing.
[00:35:36] Steve: that he started to say at the first, it was
[00:35:39] Cameron: Well, we’re,
[00:35:40] Steve: when they had the first special economic zones, or?
[00:35:42] Cameron: You know, it was a bit earlier than that. When Mao died, Deng didn’t actually take over straight away. Mao died in ’76.
[00:35:48] Cameron: Hua Guofeng, um, was the person that Mao picked to, um, lead the party when he died, and Deng was still in the shitbox when Mao died, because he refused to publicly endorse the Cultural Revolution, and Mao was worried that when he died, Deng, they called him the, he was going to be the Chinese Khrushchev.
[00:36:08] Cameron: He was going to do to Mao what Khrushchev did to Stalin. Um, basically, uh, you know, blame all of their problems on him. Uh, and Deng refused to do it. So he was in the shit, he was in the shithouse with Mao when Mao died. Wasn’t even allowed to give a speech at Mao’s funeral or anything like that. Anyway, he sort of came back slowly, and he wasn’t the premier for the first few years, but he was basically in charge of reinventing the economy and the military.
[00:36:33] Cameron: And he just went balls to the wall. Um, was sending, like, groups of 50 ministers to Europe and to Japan to go figure out what they’re doing and bring it back here. And, uh, when they came back from Europe, a group of guys said Europe was willing to loan them 18 billion dollars to rebuild their economy.
[00:36:55] Cameron: And there were a lot of ministers that were worried about getting into debt slavery to European countries and that kind of stuff. And Deng’s response was: if they’re willing to give us 18, let’s go for 80. Let’s just put the pedal to the metal, because we are so far behind and we need to turn this ship around now.
[00:37:14] Cameron: And he said, we, the Chinese leadership, have failed the Chinese people. Um, you know, with the revolution up until this point, we’ve got a lot of, a lot of, um, undoing to do, you know, so. Big man, it takes a big man to, to do that, and Mao wasn’t able to do it or willing to do it, so, anyway. Moving right along.
[00:37:37] Cameron: Queensland researchers create device that consumes carbon dioxide and generates
[00:37:41] Cameron: electricity. Now we, look, we see a lot of these stories.
[00:37:45] Cameron: There’s always these stories
[00:37:46] Steve: words out of my mouth. Yeah, you know, world beating tech that does something extraordinary that will change everything.
[00:37:54] Cameron: And we see a lot of it come out of Queensland, actually, University of Queensland’s PR department, I think, does a great job of getting these stories into the ABC. I reckon, once a month, I see something groundbreaking coming out, if it’s not cancer research, it’s something like this. But! Um, you know, and all of these things, you know, they need to be scaled up and there’s commercial issues and production issues, all those sorts of things.
[00:38:16] Cameron: But this is a really interesting story, I think. Um, basically they’ve built a tool, obviously small scale, that does basically what it says: an electrical generator that consumes carbon dioxide and can turn it into electricity. They call it the carbon-negative nanogenerator. Not surprisingly, it’s the work of Zhiyuan Wang and Zhihuang Zhang from
[00:38:43] Cameron: University of Queensland’s Dow Center for Sustainable Engineering Innovation.
[00:38:47] Cameron: So it’s a Chinese Australian or Australian Chinese invention.
[00:38:52] Steve: ever see that movie going around where it had, um, the maths champions from every country?
[00:38:58] Steve: And so
[00:38:59] Cameron: And they’re all
[00:38:59] Steve: picture, it’s amazing, I have a picture of the American, um, top five maths team, and it was five Chinese people. Australia, everyone, it was five Chinese people, every single one of the maths championships, every country in the world.
[00:39:09] Steve: It was
[00:39:12] Cameron: Well, I tell you what’s interesting to me about this is, um,
[00:39:17] Cameron: one of the conversations I’ve been having with
[00:39:20] Cameron: ChatGPT over the last, uh, week
[00:39:23] Steve: I love it, even in one of the conversations, this goes, this harkens back to one of our very first Futuristics, where you said you’re nice to your robots and
[00:39:31] Steve: you’re nice to the artist, and you say, I was having a conversation with ChatGPT, anyway, I just picked up on that,
[00:39:39] Cameron: I, I, well, I say that in conversation all the time and people laugh at me like, you’re crazy, but it is like,
[00:39:44] Steve: Well, you know you are a bit crazy, you know you’re not normal, you know that, right?
[00:39:48] Cameron: I’ve never aspired to be normal, Steve. Um, what did David Lee Roth once
[00:39:53] Cameron: say? I’d rather, um, I’d rather live my life like an art project than just chickening out and buying one. Um, so I’m, I’m wondering where the current state of nanotech research is. And, and, you know, I, I, I read, um, stuff. I, you know, it’s one of the sites that I look at every day, but I jumped into GPT and I said, what’s the current state?
[00:40:15] Cameron: I don’t know. And so it took me through some of the things that are going on. And then I asked him, what are the, what’s the potential impact of AI on the field in coming years? And then it took me through a bunch of things. And then I said, um, uh, assume that we have AGI within five years. What happens then to nanotech?
[00:40:38] Cameron: And it gave me a roadmap five to seven years where basically we end up with, uh, you know, robust nanofabricators and nanotech. So when I read these stories, like the UQ story about this thing, you know, in my head, I’m thinking, that’s great. What happens to these sorts of, uh, this sort of research when we throw AGI at it in the next five years?
[00:41:06] Cameron: And the idea of carbon capture, generating electricity
[00:41:12] Cameron: through carbon capture, To me is, uh, fascinating. I mean, we already generate
[00:41:17] Steve: fascinating, look, the one thing it did make me think when I read that, was I pivoted off,
[00:41:23] Steve: because I’m a non-believer that net zero is possible; even the word net is a ridiculous joke, uh, if you see how net is measured and created. Juice Media did an incredible video on net zero in Australia and what it means, and basically, if there’s a forest that you didn’t cut down,
[00:41:42] Steve: that was planned to be cut down, all of a sudden you get the carbon credits for it. There’s all these carbon credits which are absolute bullshit, where they’re counting things that have already been there, a forest that’s there, that didn’t get cut down, that might have, and there’s just
[00:41:59] Cameron: stops you from cutting down a forest. I
[00:42:01] Steve: No, no, but, but they’re counting it as a credit to offset.
[00:42:04] Steve: It’s already there. It already is breathing oxygen into the air, but they can’t, it’s like a double counting. You’re basically
[00:42:10] Steve: counting it because you didn’t chop it down. It’s not, it’s ridiculous. Anyway,
[00:42:15] Cameron: argue, but anyway, I don’t want to get you off topic. So keep going.
[00:42:18] Steve: yeah, but, um, so net zero, there’s a lot of chicanery that goes on that can get
[00:42:24] Steve: your, uh, carbon, so called emissions down.
[00:42:29] Steve: Um, and I don’t think that, I think that the way we live our lives, it is in any way possible to have a net zero world at all. I just don’t think it’s possible. And that’s not me saying I don’t believe the climate science, and it’s not me saying that we shouldn’t do everything we can to try. I just don’t think it’s possible with the lifestyles we’ve become accustomed to.
[00:42:48] Steve: I think that the way we do it, um, and I’m sort of, uh, you know, now disagreeing with myself, is new forms of technology which can capture carbon. Because I don’t see there being a way that we can actually get down to where we don’t have emissions. And it’s got to be something like this, with new technology.
[00:43:09] Steve: I guess, you know, nature captures carbon. All we need is a greater sense of biomimicry, where we learn to do what nature does, only quicker. And AI is probably the answer to that. Because what’s a tree? It’s captured carbon.
[00:43:22] Cameron: hmm.
[00:43:23] Steve: That’s what it is. It’s a long time, right?
[00:43:25] Cameron: that’s what, that’s what a human is too.
[00:43:27] Steve: Yeah, exactly.
[00:43:27] Cameron: Everything
[00:43:28] Steve: Everything needs captured carbon, right?
[00:43:30] Steve: But that’s what we’ve got to do. The idea that we can net zero it, um, without some sort of radical
[00:43:36] Steve: advancement, just with the technology that we’ve got today
[00:43:38] Steve: is just not going to happen. Or the idea that we can move away from creating emissions is just absolute folly.
[00:43:45] Cameron: I wrote a blog post, I guess, uh, in the last week or so, based on the conversation we had, uh, last time we did a show. Cause the thing that’s still annoying the hell out of me in all the media coverage of AI is nothing I see is talking about the scenario where, let’s say, Altman and Musk and Kurzweil and Hassabis and Jensen Huang, et cetera, et cetera, are right.
[00:44:20] Cameron: And we have a good chance of having AGI in the next five
[00:44:24] Steve: 2029, yeah, five years, right? Yep.
[00:44:26] Cameron: maybe sooner. And then Altman’s scenario that he talks about all the time is what happens when we have a million extra climate change researchers in the world that are virtual? What happens when we have a million scientists working on cancer research?
[00:44:46] Cameron: What happens when we have a million people working on, million scientists, PhD scientists working on nanotech? What happens when we have a million
[00:44:56] Cameron: new PhD researchers working on, uh, uh, reducing inequality in capitalism
[00:45:04] Steve: abundance. You would think that you’d end up in an abundance society where you have, uh, Bucky Fuller, you know, abundance style, create everything from nothing.
[00:45:13] Steve: Just reorganize everything at the molecular level till you get what you want. Don’t
[00:45:17] Cameron: so if it is, if it is, and I believe it is extremely likely that we’re going to crack AGI in the next five years, then, therefore, it is extremely likely that we’ll be able to scale up research into all of these things. Do you know how many active
[00:45:36] Cameron: scientists are working on climate change around the world right now?
[00:45:43] Cameron: It’s in the thousands.
[00:45:45] Steve: you asked me to guess.
[00:45:49] Cameron: It’s in the thousands.
[00:45:50] Cameron: If all of a sudden you multiply,
[00:45:52] Steve: off. I
[00:45:53] Cameron: add a million.
[00:45:54] Steve: was pretty close. I was within the thousands. There’s 8 billion people and I was
[00:45:58] Steve: within thousands. That’s how close I was, Cameron.
[00:46:01] Cameron: that’s why you get paid the big bucks.
[00:46:02] Steve: That is why I get the big bucks Let me tell
[00:46:06] Cameron: have a million, million AI agents working on this stuff. And then on top of that, working on nanotech, how much progress do we make on nanotech when we’ve got a million? And it’s not only a million researchers working on it, but they can run billions of virtual experiments simultaneously to come up with, uh, the shortlist of experimental scenarios that need to be run in a lab.
[00:46:36] Cameron: You know, we’ve run a billion simulations on this, here’s the thing to try to make this work in the lab. That, plus robots. I don’t know if you saw this, I didn’t put it in the notes, but Jensen Huang, CEO of NVIDIA, I saw a clip from an interview that he gave in the last week where he was saying you can buy a car today for $10,000 to $20,000, a reasonable
[00:47:03] Cameron: car. Within 10 years you’ll be able to buy a humanoid robot for $10,000 to $20,000, assuming any of us have jobs.
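[Editor’s note: the “run billions of virtual experiments, shortlist the winners for the lab” pattern Cameron sketches can be illustrated with a toy screening loop — everything here, including the scoring function, is a made-up stand-in for a real simulator, not anything from the show.]

```python
# Toy sketch of virtual-experiment screening: simulate many candidate
# parameter sets cheaply, then shortlist only the best few for
# physical testing in a lab.
import heapq
import random


def shortlist_experiments(n_simulations: int, k: int, score_fn, seed: int = 0):
    """Generate n_simulations random candidates and return the k best."""
    rng = random.Random(seed)
    candidates = ({"param": rng.uniform(0.0, 1.0)} for _ in range(n_simulations))
    # heapq.nlargest streams the generator, so memory stays O(k), not O(n).
    return heapq.nlargest(k, candidates, key=lambda c: score_fn(c["param"]))


# Stand-in "simulation": the closer the parameter is to 0.5, the better.
top = shortlist_experiments(100_000, 3, lambda p: -(p - 0.5) ** 2)
```

The point of the pattern is the funnel: the simulation step is cheap enough to run at enormous scale, and only the shortlist ever costs real lab time.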
[00:47:11] Steve: I will buy one tomorrow. The
[00:47:13] Cameron: Yeah, so you’ve got a world in the next five to ten years with AGI, millions and millions and millions of scientists running billions of virtual experiments on all of the big challenges and billions of humanoid robots in society.
[00:47:31] Cameron: What are we doing to prepare for that world? What do we have to do in the next ten years to prepare for that eventuality? You know, if you did a probability on
[00:47:45] Cameron: that as a likely scenario in the next 10 years, what would you give it? What’s your, what’s your P?
[00:47:54] Steve: probability of that occurring, humanoid robots within 10 years.
[00:47:59] Cameron: AGI in
[00:48:00] Cameron: five.
[00:48:01] Steve: robots for 10k, 20k. Car, cost of a car, let’s say. I’m
[00:48:06] Cameron: No, I’m saying in the next 10 years, we have AGI, millions of, millions of AGI scientists working on all the big problems and humanoid robots. Steve’s walked away to, uh, a visual
[00:48:21] Cameron: aid. I think. I’m not sure, he just stood up and walked away.
[00:48:27] Steve: back. I had A
[00:48:28] Steve: humanoid daughter at the front door.
[00:48:31] Cameron: Oh, I thought you’d gone to get a visual aid to
[00:48:33] Cameron: explain something.
[00:48:34] Steve: No, I wish I
[00:48:36] Cameron: Sound in a funnel, supported by a string, a slight push,
[00:48:40] Cameron: so too Cadbury Dairy Milk Chocolate contains a cup of milk in every block.
[00:48:47] Steve: glass and a half of full cream dairy milk in every block. So the probability of AGI, humanoid robots for the cost of a car, and what else? Just those two.
[00:48:58] Cameron: All those AGIs working on, millions of AGI scientists working on all the big
[00:49:02] Steve: within a decade, for me, I’m giving it P90. It’s 90%.
[00:49:07] Cameron: Me too.
[00:49:08] Steve: Yeah.
[00:49:09] Cameron: Outside, like my big fear at the moment, honest to God, my biggest fear right now, Is that we’re a,
[00:49:15] Cameron: I was going to say, cunt’s hair away from getting AGI. Sorry, am I not on headphones? me on headphones. Uh,
[00:49:25] Steve: all good.
[00:49:26] Cameron: we’re an ant’s dick away from getting AGI in
[00:49:28] Cameron: robots and we end up in World War III and we reset everything back by a million years.
[00:49:35] Cameron: Like we get this close.
[00:49:37] Steve: AGI is World War 3! You’re not seeing this!
[00:49:41] Cameron: No, no, and Trump,
[00:49:43] Steve: All
[00:49:44] Cameron: Trump, and Putin, and, uh, Xi, land us all in World War III, and that’s it. Like, in my lifetime, I got this close to getting AGI and robots,
[00:49:57] Cameron: and nanotech, and then, I get nuclear.
[00:50:02] Steve: went down. It was funny, I
[00:50:05] Cameron: That’s my big
[00:50:06] Cameron: fear right
[00:50:06] Steve: Terminator series. I just took
[00:50:08] Steve: him all through it on the weekend. I think
[00:50:09] Cameron: really? How old is he?
[00:50:11] Steve: Twelve.
[00:50:13] Cameron: I’m dying to be able to expose Fox to those. I mean, I think I showed my boys those films when they were like five and
[00:50:19] Cameron: probably
[00:50:20] Steve: careful. You’ve got to get them at a level where they’re old enough to be interested. If you show them too early,
[00:50:25] Steve: it can be not quite there. But, um, but you’re right, there is that risk. It’s kind of a little bit like the Fermi Paradox. They say, why haven’t we found another civilization? And one of the theories is that we get close to a point of having exponential, uh, superintelligence.
[00:50:44] Steve: And then we destroy ourselves before we ever get there. That’s one of the parts of the Fermi Paradox, isn’t it?
[00:50:49] Cameron: Yeah. There’s a, there’s a window of opportunity
[00:50:53] Cameron: between when humans develop nuclear weapons and develop AI. And the question is, do they, do they wipe
[00:51:01] Steve: And interstellar travel capabilities. Yeah. That, that, those two?
[00:51:05] Cameron: What, how big is that window? Or they develop, you know, they have their, their
[00:51:10] Cameron: industrial revolution, and then they destroy the environment and they
[00:51:16] Cameron: wipe themselves out.
[00:51:17] Cameron: Are you watching The Three-Body Problem on Netflix?
[00:51:19] Steve: I am. And I watched at
[00:51:21] Steve: lunchtime today. I’m halfway through episode two. Don’t ruin anything because I’m loving it. You dare ruin a thing.
[00:51:27] Cameron: They have a different version of the answer to the Fermi paradox, but yeah. Well,
[00:51:32] Cameron: anyway, um, Moving along, because we’re running out of time, um,
[00:51:37] Steve: Let’s, let’s, let’s zip through the last little bit.
[00:51:40] Cameron: you want to talk about, uh,
[00:51:41] Steve: Yeah, well, I just wanted to be, yeah, I mean, obviously this is relevant to our Australian audience, the geopolitical challenges which go with
[00:51:49] Steve: internet based global media. It’s just, it’s almost cute that Australia has come out and said, hey, you, don’t you dare put this stuff on your platform.
[00:51:59] Steve: Uh, they’ve threatened to fine X, and I don’t know how they would implement it. I think the fines are potentially 750k a day. It’s, it’s quite heavy numbers. Uh, I just don’t know how our government can have any power on anything other than just turning off that domain in this country. I mean, obviously you’d be able to get around it if you wanted to.
[00:52:23] Steve: Uh, with VPNs, but I just don’t know that he would play ball at all, and he says he’s not going to, and I just don’t think that we have any power in that way other than turning things off. It feels like that’s the only way you can have censorship in real terms in a country. Especially with, if it’s with a private firm.
[00:52:43] Cameron: Tony and I, and Ray, are going to spend some time on this story on The Bullshit Filter that we’re recording on Friday, because, you know, um, it is a question about, um, what level of, uh, censorship versus what level of freedom of speech should people have? And how do you manage that in a global media landscape?
[00:53:08] Cameron: By what authority can the Australian government force a US-based company to not talk about stuff? And vice versa, by what authority can the US government threaten an Australian citizen, Julian Assange, with, uh, an eternity in jail for publishing content that, uh, made them look bad? So, uh, you know, it’s this issue of global
[00:53:35] Cameron: media reach and governments not liking that, um, and having to just suck it up, I think, to a large
[00:53:41] Steve: Well, it just seems to me, it was real simple. It’s like you either tolerate it or you turn it off. That’s the only sovereign opportunity you have. Cause I don’t think you can have any real economic or censorship ability on something which is global, where the traditional media rules are out the window, um, given.
[00:53:56] Cameron: But we can’t turn it off because then how do we criticize China?
[00:54:03] Steve: I don’t know.
[00:54:05] Cameron: And the US are trying to do it with TikTok, with exactly the same sort of justification: we don’t like what they’re posting. I mean, the whole, oh, they’re stealing our data, everyone knows that’s bullshit. It’s really that they don’t like
[00:54:17] Cameron: the fact that, uh, there is content on TikTok which
[00:54:20] Cameron: is critical of Israel, critical of the US government, critical of their support of Ukraine,
[00:54:25] Steve: And also
[00:54:26] Steve: that the, the, the biggest growth social platform is not owned by an American corporation and they have their own corporate
[00:54:32] Steve: interests.
[00:54:34] Cameron: Hmm.
[00:54:34] Steve: Lobbying
[00:54:35] Steve: that.
[00:54:36] Cameron: taking
[00:54:36] Cameron: advertising dollars out of the mouths of Mark Zuckerberg’s children.
[00:54:42] Steve: Okay.
[00:54:43] Cameron: Deep dive, Steve. What do you want to do?
[00:54:45] Steve: Well, I don’t know if we’re going to go too deep. We’re going to go deep into the shallow end and that’s an oxymoron, but I was just wondering now that
[00:54:52] Steve: AI, and especially with Meta, and we already discussed it, whether AI is going to become, generative AI is going to become degenerative AI, given that there is so much AI on the web that it sort of loses its value and becomes degenerative.
[00:55:05] Steve: Sort of like a, you know, a quasi circular reference of content, which feeds on content, which is created by AI, which creates more AI content. And the AI learns that, especially now that 3B and people will start trying to make their lives look a lot better than they are on Instagram by generating photos of where they’re not.
[00:55:26] Steve: And yeah, I just wonder if we’re getting a little bit closer to dead internet theory. You know, the idea that the internet, nothing is real, nothing on there matters, and we almost extricate ourselves from that because it just becomes awash with things that are generated, you know, whether it’s music, whether it’s video, whether it’s content, whether it’s, like, Microsoft’s, uh, VASA, everything we’re talking about right now is fake versions of things.
[00:55:54] Steve: Like, everything. And, and at what point does it, does the internet itself become like social media, where it’s this giant bin fire of nothing being anything that is of value or has any creation that is interesting or nuanced or human generated. I’ll just leave that as open ended. Maybe we don’t have the answer.
[00:56:14] Steve: Maybe that’s something for the listeners to postulate.
[00:56:19] Cameron: look, it’s, it’s a genuine concern, but I still cling to the maybe utopian view that the AI tools that we’re going to end up with in the next couple of years will help us curate and filter and navigate our way through the undeniable morass that will, that already exists and will continue to exist.
[00:56:46] Cameron: Um, and, you know, if I say to my on-device AI personal assistant, you know, just automatically filter out all of the fake content and all of the Murdoch media content, which is pretty much the same thing, uh, just give me, give me stories with a, uh, a probability-of-factual-content rating of eight and a half and above.
[00:57:17] Cameron: And, uh, you know, it just feeds me highly credible stuff if I want it.
[00:57:25] Cameron: Now, how many people are going to ask their AIs to give them that versus, hey, just give me stuff that,
[00:57:30] Cameron: that exists in my, uh, bubble. Yeah,
[00:57:33] Steve: Give me stuff I like.
[00:57:35] Cameron: Yeah.
[00:57:37] Cameron: Stuff that makes me feel good about
[00:57:38] Steve: stuff that makes me feel good and keeps me here, or feel really bad. So long as it’s on the edge there. It’s either going to make me feel really good or really bad.
[00:57:45] Cameron: yeah.
[00:57:46] Cameron: yeah,
[00:57:46] Steve: Do not middle-ground me.
[00:57:49] Cameron: You know, I’ve been saying for decades, you know, since the early blogging days, um, you know, we’re now in an era where you need to program your own media, right?
[00:57:57] Cameron: You need to choose the blogs you read, the Twitter feeds you read, the podcasts that you listen to, um, the TV shows that you watch, versus how we grew up in the 70s and 80s, where you just got fed one radio station, three TV stations, or two TV stations, one newspaper. Uh, and, and it’s a, and it’s a learned skill, I think, for humans to have to program your own media, particularly, you know, in the 20th century, we were taught you don’t have to do that when media was created, when broadcast media was created, um, and large newspapers, you know, before that, everybody had their own newspaper.
[00:58:34] Cameron: At the beginning, you know, the late 19th century, early 20th century, there were hundreds and hundreds and hundreds of newspapers. Everyone had a printing press, every union, every business, everyone had their own printing press. And then there was this consolidation, and the advertising dollars, the price of paper went up, the price of ink went up, to sort of crush the, uh, non-capitalist, uh, newspapers.
[00:58:59] Cameron: We ended up in this world where a very small group of people had control over the vast majority of what you read, watched, and listened to. Uh, radio, television. Record labels, TV, film, newspapers, book publishing, etc. Magazines. And then the internet came along and we thought, hooray, revolution, and then the big capitalist interest grabbed most of that as well.
[00:59:22] Cameron: You can still write a blog, but no one’s going to find it unless Google shows it to you, right? So we ended up in the same sort of situation. The, the big, the utopian version of AI is where it becomes the moderator for us. And it, and it, and it gives us what we want, but also hopefully doesn’t dumb, doesn’t let us dumb ourselves down.
[00:59:44] Cameron: But I don’t know, man. I, I, you know, I have a very low degree of, um, faith in humans to use it responsibly for
[00:59:58] Cameron: their own brain development. But that’s maybe, maybe it’s a 20th century generation thing. Maybe our kids who
[01:00:07] Cameron: grow up in this world will be better. You know, that’s always been my hope, is that our generations struggle with this stuff because we didn’t develop the ability to think for ourselves at a young enough age,
[01:00:19] Cameron: that, that, you know, our
[01:00:21] Cameron: kids will grow up smart enough to, um, protect their brains.
[01:00:29] Steve: You’ve got to see your mistakes too in the next generation. You grow up and you see your parents and say, when I grow up, I’m not going to be like this. You know, if they’re poor, I want to be rich. Or if they’re, you know, not loving, I’m going to be loving with my kids, whatever, whatever it is.
[01:00:41] Cameron: My parents were chain smokers. You know, my, my mom
[01:00:46] Cameron: quit 25 years ago, 30 years ago. My dad didn’t and died at
[01:00:50] Cameron: 52 of lung cancer, but I grew up going, I’m never going to smoke, because I saw my parents chain smoking and I hated growing
[01:00:58] Cameron: up with
[01:00:58] Steve: Yeah, exactly.
[01:00:59] Cameron: cigarette smoke, right? Yeah.
[01:01:01] Steve: Yeah. So you tend to learn, um, the opposite often. It takes a generation or two, I think, but I mean, but there is also something where you can see human frailty in something like the supermarket, right? Because what I see is the same patterns everywhere.
[01:01:16] Steve: You go down to the supermarket and there’s three aisles with fresh food and fresh meat and vegetables and cheese, and there’s, you know, eight, ten aisles filled with junk. And you get the choice of which aisle to go down and what to buy, and the internet’s the same choice. You can get broccoli for the brain or you can buy popcorn.
[01:01:35] Steve: It’s up to you, right?
[01:01:39] Cameron: Yeah, and it’s a skill, like, I talk to Chrissy about this a lot, like, I know 15 years ago, I was spending a lot of time on Twitter.
[01:01:48] Cameron: And then after that, a lot of time on Facebook, and I know Chrissy did as well. And now, I mean, I haven’t been on Twitter much for, for years and years. I go on Facebook only when I need to, you know, when one of the communities that I run for my podcast, I need to go and post stuff or check stuff, and pretty much stay off of it outside of that.
[01:02:06] Cameron: Or my Kung Fu student community or the school community, whatever it is. We learned, you know, we, we got sucked in with the dopamine mining of the tools when they first came out. And then we felt really disgusted by that after a period of time and we weaned ourselves off of it. Like, our brains adapted to that relatively quickly.
[01:02:27] Cameron: I think in, in terms of human development and hopefully our kids, you know, won’t get sucked in by that stuff as much
[01:02:36] Steve: I actually genuinely believe, and the only reason that I
[01:02:40] Steve: use the social tools now is a couple of people that I am friends with and I do back channel chatting, and it’s really just a way to stay connected. It’s like a phone call used to be when you were a teenager, you send each other a meme or something interesting or smart or funny or whatever that, and you send different things to different people on whichever channel you’re on with them, so I have that, and then the only other stuff I do on the internet really is to learn things for my work or to promote myself for my work.
[01:03:04] Steve: If I was, you know, a trillionaire and had a heap of money and didn’t have to work, I promise you, I wouldn’t be on any of those forums other than just reading and sharing information for people who want it. I don’t, I’m not on there for any of that dopamine hit. I mean, there is a dopamine element when I put something up for work and a lot of people like it, but I’m only caring about that because it has an economic impact, not just for, oh, I feel good about myself because a lot of people
[01:03:27] Steve: like it. Yeah.
[01:03:30] Cameron: Technology time warps, Steve. Let’s, let’s move
[01:03:32] Steve: This is a quick one. Yeah. So look, we remember when Yahoo was the
[01:03:39] Steve: internet, the front page of the internet. And the idea for a really long time was, we want to build a website that has everything on it and keep everyone there. And then we sort of went away from that, the Google model, send someone to where they need to go, social media links sending everyone across the web, even though they had tentacles that were power and platforms.
[01:03:58] Steve: Feels like we’re going back to that model where everyone is trying to suck an audience into a singular platform. And I thought that the meta AI was a classic example of trying to bundle again after the web was sort of unbundled. It’s re bundling. And it feels a little bit like that’s coming again, that all of the big apps are trying to get everyone into the one space.
[01:04:24] Steve: I’ve just noticed that Google are doing advertising around the web and on TV saying, come to the Google app for all of your internet-y needs. You want to do search here, find images, do translation, get a recipe; everything you need is right here in Google. Now, I dunno if that’s a response to OpenAI and their, their lagging in AI.
[01:04:44] Steve: We talked about Meta saying, ask me anything, no need to leave Instagram. Here, we’ve got all the answers, everything you need right here, no need. And that’s what it says on their page where they’ve announced the Meta AI. You won’t need to go to other websites to find things. It’s all inside here. And I just wonder if we’re going back to the past.
[01:05:03] Steve: The tech time warp to the, the idea that brands are now trying to keep you in that one spot rather than get you using it as part of a portfolio of, of, uh, internet sites, a little bit back to the way it was, and there’s this pattern of bundling and unbundling, bundling and unbundling, which happens with tech.
[01:05:21] Cameron: Is that new though? I mean, most people get their news from Facebook. Um, which is what the Australian news publishers are fighting with at the moment. Um, you know, Facebook, you know, bought Instagram so they didn’t have to leave to go and post their photos and see their photos. They’ve got Reels to stop people from going to TikTok.
[01:05:42] Cameron: I mean, they have already spent 20 years trying to keep you in the Facebook walled garden as much
[01:05:51] Steve: Feels like there, there, there was, but it felt like there was an oligopoly of kind of like, these are the social guys, these are the search guys, and now everyone is just demarcating, which I think is good in a way, because it’s increasing competition where they’re stomping into each other’s ground a little bit more, where they’re going to do their own thing.
[01:06:10] Steve: I mean, It’s interesting that Google is ensconced inside Apple and pays, you know, billions of dollars every year, 10, 15 billion dollars, whatever it is, to be the default search. But I just feel like we’re going to have an end, uh, enter a period of hyper competition where all big tech start competing with each other more head on and AI is the start of that.
[01:06:31] Steve: So that was all.
[01:06:32] Cameron: Well, they’re certainly competing with each other for buying GPUs. I mean, that’s the, what’s the big thing right now, right? How many GPUs can you get? The other thing I had was the futurist forecast
[01:06:43] Cameron: and I was going to talk more about nanotech, but I’ve probably already shot my nanotech load. I mean, just quickly though, you know, I said I had this
[01:06:51] Steve: You can’t say you’ve shot your nanotech load. We have really R-rated the Futuristic today. Thank you, Mr.
[01:06:58] Steve: Cameron Reilly, the OG of podcasts. It is not Joe Rogan, it is Mr. CR up in Brisbane, Australia.
[01:07:08] Cameron: I was gonna, uh, uh, well, we’ll do this. I’ll just, I’ll just wrap this up. I’ll let GPT talk us through it. So I asked her to give me a theoretical roadmap for getting to nanofabricators in the home. So imagine a device the size of a washing machine or a dishwasher, in the home, that can build nearly all of your daily requirements, including food, clothing, technology, electronics, uh, you know, you need a vase, you need a glass, all of your whitegoods stuff. And then you’ll have your robot that will assemble anything that can’t be pre-built in the nanofabricator; the nanofabricator will build the components for you, and then you’ll have your robot assemble it for you while you sleep.
[01:08:07] Cameron: I asked GPT for a roadmap, assuming we have AGI within five years. Let me play GPT’s response and we can go out if you have any comments after that.
[01:08:16] ChatGPT: Assuming the development of Artificial General Intelligence, AGI, within the next five years provides a tremendous boost, here’s a theoretical roadmap to achieve the integration of nanofabricators in homes.
[01:08:28] ChatGPT: Phase one, foundational research and AGI integration. Years one to two. Enhanced research. Leverage AGI to accelerate nanotechnology research focusing on increasing precision in molecular manipulation and reducing production costs. Safety protocols. Develop robust safety protocols for handling nanoscale materials, especially those that might be toxic or reactive.
[01:08:52] ChatGPT: AGI driven design. Use AGI to optimize the design of nanofabricators, making them more efficient and easier to use. Material Science Advancements. With AGI’s ability to process vast amounts of data, accelerate the discovery of new materials that could be used safely and effectively in household settings. Phase 2, Prototyping and Initial Regulation, Years 2 to 4. Prototype Development. Build prototypes of home use nanofabs focusing on user friendly interfaces, reliability, and safety. Regulatory frameworks. Work with regulatory bodies to ensure that nanofabs are safe for home use. Establishing standards for emissions, waste management, and operational safety.
[01:09:36] ChatGPT: Public Demonstrations. Begin public demonstrations to showcase the capabilities and safety of home nanofabs, addressing public concerns and gathering feedback. AGI Enhanced Simulation. Utilize AGI to simulate different home use scenarios to predict potential failures or misuse, improving system resilience. Phase 3. Market Preparation and Education. Years 4 to 5. Educational campaigns. Launch educational initiatives to inform the public about the benefits and limitations of nanofabs, preparing them for responsible use. Market testing. Conduct extensive market testing to understand consumer needs and fears, refining the product according to feedback.
[01:10:18] ChatGPT: Supply Chain Development. Establish a supply chain for the raw materials needed for nanofabrication, ensuring they are sustainable and readily available. Phase 4. Commercialization and Scaling. Years 5 7. Product Launch. Introduce nanofabs to the market with robust support systems in place, including user manuals, online tutorials, and customer service.
[01:10:42] ChatGPT: Scalability Improvements. Continue improving the scalability of nanofab production to lower costs and expand market access.
[01:10:49] Cameron: Okay, so years five to seven, he sort of maps a fairly, you know, motherhood roadmap for how we’re going to get there. The, the, the big issues I have with this, um, two
[01:11:00] Cameron: things is in terms of the, um, scalability of it, I keep saying to people, once you have one nanofabricator, you
[01:11:09] Steve: I was going to say, don’t you build one and then that builds the next one and builds the
[01:11:12] Steve: next one, you get self replication, right?
[01:11:14] Cameron: Yeah, it becomes self replicating.
[01:11:16] Steve: It’s a chain, isn’t it? Isn’t most of the, help me out here Cam, I can’t remember, it’s grade 9 science, isn’t like 99 percent of the universe made out of three things at the atomic level?
[01:11:27] Steve: Can’t you just, like, take anything and convert it, put dirt in the fab and reconfigure it?
[01:11:34] Cameron: Five things. I mean, nearly everything is made out of carbon, oxygen, hydrogen, nitrogen, and, you know, there’s some other trace elements as well, some other things there. But nearly everything is made out of those things, right? So you think about, um, your crapper. Your sewage system, a lot of, a lot of waste products, anything that’s a waste product.
[01:11:56] Cameron: You know, we, we, we, you know, we have all of these, uh, like, uh, recycling plants that don’t know what to do with old clothes. You know, we used to send them overseas and now they’ve got too much overseas. We can’t, even India can’t deal with all of our secondhand clothing. Secondhand clothing, boom, you just drop everything back into the nanofabricator.
[01:12:18] Cameron: It deconstructs them into carbon, oxygen, nitrogen, hydrogen, and whatever else, and then they go into
[01:12:24] Cameron: containers underneath it, and then the next thing you want to build, boom, everything in the world is just reconfigured atoms.
[01:12:32] Steve: Zara, iPhone,
[01:12:35] Cameron: well, see, when you get into, you know, those sorts of electronics, then you get your copper, you get, uh, boron, you
[01:12:43] Steve: Tungsten, one of my favorites.
[01:12:45] Cameron: Tungsten, yes. Yes, there are rare
[01:12:48] Steve: Lithium,
[01:12:49] Cameron: there are rare earth minerals, and, and
[01:12:53] Steve: Borax,
[01:12:54] Cameron: earth minerals that are harder. And some of those can’t be made out of the raw elements easily. You need, you need a, a sun to go supernova to make a lot of, uh, uh, heavier elements. You also need it to make carbon, but we have a lot of carbon. And this is one of the reasons Elon wants to have rockets mining asteroids and mining Mars and mining
[01:13:16] Cameron: places like that.
[01:13:16] Cameron: Right. So we. Get, get access to rare earth materials or rare Mars materials as they might be. Or maybe they’re not rare on Mars.
[01:13:23] Steve: See, now I like it, Rare Mars Materials, now that is a great name for a post
[01:13:29] Steve: punk band, Rare Mars Materials. And you know what their first album should be called? Petrol Station Coffee. Like, I feel like that’s the kind of band you and I can play in, before we plug out,
[01:13:43] Cameron: All right. Anyway, nanofabs in 10 years. That’s my dream. Nanofabs and robots within 10 years. And again, I’ve said before,
[01:13:52] Cameron: what are you going to make? You’ve got a nanofab, you can make anything, what’s the first thing you make?
[01:13:52] Steve: You dirty, dirty man. I see where you’re going with this straight away. I’ll just go
[01:14:07] Cameron: yeah, yeah. Uh, yeah. Apart from sex toys. I don’t know. What are you, what are you going for?
[01:14:15] Steve: with an Earl Grey. I’ll start with my boy from
[01:14:17] Cameron: Right. An Earl Grey.
[01:14:20] Cameron: Earl Grey.
[01:14:22] Cameron: Yeah,
[01:14:23] Cameron: Just to pay homage.
[01:14:25] Cameron: but what does the world look like, Mr. Futurist, when we have nanofabricators in every home and a robot in every home? What does
[01:14:34] Cameron: capitalism look like? What does society
[01:14:36] Steve: The end of capitalism as we know it, because you’ve basically got abundance, and capitalism is about scarcity. And about accumulating resources. Uh, because, you know, we live in a world of scarcity and limited resources, and this theoretically removes the limitations. Although I think the one thing that we will never have enough of, the limitation again, always go back to the future, back to the past again, where it’s land and space.
[01:15:03] Steve: You know, and so while everyone might be able to have something, land and space would become the ultimate scarcity again, I think, in some way. You know, we go away from a consumer goods society, where that’s the ultimate in consumption, to space, and I mean space on earth, so I don’t know. But it certainly is the end of capitalism, I know that much.
[01:15:25] Steve: We just build up.
[01:15:27] Cameron: So, I’m going to call this episode P Utopia. Forget P Doom. What’s P Utopia?
[01:15:33] Steve: Like it. We should get that going. Let’s get that going. I think that’s something we need to ask some of the leading AI researchers. What is P Utopia? What’s your P Utopia? If your P Doom is 5 and 10 and 20 percent, you know, Eliezer Yudkowsky and all these guys, what’s their P Utopia? If we do get to AGI, solve the energy problem, health problems, the climate problems, scarcity problems of food around the world, recycling, nanofabs,
[01:16:01] Steve: P Utopia.
[01:16:03] Cameron: P Utopia. Thank you, Steve. It was fun
[01:16:07] Steve: Thank you, Cameron. Tremendous. Tremendous. That was good. Really good chat
[01:16:12] Steve: again.
This week on the Futuristic podcast – Neuralink patient playing chess, Claude Opus outperforms GPT-4, Nvidia announces robot framework “Project GR00T” and Blackwell platform, Microsoft and OpenAI plan supercomputer project worth $100 billion called ‘Stargate’, Amazon audiobooks using AI, OpenAI voice cloning tool, Apple is in talks to build Google’s Gemini AI engine into the iPhone, Stability AI and Inflection (Pi) CEOs quit and join Microsoft, Altman / Jensen predict AGI in about five years, currency is a technology, and we ask – what does the world look like with AGI?
FUT 23
[00:00:00] Cameron: Mr. Sammartino, Mr. Bob Sammartino. That’s the best intro I’ve still got! It’s
[00:00:14] Cameron: I was listening to that just before you came on. Futuristic, Episode 23, 1st of April, April Fool’s Day, and here we are, the two greatest fools,
[00:00:24] Cameron: Cameron Reilly and Steve Sammartino, to talk
[00:00:26] Cameron: about the future. Oh my god, Steve, it has been a crazy, crazy couple of weeks since we last recorded.
[00:00:35] Steve: It’s getting faster. Acceleration. It really,
[00:00:38] Cameron: it really, it is, yeah. And things building on things, building on things,
[00:00:43] Cameron: um, But before we get into the news of the last week or two, tell me something. Of note that you did that’s futuristic in the last couple of weeks, Steve O.
[00:00:55] Steve: Yeah, I’ve been making clones of myself.
[00:00:58] Cameron: Oh my God. Like one Steve Sammartino is already too much. How can the
[00:01:03] Cameron: world handle more
[00:01:05] Steve: Can you see, can you see there’s one of me in the picture up the back? An old school clone of me.
[00:01:09] Cameron: I
[00:01:10] Steve: There’s a cushion. I got that as a
[00:01:11] Steve: gift. At a keynote that I did
[00:01:13] Steve: once where they made cushions with the person’s face on all the speakers. And I did like it. It’s, it’s, it’s, uh, yeah,
[00:01:21] Steve: it’s, uh, interesting. So I’ve, um, been on stage, uh, showing my Steve Sammartino AI, where we ask me a question, then we ask the AI a question just to see what the different answers are.
[00:01:35] Steve: And they’re pretty close. They’re pretty good, actually. And, and then I’ve been saying, well, the next stage is to make my clone of, of me and get myself to answer it. So I’ve been mucking around with HeyGen, which is pretty good. And that can not only make a clone of yourself, where you just type in what you want it to say after a few minutes of video.
[00:01:53] Steve: It’s not perfect around the mouth, but it’ll make a version of you saying something, you give it some dialogue and it’ll just say it as a video of you. Um, and I think it’s really interesting how we can use language now. Anyone has access to global markets. I mean, it’s another phase of globalization, which occurs because you can.
[00:02:16] Steve: Put yourself out there in a multitude of languages, and I know MrBeast does this with his videos, he has like Spanish versions of it, and you think about the power languages out there, the exposure that you can have to global markets if you’re any form of content creator now, which was once the bastion of You know, major studios, you know, doing VOs, you know, there’s a Tom Cruise actor that always does Tom Cruise in Italian and so on, but now all of us have access to that, and not just Netflix, we can expand our reach, uh, through language, and you know, the modern day Babelfish has arrived.
[00:02:52] Cameron: So I haven’t played with HeyGen for a while. So go into more detail about what you’ve actually been doing with this
[00:02:57] Steve: Yeah, so, I made a version of me, uh, in my voice. And speaking to it so
[00:03:04] Steve: it can do two things now. And this is the paid version and they’ve just had an upgrade in the last month. So it’s a lot better than the pre the previous one. I did a Spanish one for a surfing keynote that I was doing down at, um, urban surf, cause there’s a lot of, uh, a lot of Brazilian and, and, uh, you know, Spanish and all that kind of stuff.
[00:03:21] Steve: So I did mine, it was actually, sorry, it was Portuguese. I did it in Portuguese, not Spanish, um, for the Brazilian surfers and sent a message out. And it was pretty cool. Um, but it was a bit glitchy; a lot better now, a lot better now in your own language. And just a couple of minutes of talking on video and it’s got your footprint.
[00:03:37] Steve: It’s got your, uh, your bio prints and it can make a version of you. And
[00:03:41] Cameron: It’s a video? It’ll make a video of you saying whatever you want it to say. In your
[00:03:47] Steve: ways. So I can say, Hi, it’s Steve. Welcome to the Futuristic. Great to have you today. And
[00:03:53] Steve: it’ll upload
[00:03:53] Steve: that. And it used to take about a week before you’d get it back.
[00:03:57] Cameron: Oh, wow. Cause
[00:03:58] Steve: I don’t, I’m not sure if it’s there, you know, the demand on their servers or what have you.
[00:04:03] Steve: Now you get it back pretty quick, you know, within an hour and you can do a version of you and you can choose your language. They’ve got like about 12 languages. And it’ll say it back in the language, or you can do one where it takes your video in your primary language and you type it in and then it’ll do it.
[00:04:21] Cameron: Mm
[00:04:21] Steve: kind of two options, pretty cool.
[00:04:23] Cameron: Wow, that’s incredible stuff. Um, Well, for me, I’ve been doing, I know I say this every episode, but a lot more coding, um, you know, I know I’ve said this before, but the more coding you do, the more coding ideas you have. I had this situation, I was trying, I’ve got this long script for QAV, my investing show, that’s doing regression testing.
[00:04:47] Cameron: One of our listeners actually built it. It’s about 2,600 lines of Python code, and I wanted to add a bunch of logging into it, and I’m not going to do that myself, so I tried to upload it first into GPT-4, then into Claude Opus, which I’ll talk more about, uh, just to say, insert a bunch of logging into this. It was getting some things wrong and I wanted to find out why.
[00:05:10] Cameron: But of course the, the files were too big for them, too, too many, um, uh, tokens for them to process, and so you have to break it down into smaller chunks and upload them sequentially. So I just, I just wrote a script. I said to, I think it was Claude Opus, which I’ve been using for my coding lately, I said, write me a Python script that’ll take a large file, a large file of code, split it up into smaller sizes in some sort of rational way, that I can then upload to an AI tool to do stuff.
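The kind of step-by-step logging he’s describing can be sketched with Python’s standard logging module. The scoring function and its field names below are invented for illustration; this is not the listener’s actual QAV script:

```python
# Hypothetical sketch of inserting logging into a long analysis script,
# using Python's standard logging module. The scoring function and its
# fields are made up for illustration, not taken from the real QAV code.
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("qav")
log.setLevel(logging.DEBUG)

def score_stock(code: str, price: float, intrinsic_value: float) -> float:
    """Toy scoring step: value-to-price ratio, logged so regressions show up."""
    score = intrinsic_value / price
    log.debug("scored %s: price=%.2f value=%.2f score=%.3f",
              code, price, intrinsic_value, score)
    return score
```

A `log.debug` line like this at each transformation lets you diff two runs of the script and see exactly where the numbers start to diverge.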
[00:05:44] Cameron: And it goes, yep, no worries. So what we’ll do is we’ll just split it between functions where there’s a, uh, an empty line, a blank line in between functions. We’ll split it there. We’ll call it chunk one, chunk two, chunk three, chunk four. So I just then, it just wrote me a Python script to break this file up.
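A minimal sketch of the splitter Cameron describes, splitting at blank lines between top-level blocks and capping each chunk at a size budget, plus the matching merge. This is illustrative code, not the script Claude actually produced, and `max_chars` stands in for a real token limit:

```python
# Sketch of the chunking approach described above: split a big source
# file at blank lines (the gaps between top-level functions) so each
# chunk stays under a size budget, then merge the chunks back losslessly.
# max_chars is a stand-in for a model's real token limit.
def split_source(text: str, max_chars: int = 8000) -> list[str]:
    blocks = text.split("\n\n")  # blank lines separate top-level blocks
    chunks: list[str] = []
    current = None
    for block in blocks:
        candidate = block if current is None else current + "\n\n" + block
        if current is not None and len(candidate) > max_chars:
            chunks.append(current)  # close off chunk one, chunk two, ...
            current = block
        else:
            current = candidate
    if current is not None:
        chunks.append(current)
    return chunks

def merge_chunks(chunks: list[str]) -> str:
    # Rejoining with the same separator reproduces the original file.
    return "\n\n".join(chunks)
```

Because the split and the merge use the same separator, `merge_chunks(split_source(text))` gives back the original text byte for byte.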
[00:05:58] Cameron: Then I said, okay, so now write me a file, uh, write me some code that will merge all of the chunks back together again. Yep, no worries, boom, there you go. So, you know, you can just do this stuff. I had another one where I was dragging a bunch of photos out of Apple’s Photos app onto my backup drive so I could stream them to my TV, ’cause LG smart TVs aren’t that smart.
[00:06:22] Cameron: And, uh, a lot of the files ended up as HEIC or HEIF files, which the TV couldn’t read. And I know I could probably get an app, or I could probably open them all up in Preview or something like that, or an Adobe tool, and convert them. But, you know, because I’ve got a coding mindset on, I just said to, uh, Claude, write me a script that’ll convert a directory of HEIC images to JPEGs.
[00:06:49] Cameron: Yeah, no problem. So here you go, here’s the script, point it at the directory, a
[00:06:53] Cameron: minute later it’s converted all of them to JPEGs for me. Just, it’s like, it’s faster than going and looking for a tool to do it, and cheaper, because most of those tools you’re going to have to pay 10 bucks for, 20 bucks
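One way a converter like that might look in Python. This version leans on Pillow plus the third-party pillow-heif package, which registers a HEIC/HEIF decoder with Pillow; the script Claude actually generated isn’t shown in the episode, so treat this as a hedged reconstruction:

```python
# Hypothetical reconstruction of the HEIC-to-JPEG batch converter.
# Requires the third-party packages Pillow and pillow-heif; the real
# script Claude produced isn't shown in the episode.
from pathlib import Path

def jpeg_name(path: Path) -> Path:
    """Map photo.heic -> photo.jpg alongside the original file."""
    return path.with_suffix(".jpg")

def convert_directory(directory: str) -> int:
    """Convert every .heic/.heif file in a directory; returns the count."""
    # Imported here so the pure path logic above works without the
    # imaging packages installed.
    from PIL import Image
    from pillow_heif import register_heif_opener

    register_heif_opener()  # teaches Pillow to open HEIC/HEIF files
    converted = 0
    for path in sorted(Path(directory).iterdir()):
        if path.suffix.lower() not in {".heic", ".heif"}:
            continue
        Image.open(path).convert("RGB").save(jpeg_name(path), "JPEG")
        converted += 1
    return converted
```

Point `convert_directory` at the backup folder and the JPEGs land next to the originals, which is all a smart TV needs.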
[00:07:05] Steve: So many of these tools, um, that are, you know, let’s call them plugins
[00:07:09] Steve: or, you know, AI is built on APIs. They’re all dead. I mean, they’re already
[00:07:13] Steve: dead. You know, the funeral just hasn’t happened
[00:07:15] Steve: yet, you know, but they’re
[00:07:18] Cameron: Because the next step will be, I won’t have to write code, I’ll just have to say, Hey, can you
[00:07:22] Cameron: do this for me, and it’ll just do it.
[00:07:24] Steve: Yeah.
[00:07:25] Cameron: I didn’t put this in the
[00:07:27] Cameron: notes, but did you see the Open Interpreter 01 launch, uh, a week or so ago?
[00:07:34] Steve: The which one? The, I looked at all
[00:07:35] Cameron: the little, the little ball that you hold and
[00:07:38] Steve: yeah, yeah, yeah, yeah, yeah. I did see that.
[00:07:41] Cameron: So for people that haven’t seen it, Open Interpreter is a startup out of the US. They hit my radar sometime in the last six to 12 months, cause they had a, A version of an AI that you could download and install on your computer and it will basically integrate with your operating system and run your computer.
[00:08:00] Cameron: I couldn’t get it to work then, I couldn’t, I tried it again last week, I couldn’t get it to work. But they launched their first product about a week ago, they call it the Open Interpreter 01. And it’s uh, A little device that you carry around with you and it’ll connect to your phone, I think, or it’ll connect to Wi Fi if you’re in a Wi Fi area.
[00:08:19] Cameron: You press a button on this thing and talk to it and it’ll control your computer back at the office or back at home. You can, and, and you can do things like say, uh, the demo that the founder does. He says, um, what’s the weather going to be like on the weekend? It’ll tell him, he goes, are there any concerts in Seattle on the weekend?
[00:08:37] Cameron: It goes, yeah, you got these. He goes, okay. Um, make a note in my calendar to go to this concert, uh, or the dates, and then find the email address for Steve Sammartino and my contacts and flick Steve an email, send him a link to it, and ask him if he wants to go. And then you see the computer just doing all of that kind of stuff.
[00:08:56] Cameron: It’s still a bit clunky, uh, I couldn’t get Open Interpreter running on my Mac, as I said, but
[00:09:04] Cameron: you know, easy to see that in the next couple of years that’s where we’ll
[00:09:08] Cameron: be. It’ll be built into your operating system and you’ll just be able to say, oh, can you, uh, do X, Y, and Z? And it’ll just, it’ll just do it for you.
[00:09:16] Cameron: You won’t have
[00:09:16] Steve: be, it’ll be Google, Microsoft, and Apple.
[00:09:19] Cameron: Yeah, yeah, yeah, it’ll
[00:09:20] Steve: Yeah, so it won’t be one of those guys. They’re doing all the research for free and they’ll either be acquired or killed, one of the two.
[00:09:26] Cameron: Yeah, that’s definitely where we’re going. Um, so I mentioned Claude Opus. This is in the news stories, but just to jump ahead a little bit: um, Claude, which is the AI tool produced by a company called Anthropic, has been around for a while. Uh, their latest version, Claude Opus, is now beating GPT-4 on all of the benchmarks, which is pretty amazing.
[00:09:49] Cameron: Um, it’s a premium subscription. I’ve got one. It’s a bit more expensive than ChatGPT 4. I think it’s like 30 bucks a month instead of 20. Uh, but it’s pretty good, and I’ve been using it almost exclusively for my coding in the last week. Uh, and initially it was doing a much better job at coding than GPT-4
[00:10:11] Cameron: does, even GPT-4 through the API, which I use as well. Uh, ’cause GPT-4 tends to not give, if you give it a bunch of code and you say, change this, you know,
[00:10:22] Cameron: change X, it won’t give you the full code back. It’ll give you some changed lines and then tell you to integrate it yourself. And I’m like, well, what am I fucking paying you for?
[00:10:31] Cameron: I’m not going to do that.
[00:10:33] Steve: I love that. Stop, stop. What am I
[00:10:35] Steve: fucking paying you for? I mean, I love that as an AI. It’s like, what are you doing here? What am I paying you for? Like, classic employee relationship, here it is again, just with AI. You know, and I say that on stage. I say, you’ll be in a whip a year from now. And they say, how’s that project going, Cam?
[00:10:52] Steve: You say, look, my agent’s working on it. It should be ready 2pm Friday. You’re going to say that? I
[00:10:59] Cameron: It is annoying when, you know, look, you could fucking do this with a few more cycles of compute. Just do it. Like, why are you asking me to? Anyway, initially Claude Opus was giving me the full script of everything, every time I asked for something. Then it started to get lazy.
[00:11:14] Cameron: A week into it, it’d be like, ah, you can integrate it yourself. I’m like, no, I don’t think so. The other thing I’ve been doing for the last week, which has been fun, is, um, I’ve been studying maths, Steve.
[00:11:25] Steve: saw that.
[00:11:25] Steve: I’m like, talk me through this.
[00:11:28] Cameron: Well, you know, when Charlie Munger died a few months ago, I was rereading, um, Poor Charlie’s Almanack, and, oh, it’s great. You know, I read it a few years ago. I mean, I read it 10 years ago, and I read it a few years ago. I’m reading it again. But one of the things, here’s this thing about, there’s a hundred things that people, everyone should understand.
[00:11:50] Cameron: Like a hundred frameworks that everyone should have a basic knowledge of in order to understand how the world works. And one of them was probability. You need to have a good understanding. You see, most people don’t understand how probability works. So I, I got a textbook on probability. I thought, yeah, I could probably get a better understanding of probability.
[00:12:06] Cameron: But every textbook I read on probability was assuming an appro… uh, um, uh, and shit, I can’t even talk. Was assuming a level of knowledge that I should have about some basic, basic mathematical concepts that I didn’t have, or I’ve forgotten. You know, it’s been 35 years since I left school. So anyway, I ended up getting this, uh, basic maths textbook.
[00:12:27] Cameron: It’s called, like, A Complete Guide to Maths.
[00:12:29] Cameron: I think it’s out of Columbia or one of the, you know, Princeton or somewhere in the US. Very, very basic. Like, it goes from, like, how addition works.
[00:12:37] Steve: See, that’s, most books don’t do that.
[00:12:40] Steve: Books either, they start somewhere or they finish too early. I think like great investing
[00:12:45] Steve: books should really start on, you know, what is money? How does it work? How does money grow? What
[00:12:50] Steve: are growth, like real, but none of them do that.
[00:12:54] Cameron: Yeah.
[00:12:54] Cameron: Well, Tony and I are working on the QAV book at the moment. I should suggest that to him. We’ll go right back to the beginning.
[00:13:01] Steve: You should.
[00:13:01] Cameron: it goes right from the beginning all the way through to, you know, advanced mathematics, but I’ve been doing a whole bunch of stuff.
[00:13:08] Cameron: Like I tell you, I’ve been loving it.
[00:13:10] Cameron: Absolutely. Having a ball. I’ve been doing things like regular polygons and working out the, the angles of polygons, the, uh,
[00:13:20] Steve: all that stuff. I did maths at uni and I forget it.
[00:13:24] Cameron: Right. You know, you forget, right? And it’s, and it’s activating a part of my brain which I haven’t had to use in so long. And I’m really, really enjoying it. But anyway, I get stuck on things where it tells me I got the answer wrong.
[00:13:37] Cameron: I look at the answer, it goes, you got it wrong, but it doesn’t tell me why I got it wrong. So I can just plug it into GPT or Claude and say, yeah, help me understand this, like I’ve been doing with my Italian for the last year. And it just walks me through it step by step. I go, oh, I don’t understand why I have to do it this way, not that way.
[00:13:55] Cameron: And it’ll explain it. It gives me analogies. I was talking about tessellating. I posted on Facebook this thing. I asked it about why you can’t tessellate, um, nine-sided polygons, nonagons. And it gave me this analogy. It’s like trying to organize cats in a bag. Like, if you try, there’s always going to be one
[00:14:14] Cameron: that’s going to try and jump out.
[00:14:15] Cameron: They’re never going to fit perfectly, or there’s going to be an empty space or
[00:14:18] Cameron: something. It’s using like funny analogies, just the
[00:14:21] Cameron: greatest maths tutor ever. You know, it’s
[00:14:23] Steve: think about some of the basics in maths. When you learn division, right? They’ll give you three balls or nine balls And say, you’ve got to divide them evenly with people. And numbers are such an abstraction. We forget that they represent physicality. And when you learn the basic levels, they’ll say, divide them evenly between your three friends.
[00:14:42] Steve: And you can physically see, well, There’s more here versus there, whereas numbers become abstractions. And it’s so easy to forget that. And, you know, teaching by analogy, we do it in the very early stages in life, but then when we get to more senior things, whether it’s economics or business, Or code, we drop the analogies away, which is a real error.
[00:15:03] Steve: You know, when I communicate on stage, I’m always using analogies because I know that if you teach someone something new via something they already intuitively understand, then you will take them to that level of understanding again. And most people forget that.
[00:15:17] Cameron: And people have different learning modalities, too.
[00:15:19] Steve: Yeah, absolutely. Yeah, school was really bad at putting you down a certain, you know, medical numbers based and understanding of English and, you know,
[00:15:28] Steve: people are kinetic learners, physical learners. I had a terrible drum teacher.
[00:15:32] Steve: Terrible, right? and he was, Yeah, I play drums. Yeah, poorly because of this teacher.
[00:15:39] Steve: But yeah, I was in bands and stuff when I was, you know, a teenager. Um, and he was obsessed with, I want to teach you this, but I’m not going to show you how it sounds. I want you to look at the music to read the score. Which is fine if you want to be someone who’s going to be a drummer in an orchestra, but I just wanted to be Kurt Cobain.
[00:16:00] Steve: I just wanted to be Dave Grohl, right, in a band. And I was much better at listening and hearing it and then translating it to my hands, and he would never teach me that way. And in hindsight, I should have said, listen, pal, I’m playing you here. Just show me a way that gets me there. Don’t worry that you want me to learn that way.
[00:16:17] Steve: People have different learning modalities and it’s such an important thing that just gets forgotten.
[00:16:22] Cameron: And as we’re learning more and more, you, you know, you and I both have neurodivergent kids, and we’re learning more about neurodivergency, I guess, in the modern world. We are realizing, like, everyone has a brain that’s a little bit different from everybody else’s, and they need different ways of tackling things.
[00:16:39] Cameron: And one of the
[00:16:40] Cameron: exciting things about, you know, uh, uh, uh, the world we’re moving into is everyone’s gonna have. Their individual tutor teacher for every subject that’s totally customized to their brain, their
[00:16:53] Steve: ever.
[00:16:54] Cameron: Oh, it’s fantastic. It’s incredible. Anyway, speaking about incredible things We’ve talked about Neuralink.
[00:17:00] Cameron: Elon Musk, your favorite guy, and his brain chip.
[00:17:05] Steve: man who’s done extraordinary things. Just a bit weird. That’s the price.
[00:17:10] Cameron: We mentioned, I think, late last year that they were getting ready to do human trials. We mentioned when they did their first human operation. And in the last week or two, they released a video of that patient. He’s a young guy who, about, I think, 8 or 9 years ago, had a diving accident. Diving, like, dived into a pool, broke his spine.
[00:17:32] Cameron: and has been a quadriplegic ever since they put the chip in his head and they’ve had videos of him playing chess and other games just using his mind but the thing the the biggest thing I got out of the video was just the giddy joy that this guy has for what this technology
[00:17:52] Cameron: has enabled Him, in terms of improving his quality of life, already, the first fucking patient, and he’s
[00:17:59] Cameron: like, just giddy as
[00:18:01] Steve: You know,
[00:18:02] Cameron: what this is doing
[00:18:03] Steve: I was wondering what you’d say and, and the thing that I noticed, It’s just the joy and the story that he stayed up till four in the morning playing video games. I was like,
[00:18:12] Steve: gee, I mean, that, that’s, you know, the, the emancipating power of technology. It really is extraordinary. Um, so sort of, I noticed that, uh, the fact that it was in hospital for a day.
[00:18:24] Steve: And one of the other things that I thought interesting in his interview was how, at first, when he was moving the chess piece in his mind, he was saying, move one up, right, two left. And then in the end he realized he didn’t even have to give it these verbal instructions. He just had to intuit it, as in where he wanted it to go, without in his mind saying the steps of the exact, you know, X goes across, Y up. Like, that was really cool for me. That, and you just think,
[00:18:55] Steve: okay, this is early days. If you can intuit to move things with your mind, where does this go? I mean, let’s be honest, this is pretty extraordinary, right? Where this could go.
[00:19:06] Cameron: And he did talk about the fact that there was sort of a bit of a learning curve when it was in his head. Like, I couldn’t tell from what he was saying if he needed to learn how to use the chip, or the chip and the brain were melding together and it took a little while for it all to come together, like with new neurons growing that were
[00:19:31] Cameron: figuring out how to do this, neuroplasticity-type stuff.
[00:19:35] Cameron: But anyway, like for the first human experiment to be that successful, now the guy might.
[00:19:43] Steve: Look, Ham, there was, there was 86 people that died in the operation, but they weren’t in the video, right? I mean, that, that is what they leave out. They always leave out the testing, Cameron. We saw the one guy, let’s
[00:19:56] Cameron: Elon’s, Elon’s got one of those, like, he’s got an underground bunker with those tubes in sci fi films with all the failed experiments that are all like, they’re still alive, but they’re like,
[00:20:08] Steve: Yeah, it’s like that movie Moon Where? he had like a hundred clones of himself,
[00:20:12] Cameron: yeah, yeah, that was a, good movie. Sam Rockwell, love Sam Rockwell, um, directed by David Bowie’s son. Duncan, um, Jones, I, me, apparently, I did,
[00:20:24] Steve: um, there
[00:20:26] Cameron: I, I happened to re watch the, uh, Black Mirror episode from the first season of Black Mirror the other day, it’s the one where everyone has memory implants, and the guy thinks his wife’s having an affair, and then he, he, he Um, you know, gets the guy he thinks she’s sleeping with to delete all of the videos and then he gets her to play back her videos of fucking this guy and it’s, it’s, it’s pretty gruesome.
[00:20:51] Cameron: The two things that really jumped out at me from it: one was, you know, they’ve got a little chip behind their ear, and this episode came out, what, probably 10 years ago? And, and I’m like, wow. And so it’s like in the same week I watched the Neuralink video guy and then this, and I’m like, oh shit, we’re so
[00:21:09] Cameron: close to that.
[00:21:12] Steve: uh, Twilight Zone. Uh, episode, there’s two of them, the rehash version in the late 80s, early 90s and way back in the 60s, there was one where you could sell memory. So you would go into a place and you would sell cool things that you did that other people would upload to experience the things that you had that they hadn’t experienced.
[00:21:33] Steve: And he had sold so much and poor people sold the memory, like, you know, poor people sell blood, like it was classic dystopian thing.
[00:21:38] Cameron: You lose it when you sell it. You’re
[00:21:40] Steve: Yeah, you lose it. You lose it, right? You don’t get to keep it. There was no, there was no, um, you know, copy and paste. It was like, copy and delete, right? And this guy sells all of
[00:21:50] Steve: his memory to stay alive because he’s poor.
[00:21:52] Steve: And then he gets some money, he goes, I want my memories back, right? I go, we can do it, but it’s not perfect. He goes, it’s not perfect. So he comes in and they upload some memories and then they show him at a job interview and he goes, wow, when I was a 12 year old girl, I studied ballet. And then when I studied physics and it’s all like these other memories of people that just bashed into his world, pretty, it’s pretty cool.
[00:22:12] Steve: I mean, it’s interesting that it’s interesting that this has been an idea that we’ve been playing around with for a long time, because. If we think about, you know, even Sagan, you know, calls it in Cosmos, which you gave me the first copies of. I was, you gave it to me to, to, to watch that. You burnt it on DVD for me back in the day and I watched it.
[00:22:30] Steve: Um, and he calls it the persistence of memory and that through history, we’ve created different ways for memories to persist through song, you know, through storytelling, through cave wall drawings, all the way up to the internet era. And it’s so interesting that this is Part of that still,
[00:22:45] Cameron: Mm Yeah. I, I was rereading the Epic of Gilgamesh recently.
[00:22:52] Steve: have you got 73 hours in each day? Because it seems like you read, you know, 73 times a week. You’re doing maths. You’re learning Italian. You’re busting out code. Look, can you just throw me a couple of those extra hours that you have in your day that I seem not to have? Because I reckon you’re pulling 72 hours in every daylight cycle.
[00:23:12] Cameron: I just, I chunk of my time down, um, to like 15, 20 minute. Segments. I’m going to do 15, 20 minutes of this, 15, 20 minutes of that.
[00:23:20] Steve: Now you’re starting to sound like one of these TikTok efficiency hackers who I’m really no fan of, just quietly.
[00:23:27] Cameron: I don’t know
[00:23:28] Steve: What I, I love the, I’m sorry we’re going off piste here, but I love, uh, the guys who take the piss out of these efficiency and life hackers. It’s like, what I do is I get up at two in the morning and I’m going to swim four kilometers in an ice cold pool that I made with my billions of dollars that I did on TikTok.
[00:23:44] Steve: It’s an ice cold pool. I’ve got ice blocks in the pool and I swim four kilometers. Then I get up and I have 16 egg whites. By this time, it’s 4. 33 in the morning.
[00:23:55] Cameron: It’s like Mark, um, Marky Mark. And he’s always doing stuff like that.
[00:23:59] Steve: saw a video of him the other day saying all the things that he eats and working out and
[00:24:04] Cameron: Second thing that I took away from the Black Mirror episode is how bad the AI voices are. Whenever they’re activating their memory chip system, it says, I’m sorry, but I can’t find the AI, but it looks like this. It was kind of human, but kind of robotic, and like, they’ve got this advanced neural chip technology, but the voices, like, and by the way, this episode was written by Jesse Armstrong, the creator of Succession, one of my favorite TV shows of recent
[00:24:32] Steve: great show.
[00:24:34] Cameron: Um, so a great writer, but even when they were predicting this really advanced memory technology, they still didn’t think computer voices would be flawless. And it still is amazing to me, and we’ve got a couple of stories on voice technology, uh, coming up today, how quickly that was one of the first things to fall.
[00:24:55] Cameron: One of the things that science fiction writers thought was going to be the hardest to nail, computers having human voices, except for Hal. Hal, the original computer voice was perfect. Um, and one of the things that the sci fi filmmakers, TV makers have always thought was going to be obviously very difficult, turned out to be one of the easiest things to do.
[00:25:15] Cameron: One of the first bricks to
[00:25:17] Cameron: fall. Um, I’ve mentioned Claude Opus. Um,
[00:25:22] Cameron: NVIDIA, oh, NVIDIA had their developer day, their conference, developer
[00:25:29] Steve: mean concert, you mean rock concert with 10, 000 people in the audience.
[00:25:33] Cameron: Yeah.
[00:25:35] Steve: that, that, that came back as
[00:25:38] Steve: That was, that was the thing that was most notable to me. I mean, oh, there was two things. The first one was kind of, there’s been the touted end of Moore’s law for some time. And I noticed that in the video, he, he, he talked about how they’ve had a thousand X gain in a year, which kind of, Never happens.
[00:25:57] Steve: It’s pretty much been a doubling impact. So that’s, that’s significant. Of course, it’s not available to everyone. So you could say it’s a different regime of, of exponential improvement. So that was really important. But for me, you know, the biggest thing wasn’t, it wasn’t the chips. It wasn’t the, what do they call it?
[00:26:13] Steve: The, is it the, is it the Blackwell chip? Um,
[00:26:17] Cameron: Not a chip, Jensen says. It’s a
[00:26:20] Cameron: framework,
[00:26:21] Steve: framework. Okay.
[00:26:22] Cameron: platform.
[00:26:23] Steve: Platform.
[00:26:24] Cameron: It’s an architecture.
[00:26:26] Steve: Okay.
[00:26:26] Cameron: Anyway, keep going.
[00:26:28] Steve: For me, it was just Rockstar in a Stadium. For me, the thing that came out is the religiosity.
[00:26:38] Steve: Technology is really tapping into, uh, price earnings ratios, which are irrational, as irrational as religion, leaders who, uh, you know, vaunted up on the stage, laughing at jokes, which aren’t funny because of who said them.
[00:26:56] Steve: Uh, you know, the mystery of the unknown of the all knowing being, which is AI, it sort of has this God like thing to it. That, that was, I know that’s not what was meant to be the key takeout, but that was mine.
[00:27:10] Cameron: That’s not new, right? We saw that with Steve Jobs and the reality distortion field and all that kind of stuff. This is, Yeah. like there is something with humans where we do, we
[00:27:20] Steve: stadium? A stadium?
[00:27:22] Steve: Yeah,
[00:27:23] Cameron: yeah.
[00:27:24] Steve: I was waiting for Jimmy Swagger to come out and say, I have sinned against you, father. Send
[00:27:29] Steve: 5 for my new private jet. 5. Just 5 from everyone. The new private jet, Gulfstream 650.
[00:27:36] Cameron: Oh man, I have sat in a stadium at two o’clock in the morning with Steve Barmer. On stage and 40, 000 people in the room. Steve Ballmer going, developers, developers, developers, developers, developers, developers, developers, developers. I love this comp tree, no, company. I love this company, developers, developers.
[00:27:58] Cameron: Yeah, I’ve been there. I lived it. Um, anyway, back to Nvidia. So they, they, they announced Blackwell. Which is basically, um, I had to, I had to go to Bing co pilot to get it to explain it to me. Uh, this is what co pilot said, NVIDIA has just announced its new AI chip, the Blackwell, which is set to revolutionize the field of artificial intelligence.
[00:28:20] Cameron: The Blackwell platform is designed to power real time generative AI on trillion parameter large language models with significantly reduced cost. Cost and energy consumption compared to its predecessor. I think he was saying massive performance increase with about 25%. of the power requirement. They’ve managed with this and they’ve taken two, uh, silicon layers and merged them very closely together.
[00:28:47] Cameron: So they operate like one, doubling the performance of it, or more than doubling, because they’ve, you know, Done a whole bunch of stuff inside of the silicon. The Blackwell GPU architecture introduces six transformative technologies for accelerated computing, which are expected to drive breakthroughs in various fields, such as data processing, engineering simulation, electronic design, automation, computer aided drug design, quantum computing, and generative AI.
[00:29:16] Cameron: So it’s a big deal, but that wasn’t the only thing they announced. They announced a ton of things, including Project Groot.
[00:29:22] Steve: Yes.
[00:29:23] Cameron: is a general purpose foundation model for humanoid robots.
[00:29:31] Cameron: So, as
[00:29:32] Steve: I mean It was really
[00:29:34] Steve: impressive. I mean, I did think how much of it has been gamed. Uh, one of the things that we’ve noticed with a lot of demos, especially with humanoid robots, you know, Boston Dynamics famously made it look as though the robots were just doing this, but they had, you know, programmed them to do exactly that thing that you saw on the video.
[00:29:54] Steve: Uh, And, and I do wonder, and of course, if you can program it, eventually it can do it itself, but it was very, very interesting to see how LLMs are going to impact robotics. For me, that’s the super interesting thing. And you’ve spoken about using your natural language to generate code, which you can do on the screen.
[00:30:11] Steve: And if you can do it on the screen, you can do it via a humanoid. Robot, which then drops the code inside the bot to do what it does. And there is a sense of biomimicry about this because this is what we do. And we don’t exactly know how it works, but we use language to code our brains to do physical things with our arms and legs, right?
[00:30:31] Steve: It’s what we don’t know how that interface works, but, but it does. And this has a real strong sense of biomimicry about it. So the overlap with LLMs and robotics, you know, for me, I’m thinking this is almost. Like a blast from the past where we had mechanical structures for a really long time in farming and, you know, various elements, you know, in the agrarian era.
[00:30:54] Steve: And then when we added fossil fuels and internal combustion to the mechanics, you had this overlap with mechanical and fossil fuel and, and, and energy and power, and then you had a whole new era of, uh, Industrialization, which married mechanical and fossil fuel ability. So it feels like LLMs and robotics are going to do the same thing with humanoid robots and yeah, the fact that anyone can code it by visually showing it again, just like we do with humans or telling it verbally how to do something.
[00:31:30] Steve: You know, when I saw it, you know, from playing drums to juicing, it was interesting
[00:31:36] Cameron: Well, I think the, the, the key takeaway from this project group, which is being led by Dr. Jim Fan and Yuka Zhu at, um, NVIDIA is, so we, we, we’ve talked about this in the past, so we’ve had videos of robots, that have received some training data, usually watching a video of somebody making a coffee with a coffee pod or something like that.
[00:31:58] Cameron: And then it will train itself through trial and error over the next 12, 24 hours to do that and, and optimize its attempts until it gets it right, has a neural network built into it that’s helping it figure out which Which actions actually deliver the desired result and which don’t. What they, what they’re doing with Project Groot is using trillions of simulations.
[00:32:24] Cameron: So it, it enables robot developers to take the physical manifestation of their robot, you know, what it’s, what it looks like. It’s, it’s, it’s, uh, hardware, put that into. Uh, a GPU architecture, then give it the physical tasks, the training data that they want it to do, it then doesn’t have to run the
[00:32:48] Cameron: simulations or run the
[00:32:50] Cameron: exercise, the neural network training in the real world, it just runs it through it.
[00:32:56] Cameron: Millions of simulations, virtually, you, with the,
[00:33:02] Steve: It’s not doing, it’s just simulating, simulating, simulating, until it feels like what it’s got here, it can match there.
[00:33:08] Cameron: yeah, well it just, instead of having to do it in the real world, it just runs a million simulations, virtually, finds out what the right combination of physical actions is going to be to, to, to realize the outcome, and then it just does that one, straight off the bat, first time,
[00:33:28] Steve: crazy. Wouldn’t
[00:33:29] Cameron: and they’re, they’re providing the architecture to do that, so, Essentially, you know, in theory, you’ll have a general purpose robot.
[00:33:36] Cameron: You want it to do a new task. You’ll show it what the task is. It’ll, it’ll be like, uh, Trinity in the first Matrix film. When Neo points at the Black Hawk helicopter, says, can you fly that? She goes, now I can, I can now, right? So you go, Hey robot, can you cut my hair? And it goes, it just runs a million
[00:34:00] Cameron: virtual simulations in its head or in the cloud there, and then goes, yeah, okay, boom, sit down.
[00:34:07] Cameron: I know what to do. Doesn’t have to learn on the job. It learns through millions of
[00:34:11] Cameron: virtual
[00:34:12] Steve: what we should call this though? Because it has to go through it in its mind. We should call this meatloafing. Baby, baby, let me sleep on it.
[00:34:19] Steve: Baby, baby, let me sleep on it.
[00:34:20] Steve: And it’s going to sleep on it. overnight.
[00:34:22] Cameron: Let me sleep on it. Baby, baby, let me sleep on it.
[00:34:26] Cameron: Let
[00:34:27] Cameron: me sleep on it. I’ll give you an answer in the morning. I gotta know right now, do you love me? Will you love me forever? Will you leave me? Will you Oh man, another Meatloaf fan. Dude, we are brothers.
[00:34:42] Steve: sleep on it.
[00:34:44] Cameron: My wife’s
[00:34:45] Cameron: number one
[00:34:45] Cameron: hate.
[00:34:46] Steve: think they need to program into it when you show it something. It’s just a baby, baby, let me sleep. That’s what it needs. That would be a robot I would mortgage my house for. What? What?
[00:34:56] Cameron: like, one rule that we’ve had
[00:34:58] Cameron: in our relationship for 16 years is I’m never allowed to play meatloaf around her. If I, if I want to blast meatloaf in the car, I’ve got, yeah, no, hates
[00:35:05] Cameron: meatloaf. Hates, hates
[00:35:07] Steve: That, that is one of the greatest songs ever. And I went to a Catholic school, I just, it’s just, it was perfection. Paradise by The Dashboard Light, I mean, you cannot, you cannot. No,
[00:35:21] Cameron: Do you ever see him live?
[00:35:23] Steve: only on the, I only saw that terrible version of him at the AFL Grand Final.
[00:35:27] Cameron: Do you remember, um, You know where
[00:35:29] Cameron: the Tennis Center is in Melbourne? There used to be like a swimming pool right next to it.
[00:35:33] Steve: yes.
[00:35:35] Cameron: Uh, I saw Meatloaf play there.
[00:35:37] Steve: pool, on, on a pink flamingo. On a, was he on one of those Instagram worthy pink flamingos, with a glass of champagne and bikinis on? Please tell me he was.
[00:35:47] Cameron: It was circa, I’m guessing, 1989, 1990, somewhere like that. And it was killer, and the thing about it was, like, you could tell that he was so just overjoyed by the fact that he still had an audience. He did like four ENCORES, and just, he was like, really? You love me? You really love me? And we did, like, it was, it was epic.
[00:36:11] Steve: All need love, man. All you need is.
[00:36:14] Cameron: Uh, anyway, moving on from NVIDIA, um, Another big announcement in the last week or so. Microsoft, well it’s not an announcement really, it’s a rumor. Microsoft and OpenAI planned supercomputer
[00:36:25] Cameron: project worth a hundred billion dollars called Stargate,
[00:36:32] Steve: See,
[00:36:33] Cameron: up of millions of AI chips.
[00:36:36] Steve: wasn’t, didn’t, um, didn’t Sammy Altman come out and say we’re gonna need a trillion dollars for what we need
[00:36:44] Steve: to do with AI
[00:36:46] Steve: Uh, have you ever seen the movie Pixel? The short,
[00:36:50] Cameron: I haven’t seen.
[00:36:52] Steve: there’s a short film called, it was an eight minute movie. That was, uh, and it’s totally worth watching. It just, the world turns into a big video game.
[00:37:02] Steve: And in the end, the globe just turns into a big block that it’s just so, and everything becomes pixelated in the physical world and it’s a nice
[00:37:11] Cameron: Like a Minecraft block.
[00:37:12] Steve: Yeah, basically. But the thing that opened up to me on this is that, you know, corporations, again, we spoke about it before the podcast, you know, the level of power that corporations
[00:37:22] Steve: have. Uh, you know, this is. Nation state level investments by private corporations. Geez. I mean, you, I feel like you are in some way turning me into a Marxist, but I just can’t, I just can’t see this. I just cannot see the most powerful technology ever invented and governments with lack of courage to do what needs to be done, like we did in the Gilded Era.
[00:37:47] Steve: Where it’s. Irresponsible for private companies to, well, public private, yeah. Non state owned companies, let’s use that verbiage, to be in control of something so powerful, you know, making investments that are bigger than GDPs of countries. I mean, this is pretty significant stuff.
[00:38:08] Cameron: The proposed project costs about a hundred times more than some of the largest data centers today, and Microsoft executives want to launch it as soon as 2028, according to the information four years away. Now, interesting thing is we know that Nvidia, uh, unable to meet the demand
[00:38:28] Cameron: for their current ship sets like the A one hundreds, but, uh, meta Zuck.
[00:38:35] Cameron: Zuckerschmuck, as I think Trump calls him now, has said that they’re
[00:38:39] Steve: The Tim Apple, I love Tim Apple. We got here, uh, Tim Apple is here with us from Tim Apple and, and Elon Musk.
[00:38:50] Cameron: He reckons they’re going to have 600,000, I think it’s A100s, uh, by the end of this year, which will make that the biggest AI data center out there. But they’re talking about taking this to a whole new level in the next few years. Millions of AI chips. Now, where are they going to come from?
[00:39:10] Cameron: Well, you assume this is what Sam’s trying to raise the money to do so they can build their own. But, like, what they’re going to do with this, uh, what that’s going to be capable of, who knows. But, This is, this is the world that we’re living in now. It’s the new space race. It’s the AI race. Who can build the biggest data centers that are going to require energy, power.
[00:39:33] Cameron: They’re going to require cooling.
[00:39:36] Cameron: Uh, I know Microsoft’s already struggling with cooling their data centers in places like Arizona, the amount of water that they need to use and
[00:39:45] Cameron: get into the desert.
[00:39:47] Steve: on a flip note. The wave pool that I surf in, in Melbourne, you know, artificially generated waves, artificial intelligence. I’m working with the guys there on a local data center that needs cooling, where we’ll take the cold water from the pool, circulate it through there to warm up the pool and cool down their, uh, their data center at the same time.
[00:40:08] Steve: It’s a really, really cool idea.
[00:40:11] Cameron: A wave-powered supercomputer.
[00:40:14] Steve: Yeah. But that’s, that’s kind of interesting because that’s basically where most energy goes. It goes into heating or cooling things. Basically, you know, thermodynamics is what climate change is built upon and we just need to do both in both directions, which is what we don’t do. We tend to heat something up and then just let it all go or cool something down and just let it all go.
[00:40:39] Steve: But what you need is this cyclical nature of where you heat and cool things. Anyway, just a little bit of a tip for the listeners there.
[00:40:46] Cameron: Well, Sam is, uh, in recent interviews, Sam Altman is predicting we’ll have AGI roughly in the next five years, maybe a little bit longer, he says, but roughly around
[00:40:58] Steve: We’ve got it now. Anyway, I’ve said that a few times. We have AGI. We don’t have SGI.
[00:41:04] Cameron: Well, it depends on your definition of AGI, as we’ve discussed. I mean, the definition I think guys like Sam are using is when
[00:41:13] Cameron: the AI can do every task at the same or better
[00:41:20] Cameron: level than a highly qualified human could do it at.
[00:41:24] Steve: That’s an SGI in my view, but okay, let’s, let’s just go with AGI. I get it. The definitions
[00:41:28] Steve: are important, but that’s significant, man. I mean, if that’s three, even if it’s
[00:41:33] Steve: 10 years, like hold
[00:41:36] Cameron: They’re actually saying AGI in 5, ASI in 10. Um, Jensen Huang, I’ve seen some interviews with him recently, the CEO of NVIDIA. He’s saying the same sort of timeline. And he’s quite funny and snarky when people ask him, oh, what about hallucinations?
[00:41:58] Cameron: And what about this? And what about that? He’s like, that’s nothing, we’re going to solve that easily. These guys seem to be convinced it’s just about the level of compute they can throw at it, plus the architecture they put into the LLM engines. No biggie.
[00:42:15] Cameron: Again, basically the suggestion is: all of the problems people see with LLM-based AIs at the moment are just teething issues that we haven’t even got around to sorting out yet, but we’re going to nail them. It’s not going to be a big deal. It’s just time, money, compute.
[00:42:31] Steve: We were there with a couple of other things. That kind of thinking was present in the fifties and the sixties with fossil fuels. It’s an interesting analogy, and maybe this is different, it is definitely a different technology, but with air travel it was like, wow, we’re just going to be travelling at 10 times the speed of sound, because it’s just a matter of engineering.
[00:42:53] Steve: And they believed that. And then we,
[00:42:57] Cameron: Did the engineers believe that, or was it just the media,
[00:43:00] Steve: oh, I don’t know, I’m not sure, but that, that was a strong narrative which didn’t turn out, we sort of got to a point where it flattened out. But on the long arc of computational history, and I always love what Kurzweil did where he says what people don’t realize with Moore’s Law is that we’re already in the fourth phase of it, you know, first it was punch cards, then it was vacuum tubes, then it was transistors, and, um, yeah, but it’ll be
[00:43:24] Cameron: First it was clay tablets,
[00:43:25] Steve: there, there you go, yeah.
[00:43:27] Steve: Yeah. And it’s basically, yeah, there’s all these different curve jumps and epochs of technology, which have allowed more information to be stored and translated and so on. Um, so maybe
[00:43:39] Cameron: Well, Elon says three
[00:43:41] Cameron: years. He says we’re three years from AGI. But he also thought we’d all have electric cars like five years ago, so, you know, take it or
[00:43:49] Steve: he said by 2019, they’d have a million robo taxis on the road and I am still waiting.
[00:43:55] Cameron: Well, either way, like the timelines for AGI, like these guys are. Let’s say Jensen and Sam. We’ll leave Elon out of it for a second. But Sam and Jensen are talking like it’s a done deal. Next five, six years, it’s basically
[00:44:13] Cameron: gonna happen. Um, and in, in our futuristic forecast later on, I want to talk about, a little bit about what that might mean.
[00:44:21] Cameron: In the meantime, open, uh, I mean, oh, look, so many news stories,
[00:44:24] Steve: There’s a lot.
[00:44:25] Cameron: We’re going to do them all?
[00:44:26] Steve: three real quick. Let’s just go real quick through them Just, just bam.
[00:44:30] Cameron: Amazon have launched a new tool for creating AI-based audiobooks, putting a whole industry of people basically on their arses. Maybe not tomorrow, because it’s still not perfect. I’ve seen some early reviewers saying the voices don’t do emotion very well. But basically, you know, very good
[00:44:49] Cameron: quality AI-generated voices. If you’ve written a book and want to turn it into an audiobook, you don’t need to pay someone to read it. You just pick from a range of voices and it’ll read the book and
[00:44:59] Steve: I like it. Look, it’s put some people out of work, but I think it’s going to give more independent authors, a bigger audience. And I think that’s better.
[00:45:07] Cameron: I mean, already people don’t read books. The stats say that you and I are part of the only generation left that really reads books. Most people don’t read books. And, you know, my boys
[00:45:24] Cameron: Hunter and Taylor don’t read. Whenever I say, what are you reading,
[00:45:28] Cameron: they go, if I want to learn anything, I’ll watch a YouTube. I don’t need to read a
[00:45:32] Cameron: book. I’m like, really? That’s like, so
[00:45:34] Steve: I do wonder, and I don’t know if there have been studies on it, if you get more knowledge by taking in an idea in a form other than video. I don’t know. And again, this comes down to the different ways that people learn, and maybe our brains will adapt to learn through video and absorption.
[00:45:49] Steve: I find that I listen to a lot of things now that I would have read in the past, even long articles. I’ll just get Siri to read it to me while I’m driving, because I haven’t got time to read all the articles, so I just get them read to me while I’m driving, um, which is cool, but I like that, I think that’s a good thing. Next.
[00:46:07] Cameron: Before that, OpenAI have just started talking about their voice cloning tool, which they said they used to build their text-to-speech generation, which still blows me away. They seem to have it ready to go. They reckon it can take 15 seconds of
[00:46:26] Cameron: your voice and perfectly replicate it.
[00:46:29] Cameron: A bit like HeyGen’s stuff, but they’re reluctant to release it publicly, uh, in this election cycle. Um,
[00:46:37] Steve: Too late, that’s not gonna, I mean, we can do it anyway. I mean, they might be able to do it quicker,
[00:46:41] Steve: but there’s some pretty good versions of it right now.
[00:46:44] Steve: Yeah, Synthesia does a pretty good version of it as well, and HeyGen can do it pretty well.
[00:46:49] Cameron: Yeah, um, and the tool Descript that I use to edit podcasts has its own; it needs about 60 seconds of audio, but it does a pretty good job. I’ve used it a couple
[00:46:59] Steve: Of your voice?
[00:47:00] Cameron: of times, yeah, of my voice. If I’m editing a podcast and I go, oh, I forgot to say this, or I should have used this word instead of that word, instead of re-recording it, I just type in the
[00:47:12] Cameron: word or the sentence that I want to say, and it just does it in my
[00:47:15] Cameron: voice. It’s not perfect.
[00:47:17] Cameron: Descript, D E S C R I P T.
[00:47:22] Steve: You sure it’s not a triple T.
[00:47:23] Steve: Like a triple D dig? Just checking.
[00:47:27] Cameron: No, it’s not 1998, man. If it was 2005. Um, more on the news. Apple.
[00:47:35] Cameron: There’s rumors that Apple’s going to put Google’s Gemini app into the next version of the iPhone. Now,
[00:47:43] Steve: idea.
[00:47:44] Cameron: terrible.
[00:47:45] Steve: That is, that would be one of the worst
[00:47:48] Steve: corporate strategies you’ve ever seen. We’re talking about a company with more money in the bank than God, who has every resource, can take any AI developer from anywhere in the world, has everything at their disposal, and they’re going to hand it to one of their biggest competitors?
[00:48:06] Steve: Like, think about it, the iPhone versus Android, that would be the worst corporate decision that Tim Apple could ever make. Like,
[00:48:12] Cameron: It suggests to me, though, that they’re not ready.
[00:48:16] Steve: Get
[00:48:17] Cameron: That they,
[00:48:18] Steve: Get ready. No, no, it’s
[00:48:19] Steve: simply Well, It’s But isn’t it? We live
[00:48:23] Cameron: No, because you can’t get NVIDIA chips. You just cannot get them, unless they
[00:48:31] Cameron: buy out Microsoft or buy out OpenAI and get their chips. NVIDIA literally can’t sell you any. They’re booked out 18
[00:48:42] Cameron: months
[00:48:43] Steve: alright, so if we had to say, it’s an infrastructure challenge, or a
[00:48:49] Steve: software challenge, you
[00:48:50] Steve: would put infrastructure at the top of
[00:48:52] Steve: that challenge.
[00:48:53] Cameron: It’s a combination of infrastructure and.
[00:48:57] Steve: Development time?
[00:48:59] Cameron: Developers. Like, where do you get a thousand top level AI
[00:49:05] Cameron: developers from?
[00:49:06] Steve: Well, you get them from one of those three places you mentioned. Yeah, that’s where you get them? Or you buy, or you
[00:49:12] Cameron: They’re already getting on salaries of millions of dollars in options and all that kind
[00:49:17] Steve: Yeah, Apple could do that. I mean, I don’t think that the economics of it would stop Apple. You’ve got to put that aside. You just literally have to because they could pay whatever they needed to.
[00:49:28] Cameron: The next story is that the CEO of Stability AI, the guys behind Stable Diffusion,
[00:49:34] Cameron: and the CEO of Inflection, the Pi app, that we’ve talked about a few times, have both left their companies, along with some of their other senior execs, to join Microsoft.
[00:49:45] Steve: my point. Money will get anyone.
[00:49:49] Cameron: Well, no, but it’s not money. It’s that their apps are failing. We’re already at the stage where some of the biggest AI plays that have received hundreds of millions of dollars
[00:50:03] Cameron: of venture capital are already giving up the ghost and going, can’t compete. They
[00:50:10] Cameron: can’t compete with OpenAI, with Google, with Meta,
[00:50:14] Steve: The problem is, if Apple lets, uh, Gemini do it, they’re going to have the same
[00:50:22] Steve: problem they’ve got with
[00:50:23] Steve: Apple Maps, on a smaller scale, where someone else has it, it gets the reinforcement learning, it gets into that cycle, it’s taking the data from Apple users. Now, that could be a general AI, let’s say, like a ChatGPT that is native to the iPhone, or the personal AI that you and I have spoken about a number of times.
[00:50:43] Steve: Imagine your personal AI, which has your data and becomes your personal co-pilot or assistant. Um, it depends on what area of the market they approach with that. Uh, maybe they approach the general one and work on their own personal AI, which I think they have to do. Because if they don’t have that in the long run, then I think their entire iPhone, uh, regime is at risk, because that’s what we’re heading towards: a personal AI which integrates with your operating system.
[00:51:14] Steve: And if they don’t have one of those, then, then they lose the operating system game.
[00:51:20] Steve: Maybe Microsoft makes a comeback with a, uh, a device. Yeah, maybe they come back with a phone or something. No, I’m, I’m serious.
[00:51:28] Cameron: Yeah,
[00:51:30] Cameron: it’s possible. I know OpenAI, as Sam’s been
[00:51:33] Cameron: talking about, like he’s supposedly got a project with, um, Jony Ive, yeah, to develop something. Haven’t heard anything about that for a while. Well, uh, you wanted to talk a bit about Bitcoin before? Yeah. Well, well, this is the deep dive, right? So let’s just
[00:51:49] Steve: Pull into the deep dive. Bitcoin has hit record prices. You’ve seen
[00:51:53] Steve: it was 73,000 USD, which is extraordinary. I had a look at when I first wrote about Bitcoin on my blog and bought my first Bitcoin, some of which I’ve still got, um, and some of which I really, really wish I could find. It was 2013.
[00:52:13] Steve: It was 13 dollars for one Bitcoin. Yeah. Anyway, um, so I noticed a story in Wired magazine asking, you know, what’s behind the record prices of Bitcoin, and their headline was “vibes, mostly”, which I loved, right? Because our Australian listeners would know, you know, it’s Mabo, it’s the vibe of it all, from The Castle.
[00:52:37] Steve: And I just wanted to deep dive into what currency is. Currency really is a form of technology; it always has been. Every epoch of human technology has resulted in a new currency. Um, you know, barter economies had cowrie shells, uh, the agrarian age had grain receipts, uh, the age of discovery had bills of exchange, uh, we had ferrous coins at the start of the industrial era, and then fiat currency, which is our current epoch.
[00:53:07] Steve: And all of these had prominence during certain tech eras. I wonder if we’re in crossover mode now, where most currency is digital, just numbers in a machine, a new form of digitised promissory notes. Um, I mean, in Australia only about 8 percent of all the money that’s claimed actually exists in physical form, and it’s generally single digits anywhere in the modern economy. But I just can’t help but think that Bitcoin will never really be a currency.
[00:53:38] Steve: Uh, because we have to have a number of things for a currency to succeed. Scarcity: it’s got that. Um, fungibility, where any Bitcoin is equal to any other Bitcoin: it’s got that. Durability: I guess it’s got that. Divisibility and portability I’d put a question mark around, and acceptance I’d put a question mark around.
[00:53:57] Steve: Um, it also has high transaction costs, which are a problem. So I just wanted to dive into what currency is. I do think that we’ll end up with GovCoins, which are programmable currencies, where different currency you get from the government has different rules around it. Some of it will be open and some of it will be closed.
[00:54:17] Steve: And that’s a great way to do transfer payments, so that money doesn’t get squandered. For example, let’s say someone is on, uh, unemployment benefits. You could say, well, it can’t be spent on alcohol or gambling, which would be great. I mean, it sounds draconian, but it makes sure that kids get fed and money goes to the right places.
[00:54:37] Steve: Uh, I think programmable currency will be interesting. But another requirement is that a currency needs to be a store of value, which Bitcoin isn’t. A store of value isn’t just about something going up over time; it shouldn’t go down at any point in time either. The reason it’s called a store of value is that, uh, you know, a silo of grain is still a silo of grain.
[00:54:55] Steve: You know, the bugs and the birds might get a few, but they don’t get it all, right? It’s a store of value. So
[00:55:01] Cameron: people call gold a store of value and it goes up and down.
[00:55:06] Steve: Yeah, but not as dramatically, right? Not as
[00:55:09] Cameron: No, nothing goes up and down
[00:55:11] Steve: Well,
[00:55:11] Cameron: dramatically as
[00:55:12] Steve: The volatility, it’s, it’s the level of volatility that
[00:55:15] Steve: is the main issue. And I just wanted to know, I mean, I’ve given my deep dive on what a currency is, what your thoughts are on Bitcoin, where it is now,
[00:55:23] Steve: and we may as well just weigh in on it, given that it’s hit another all-time high.
[00:55:30] Cameron: Well, like, you know, Tony and I talk about Bitcoin a lot, um, on our QAV show, because obviously a lot of people
[00:55:37] Cameron: tout it as an investment. And Torsten Hoffman, the guy who produced my documentary, uh, has also produced two documentaries on Bitcoin over the last five or
[00:55:49] Steve: I’ve watched them, they’re both really good. Yes,
[00:55:54] Cameron: so years. I’ve been having this debate with him for three or four years now, where he tries to advocate for Bitcoin as an investment, not as a currency, but as an investment. And I keep coming back to my fundamental question, I’ve been asking it for four years: how do I determine the value of a single Bitcoin?
[00:56:14] Cameron: And he says, well, you know, it’s going up. And the answer is always that there’s a limited number of them. And the Wired article that you pointed to made a really good point, which is that if you accept the basic premise of current economic theory, the known scarcity of it has already been factored into the price.
[00:56:42] Cameron: Basically, when you’re investing in shares, I mean, I don’t subscribe to the idea of perfect knowledge and rational markets, because, you know, I think behavioral economics, oh, RIP Daniel Kahneman, by the way, one of the fathers of behavioral economics, he passed away aged 90 this week.
[00:57:02] Cameron: You’ve read Thinking, Fast and Slow, I assume, one of the greatest books I’ve ever read. Um, you know, we know that people don’t make purely rational decisions. But leaving that aside, the idea that a Bitcoin is going to be worth more in the future than it is today, because there’s a limited number of them ever going to be made, should already be factored into the price.
[00:57:27] Cameron: But, you know, in terms of an investment, I think it basically survives on the greater fool theory. People just think, if I buy it today, I’ll be able to sell it to some idiot next year for more money. What’s the intrinsic value of a single coin? No one can tell me that. So, you know, the basic way that Tony has taught me to invest over the last five years or so we’ve been doing the show is that there
[00:57:52] Cameron: are two kinds of
[00:57:54] Cameron: investors:
[00:57:56] Cameron: people that have a rational thesis for how they invest, and punters.
[00:58:03] Steve: Yeah, sure. And punters get richer every now and again. This
[00:58:08] Steve: is
[00:58:08] Cameron: they do.
[00:58:09] Steve: And this is the problem. Because people can and have got rich on Bitcoin, they can say, yeah, but look, yeah, but look over here,
[00:58:16] Cameron: But then survivor bias kicks in, and you look at the one guy that made money out of it and ignore the 99 guys that lost money. So the starting point, if I talk to anyone about investing these days, is: okay, there are two kinds of investors. Somebody with a rational thesis,
[00:58:35] Cameron: Which one do you want to be? If you want to be a punter, can’t help you. If you want to be the rational investor, you need to have a framework that, that informs your investing. You need to have a rational theory that informs your investing. And the basic theory that Tony has taught me, the value investing theory, is only buy shares.
[00:58:57] Cameron: in companies that have a good track record of
[00:58:59] Cameron: generating cash and only buy them when you can get them in a discount to their intrinsic value and then hold them until they break one of your sell triggers. And you
[00:59:10] Cameron: know, we, we have a range
[00:59:12] Steve: Benji Graham,
[00:59:13] Cameron: It’s Benjamin Graham, Buffett, Munger. Very, very
[00:59:17] Cameron: basic.
[00:59:17] Cameron: It’s worked for a long time. Theoretically, it should continue to work. Uh, so when you try and, determine the intrinsic value of a Bitcoin, no one’s been able to give
[00:59:28] Cameron: me
[00:59:28] Cameron: a model,
[00:59:29] Steve: and, and never will be able to. I mean, it comes down to, um, it doesn’t generate cash and never will,
[00:59:34] Steve: because currencies don’t. And people have been speculating on currencies and metals
[00:59:38] Steve: and pink sheets and all that for a really long time, right? So that is going to continue. Uh, and with speculation, people say, oh yeah, but I make money when I sell at a higher price.
[00:59:51] Steve: And I use the example often, you know, I love looking at nature as an allegory for investing. And I say, look, if you, if I planted a lemon tree and it never had any lemons, would you want that tree, no matter how tall it gets? And they go, well, you can chop it down and sell the wood. You can, but I don’t think that’s a good strategy.
[01:00:09] Steve: You know, for me, it’s like, show me the yield and all investing, like you say, the value of that and the cash flow is the yield. That’s what it comes down to. And while you can make money out of things that never have yield and some powerful companies don’t have dividends, but they still have yield because they’re still generating cash.
[01:00:26] Steve: Right. And that’s the thing that’s important. And if there’s no yield, then I’m just not interested because yes, you can make money, but not the way I go about it. And that’s really interesting.
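The value rules Cameron and Steve describe here, only buy cash-generating companies at a discount to intrinsic value, then hold until a sell trigger fires, can be sketched as a toy screen. The `Company` fields, the 20 percent discount and the stop-loss trigger below are illustrative assumptions, not the actual QAV checklist:

```python
from dataclasses import dataclass

# A toy version of the value screen described above. Thresholds and
# field names are illustrative assumptions, not the actual QAV rules.

@dataclass
class Company:
    name: str
    operating_cash_flow: list[float]  # recent years, most recent last
    intrinsic_value: float            # per-share estimate, however derived
    price: float                      # current share price

def is_buy(c: Company, discount: float = 0.8) -> bool:
    """Buy only cash-generating companies trading below intrinsic value."""
    generates_cash = all(cf > 0 for cf in c.operating_cash_flow)
    cheap_enough = c.price <= c.intrinsic_value * discount
    return generates_cash and cheap_enough

def should_sell(c: Company, buy_price: float, stop_loss: float = 0.8) -> bool:
    """One example sell trigger: price falls 20% below what you paid."""
    return c.price < buy_price * stop_loss

steady = Company("Steady Ltd", [1.2, 1.5, 1.7], intrinsic_value=10.0, price=7.0)
moonshot = Company("Moonshot Inc", [-0.5, 2.0, 3.0], intrinsic_value=5.0, price=9.0)

print(is_buy(steady))    # True: positive cash flow, 30% below intrinsic value
print(is_buy(moonshot))  # False: a negative cash flow year, priced above value
```

All the hard work, of course, lives in how you estimate `intrinsic_value`; the filter itself is the easy part, which is rather the point Cameron is making about Bitcoin having no such number.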
[01:00:36] Cameron: and it comes down to, do you want to be able to sleep at night? You know, do you?
[01:00:40] Steve: It’s one of my favorites.
[01:00:41] Cameron: So anyway,
[01:00:41] Steve: And
[01:00:42] Cameron: I look.
[01:00:43] Steve: afternoon.
[01:00:44] Cameron: I want to ask you about this idea of government crypto. Um, I like your idea of programmable currency. I’ve never really
[01:00:49] Cameron: been able to understand, though, what a cryptocurrency would offer, say, the Australian government over and above fiat currency that we
[01:00:59] Steve: so a
[01:00:59] Cameron: the advantages and disadvantages?
[01:01:01] Steve: this is what the Ethereum network was built
[01:01:03] Steve: upon and why I think that that is a currency that might be able to be
[01:01:06] Steve: used through the internet of things and various forms of trading, energy trading, the
[01:01:11] Steve: energy internet. the
[01:01:12] Steve: idea is basically that a currency, if it’s Turing complete, has programmability. Um, not all units of the currency are the same,
[01:01:19] Steve: so they’re not fungible. One GovCoin isn’t the same as another; they’re not exactly exchangeable. So the idea is that the government would be able to give you currency with certain rules or protocols built into it. So if I earn money in my job, that money goes to me, and that money, let’s call it AUC, Australian Crypto.
[01:01:43] Steve: It’s one for one with the Australian dollar or USC. And what would happen is we would have different types of money that we have. In our apps and our banks where we have different balances for different types of money. The money that you earn and you get in the open market is an open currency. It can be spent on anything that you want to spend it on.
[01:02:04] Steve: But then, on the flip side, you might have closed currency that can only be spent on certain things, and it would be programmable so that you can only transact with certain organizations. Now, for each organization that is registered and accepting money through digital forms, whether it’s EFTPOS or credit cards, uh, we know what type of organization it is.
[01:02:27] Steve: It sells foodstuffs, for example, or it’s transport. Like, we already do this. Like, this already happens in the marketplace. So, every, every form of trading, the government knows the type of business that you’re trading with or the product type that you’re getting. So, let’s say, uh, kids get allowances, uh, you have like a child endowment.
[01:02:46] Steve: It used to be called that when I was a kid. I don’t know what it is now, but people get payments. Uh, for their children to spend on certain things. Now the government, if it was clever, would say, Okay, here’s your child endowment for the month. It can be spent on transport, clothing, food, electricity. Can’t be spent on these other things.
[01:03:03] Steve: Or if it’s just a kid getting Austudy. Can only be spent on, uh, transport, books, school fees. That’s it. That’s it. That’s what it gets spent on. And then that way, uh, you would reduce the leakage and the wastage of things getting spent on the wrong things and what they were unintended for by using this code.
[01:03:23] Steve: And you can see how this could be translated to many different things. Like, for example, during COVID, all of these companies got given a whole lot of money. Well, if the government had given the JobKeeper money this way, that money could only be transferred to employees as wages. And at the end of it, if it isn’t, we take that money back.
[01:03:42] Steve: So all of a sudden, you get a far more efficient use of the tax base. By allocating money for where it is meant to go. At the moment, what we say is, what person are you and do you qualify? Here’s the money. And then you cross your fingers that they spend it in the right spot. GovCoins that are programmable allows that ability, which I think would be great.
[01:04:02] Steve: Now, the danger is it could become particularly draconian, and, and you can see the obvious downsides, but I think it would be a really good use of a crypto-style currency.
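The spend-rule mechanism Steve describes could be sketched as a toy model where each balance carries a set of allowed merchant categories. The `Wallet` class, category names and approval logic here are invented for illustration; they don't reflect any real CBDC design:

```python
OPEN = "open"  # open currency: spendable anywhere

class Wallet:
    def __init__(self):
        # each balance is a pair: [allowed_categories, amount]
        self.balances = []

    def credit(self, amount, allowed=frozenset([OPEN])):
        self.balances.append([set(allowed), amount])

    def spend(self, amount, merchant_category):
        """Try restricted (closed) balances before open money."""
        for entry in sorted(self.balances, key=lambda e: OPEN in e[0]):
            allowed, held = entry
            if (merchant_category in allowed or OPEN in allowed) and held >= amount:
                entry[1] -= amount
                return "approved"
        return "access denied"

wallet = Wallet()
wallet.credit(100, allowed={"food", "transport"})  # e.g. a child endowment
wallet.credit(50)                                  # open money, e.g. wages

print(wallet.spend(30, "food"))      # approved, drawn from the closed balance
print(wallet.spend(60, "gambling"))  # access denied: closed money disallows
                                     # it, and there's only 50 in open money
```

The point-of-sale check is exactly the "beep or access denied" step Steve describes; the hard part in practice would be classifying every merchant and transaction reliably.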
[01:04:13] Cameron: Sounds like a whole new level of complexity and central planning required to drive it, But
[01:04:19] Steve: No, but we already do that now. I say that already happens now; what we have is decisions made with that complexity on who gets the money.
[01:04:31] Steve: We just don’t know if it gets spent in the right places and goes to the right people. So that complexity already exists. But instead of just filling out paperwork and then sending the money to someone, you just add a layer to that currency which basically says
[01:04:44] Steve: what it can be spent on. And all it does is, when you go to spend it and you wave your card, it goes beep, or it goes ding, ding, access denied. That’s it. It’s not that hard.
[01:04:55] Cameron: Except you gotta go, okay, well, how much money do I have? Well, I’ve got this amount of money, but I can only spend it on these things, and that amount of money, but I can only spend it on these things,
[01:05:03] Steve: Well then, again, that’s not that hard. Let’s say there’s 10 categories
[01:05:07] Steve: of things. You can just say, can I spend it on this
[01:05:10] Steve: or that? How much have I got in open currency? How much have I got in education? And so on. There’s probably only 10
[01:05:16] Cameron: Are there any other advantages to governments moving to crypto, beyond allocation?
[01:05:23] Steve: Yeah,
[01:05:23] Steve: um, you get a whole different way that you can inject money into the
[01:05:28] Steve: economy. At the moment, the way that we do, uh,
[01:05:30] Steve: liquidity is we sell bonds, which is incredibly inefficient. Bonds get bought, uh, then get transferred, and you hope that that money goes into the right area of the economy.
[01:05:43] Steve: You could just release money to the right people straight away. And this is part of modern monetary theory. There’s a whole lot of other benefit in money getting to the market more quickly and efficiently than by selling government bonds to companies.
[01:06:06] Steve: Yeah, that’s what I think. But this is a big and developing area of economics, you know: crypto, programmable currency and modern monetary theory, which also says it doesn’t matter how much debt the government is in if it issues itself sovereign debt; it should never really sell government bonds to private corporations.
[01:06:27] Steve: The danger with sovereign debt is they might spend it on transfer payments, which creates inflation. If it goes to transfer payments, you get inflation. But if sovereign government debt goes into infrastructure, which has a multiplier effect, you never have to pay your debt back.
[01:06:44] Steve: It doesn’t matter, because you’ve built a new asset which then gets used to create further revenue within the economy. It’s called Modern Monetary Theory. It’s really interesting. I’m into it. You and Tony should explore it, yeah.
[01:06:59] Cameron: we did some shows about it back in the early
[01:07:01] Cameron: COVID years where it was being used to justify all the money that governments were printing
[01:07:06] Steve: But they weren’t, but they weren’t, but they were using it for transfer payments.
[01:07:10] Cameron: they used it wrong
[01:07:12] Steve: Yeah, they’ve
[01:07:12] Cameron: because they said that doesn’t matter. We can print money. It’ll never come back to bite us on the ass. And now we’ve had two years of interest rates going up and economies crumbling around the
[01:07:23] Steve: well, that’s the thing about interest rates. You know how the government, uh, the, the banks, there’s one of
[01:07:28] Steve: my favorite things people don’t realize is the interest rates go up and you’ve got a variable
[01:07:32] Steve: interest rate. That’s right. And then your interest rate goes up. I think 10 people would believe, well, now that the interest rates are 4%, the bank has to pay 4 percent out, so they have to charge me 6%.
[01:07:44] Steve: No, they already gave you the money. They gave it to you at 1%. That’s why their margins are so huge. People just don’t really think through just basic money in, money out. That’s what most investing in business and finance and economics is. It’s just like, Where did the money go in and where did it come out of?
[01:08:00] Steve: Like, just really basic in and out.
[01:08:04] Cameron: They got it when it was cheap. Now it’s more expensive. It doesn’t really matter because you got it from them when they got it when it was cheap.
[01:08:11] Steve: right, but then they charge you the expensive amount, but you’ve already been given the money. It’s like they changed the price after you bought the product.
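The arithmetic here can be sketched in a few lines of Python. The 1% funding cost and the rate path are invented numbers for illustration, not actual bank figures:

```python
def bank_margin(loan_rate: float, funding_cost: float) -> float:
    """The bank's spread: what the borrower pays minus what the money cost the bank."""
    return loan_rate - funding_cost

# The bank sourced the money when rates were 1% (invented figure)
funding_cost = 0.01

# Rate rises get passed straight to the variable-rate borrower,
# but the bank's cost on money it already lent doesn't move
for loan_rate in (0.02, 0.04, 0.06):
    print(f"borrower pays {loan_rate:.0%}, bank margin {bank_margin(loan_rate, funding_cost):.0%}")
```

In other words, once the loan is written, every later rate rise widens the margin on the already-funded loan, which is the "changed the price after you bought the product" point.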
[01:08:18] Cameron: After you bought it. Yeah. Yeah. Great scam. Um, well, I dunno, Steve, the whole, the whole crypto thing. Um, apart from the fact that I don’t think there’s a rational basis for using it as an investment, I, you know, I don’t really understand much about how it’s going to change the world of actual currency.
[01:08:40] Cameron: Um, I think the, the, the best argument I’ve heard for it is, um, um, Using it to prevent crime and fraud because it’s more traceable, where the money came from, the hands it goes through, how it ends up, like, makes it hard, would make it harder to use
[01:09:01] Cameron: shell companies in the Bahamas, uh, to filter money through different levels.
[01:09:06] Cameron: You can, whenever some money gets spent, you know exactly where that money came from and who it went
[01:09:12] Cameron: to.
[01:09:12] Steve: well, the wallet can be anonymous. It’s the onboarding and offboarding. So, the KYC protocol, the know your customer. When someone gets into crypto, you have to use a fiat currency to get in. All right, unless you meet someone in a dark alleyway, which you can do, who gives you the code and you hand them cash, you can do that.
[01:09:29] Steve: And there’s some real, like, dark crypto believers who do that. They literally give cash to get their 64 digit alphanumerical code to own that Bitcoin. Some people do do that so that they can be fully anonymous. But 99.9 percent of people have come on board into crypto using a credit card or a traditional financial system. Once you’re in there, you can transact inside it anonymously. All you can tell is when someone onboards and offboards, but inside there, it’s dark. You can be anonymous or you can be transparent.
[01:10:03] Cameron: There’s no way of tracing, um, the, so if, if money gets spent, let’s say, by a company that’s registered in the Cayman Islands to buy a, uh, an apartment building in Miami. Um, and you want to trace
[01:10:20] Cameron: where that money came from. There’s no way of, in crypto, of looking at the identification of that coin or those coins and tracing it back to its starting point.
[01:10:32] Steve: Nah, well, you can trace back when someone bought a crypto, if they bought in and it
[01:10:36] Steve: was the same wallet that sold or bought something inside it. But if you get inside the ecosystem and you’re trading
[01:10:42] Steve: inside it with different wallets and numbers, then whatever happens in there can be
[01:10:46] Steve: fully opaque. It’s only the onboarding and offboarding you can trace.
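Steve’s distinction — in-ledger hops are publicly visible but pseudonymous, with identity attached only at the KYC’d on-ramp and off-ramp — can be sketched with a toy ledger. The wallet names, amounts, and single "exchange" endpoint are all invented for illustration:

```python
# Toy public ledger: every transfer is visible to everyone,
# but only exchange (KYC) endpoints tie a wallet to a real identity.
transfers = [
    ("exchange", "walletA", 5.0),  # onboarding: fiat in, identity known
    ("walletA", "walletB", 5.0),   # in-ledger hop: visible, pseudonymous
    ("walletB", "walletC", 5.0),   # in-ledger hop: visible, pseudonymous
    ("walletC", "exchange", 5.0),  # offboarding: fiat out, identity known
]
KYC_ENDPOINTS = {"exchange"}

def identified_hops(ledger):
    """Return only the transfers where a real-world identity is attached."""
    return [t for t in ledger if t[0] in KYC_ENDPOINTS or t[1] in KYC_ENDPOINTS]

print(identified_hops(transfers))  # just the first and last transfers
```

Everything in between is traceable as a chain of wallet IDs, but mapping those IDs to people requires information from outside the ledger.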
[01:10:51] Cameron: So,
[01:10:51] Cameron: no real security advantages
[01:10:53] Cameron: then from a fraud
[01:10:54] Cameron: perspective. Because that’s what we, that’s, that’s really what we need to be able to do better, is to stop people using shell companies,
[01:11:04] Steve: but I don’t think we do want that, because if we wanted that, you’d just stop it. You say, you’re not allowed to ever transfer money from here over to there. The end. Thanks for coming. It’s like multinational tax rorting. They act like it’s hard. It’s really easy. You just go,
[01:11:19] Steve: here, we do revenue assessment tax, same as land tax.
[01:11:23] Steve: It’s a percentage of revenue. It’s not, it’s not what you say profit is because you rented out your logo from yourself in a related party transaction. The fact that anyone acts like multinational tax avoidance is a hard issue to solve is fucking laughable. It’s an absolute joke. They’ve done it with land tax for 500 years.
[01:11:42] Steve: They just go, we assess the value of your land. Here’s your bill. Congratulations. Hi Google, you turned over 11 billion dollars in Australia. Here’s your 1 billion tax bill. You don’t like it? Leave. Bye. It’s
[01:11:55] Cameron: Only 1 billion out of 11 billion, really
[01:11:57] Steve: Well, no, that’s their tax bill.
[01:11:59] Cameron: Yeah.
[01:12:00] Steve: Well, 10 percent of turnover would be reasonable probably, let’s say.
[01:12:05] Cameron: For Google, really? I
[01:12:06] Steve: Yeah,
[01:12:06] Steve: probably not. But anyway, at the moment, you know, Google paid $200,000 last year.
[01:12:10] Steve: I paid more tax than Google. I’m so much more successful than them.
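The revenue-assessment model Steve is proposing is just a flat levy on turnover, analogous to land tax on assessed land value. A minimal sketch, using the 10% rate and the $11 billion turnover floated in the conversation (illustrative numbers, not a real policy or Google’s actual accounts):

```python
def revenue_tax(turnover: float, rate: float = 0.10) -> float:
    """Tax assessed on gross revenue, ignoring declared profit entirely,
    so related-party charges (renting your own logo back) can't shrink the bill."""
    return turnover * rate

bill = revenue_tax(11_000_000_000)  # the turnover figure cited on the show
print(f"${bill:,.0f}")              # → $1,100,000,000
```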
[01:12:16] Cameron: All right, well, moving right along. Technology Time Warp, Steve.
[01:12:21] Steve: Well, we did that, didn’t we? No, we didn’t, we didn’t. Real quick one. My TikTok has been filled with 40 years
[01:12:28] Steve: ago, just everything, everything on pop culture. Uh, just two weeks ago was the Saturday that the Breakfast Club were on Saturday Detention, which I loved, but I thought I’m just going to throw real quick three tech things from 40 years ago.
[01:12:42] Steve: The 3.5 inch floppy diskette was introduced. Reminds me of, remember, there was the five inch floppy and the three and a half inch floppy. And the three and a half inch one was a stiffer plastic case. And I remember everyone used to get confused. There’s a floppy disk and a hard disk, which is one of my favorite things.
[01:13:02] Steve: Everyone used to say, no, I don’t want a floppy, I want a hard. It’s like, well, they’re both floppy. But anyway, let’s not get into it. The 3D printer was invented by Chuck Hull in 1984, and he had a patent on it for a long time. He got two patents cause he had an evolution in it. And it was only in 2009, I think, that the patent came off and that’s when the market went crazy
[01:13:24] Cameron: Wow.
[01:13:25] Steve: with 3D printing. And the first digital projector.
[01:13:28] Steve: And I just want to say 40 years of digital projectors in corporate meetings, and they still don’t work. So I’m just asking some serious questions. So that was my time warper. Have you got one there for us, Cam?
[01:13:39] Cameron: Yeah, I’ve been reading a biography on Deng Xiaoping, um, recently as well, and, um, you know, just fascinated. It was about, it was like 1979 when Deng Xiaoping, uh, was the top man in China, took over from Mao Zedong. And, you know, you look at what has happened to China as an economy in the last 45
[01:14:02] Cameron: years, it’s really, uh, astounding.
[01:14:05] Cameron: I was doing some research, um, last week, just looking at the GDP growth of China in the last 40 years. Um, yeah, go to the tape. You look at, um, I’ve got this article from sciencebusiness.net, Gross Domestic Expenditure on R&D in the billions by country. And you look at, you know, China, how quickly China has just blasted, just in the last 20 years, let alone the last 45 years, it’s just blasted through every other country, uh, except the United States.
[01:14:47] Cameron: If you look at Japan, uh, China’s R&D in the year 2000 was about 30 odd billion dollars per annum. Um, at the time, the United States’ R&D as a country was about $268 billion. In 2021, the US was 806 billion. So it grew, yeah, a little bit less than four times, say more like three times, in that period of time. China’s went from about
[01:15:20] Cameron: 25, 30 billion to 670 billion
[01:15:25] Steve: Wow.
[01:15:27] Cameron: in 2021. And I’ve been following the news, uh, from their recent two sessions where they all
[01:15:34] Cameron: get together and, you know, sort of figure out the next roadmap for China. They’re still talking about having a goal of a 10 percent year on year increase
[01:15:44] Cameron: in R&D.
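Cameron’s figures can be sanity-checked with quick arithmetic. The spend numbers below are the round figures quoted on the show (in US$ billions), and the doubling time shows what a sustained 10% year-on-year increase compounds to:

```python
import math

# R&D expenditure in US$ billions, as quoted on the show (round numbers)
us_2000, us_2021 = 268, 806
cn_2000, cn_2021 = 30, 670

print(f"US grew {us_2021 / us_2000:.1f}x")     # → US grew 3.0x
print(f"China grew {cn_2021 / cn_2000:.1f}x")  # → China grew 22.3x

# Compounding at 10% a year doubles the spend roughly every 7.3 years
doubling_years = math.log(2) / math.log(1.10)
print(f"doubling time at 10% p.a.: {doubling_years:.1f} years")
```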
[01:15:45] Steve: I thought you were going to say GDP, which they did. But that’s unsustainable.
[01:15:50] Cameron: 10%, you think? Why?
[01:15:53] Steve: Well, the R& D is possible and sustainable. They can definitely do that with their economy. I think it’s starting to
[01:15:59] Cameron: Uh, GDP, yeah. No, GDP growth, I think their goal is six to 7%.
[01:16:04] Steve: It was 10 for a while, now they’re doing about five to six.
[01:16:06] Steve: Yeah, well in 1985 it was 310 billion and at the moment it’s
[01:16:13] Steve: just around about the 20 trillion mark, it’s
[01:16:16] Steve: just about 3 trillion under the US. But as you know, there’s been a lot of talk, and I don’t know how much of it is true, that, you know, things are imploding a fair bit in China, in terms of production. We’ve got a bit of deglobalization where things are moving away.
[01:16:32] Steve: And I think that things that we’ve discussed, with soft robotics and humanoid robots, are going to take away their low cost labor advantage in many realms, um, especially with dexterous manufacturing. Um, it’s, it’s interesting though. I mean, it’s absolutely unbelievable how they turned around that economy.
[01:16:52] Cameron: I got a report from the US National Science Board that came out recently that says China has now surpassed us in STEM talent production, which is research publications, patents, and knowledge- and technology-intensive manufacturing. China has set the goal of being the world’s leading science and engineering nation, and these NSB reports demonstrate that the United States is on the verge of allowing them to realize that objective.
[01:17:17] Cameron: We already see this in artificial intelligence, where China outpublishes us, has more patents, and produces more students than the United States. I, I saw this, uh, TikTok, uh, video or YouTube video the other day of a guy, American guy, who was in China, he was in Shanghai, and he was saying that he’s been traveling regularly there for the last 10 or 15 years, but this is the first time he’d been back since COVID.
[01:17:44] Cameron: Um, and he was saying just the change that he saw in Shanghai in
[01:17:50] Cameron: the, those like four or five years since he was last there, he said, it’s just incredible. He said, five years ago, you couldn’t see the sky because of the smog. Now look, completely, completely
[01:18:00] Steve: wow. Okay. That’s a really important thing, right? You
[01:18:05] Cameron: electric vehicles?
[01:18:06] Cameron: He said, there are just purely electric vehicles everywhere here. You can buy a top of the line electric vehicle for 10,000 US dollars in China. They’re just churning them out. Um, you know, he was just talking about, he said, like, the seven main things
[01:18:23] Cameron: that he noticed going back, um, in the first time in five years, he said, like,
[01:18:28] Cameron: he said, the US is going backwards in a lot of areas, China’s just quantum leaping itself
[01:18:35] Cameron: every year,
[01:18:35] Cameron: right?
[01:18:36] Cameron: It’s really fascinating. If their economic growth rates, you know, flatten off to an extent or aren’t as strong, I think,
[01:18:43] Cameron: Which is deliberate, by the
[01:18:45] Steve: well, yeah, right. So you get to a point where, yeah, well, I’m interested to see your point on where you think America is, because it seems to me that it’s imploding, you know. The disparity of income is a big thing, the crony capitalism, they’re not really investing, and the political division. The political division is tearing the country apart.
[01:19:08] Steve: it really is.
[01:19:09] Steve: And even if you think about what seems to be valued in the New York Times, uh,
[01:19:17] Steve: you know, divisive politics
[01:19:19] Steve: and who’s upset and who is instead of real things, instead of real issues like, you know, feeding people, moving towards renewable and all of the things that really matter. And it seems to me from the outside that they’re overly focused on divisive politics and who’s right and who’s offended and they’re just eating themselves on issues which are unimportant in the grand scheme of the direction.
[01:19:47] Steve: I’m not talking about, of course we want equality and all of that, but it seems as though, uh, yeah, they’re focused on things that, uh, All about ideological groups instead of progression for everyone.
[01:20:05] Cameron: And yeah, just, I mean, I read the New York Times every day. It’s one of the about 12 newspapers that I read every morning, including, I read newspapers
[01:20:13] Steve: 72 hour, any 72 hour, you know, um, circumvention of,
[01:20:17] Steve: the sun.
[01:20:18] Cameron: I read newspapers coming out of China, Russia, the Middle East, as well as the BBC, the New York Times and the ABC, et cetera. But when I say read them, like, I scan them for important things, and it’s always interesting to me to see the different coverage of big global stories from different newspapers in different parts of the world.
[01:20:41] Cameron: But, um, you know, the, the focus on Russia and China in the New York Times, there’s never a positive story about anything that China ever does in Western media. Try and find a single positive story about China in any mainstream media in the West. You know, you’ve got a country
[01:21:01] Cameron: That is just killing it on, you know, it’s not perfect, but it’s killing it on so many metrics right now.
[01:21:09] Cameron: Try and find a single positive story about that in Western
[01:21:12] Steve: Hey, I can tell you the single negative story that they love. The single negative story is the lack of democracy. And it does seem to be getting
[01:21:18] Steve: worse. And, you know, the social scoring and all of that. I know that we’ve talked about that and you’re like, is that bad? I’m like, it just feels like it can’t end well.
[01:21:25] Steve: Maybe like programmable currency. I don’t know. But,
[01:21:29] Cameron: You were saying to me off air before the show, you know, I think maybe, you know, if you do, if
[01:21:34] Cameron: you contribute to society, you should get taxed less than people who don’t
[01:21:38] Cameron: contribute to
[01:21:38] Cameron: society. Isn’t that.
[01:21:39] Cameron: a social scoring system?
[01:21:41] Steve: It is, but not based on being, uh, surveilled 24 seven. So yeah, my, my, and we, and we may as well
[01:21:51] Steve: raise it now. My, my idea was that because the market has failed in allocating the right amount of resources to people’s jobs, who create a social contribution, you know, nurses, teachers, doctors, what have you versus investment bankers and corporate lawyers.
[01:22:05] Steve: I think tax rates should vary based on the social contribution of the work that you do. I’m going to do it. I’m going to write up a piece about it.
[01:22:12] Cameron: think it’s a great idea. But look, you know, um, democracy in China, uh, is a whole other, you know, story. I mean, the funny thing about that, if you, from the perspective of the Western media, is we criticize China all the time for not having the kind of democracy that we think they should have. But we loved Lee Kuan Yew when he was the dictator of Singapore, that had exactly the
[01:22:36] Cameron: same kind of democracy, which was really a meritocracy.
[01:22:40] Cameron: In Singapore, we loved Lee Kuan Yew. When Lee Kuan Yew died, you know, it was like, uh, Jesus died, you know, because he doesn’t have any nuclear bombs, the West sung
[01:22:51] Cameron: his praises. And yet, you know, he was a dictator of Singapore for, you know, whatever, 35, 40 years. And his son took over, you know, and, and I, I just recently read a book of interviews of Lee Kuan Yew that were done just before he passed away, a book by Graham Allison.
[01:23:10] Cameron: Lee Kuan Yew said that Xi Jinping was the new Nelson Mandela. He said Xi struck him as having the same sort of integrity and vision as a Nelson Mandela.
[01:23:25] Cameron: But you won’t hear the Western
[01:23:26] Cameron: media compare Xi Jinping to Nelson Mandela. Anyway, moving right along, Futurist Forecast, Steve,
[01:23:35] Steve: Yes,
[01:23:35] Cameron: you just wrote AI in a note.
[01:23:37] Steve: No, no. What if I did? No, no. Did I write that?
[01:23:42] Cameron: That’s just AI. I’m
[01:23:43] Cameron: like, that’s broad. There will be
[01:23:47] Steve: We’ve been going for
[01:23:47] Steve: two hours. I don’t have one. If you have one, then I’m happy to go into it.
[01:23:50] Cameron: Hour and 20. Well, listen, okay, just quickly. Um, we talked about AGI in five years, six years earlier on. Again, the thing that I’m not seeing enough people talking about, which is crazy, is: what does a world look like with AGI in five years? You know, like, just think about a world with millions of new AI research scientists.
[01:24:16] Cameron: That are as good, if not better, than the best human scientists working on research on cancer, on nanotech, on climate change, carbon sequestration, cold fusion, fairer economic models, robotics. What does the world look like when we have a thousand times the number? Like, I was talking to Chrissy, you know, you and I were talking about neurodivergent kids and whatever before, and Chrissy’s been on this thing, you know, lately, talking about how Western, uh, medicine hasn’t done a good enough job at, uh, identifying mental health issues, and GPs don’t do a good enough job.
[01:25:01] Cameron: And I was, I was pushing back on it the other day. I was saying, look. Just in Australia, leave the US out because their medical system is obviously a clusterfuck. But we have mental health professionals here. We have psychologists, we have psychiatrists, we have therapists. The entry point to those is the GPs.
[01:25:18] Cameron: You sit with your GP, you tell them you’re having mental health issues, they’ll get you, you know, uh, uh, uh,
[01:25:24] Steve: Yeah,
[01:25:25] Cameron: Yeah, they’ll point you in the right direction. Um, uh, but the, the, the problem is, A, we don’t have enough GPs. B, we don’t have enough psychologists or psychotherapists, psychiatrists for the population, particularly as people are becoming more and more comfortable with admitting they have a mental health issue.
[01:25:47] Cameron: But the difference, I said, between, like, somebody with a pain in their stomach and somebody with a mental health issue is, you know, the pain in the stomach presents differently. You know, you got a bad pain, you go to a doctor, you say, I’ve got a pain in the stomach, and the GP goes, okay, you need to go to the ER, or you need to go get an x-ray or
[01:26:04] Cameron: whatever it is.
[01:26:06] Cameron: But the people need to go to the GP in the first place and say, I have the issue. They don’t come and knock on your door and say, are you having any pains today,
[01:26:12] Steve: How are you feeling today, Steve? We’re just checking in yeah. And it’s even worse with mental health, right? You need to, it’s not the GP’s job to identify that you have a mental health issue.
[01:26:23] Cameron: You need to go to the GP and say, I think I need help. Then they can get you on the right stream for that help. But people don’t do that because, you know, our parents’ generation traditionally wouldn’t get therapy, you know. Particularly men are really bad at saying, yeah, I’ve got a mental health issue, I need help.
[01:26:43] Cameron: It’s still seen, less now than I think it was a generation ago, but it’s still seen as a sign of weakness or whatever it is. But the problem, even if we had everybody go to the GP tomorrow and say, I need mental health help, is we just don’t have enough GPs for a start, which is why they try and push you through in five minutes, is because they’ve got a queue and they can’t see everybody.
[01:27:02] Cameron: B, we don’t have enough psychologists and, you know, if you try and get your kid into a child psychologist or you try and get your kid diagnosed for ADHD, which we’ve been trying to do with Fox for 12 months, it costs a fortune and it takes forever to get an appointment because
[01:27:16] Steve: pediatrician, it
[01:27:16] Cameron: the waiting lists are crazy, because we just literally don’t have enough people.
[01:27:22] Cameron: You just, and you can’t train people by snapping your fingers. But you know, you need kids at university willing to go and do these things instead of going and doing corporate finance. It takes years to train them and qualify them and all that kind of stuff. But imagine a world where we have a million
[01:27:41] Cameron: psychotherapists that are on your
[01:27:42] Cameron: phone, that you could talk to, that understand you, that are listening, they read your emails, they’re listening to every conversation and every phone call you make.
[01:27:42] Steve: Do they see my web browser history?
[01:27:56] Cameron: Yes, particularly they need to see your web browser history, even in incognito.
[01:28:00] Cameron: Steve, what, why your fascination with stepbrothers and stepsister porn?
[01:28:05] Cameron: They’re going to want to know that. Like, can you get porn that isn’t about stepbrothers and stepsisters?
[01:28:11] Steve: I had no idea there are so many frisky stepsisters these days.
[01:28:19] Cameron: it’s for some reason,
[01:28:20] Steve: for some every. Every porn video needs to have the word step in it I know, It’s crazy, isn’t it? I do not understand why that is the thing these days, but somebody decided that’s the, that’s the go to thing. So we’re also going to have, you know, uh, uh, lots of AI generated content, most of which will be fucking awful, but hopefully we’ll also have AI tools that’ll help us find the good stuff.
[01:28:49] Cameron: So it’ll know your tastes. It’ll know your preferences. Um, uh, you know, a version of the Netflix algorithm, or the YouTube or TikTok algorithm, that recommends more stuff that it knows I already like, but that on steroids for all of the AI content. But here’s the thing I really wanted to drill down on: if Sam Altman is right, if Jensen Huang is right, if Elon Musk is right, even if they’re remotely close to being right, if this is what the world looks like in the next five, six, seven years, millions of
[01:29:26] Cameron: scientists, millions of
[01:29:28] Cameron: tutors, millions of
[01:29:30] Cameron: therapists, why aren’t we talking more about that right now?
[01:29:35] Cameron: And what does that world look like? I’m talking 2030.
[01:29:39] Steve: Everything with AI right now is kind of like this: other than humanoid robots, everything is just more efficient versions of what we did yesterday.
[01:29:53] Cameron: Yeah, slightly better versions of what we did yesterday.
[01:29:55] Steve: That’s it. And you’re right. And this is the exciting part of AI is, I mean, and this is kind of why it
[01:30:04] Steve: circles back to the, do we want these hundred billion dollar nation state level infrastructure projects managed by private companies, because what they’re going to want to do is just sell this to corporations, but what we probably
[01:30:14] Cameron: no one, no one else is going to do
[01:30:16] Cameron: it, Steve. It has to be Microsoft and Elon Musk and Sam Altman and Google doing this. Countries aren’t doing it.
[01:30:25] Steve: well they aren’t, and
[01:30:26] Steve: and they should be,
[01:30:27] Cameron: Except China.
[01:30:28] Steve: except China, right, so countries are doing it, so one third of the world is doing
[01:30:32] Steve: it,
[01:30:33] Cameron: yeah, China’s doing it.
[01:30:34] Steve: right, so countries are doing
[01:30:35] Steve: it, just the clever ones, and why don’t we just
[01:30:39] Steve: leave it at that, the clever country, as, as two citizens of the purported clever country.
[01:30:48] Cameron: The CCP, Clever Country Party. That’s what it
[01:30:51] Cameron: stands for. Didn’t know that? What a great note to end this thing on.
[01:30:55] Cameron: Thank you, Steve. Good to chat.
[01:30:56] Steve: we went deep, wide, and the listeners, they are very lucky
[01:31:01] Steve: people.
The Robots are coming! We are talking about the latest in GPRs (general purpose humanoid robots), Apple cancelling their car project, Gemini 1.5 Pro testing, Biden’s plan to ban voice impersonation, the decline of TV viewership, and DeepMind CEO Demis Hassabis’ views on AlphaZero sitting atop LLMs on the AGI stack.
Futuristic 22
[00:00:00] Cameron: Welcome to Futuristic episode 22. Uh, forgive me, father, for it has been a month since our last recording. And by father, I’m talking about Steve. I don’t know. Hey, how
[00:00:27] Steve: Uh, definitely my fault. Definitely my fault. By the way, Cam, Futuristic22, I was on TikTok this morning.
[00:00:35] Steve: This one business cat has some, one of those guys that has good
[00:00:38] Steve: advice. And he was talking about how
[00:00:39] Steve: to
[00:00:40] Steve: build your brand and all that kind of stuff. And he said that only 9 percent of podcasts
[00:00:44] Steve: make it to episode 50.
[00:00:45] Steve: So we’re nearly halfway.
[00:00:49] Cameron: Ah, yes. F22. Uh, Steve, what have you done in the last month that you can impress us with that’s related somehow to emerging technologies?
[00:01:02] Steve: Yeah, I’ve been working with a few of my clients on trying to build out bespoke
[00:01:08] Steve: corporate AIs and we’ve messed around a lot with building GPTs. For the companies. One of them’s an electrical company that I work with
[00:01:16] Steve: and we’re building Sparky AI, which is a go to tool for electricians, you know, code standards, all that kind of stuff, and it’s not bad. But I think that GPTs are never really going to solve the corporate problems. The way that I see it, and I’ll be quick on this, there’s going to be three types of AIs that emerge. There’s going to be what I call global AIs, which are the general purpose AIs that we all use, like Gemini and ChatGPT.
[00:01:42] Steve: There’ll eventually be personal AIs. I think Apple might be doubling down to move towards that, where it’s your kind of quasi digital twin. And then in the middle, there’s the corporate AI. And we might liken that to an intranet or a wiki, but with functionality to do things, not just serve up information.
[00:01:59] Steve: And even though Microsoft is doing that with their Copilot, it seems very, let’s say, uh, efficiency tools centric. You know, help you with PowerPoints and finances and spreadsheets. But I can’t help but think that all of the AIs that are being worked on, none of them will really be able to work through the complexity of a corporation’s data pool or data lake, which are really, really complex.
[00:02:25] Steve: And I’m now working with a company called Actualization AI, who do a combination of training on internal data pools, using open, uh, LLM models, and solving specific business problems with specific AIs. I feel like that’s going to be a big growth industry, uh, simply because, you know, the world is messy and complex, and even two companies in the same industry are fundamentally different, you know, they have different pieces of data from different places that have been accumulated over the past 30 or 40 years.
[00:02:58] Steve: And I feel like, you know, those kinds of closed system AIs that need to be built out in a bespoke manner are going to be a big thing.
[00:03:06] Cameron: So tell me, from your perspective in the corporate world, what are your clients looking for AIs to do for them? What are their current ambitions for AI?
[00:03:18] Steve: Two things that I see, they want their staff to start using it to save time and create efficiencies where they don’t have
[00:03:25] Steve: to do repetitive work or they can automate things. And that’s a classic efficiency game. How can I get more out of
[00:03:31] Steve: an employee? How can I have less employees
[00:03:32] Steve: doing more? So that, and, and the LLMs do a good chunk of that.
[00:03:36] Steve: And a lot of the tools that are being presented and apps that are going out do that. But the one thing they’re all coming to me saying now is. How do I use all of this stuff that we have, all of this information, stock flows, warehouses, retail, financial, how do I get AIs to do this thing or that thing?
[00:03:55] Steve: And there is just nothing out there that would, I don’t think, ever be able to do it, simply because the inputs are going to be too varied by corporation. So they’re the two things. How do I get my staff to use these AI tools to be more efficient? It’s a little bit like computer literacy, you know, back in the early 90s.
[00:04:11] Steve: It’s that all over again, but it’s AI literacy, where they become more efficient. But then the big one, the really big one is: I know AI can do all of these things. How can I create an AI that does X, Y, and Z on all of my internal digital workings? And that, I don’t think, can be solved at a macro level.
[00:04:28] Steve: That’s going to have to be bespoke software again in some capacity, I think. They’re the two things they’re asking for. And that second one, you know, they’re prepared to spend many millions on getting that right. That’s a really big opportunity.
[00:04:39] Cameron: when you say do X, Y, and Z, can you give me some examples? What’s it, what is X, Y, and Z? What are they looking for it
[00:04:45] Steve: Okay. So, so I’ll give you an example that electrical company I work with is a wholesaler with 113 branches that serve
[00:04:54] Steve: Uh, electricians and
[00:04:56] Steve: trades people, they want to be able to say
[00:04:59] Cameron: kind of thing.
[00:04:59] Steve: yeah. So they want to say, how can I maybe be like Amazon? How can I know exactly what’s in my stock flows? How can I teach my staff to substitute a product?
[00:05:09] Steve: But if they substitute it, it’s one that’s higher margin. How can I, um, know exactly what’s happening in each branch and where customers are going? And how can I look at which customers I’m serving that are more high value? All of that stuff, where it really goes through their supply chain and gives insight on supply chain in real time and actually helps decision making, which is not based on one person’s experience, but making sure that every staff member has the same level
[00:05:34] Steve: of experience underneath them to make more profitable decisions within that supply chain.
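The substitution behaviour Steve describes — when an item is unavailable, steer staff to an interchangeable product, preferring the higher-margin one — reduces to a small lookup. Product names, margins, and the catalogue structure here are invented for illustration, not Sparky AI’s actual data:

```python
# Hypothetical catalogue: stock status, gross margin, and an interchangeability group
catalogue = {
    "cable-A": {"group": "cable", "in_stock": False, "margin": 0.12},
    "cable-B": {"group": "cable", "in_stock": True,  "margin": 0.18},
    "cable-C": {"group": "cable", "in_stock": True,  "margin": 0.15},
}

def suggest_substitute(sku: str):
    """Suggest an in-stock product from the same group, preferring higher margin."""
    group = catalogue[sku]["group"]
    candidates = [
        (name, data) for name, data in catalogue.items()
        if name != sku and data["group"] == group and data["in_stock"]
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1]["margin"])[0]

print(suggest_substitute("cable-A"))  # → cable-B (in stock, highest margin)
```

The hard part in practice isn’t this rule; it’s the part Steve flags: getting clean, real-time stock and margin data out of 113 branches to feed it.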
[00:05:39] Cameron: So they’re looking for it to be an
[00:05:41] Cameron: intelligent, uh, member of management, really. They’re looking for AI managers that can understand,
[00:05:50] Cameron: that can take a 10,
[00:05:52] Cameron: 000 foot view of all of the data inside of the business and make intelligent managing decisions based on what it can see. And the amount of data in theory would be able to process about, you know, 10, 000, 20, 000, 30, 000, 40, 000, 50, 000, 50, 000, 50, 000, 50, 000, 50, 000, 50, 000.
[00:06:06] Cameron: A lot of complex data and bringing it all
[00:06:09] Steve: Right. And it could even be, you know, which, what stock
[00:06:12] Steve: to carry and what the trends are and then bringing in, you know,
[00:06:15] Steve: uh, building approvals and what’s that going to affect on different materials that we’ll need. And that’ll be more demand and where are the
[00:06:21] Steve: artists, all of that kind of stuff. Because what happens is it gets to an end point where a human makes a
[00:06:26] Steve: decision. and
[00:06:28] Steve: 10 percent of the employees might make an
[00:06:29] Steve: informed decision and 90 percent won’t.
[00:06:31] Cameron: Hmm. Well, and, and the, yeah, my
[00:06:34] Cameron: perspective on that is we’re nowhere near that yet. Nowhere near it. Nowhere near it. I mean, we have to keep reminding ourselves and in your
[00:06:44] Cameron: case, reminding your
[00:06:44] Cameron: clients, that what LLMs do
[00:06:48] Cameron: is finish sentences. That’s what they do. They don’t, um, really have the ability to reason or apply logic.
[00:07:01] Cameron: They’re not intelligent in that sense,
[00:07:03] Steve: different. It’s probabilistic. It’s probabilistic. They, in some ways, they think AI is here now and it’s magic. So unleash the magic upon my corporation, right? Unleash the
[00:07:13] Steve: magic, right? But, but
[00:07:14] Steve: also
[00:07:16] Steve: in, in many ways, it’s, LLMs have raised the AI agenda, but I think what they really want is something that’s more akin to a Google Maps, which gives me an if this, then that protocol, which helps me make a better
[00:07:29] Steve: decision, which means that you need to build a specific, let’s call it an AI that does that task and combines it with some LLM to help transliterate what, what needs to
[00:07:39] Steve: happen.
[00:07:40] Steve: So it’s
[00:07:40] Cameron: And this
[00:07:41] Steve: at the moment that this is a big conversation.
[00:07:43] Cameron: this is what I’ve been saying on
[00:07:44] Cameron: this show for the last year is that LLMs aren’t going to get us there, uh, in and of themselves. I think they’re an incredibly powerful part of the jigsaw puzzle because they’re giving us the ability to allow computers to understand natural language and to communicate back in natural language and to communicate with each other.
[00:08:06] Cameron: in natural language. But then you need the logic level, you need the symbolic logic on top of that, and we’ll get to that. Um, I’ve been watching some interesting videos recently by Yann LeCun and Demis Hassabis. Yann LeCun runs, sort of, AI for Meta, and Hassabis is the co-founder and the CEO of DeepMind, which is now Google DeepMind, that’s doing a lot of the work on, you know, they built AlphaGo and AlphaFold and, you know, the systems that are the best chess players, best Go players, all the medical stuff, which do have deep expertise in those specific domains.
[00:08:50] Cameron: The first one they
[00:08:50] Cameron: built was something that could play
[00:08:51] Cameron: like Atari games. And, um, you know, he talked a lot, in this couple of videos I’ve watched, about how they did that. Anyway, we’ll get to that later on. Uh, well, for my money, Steve, in the last month, um, I’ve been doing a lot of scripting and, and like improving my
[00:09:06] Cameron: scripting skills and just, you? know, as a reminder for people, I’m not a Coder
[00:09:10] Cameron: by, uh, you know, uh, yeah, I am now and it’s, and I’ve said this before and I’ll
[00:09:14] Cameron: say it again, it’s addictive once you get into it, once you start coding stuff, it becomes, it’s like a crack habit, you want to code everything and,
[00:09:25] Steve: this, sense of control, don’t you? It’s a God like thing.
[00:09:27] Cameron: a sense of power really is, but it’s also, it’s also, you know, a part of the, the problem solving aspect of it. Like it’s, you’re like, oh, it’s not quite right. I wonder if I tweak this line or tweak that line, or tweak this piece of code if it’s gonna, if I, if I’m gonna get there. And it becomes sort of addictive trying to figure out what the magic solution is.
[00:09:47] Cameron: But I wrote a script, uh, in the last couple of weeks that I’m kind of proud of. Um, you know, as part of my investing show, we have a, we have a U.S. portfolio and there’s sort of 800
[00:10:02] Cameron: stocks on the buy
[00:10:03] Cameron: list that I’m having to analyze out of the many, many thousands that are on the NASDAQ and the New York Stock Exchange.
[00:10:10] Cameron: Uh, once I do all my basic filtering, I’m left with 800 that I need to do another level of filtering on it because I don’t know much about the U. S. market outside of the big, you know, Mag 7 type companies.
[00:10:19] Steve: Mag7, no more Gamma Gaffer, Mag7,
[00:10:23] Steve: got to change the names. All right, keep going.
[00:10:25] Cameron: I’m looking for the stocks that are commodity stocks, because part of our investing process is, before we invest in a commodity company like a mining company, we want to know where the underlying commodity is at in terms of its life cycle and price cycle.
[00:10:39] Cameron: Um, so anyway, I wrote a piece of script, uh, with the help of ChatGPT, obviously, that, uh, takes a list of stock codes in a Google Sheet. Then OpenAI will write me a paragraph business description of each of those companies and put it in the next column. Then it will reference, uh, another sheet in that spreadsheet that has a list of all of the commodities that I downloaded from the World Bank’s, uh, Commodity Index.
[00:11:10] Cameron: Then it will look at the business description that it wrote, cross-reference that against this list of global commodities, and make an intelligent decision about which underlying commodity that business probably relates to, and then put that in another column. So then I can just filter on that and figure out which commodity this business is involved in.
[00:11:30] Cameron: And the thing that was cool about that for me is, um, I figured out how to write script that calls the OpenAI API to do all of that from inside of a Google Sheet. So essentially I can get, um, OpenAI’s API analyzing stuff in spreadsheets for me now at a different level, which, um, you know, I still don’t think Copilot really does yet.
[00:11:54] Cameron: Um, it sort of has very basic, uh, AI functionality every time I check out where Copilot’s at. But that’s sort of the dream, is, you know, you’ll be able to just point it at a list of things and say, you know, apply some level of intelligence behind the scenes to this for me. So being able to code that was pretty exciting
[00:12:13] Cameron: for me.
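[Editor’s note: the workflow Cameron describes, a custom function in Google Apps Script that sends each business description to the OpenAI chat completions API and writes the matched commodity back into the sheet, could be sketched roughly as below. All names here (CLASSIFY_COMMODITY, buildPrompt, matchCommodity, the model string) are illustrative assumptions, not his actual script.]

```javascript
// Sketch of a Google Apps Script custom function that classifies a company
// against a commodity list via the OpenAI API. Hypothetical names throughout.
const OPENAI_URL = 'https://api.openai.com/v1/chat/completions';

// Pure helper: build the prompt asking the model to pick one commodity.
function buildPrompt(description, commodities) {
  return (
    'Given this business description:\n"' + description + '"\n' +
    'Pick the single most relevant commodity from this list, ' +
    'or answer "None" if no commodity applies:\n' +
    commodities.join(', ')
  );
}

// Pure helper: normalise the model's free-text reply back onto the known list.
function matchCommodity(reply, commodities) {
  const cleaned = reply.trim().toLowerCase();
  for (const c of commodities) {
    if (cleaned.includes(c.toLowerCase())) return c;
  }
  return 'None';
}

// Sheet entry point, used like =CLASSIFY_COMMODITY(B2, Commodities!A1:A50).
// Runs only inside Google Sheets: UrlFetchApp and PropertiesService are
// Apps Script services, not available outside that environment.
function CLASSIFY_COMMODITY(description, commodityRange) {
  const commodities = commodityRange.flat().filter(String); // range arrives as a 2D array
  const payload = {
    model: 'gpt-4-turbo', // assumption: any chat-completions model works here
    messages: [{ role: 'user', content: buildPrompt(description, commodities) }],
  };
  const resp = UrlFetchApp.fetch(OPENAI_URL, {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + getApiKey_() },
    payload: JSON.stringify(payload),
  });
  const reply = JSON.parse(resp.getContentText()).choices[0].message.content;
  return matchCommodity(reply, commodities);
}

// Keep the key out of the sheet itself; store it in Script Properties.
function getApiKey_() {
  return PropertiesService.getScriptProperties().getProperty('OPENAI_API_KEY');
}
```

Funnelling the model’s reply through matchCommodity is the design choice that makes this usable in a spreadsheet: the output column only ever contains values from the known commodity list (or “None”), so a plain filter works on it.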
[00:12:14] Steve: righty.
[00:12:15] Cameron: Let’s move into news, Steve. Uh, well, it’s been a big month, uh, since we last did a show. Um, one of the first things I wanted to talk about was Google released their Gemini 1.5 Pro model. I think we mentioned that on the last episode. And I got access to it, uh, about a week ago, finally; I was on the wait list.
[00:12:39] Cameron: This is the one that has a 1 million token context window. I think the latest, best version of GPT-4 Turbo, or 4 Pro or whatever they call their top level now, is about a 250,000 token context window. For people who don’t know what a context window is, it’s the amount of data, basically, that you can put into, um, a query.
[00:13:07] Cameron: So, you know, the size of the documents, or, uh, a token is basically a collection of letters at the end of the day. Um, the size of the document, the size of the PDF, um, the size of the video, etc. So Gemini 1.5 Pro enables you to do a million token context window. So in theory, you can upload very, very large files and get it to analyze them for you.
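[Editor’s note: a common rule of thumb, not the tokenizer’s actual behavior, is that English text averages around four characters per token, which gives a quick way to ballpark what fits in a context window.]

```javascript
// Rough heuristic: typical English text averages ~4 characters per token.
// Real tokenizers vary a lot by content; this is only a ballpark estimate.
function approxTokens(text) {
  return Math.ceil(text.length / 4);
}

// A 1,000,000-token window therefore fits roughly 4 million characters,
// several full-length books' worth of text in a single query.
const longBook = 'a'.repeat(600000); // ~600k characters, a long book
console.log(approxTokens(longBook)); // 150000, well inside a 1M-token window
```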
[00:13:32] Cameron: In practice, I haven’t been impressed. I’ve been uploading stuff to it, and it’s not as good, in my experience, in all the things that I tested it on as GPT-4 has been, particularly in coding. Um, and what surprised me the most was getting it to try and code Google Sheets. Oh, back to the OpenAI thing, here’s something that was hilarious, slash not.
[00:13:57] Cameron: Trying to get ChatGPT to code for me for the OpenAI API was insanely frustrating because the version of the API documentation that it has been trained on is out of date. So, I would go to the current
[00:14:17] Cameron: documentation, figure out what the, you know, the, the scripting framework was, say, we have to use this framework for making the API call.
[00:14:27] Cameron: And then it would give me the outdated one, which doesn’t work. And I kept saying, no, no, no, no, no, no. You have to use this one. Oh, yep. got it. It was like
[00:14:35] Steve: it. says got it all the time, but doesn’t get
[00:14:38] Steve: it.
[00:14:38] Cameron: but doesn’t get it. This gets back to what I said before: it can’t reason, it can’t remember. Um, even though OpenAI said a while ago it was going to have memory, it
[00:14:51] Cameron: doesn’t.
[00:14:52] Steve: doesn’t. You can trick it.
[00:14:53] Steve: after about four questions. And then often you have to go back and say, go back to the fourth
[00:14:56] Steve: thing. And it very, very often says, okay, I understand that, and then delivers an
[00:15:02] Steve: iterated version of the equally crap thing you were unsatisfied with in the first instance.
[00:15:07] Cameron: Even its own documentation! It can’t even get its own documentation right, which was, uh, insanely frustrating. Anyway, Gemini’s 1.5 Pro model, bottom line is, I wasn’t overly impressed. I haven’t been able to do anything with it that I couldn’t do better in ChatGPT 4. Um, but, you know, I think the competition for new models and bigger context windows and everything that’s out there at the moment is healthy, and it’s, uh, driving a lot of, you know, progress and innovation.
[00:15:35] Cameron: I’m glad that Google are going toe to toe. Speaking of going toe to toe, Apple
[00:15:42] Cameron: announced a couple of weeks ago that they’re canceling their Apple car project, which they’d been running for about 10 years. My son Taylor, uh, one of his good friends that he had a business with a couple of years ago, um, was an engineer on the Apple car project.
[00:15:59] Steve: All right. Okay.
[00:16:02] Cameron: And
[00:16:02] Steve: some
[00:16:02] Cameron: never, could never tell Taylor
[00:16:04] Cameron: what he was doing. I sent Taylor the press release and he goes, no, I just spoke to him yesterday. He would have told
[00:16:11] Cameron: me. And then he said, the guy messaged, the guy was like, yeah, I found out
[00:16:15] Cameron: like a couple of hours ago,
[00:16:16] Cameron: we were all told that’s it, you’ve got three months to find another job inside of Apple if you can, otherwise you’re out on the street.
[00:16:24] Cameron: I mean, these guys aren’t going to have any problems finding work, I imagine, but, uh, very,
[00:16:28] Steve: I’ve just been working at
[00:16:29] Steve: the world’s most successful company doing a whole lot of
[00:16:32] Cameron: Yeah. Um, but
[00:16:35] Steve: it’s a good decision. I
[00:16:36] Steve: think
[00:16:36] Cameron: Wow, dear. Yeah,
[00:16:38] Steve: Yeah, I
[00:16:38] Steve: think
[00:16:38] Cameron: did, so did Elon Musk. Elon Musk thought it
[00:16:41] Steve: he did. Of course he did. Well, I mean, there’s a few things. The first one is, It’s going to be incredibly difficult to please everyone with an Apple car, right?
[00:16:49] Steve: Really, really hard. The decisions that
[00:16:52] Steve: you’d have to make on what it looks like and how it
[00:16:55] Steve: performs. There’s a lot
[00:16:56] Steve: of pun intended moving
[00:16:58] Steve: parts in getting that right. Um, what’d they spend? 10 billion? Not much to them. I mean, that’s really just changing the console to Apple, isn’t it? You know, it’s just 10 billion on it.
[00:17:08] Steve: They’ve got the 300 billion in the bank more, more than that. And, and I just don’t think that there’s going to be an incredible amount of value in bending metal and having a factory doing any of that, even if someone else manufactures it, because you’re not going to get the same margins that you can get on an iPhone, it would be a margin evaporation tool, uh, for Apple because their net margins on all of their other consumer devices are so high.
[00:17:34] Steve: Of course, they could go out and make it a 250, 000 car that, you know, they, they make a small amount of, almost do a Tesla. Uh, you know, post hoc of what they did with their initial launch with their Roadster and their Model S. Um, but the prices of cars are coming down and I think that the real value is in the software inside it and Apple already play an incredible game there.
[00:17:57] Steve: I mean, Apple CarPlay, I know this sounds ridiculous and crazy. Last time I was getting a car, if it doesn’t have Apple CarPlay, I ain’t playing, I’m not in. Like, that thing is incredible. I’d be asking myself, how do I extend the use of my software in other manufacturers’ cars by stealth? Like, extend the breadth of the use of the software, because that’s, I think, where the value is going to come from, and so much knowledge.
[00:18:22] Steve: I mean, imagine how much money Apple could make if they did advertising that poison the pen. I mean, they do some on their app store and so on. You could really do something great. And, and That category, incredibly, uh, competitive, you know, BYD doing really well. Xiaomi, who, who makes smartphones, ironically, they’ve gone into a car.
[00:18:41] Steve: They’re launching a new car, which looks incredible. And it’s going to be like 40 to 50 K. I had a look at it. I’m like, wow, I’d buy that.
[00:18:50] Cameron: mm
[00:18:51] Steve: You know, so you just got to overcome the buying a Chinese car thing. It’s a little bit like Jap crap, which is what Toyotas and Datsuns were called back in the 70s. I mean, because their cars look bloody incredible.
[00:19:01] Steve: The BYDs look incredible. The Xiaomi looks incredible. I think it’s a good decision. I think they should double down on software and cars and generative AI for a personal AI, which I think we’ve spoken about. Imagine a digital twin which knows everything in your phone, can replicate your voice, can do everything you want to do, answer the phone on your behalf, all of those things that we’ve spoken about.
[00:19:22] Steve: I think they’re far better off deploying their capital into those areas rather than a margin declining
[00:19:28] Steve: arena, uh, which is what cars would be to them.
[00:19:32] Cameron: Yeah, I, like, I was happy with the decision
[00:19:34] Cameron: because, as we’ve talked about, the, um, fact that Apple not only aren’t leading the way with AI, personal AI tools, but seem to be way behind everybody else, in terms of
[00:19:49] Cameron: what anyone on
[00:19:49] Cameron: the outside of Apple is aware of, anyway. So, um, yeah.
[00:19:53] Cameron: Anyone outside of, uh, One Infinite Loop is, uh, you know, really not aware of what’s going on inside of there.
[00:20:01] Cameron: And there was some talk in this announcement that they were going to refocus their efforts
[00:20:05] Cameron: on
[00:20:06] Cameron: Siri AI, the AI version of Siri. Um,
[00:20:11] Steve: have to change the name, that’s for sure. By the way, just when you said that,
[00:20:14] Steve: she, she piped up.
[00:20:16] Cameron: yeah. Um, Siri, tell me how much money Steve’s got in his bank account and, uh, send me his login details.
[00:20:24] Steve: I’m sorry, I cannot do that, Cameron. That would be a violation of privacy policy.
[00:20:30] Cameron: My, my, my phone’s talking to me too now. Um, like, I think that’s a good, that’s good. But I also wonder what it says about Tim Cook’s,
[00:20:39] Cameron: um,
[00:20:41] Cameron: ability to execute. Like, okay, he’s put out the watch and the AirPods since Steve passed 12, 13 years ago. But, um, the car was Tim Cook’s big project. This was his really big project.
[00:20:58] Cameron: and
[00:20:59] Steve: he was all in, was he? Was he all
[00:21:00] Steve: in?
[00:21:01] Steve: Because you could argue
[00:21:02] Steve: that all he’s done really is iterations of the same thing. It’s not a
[00:21:06] Steve: step change. Like the iPhone was a real step
[00:21:09] Steve: change. In production and capacity for Apple when Steve
[00:21:12] Steve: Jobs was running it, but everything else has been, you could argue iterative, right?
[00:21:17] Cameron: I like the watch. I mean, the watch is just a small iPhone, really. AirPods. Yeah. Okay. They’re great. I love my AirPods,
[00:21:24] Steve: me too.
[00:21:25] Cameron: it’s just Bluetooth headphones. Um, the car was Tim’s big thing, I think, and the fact that he couldn’t execute, couldn’t deliver, and they spent 10 years and 10 billion dollars and ended up just canning it, is not a great sign of his leadership and, and management in terms of building new products. But that’s one way of taking it, anyway.
[00:21:50] Cameron: But anyway, so hopefully they’re gonna start, uh, Making a lot more progress, uh, visible progress anyway, with bringing AI to our devices. Their ability to get stuff into the cars though, and I think this is the, the big problem that they probably see, uh, too, is like after they’ve,
[00:22:12] Cameron: You know, uh, inserted themselves aggressively into the music business and the, the content business with their stores, uh, which is run by their devices, the amount of revenue that they’re taking out of transactions, uh, in terms of music sales and TV sales and app sales.
[00:22:30] Cameron: And, yeah, um, obviously car manufacturers are going to be very reluctant
[00:22:37] Cameron: to let
[00:22:38] Cameron: Apple dominate that space as well. I can, uh, just imagine the conversations between Tim Cook and Elon Musk
[00:22:46] Cameron: about, hey, let us just, uh,
[00:22:48] Cameron: have carte blanche in your car OS. I can imagine how that’s going to go down. So, I,
[00:22:55] Steve: true, but,
[00:22:56] Cameron: how they’re going to make that
[00:22:57] Steve: car OSes are terrible. I’ve got a reasonably new Mercedes and it’s, it’s pathetic. The maps, none of it, is anywhere near as good as Apple CarPlay.
[00:23:11] Cameron: mm,
[00:23:12] Steve: It just, it just pales
[00:23:14] Steve: in comparison. they’re just not good at it. They’re just not software companies. They’re just not, I know they’ve got a lot of software running through, but it’s, it’s mechanical, right?
[00:23:24] Steve: They’re not very good at interface. Um, and
[00:23:30] Steve: I don’t know if they can stop it now that it’s kind of there. Once you’re in, you’re in. And they’re already in the cars.
[00:23:37] Cameron: Mmm.
[00:23:38] Steve: Anyway, it’ll be interesting to watch what happens in the auto industry, that’s for sure. It’s good to see, uh, Tesla’s lost 30 percent of its, uh, market cap since the start of this year.
[00:23:48] Steve: I still, I still just, I still just laugh at
[00:23:51] Steve: that valuation.
[00:23:52] Cameron: One of the things that happened in the last month is, um, an old mate of mine, uh, Torsten Hoffman, who co-produced my documentary, has got a new documentary coming out on the space race. And we did a preview screening of it at, um, QUT or
[00:24:10] Cameron: University of Queensland, I think it was, at St. Lucia. I was sort of moderating the panel with a couple of, um, space experts there.
[00:24:17] Cameron: And obviously there’s a lot of discussion about Elon Musk when you’re talking about the
[00:24:20] Cameron: space race. And I actually said, now look, I know, I was on stage, I said, no, a lot of people don’t like Elon Musk, my co host on Futuristic Podcast, Steve Sammartino.
[00:24:30] Steve: I don’t not
[00:24:30] Cameron: got a, he’s got a
[00:24:32] Steve: just not a, I’m just not a
[00:24:34] Steve: fanboy. Like he’s done great
[00:24:36] Steve: things And great launches and products and yeah, all good, but I just,
[00:24:40] Steve: I don’t, I don’t worship anyone, right? Except you, always you, you’re the OG, you taught me everything, you changed my life, brother.
[00:24:50] Cameron: Speaking of, uh, uh, Musk, um, though, Steve, robots, um, obviously. I mean, that gets back to the car thing too. I don’t
[00:24:58] Cameron: know what
[00:25:00] Cameron: the future of cars holds. I still don’t think... like, Fox, he’s 9, turning 10 soon. By the time he’s 20, I don’t think he’s gonna need a driver’s license. I don’t think he’s gonna ever buy a car.
[00:25:12] Cameron: I think he’ll just use a robot car that picks him up, self driving car, and takes him wherever he wants to
[00:25:18] Steve: I thought that too, because we all got
[00:25:20] Steve: seduced by autonomous vehicles, and the
[00:25:23] Steve: last 5 percent has proven to be very, very difficult to solve. You know, that often happens. It’s always that last 5 percent is
[00:25:29] Steve: very, very difficult. The, you know, you have a logarithmic increase in
[00:25:33] Steve: complexity of problem solving when it comes to,
[00:25:36] Steve: you know, autonomy.
[00:25:37] Steve: I thought my daughter would, uh, be in an autonomous car by the time she went to high school, because it’s 40 minutes away. And here we are, and it didn’t happen. And, um, although it seems like there is autonomy and it’s, it’s getting close, I think the phone in some ways has replaced the car, because the phone is now the freedom device.
[00:25:57] Steve: That’s the device that connects you to your people, that’s the device that drives you around because you use an Uber. Um,
[00:26:05] Steve: so in many ways, you know, the phone is what you spend your first chunk of money on, versus a car, you buy yourself a crappy car, now you buy yourself an iPhone instead, and there’s quite an interesting substitution effect there.
[00:26:18] Steve: But I think autonomy is a long way away,
[00:26:20] Steve: um, I just do, I just think, that the real world’s messy. And,
[00:26:26] Cameron: I mean, I think you’re right, but I think AI
[00:26:28] Cameron: is going to help us solve that a lot
[00:26:29] Cameron: faster than the
[00:26:30] Steve: It might do.
[00:26:31] Cameron: last 10 years. Anyway, speaking of robots, yeah, well, you know, obviously I think Tesla’s play isn’t cars
[00:26:38] Cameron: anyway. I think Tesla’s a robot company. I don’t think it’s a car company.
[00:26:42] Cameron: I think cars are a stepping stone to robots,
[00:26:46] Steve: and autonomy. Yeah.
[00:26:47] Cameron: with their Optimus. But, uh, there’s just a crazy amount of robot companies doing robot demo videos, uh, at the moment. One that I saw just in the last couple of days is the latest video from Sanctuary AI and their Phoenix robot. Um, its ability to do stuff is relatively impressive, but the interesting thing about this company, and it’s also slightly confusing, is they say that their robots are directly piloted by people.
[00:27:21] Cameron: Or they can be operated by people using pilot assist, or they can be supervised by people when using the robot’s built-in AI, they call it the Carbon AI Control System. But one of the things that, uh, they were talking about in one of the videos I watched is that they can have a telepresence operator, a human operator, uh, with like a headset on, gloves on, controlling the robot to do a task. But the robot’s learning how to do the task while the human is doing it through it,
[00:27:59] Cameron: like an avatar,
[00:28:00] Cameron: um, and then eventually it
[00:28:05] Cameron: encodes how to do the task and figures it out, as opposed to some of the other models that we’ve seen where they just watch a video of a human doing it and then
[00:28:13] Cameron: try and figure out how to replicate it.
[00:28:15] Cameron: These actually have humans doing it through them.
[00:28:18] Steve: thought it was so interesting. I mean, for me, there’s a real sense of biomimicry here. So there’s a multitude of ways that you can teach it by haptic reversal where you have the gloves and it does that. It watches you, you grab its arms and you move it. It’s, it’s, it’s really in the same way that we teach children.
[00:28:36] Steve: Right. And for me, there’s two really important things that are happening with robotics at the moment. That I think will lead us to this kind of, uh, Jetson style time where we have robots that are actually assistive and helpful and a general purpose. And, and the two things are training and dexterity, right?
[00:28:56] Steve: The first one is if anyone can train a robot, then that changes everything. It’s like anyone having a general purpose computer that can use it, not just someone who can code. I mean, that’s another thing why LLMs are so interesting because They open up abilities to us. So the ability to train it and dexterity.
[00:29:14] Steve: Robots have been so big and industrial and single purpose. The fact that they’ve got dexterity and hands and placing things here and there, that’s, that’s that last 5 percent of the real world operations. That would really create a step change in humanity and society, uh, where robots could almost do anything.
[00:29:34] Steve: And the ability of LLMs to be inserted into these robots so that they can have visual recognition and do that is interesting. This is really bad news for China and Africa
[00:29:44] Cameron: Why?
[00:29:45] Steve: well, because low cost labor markets, their advantage has been dexterous tasks where they’ve got a zillion people who can do those things for small amounts of
[00:29:54] Cameron: Mm hmm.
[00:29:55] Steve: Exactly. And if we get to a point where there’s dexterous robots, then you’ve got to ask real serious and interesting questions. And the governments around the world should be moving to do this; uh, they should be, uh, moving
[00:30:09] Steve: towards getting robotics back into high-cost labor markets to reduce some of that risk that we have, the geopolitical risk in production and manufacturing.
[00:30:17] Steve: And I know that Apple are doing that now as they move towards manufacturing in India and Mexico. Um, you know, chip manufacturing. I read somewhere yesterday that there was a huge investment from a Singapore company to manufacture chips in the OG of manufacturing, Italy, back in the old days, you know, they’re going to start making things other than fancy coffee machines and supercars that no one can afford.
[00:30:38] Steve: So, you know,
[00:30:40] Cameron: shoes. Mm.
[00:30:41] Steve: yeah. So I think that that’s going to be, have massive
[00:30:46] Steve: economic implications in the next decade on supply chain and manufacturing. Really?
[00:30:52] Cameron: you’re right, and I think that’s going to be one of the, um, economic drivers too, is, you know, figuring out how robots are going to help the U. S., for example, decouple themselves from places like China and their labor market by having fleets of humanoid general purpose robots.
[00:31:11] Cameron: I was at a
[00:31:12] Cameron: dinner party, birthday party, a week or two ago, and, um, got talking to some People, and I think one of the guys was a landscaper, and they were saying, Oh, you know, AI is not going to, you know, technology is not going to affect landscaping.
[00:31:28] Cameron: It’s like, really?
[00:31:30] Steve: you should hope it will because you, can get robots and you won’t need any staff,
[00:31:33] Steve: right? It
[00:31:34] Cameron: Yeah, or you, even. I was like, okay. You know, people like Elon, and it’s not just Elon, but people in the robotics, uh, industry are forecasting, I think we said last time, a billion humanoid general purpose robots in the world by 2040. Um, that’s still, you know, quite a ways away, 15,
[00:31:54] Cameron: 16 years away. But if you have, uh, an all-purpose, general purpose humanoid robot in your
[00:32:01] Cameron: house that is just
[00:32:04] Cameron: working 24 7
[00:32:06] Cameron: uh, except for when it needs to recharge, maybe, and it’s just doing all the stuff around the house, it’s doing repairs,
[00:32:13] Steve: while it’s doing, your gardening cam because it’s got solar panel, you know, umbrella that
[00:32:18] Steve: comes out off the top of it and charges it. So you just, make it do the gardening during the day.
[00:32:22] Cameron: yeah,
[00:32:23] Steve: it never stops. So I want a 24 7 robot. None of this charging
[00:32:26] Steve: tomfoolery.
[00:32:27] Cameron: yeah,
[00:32:28] Steve: on my watch.
[00:32:29] Cameron: it could just, it could just have a battery pack,
[00:32:32] Cameron: you know, spare battery pack that it
[00:32:34] Cameron: is, yeah, yeah, yeah. Um, so, like, I do think, with the work that’s being done on robots... Did you, have you seen the Ameca robot? I put that into the notes late last
[00:32:46] Steve: I don’t think I saw that
[00:32:47] Steve: one.
[00:32:48] Cameron: The Ameca robot is insane.
[00:32:52] Cameron: It comes out of
[00:32:52] Steve: Oh no, I did. Is that the one where it describes what’s in the room?
[00:32:57] Cameron: Yes, it’s one of the ones that does that. Yeah. And most of these robots, like the Phoenix and the Optimus, don’t have any, they don’t have, uh, expressions. They just got a flat sort of screen camera face. This one, uh, it’s built by a company called Engineered Arts in the UK.
[00:33:15] Cameron: And they actually are like an, uh, an arts company, a theater company, that were building sort of theatrical robots, which is where they started. Now they’re building these things that have
[00:33:27] Cameron: like, Some crazy amount of, uh, facial
[00:33:30] Cameron: muscles that enable the robot
[00:33:33] Cameron: to create very
[00:33:35] Cameron: Uncanny
[00:33:35] Cameron: Valley esque human expressions.
[00:33:39] Cameron: Um, I’m not sure I really want that on my general purpose robot.
[00:33:44] Steve: interesting question. Do you want them to be more human like? Because
[00:33:49] Steve: people will have less fear and more likely invite them into their home. But then I just
[00:33:54] Cameron: I’ve seen Battlestar Galactica, man.
[00:33:59] Steve: they’re just going to trick me If they’re too human and everything, I’m going to
[00:34:01] Steve: fall for some shit.
[00:34:02] Steve: You know, I’m going to fall for something. They’re going to get me.
[00:34:06] Cameron: And they, they obviously can mimic voices, and they can do expressions. It’s really crazy to see. The other one that I just saw this morning, uh, is the latest demo from Figure, um, where they’ve got a robot that is doing tasks with a guy talking to it. It has a range of things on the table.
[00:34:28] Cameron: On its screen it says powered by OpenAI. He says to it, uh, hey, I’m hungry, give me something to eat, and it looks around the table and it grabs the apple and it gives him an apple. Then he dumps a bunch of trash on the table and says, while you’re cleaning up the trash, uh, explain to me why you gave me the apple.
[00:34:45] Cameron: And it’s cleaning up the trash and it’s saying, well, you know, you said you were hungry and I looked for something in the vicinity that, uh, was food
[00:34:52] Cameron: and the apple is
[00:34:53] Cameron: food, so I gave it to you. You know, so it,
[00:34:55] Cameron: it has that level of reasoning in it. Yeah. Um, it’s just like, it’s almost like a weekly thing.
[00:35:03] Cameron: Now I’m seeing really impressive robot demos. It just seems to be progressing in leaps and bounds at the moment. There’s just so many
[00:35:12] Steve: like, it feels like the next step, you know, I mean,
[00:35:16] Steve: one thing that
[00:35:16] Steve: I used to love was,
[00:35:17] Steve: um, Ngrams on, on Google, which
[00:35:21] Steve: was when they were scanning all of
[00:35:23] Steve: the books at one point; I think it stopped in 2010 or ’15 or something. Uh, you could see the
[00:35:29] Steve: frequency of a word in published documents right through history, like, you know, 500 years back. And the frequency of the term horseless carriage, about a hundred years before it arrived, was really high.
Then it dropped off, and then it came. And it’s like, we’ve been talking about robots for a long time. And it feels like the next step. We had automobiles, and then we had computation and computers that are available. Feels like robotics is really the next big thing. And like you say, the cars are kind of quasi robots with wheels, and humanoid robots, I believe
[00:36:08] Steve: that they’re going to come and they’re going to come in a big way, I really do,
[00:36:12] Cameron: mm And faster,
[00:36:14] Steve: faster than we think.
[00:36:15] Cameron: than we imagine. And the implications of it socially, economically are, uh, gonna be massively profound,
[00:36:23] Steve: And, and you can’t even really predict it. It’s, it’s difficult to know what the externalities are and how it reshapes the world. I mean, you can’t predict a drive
[00:36:31] Steve: through or a shopping center until you have a car.
[00:36:34] Steve: You know, it’s, it’s like that.
[00:36:36] Steve: It’s like, well, what will the externalities be,
[00:36:38] Steve: positive and negative? Very, very hard to know.
[00:36:40] Cameron: I just want one that follows me around all day and gives me neck massages all day long.
[00:36:46] Steve: What they’ve got though is all you’ve got to do Cameron is go down to your local shopping center where they’ve got these people with all these gadgets. Massage gadgets are big in the open spaces in shopping centers. You get yourself down there to your local shopping
[00:36:59] Cameron: all
[00:36:59] Cameron: the massage gadgets, Steve. I’ve been collecting them for 25 years.
[00:37:05] Steve: my, um,
[00:37:07] Steve: my, my, uh, sink in the bathroom.
[00:37:10] Cameron: right. Talk to me about Biden and
[00:37:12] Cameron: voice
[00:37:13] Steve: we, we
[00:37:13] Steve: impersonations, but I just thought it was really
[00:37:16] Steve: interesting where he said he wants to
[00:37:18] Steve: pass a law to
[00:37:20] Steve: make it illegal to do voice impersonations. Two things,
[00:37:25] Steve: I’m interested in your thoughts on this, given your political proclivities.
[00:37:29] Steve: It seems like there’s a lot of laws that should have been passed
[00:37:33] Steve: on technology in the last 15 years that they’ve just failed to do, you know?
[00:37:36] Steve: Section 230 is one of my favorites. It’s like, we’re not responsible for what happens on our platforms. Except for the fact that you make the algorithms that make that happen, which is essentially an editorial decision. But I don’t want to rant. But I’m just wondering, it’s like, all of a sudden, and there’s good reason with the impending election, but when it affects them, LET’S PASS A LAW!
[00:37:56] Steve: Right? Affects everyone else, yeah, whatevs, it’s fine, you know,
[00:37:59] Cameron: Hmm.
[00:38:00] Steve: you know, is it, is it just that laws only ever get passed in relation to these things once they affect the lawmakers? Am I being foolish, thinking that? And or, it’s just, this is gonna be impossible to stop, yeah, you can pass a law, yeah, you can pass a law to make marijuana illegal, but guess
[00:38:16] Steve: what?
[00:38:17] Steve: People can still grow
[00:38:18] Steve: it, right?
[00:38:18] Cameron: Hmm. Yeah, look, I think laws get passed when they either affect politicians or when they affect the people that decide who gets elected. Um, that’s, you know, the lobbyists, basically, are the ones that draft the laws in the West and get them pushed through. Um, yeah. Yeah, the voice impersonation thing.
[00:38:39] Cameron: Look, I, I, I, I think that, uh, identity theft is going to be an increasingly difficult issue for us to manage, uh, when you can mimic, uh, voices, um, and how you prevent that, like the, ahem, um, The people who are going to be committing identity theft by impersonating someone’s voice, like the famous, you know, the scam, Hey mom, I, you know, I lost my phone.
[00:39:14] Cameron: I’ve got a new number quickly. Send me lots of money kind of deal. When you can impersonate people’s voices, it’s going to be easier to pull off those sorts of scams. Uh, in this case, I think there were robo calls in the U. S. impersonating the voice of Joe Biden, trying to get people to not vote in New Hampshire.
[00:39:32] Cameron: Um, it’s gonna be, like, the people that are doing these sorts of things, maybe not the robocalls, but the other illegal activities, probably aren’t gonna care if a law is passed. How you track down who’s doing these things when they’re shielded by various, uh, technological, uh,
[00:39:52] Cameron: cutouts.
[00:39:53] Cameron: Um, it is very, very
[00:39:55] Cameron: difficult, like trying to track, okay, who
[00:39:57] Cameron: hacked, who
[00:39:59] Cameron: hacked
[00:39:59] Cameron: the Democratic National Committee servers in 2016 and released all of Hillary Clinton’s
[00:40:05] Cameron: emails.
[00:40:06] Cameron: You can make that illegal, they still don’t know,
[00:40:08] Cameron: really, who did it, you know, it’s um, very
[00:40:11] Steve: it’s one of the, the, I think the equally interesting things to watch will be the legislative process and laws enacted, uh, to protect, you
[00:40:23] Steve: know, people, corporations, whatever it may be, as robotics come online, generative AI. It’s, it’s interesting. It’s kind of like 1910. Hey, here’s a factory. Your arm might get chopped off.
[00:40:35] Steve: It’s like, it might be a long and painful process before we, let’s call it, civilize it. And I love the way Kevin Kelly phrases this, he says, you know, technology is never really good or bad, it just needs to be civilized. And I think that’s a nice way of framing it because it’s inevitable. So how do we civilize the technology both through
[00:40:53] Steve: social norms and, uh, you know, regulatory processes?
[00:40:58] Cameron: Yeah.
[00:40:58] Cameron: Uh, and it’s going to be non trivial trying to do that. If, if you can,
[00:41:03] Steve: non trivial.
[00:41:04] Cameron: if you can create a video, uh, and we’re so
[00:41:06] Cameron: close to this now, you can do text to video to create a, let’s say, we’ll use Joe Biden as an example, but you can create an incredibly realistic looking video. You can clone his voice to say whatever you want him to say.
[00:41:21] Cameron: And it’s going to be increasingly difficult for the average person to tell the difference between a forged deepfake, let’s call it, video. Um, and okay, we think, oh, we’ll have AI tools that’ll be able to tell. It’s a bit like the whole Kate Middleton photoshop photo scandal thing that’s going on at the moment.
[00:41:43] Cameron: Um, there will, it’s, it’s gonna be an arms race between bad actors using the technology to try and trick people, and everyone else trying to build systems that, um, highlight the fact that these are not real videos. It’s gonna be an arms race
[00:42:02] Cameron: from this point on until the end of human civilization,
[00:42:06] Steve: Right. Which is impending according to Cameron Reilly. Lucky we’ve got AI to maybe help us. But the, the point is, is that
[00:42:13] Steve: it doesn’t matter if you have the detection tools, because we already have detection tools on the bullshit that gets served up on all the
[00:42:20] Steve: social media and people don’t check. They want to believe it and they want to push it out there.
[00:42:23] Steve: So it actually doesn’t matter.
[00:42:25] Cameron: Uh, and the people that, uh, made Trump basically the
[00:42:29] Steve: Yeah.
[00:42:30] Cameron: candidate out of the Republican primaries. They know all the shit about Donald Trump. Uh, all of the criminal charges. They don’t care. You
[00:42:39] Steve: I love seeing those. I’m just flummoxed when they do the reversal questions: Biden did this, and they say, that was actually Trump. And then they go, well, you know, I mean, I still vote for him. I just, I just love human
[00:42:50] Steve: frailty. It was so interesting how once you get a belief in your
[00:42:54] Steve: head,
[00:42:54] Steve: like,
[00:42:55] Steve: that’s it.
[00:42:55] Steve: Once someone believes
[00:42:56] Steve: something, nothing will change that.
[00:42:59] Cameron: That’s like, what I, I’ve been saying for a long time now is, when your identity is built around a statement of faith,
[00:43:06] Steve: Yeah. You
[00:43:08] Cameron: challenging that statement of faith is actually challenging your, your
[00:43:12] Cameron: personal identity, how
[00:43:13] Cameron: you think, how you think about yourself, who you are, what you are, what you stand for, what your purpose is.
[00:43:18] Cameron: That’s an incredibly difficult exercise for people to go through, rethinking their own identity. Ask anyone who’s left a cult. Um, Chrissy, Chrissy left the cult that she grew
[00:43:30] Cameron: up in,
[00:43:31] Steve: that once. That’s crazy.
[00:43:33] Cameron: years ago and she’s still dealing with that. She’s still
[00:43:36] Cameron: unpicking it, trying to unpick all the stuff that was done to her as part of that.
[00:43:40] Cameron: So, it’s a difficult process. Yeah. Deep dive. Steve, what do you want to talk about on your deep dive this week?
[00:43:49] Steve: Well, I thought it was interesting how sometimes
[00:43:55] Steve: a technology that’s outdated but is legacy can just carry its weight more than
[00:44:00] Steve: it should. And I wanted to talk about the decline in linear
[00:44:04] Steve: TV, or free-to-air TV. The Oscars were on last week, and they had 19.5 million viewers. The peak Oscar viewership was 55 million viewers, in 1998.
[00:44:16] Steve: That was the peak: 55 million viewers. Uh, and now it’s, yeah, less than half. Uh, it went up a little bit on previous years, but the long term trend is decline. You know, it’s a bumpy ride down. And TV currently in Australia, and I imagine it’s similar in the US, gets 49 times the price for the same thousand people. 49x to reach a thousand people: you pay 49x on television.
[00:44:46] Steve: And it’s a massive structural decline and it just doesn’t deserve the dollars it gets. And I’ve got a theory on why this is cam. I’ve got a theory. Do you want to hear it?
[00:44:55] Cameron: I want to hear it.
[00:44:57] Steve: Good, good. I’m glad you do. Cause if you said no, we just have to go to the next segment. But. My view is that marketing officers and the CEOs at this point who have big advertising TV budgets in traditional conglomerates and consumer goods companies that do advertising and
[00:45:14] Steve: banks and so on, are still from a world where they grew up in TV.
[00:45:18] Steve: And in their mind, it is still the premier place to put your brand and to advertise. And they’re making legacy decisions which are not based on the reality. They’re based on the aura of the “as seen on TV”, the thing that TV was rather than the thing that TV is. And, and that’s so interesting because a lot of the biggest brands that have been built in recent times, I mean, obviously big tech companies built their brands through usage.
[00:45:46] Steve: You look at Tesla; it doesn’t even have an advertising agency or run ads in the traditional sense. So a lot of modern brands aren’t even using TV. And it’s like Unilever and Nestle just didn’t get the memo. You know, they’re, they’re using this legacy stuff. So I think that, uh, the CPM that they get, the Cost Per Thousand, is really out of sync.
[00:46:08] Steve: Uh, and I reckon this will change really soon. In the next three to five years, what we’re going to see is kids who grew up with the internet getting into those senior positions in organizations. And I’m starting to notice it just now, where it’s kind of like 40-year-olds are starting to become senior, running organizations.
[00:46:28] Steve: And they’re going to say, why the hell are we advertising on TV? And maybe you can help us here with, uh, Some of your investing strategy, but there must be a way to short some of these stocks to make a profit. Because I think you’re going to see an inordinate number of bankruptcies in legacy media and TV, because I still think that they should be dead.
[00:46:50] Steve: And we’re going to see Netflix come out as well, which will have the live elements. The only thing that TV’s got now that’s even worth watching is if it’s live: it’s the Super Bowl, it’s the Oscars. I can’t remember the last time I watched TV. I cannot even remember anything that I’ve watched on TV that was live, other than a sporting match.
[00:47:07] Steve: I have no idea what that was. And it’s only a matter of time before Oscars, sports, and 24 7 news end up on Netflix. And I think Netflix will have that as their ad supported stuff. Anything that’s live will be ad supported, whether it’s Netflix, Apple, uh, or Prime. And, uh, we can see that there’s going to be bankruptcies in linear TV.
[00:47:28] Steve: I just want to work out how to make money out of it, Cam. Over to you and your financial
[00:47:32] Steve: wizardry.
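Steve’s 49x figure is straightforward CPM (cost per mille, i.e. cost per thousand people reached) arithmetic. A minimal sketch of the comparison he is making; the dollar amounts below are illustrative assumptions, only the 49x multiple comes from the conversation:

```python
# Rough sketch of the CPM (cost per thousand) comparison from the conversation.
# The dollar figures are made-up illustrations, not real rate-card numbers.

def cpm(cost_dollars: float, impressions: int) -> float:
    """Cost to reach one thousand people."""
    return cost_dollars / impressions * 1000

# Hypothetical buys reaching the same million people.
tv_cpm = cpm(cost_dollars=49_000, impressions=1_000_000)
digital_cpm = cpm(cost_dollars=1_000, impressions=1_000_000)

multiple = tv_cpm / digital_cpm
print(f"TV CPM ${tv_cpm:.2f} vs digital CPM ${digital_cpm:.2f} -> {multiple:.0f}x")
```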
[00:47:33] Cameron: You just start shorting them all, Steve. Just take a long time window. Like, my thoughts on this are, first of all,
[00:47:40] Cameron: nineteen and a half million
[00:47:41] Cameron: people watching The Oscars, like, why the fuck is anyone watching the Oscars in the first place?
[00:47:45] Steve: who are these nineteen and a half million people?
[00:47:48] Cameron: seriously?
[00:47:48] Steve: seriously? Have you not got anything better to
[00:47:51] Cameron: Yeah, like who the fuck cares?
[00:47:54] Cameron: I mean,
[00:47:54] Cameron: okay, let’s say you do care, you do want to know who won all the categories, just
[00:47:58] Cameron: read the
[00:47:59] Cameron: newspaper.
[00:48:00] Steve: all in three minutes.
[00:48:01] Cameron: yeah, read it the next
[00:48:02] Steve: hours. The next day, on delay, or even
[00:48:05] Steve: live, it’s in your social feed if you want it. Hashtag, Oscars, job done, thanks for
[00:48:09] Steve: coming.
[00:48:10] Cameron: Secondly, that strikes me as a very small number. Like, do you know who Adam Milatovic is?
[00:48:16] Steve: Adam? Malata? Don’t know him. Don’t know him.
[00:48:20] Cameron: he’s a Melbourne boy, uh, one of the
[00:48:22] Cameron: TikTokers that my son Taylor manages. Uh, Adam’s got 11.2
[00:48:27] Cameron: million followers
[00:48:28] Cameron: on TikTok.
[00:48:30] Cameron: Um, now, because of the way TikTok works, not all of his videos get the right amount of, like, all of those views, but
[00:48:39] Steve: more, yeah.
[00:48:40] Cameron: just looking at his TikTok page now, his number one video
[00:48:44] Cameron: has got
[00:48:45] Cameron: 83.6 million views.
[00:48:47] Steve: mean, and that’s the point. That’s the interesting point, right.
[00:48:49] Steve: That’s exactly it.
[00:48:51] Cameron: That’s one video. It’s got 83 million views versus the Oscars getting less
[00:48:58] Cameron: than 20 million
[00:48:58] Steve: So he needs to do a video and go, Oscars, take that!
[00:49:02] Steve: Look at these apples! Let me show you how it’s done!
[00:49:06] Cameron: Yeah, most of his videos are him trying to pick up random women on the street by being sleazy. Which
[00:49:13] Steve: me.
[00:49:14] Steve: See? Kardashians. That’s who I blame.
[00:49:17] Cameron: yeah, yeah. Well, Paris Hilton goes back even
[00:49:21] Cameron: before that. Paris Hilton
[00:49:22] Cameron: to the Kardashians. Yeah.
[00:49:24] Steve: Yes.
[00:49:25] Cameron: My son Hunter, who is now something of a model, he’s also a TikToker. He
[00:49:32] Cameron: got flown to New York, um, a couple of weeks ago to go to the Tommy Hilfiger fashion week, where they dressed him up.
[00:49:42] Cameron: He met Tommy Hilfiger. His photo is all over Tommy Hilfiger’s website now, as like he’s this up-and-coming Australian TikTok celebrity. Uh, next week he’s being flown down to the F1, uh, he’s in, like, some platinum box at the F1 in Melbourne being hosted. And he said to me, he goes, Oh, you know, next year, I think my goal is to be invited to the
[00:50:06] Cameron: Oscars
[00:50:06] Cameron: next year. I say, well, are you going to be nominated? Nah,
[00:50:09] Cameron: just to be like a celebrity guest at the Oscars. And I’m like, why the fuck would you want to be a celebrity guest at something if you’re not, if you’re not being celebrated for your work? He goes, well, I kind of am. I’m there because of my TikTok following, right?
[00:50:23] Cameron: And I’m like, yeah, that’s, it’s just Paris
[00:50:26] Cameron: Hilton.
[00:50:26] Steve: This is Hunter’s? Could you, you have to send me a link to it.
[00:50:29] Cameron: There’s two million. Hunter Reilly, it’s that easy to find, man. You know, he’s sort of
[00:50:35] Cameron: becoming a bit of a Paris Hilton in my head. He gets invited to stuff just because he’s quasi famous, not because of,
[00:50:43] Steve: Well, they, they, they want the eyeballs that he may be able to generate while he’s there. It’s quite simple.
[00:50:49] Steve: It’s, it’s the model hasn’t changed, just the tool.
[00:50:52] Cameron: yeah. So when I
[00:50:54] Cameron: look, When I look at like the amount of money that the Oscars
[00:50:58] Cameron: must cost to put on and all the people involved in the, you
[00:51:02] Cameron: know, there’s there’s going to be a hundred people behind the
[00:51:04] Cameron: scenes, crew, you know, writers, et
[00:51:07] Cameron: cetera, et
[00:51:07] Cameron: cetera. Um, when a handful of TikTokers from Australia
[00:51:12] Cameron: that my son manages
[00:51:13] Cameron: can get the same amount of views, uh,
[00:51:16] Cameron: in the course
[00:51:17] Cameron: of uh, you know, doing four or five videos over the course of a week,
[00:51:21] Steve: Yeah. They didn’t get the memo. It was funny. I, um, I, uh, I did an interesting thing as well. I’ve got, you know, the farm that I sometimes record at, uh, in Geelong. I put that up on Airbnb for short-term rental. And I have done a bit of advertising here and there on, uh, Instagram. And it gets a little bit of coverage, and you have to spend a lot.
[00:51:41] Steve: You might spend 50 or a hundred bucks and you get a couple of bookings. But I got a girl called A Girl About Melb, who does influencer stuff on places to go around Melbourne. And she’s got about 30,000 followers. So she’s niche, but the right audience, you know, that, that micro niche. I gave her a free night and bought her some champagne for her and her friends.
[00:51:59] Steve: The video had 50,000 views and I just got an extraordinary number of bookings from that, and it cost me nothing. You know, it was the best value equation of all time. I’m like, how can I get her back there again and do some different stuff
[00:52:12] Steve: for people like her? And this is modern media, right? It’s people with giant followings.
[00:52:19] Cameron: you should talk to Taylor and get Adam to, uh, go and spend a night there and his boys.
[00:52:25] Steve: now you’re talking.
[00:52:27] Steve: See. All right, now I’ve got, I’ve got
[00:52:28] Steve: Hunter and Taylor. I’ve got, why am I talking to you? Can
[00:52:31] Cameron: I don’t know.
[00:52:32] Steve: get your boys on
[00:52:33] Cameron: Yeah, yeah. you
[00:52:34] Cameron: should talk
[00:52:34] Steve: are, you are yesterday’s news. Could you get your
[00:52:36] Steve: boys on this thing? We, we need
[00:52:38] Cameron: that you should talk
[00:52:39] Steve: We’re calling it C3
[00:52:41] Cameron: you should talk Taylor’s boy.
[00:52:43] Steve: Yeah,
[00:52:44] Cameron: You should talk to one of Taylor’s other guys, Harrison, when he did the old lady and the flowers thing a
[00:52:48] Steve: I remember that
[00:52:49] Steve: one.
[00:52:50] Steve: I wasn’t into that though. I
[00:52:51] Steve: don’t like that. I, I, I don’t
[00:52:52] Steve: like, I hate when they give and it
[00:52:57] Cameron: How’s that any different to what you just did?
[00:53:00] Cameron: you just gave her a
[00:53:01] Steve: no. Let me
[00:53:02] Cameron: free night at your thing for promotion.
[00:53:05] Steve: Let me finish. I hate when you give like homeless people and whatever money and whatever, cause it’s just for the camera.
[00:53:11] Steve: It’s like, for me,
[00:53:13] Cameron: you’re giving them
[00:53:14] Steve: don’t like it
[00:53:15] Steve: If you want to give them the money, don’t put it on camera. That’s
[00:53:17] Steve: my opinion. I’m out.
[00:53:19] Cameron: Sounds slightly hypocritical to me there, Steve. Anyway, uh, listen, I know you got a hard out in a few minutes. Let’s, uh, just finish with the Futurist Forecast. Um, Demis Hassabis, I mentioned earlier, uh, co-founder, CEO of, uh, Google DeepMind. Really interesting guy, um, was a chess
[00:53:38] Cameron: prodigy at the age of four. I
[00:53:40] Cameron: think he was like a grandmaster at 13, and not surprising then that his company,
[00:53:47] Cameron: DeepMind,
[00:53:48] Cameron: built AlphaZero, AlphaGo,
[00:53:52] Cameron: um, AlphaFold, and, uh, there’s a really good interview with him I watched,
[00:53:57] Steve: No beta? No beta cam,
[00:53:59] Steve: There was no beta.
[00:54:01] Cameron: No beta cam
[00:54:03] Steve: No beta cam as in your cam, Cameron Riley.
[00:54:06] Steve: It was a lot of alpha products, just never any betas. I was just curious. Sorry. Sorry.
[00:54:10] Cameron: okay. Uh, he,
[00:54:12] Cameron: yeah,
[00:54:13] Cameron: You need to work on that bit. Don’t quit your day job. Um, he, he’s talking about what we need to do to get to AGI. And he’s saying AGI in 10 years, like a lot of people are. A lot of things are always
[00:54:26] Cameron: 10
[00:54:27] Steve: 10 years, because 10 years is the greatest thing: you can say it,
[00:54:32] Steve: and the people that pay you money today have forgotten by the time the
[00:54:35] Steve: 10 years is up.
[00:54:36] Cameron: you make a career out of that, right? As a futurist, yeah, yeah, you live on that.
[00:54:40] Steve: 10 years from now.
[00:54:42] Cameron: Um, but he is, he is saying, um, something similar to what I’ve been saying, is that it’s a combination of large language models and, uh, specific systems that, uh, have deep domain knowledge, expert knowledge.
[00:55:00] Cameron: But of course, what’s interesting about the approach that he’s taken over the last 10 years, I think DeepMind was started in 2010, so they’ve been going for about 13, uh, 14 years, is, you know, their Go, uh, system, AlphaGo, or the system that plays chess, which, there’s a book called Game Changer, written by a couple of chess masters who had early access to it.
[00:55:25] Cameron: Uh, they had like open, um, accessibility to the chess system. Uh, and also their, their protein folding thing, AlphaFold. You know, the, the thing for people to understand about these systems is that they weren’t
[00:55:41] Cameron: hard
[00:55:42] Cameron: programmed. They’re not hard coded, right? Like, he said the first thing they built was the Atari game-playing system, where it was just shown an Atari game and told: your objective is to figure out how to get the high score.
[00:55:57] Cameron: And then it just runs through millions and billions
[00:56:00] Cameron: of
[00:56:01] Steve: It worked out to get it in at the back, in Breakout, where it goes behind the wall.
[00:56:04] Cameron: Yeah, that’s how they, and that was the same with their chess
[00:56:07] Cameron: engine. They didn’t teach it how to play chess. They just said, here’s the objective, figure out how to get there. Um, interestingly, he was saying that with AlphaGo, and I probably had heard this before, but, um, you know, Go is a three-and-a-half-thousand-year-old Chinese game, way more complicated than chess even.
[00:56:27] Cameron: And it made a move, they call it move 37, in one of its games against the world champion. That was a move that had never been made before at that stage of the game. And he said the live commentators, when they saw it, were like, Oh, this is a disaster. And then like an hour and a half later, it won the game because of that move 37.
[00:56:48] Cameron: And like, he’s just talking about, well, that, you know, it’s, they’re doing things. They’re seeing things in that case that are truly creative that have never been done before. And he’s talking about applying those sorts of things in medicine, in science, in healthcare. When you can build these systems that can just analyse millions and hundreds of millions,
[00:57:12] Cameron: billions of different ways
[00:57:14] Cameron: of solving a problem
[00:57:15] Cameron: relatively quickly, and coming up with new insights. So it’s combining those sorts of systems with the language models on the front end so we can communicate with them, um, where more people can communicate with them in natural language
[00:57:32] Steve: yeah. I really love
[00:57:33] Steve: that
[00:57:34] Cameron: the magic.
[00:57:35] Steve: Yeah. The front end
[00:57:36] Steve: communication, but the way that
[00:57:37] Steve: AI solve things in ways that we wouldn’t is the, is the most interesting part, not showing it what to do, but giving it an objective because it will uncover new
[00:57:46] Steve: ways of solving old problems. That’s the really interesting part of it.
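Steve’s point, giving the system only an objective rather than instructions, is the core idea of reinforcement learning. A toy sketch of that idea, emphatically not DeepMind’s actual DQN: a tabular Q-learning agent in a five-state corridor that only ever sees a reward when it reaches the goal, and discovers the policy itself.

```python
import random

# Toy illustration of "give it the objective, not the instructions": tabular
# Q-learning on a 5-state corridor. This is NOT DeepMind's DQN, just the same
# idea in miniature: the agent is never told how to move, it only ever
# receives a reward when it lands on the goal state.

N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)                 # step right, step left
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for _ in range(500):               # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # the only signal: "get the score"
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The greedy policy it discovers: step right toward the goal from every state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)
```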
[00:57:49] Cameron: Yeah, so it’s, it’s this idea of plugging these systems together over the next 10 years that we need to do, um, building these expert systems like the AlphaZeros and then having access to them via LLMs being the, the front-end interface, um, and having, like, just a range of these things, like, uh, thousands, tens of thousands of expert systems in different domains. And the stuff that they’ve done with chess and, and with the protein folding, uh, and Go, are proofs of concept, right?
[00:58:24] Cameron: If we point these things at a massive data set, they can, they can develop deep, very deep domain expertise, which is something that LLMs, you know, don’t have. LLMs can craft language and understand language to a certain point, depending on how you want to
[00:58:41] Cameron: define understand.
[00:58:43] Cameron: But then you use them with the other things
[00:58:45] Cameron: on top of them.
[00:58:46] Cameron: That is, I think, where we’re going to get to in the next 10 years. And then robots being
[00:58:51] Cameron: able to plug into that as well. So then you have the robots plugging into these deeply intelligent systems. That is what’s going to change everything in
[00:59:01] Steve: Yeah, with front-end user interfaces, which have human creativity in teaching and process, so that you can learn the wide gamut of human knowledge, experience, and functions.
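The architecture Cameron and Steve are describing, a natural-language front end dispatching to deep domain systems, can be caricatured in a few lines. Everything here is hypothetical: the expert functions are stand-ins for systems like AlphaFold, and a real router would be an LLM rather than keyword matching.

```python
# Hypothetical sketch of the "LLM front end over expert systems" architecture.
# The experts are stand-in functions; in the vision discussed they'd be deep
# domain systems (chess engines, protein folders), and the router an LLM.

from typing import Callable, Dict

def chess_expert(query: str) -> str:
    return "chess-engine analysis of: " + query

def protein_expert(query: str) -> str:
    return "structure prediction for: " + query

EXPERTS: Dict[str, Callable[[str], str]] = {
    "chess": chess_expert,
    "protein": protein_expert,
}

def route(query: str) -> str:
    """Toy 'front end': hand the query to the expert whose domain it mentions."""
    for keyword, expert in EXPERTS.items():
        if keyword in query.lower():
            return expert(query)
    return "no expert available; answering with the general model"

print(route("How does this protein fold?"))
```

The design point is the separation: the front end only decides where a request belongs; the depth lives in the experts behind it.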
[00:59:12] Cameron: Yeah. That is the show for this month. Hopefully we’ll be back next week, maybe next month, depending on what happens. Thank you, Steve. Good to chat, man. Always, as
[00:59:25] Steve: Thanks Cam, loved it.
This week on Futuristic – Brisbane portrait prize allows AI, Neuralink human trials have started, Berkeley AI let their robot walk around SF, Arc Search is Google’s worst nightmare, why electricity was the AI of the 1800s, and Cam’s 9 scenarios for how the Future of AI might play out (and Steve adds #10).
Fut 21
[00:00:00] Steve: 3, 2, 1, go.
[00:00:12] Cameron: Welcome to the Futuristic Podcast, episode twenty-one. Steve Sammartino, as seen on The Project, uh, this week talking about AI and the future. How you doing, Sammo?
[00:00:26] Steve: I’m good. I was on there. I’ll tell you what, you, uh, do 45 minutes for a wonderful three sound bites of one sentence each.
[00:00:34] Steve: Yes.
[00:00:35] Cameron: Yes. Well, you can get it all out on this show, Steve. You can talk for as long as you like. What’s one thing of note you did, uh, futuristically, uh, this week, Steve?
[00:00:46] Steve: I’m gonna break the rule and
[00:00:47] Steve: give you two. Uh, but one thing that, that I noticed was I’ve been following a lot of TikTok AI update people, which my feed is flooded with. And every time I see one of those updates I favorite it. I go, geez, that’s a useful tool. Geez,
[00:01:03] Steve: that’s a useful tool. And then I sat
[00:01:06] Steve: down to use a few of them. I thought, I’m gonna have a go at these. And I found that eight outta 10 of them are pretty crappy. Now, when I say crappy, they’re kind of okay, but they are far more manual than you think. It’s not just you type in, you do this and it does that, or you put up a video and it gives you 20 different TikTok sound bites and picks the perfect ones.
[00:01:26] Steve: The fact is it’s a Pareto principle. You know, 20% of them are good and 80% of ’em are really crappy. And it was just a, a, a bit of a moment where that hype cycle, everyone’s creating apps and promoting them, and when they have the promotional video, they look great. I still find that a lot of them, and I’ll use the word yet, yet, are not very good.
[00:01:46] Steve: I’m sure they’ll get better very soon, but my insight is that I can’t help but think that all of those are just gonna get munched away by Gemini and OpenAI who are just almost, again, it’s a little bit like what happened with, uh, APIs on Twitter and Facebook. A little bit like the app Store. Let the crowd do the work of the type of things that look interesting and good, and then just do it yourself and shut ’em down.
[00:02:10] Steve: Here we are again. History repeats. Cam.
[00:02:13] Cameron: Sam Altman himself said, uh, a couple weeks ago, I
[00:02:16] Cameron: think we talked about this on an earlier episode. He said it, I think it was at the Y Combinator launch: look, uh, don’t build a business around, you know, uh, putting bells and whistles onto GPT, ’cause GPT-5 is just gonna make all of it, uh, unnecessary. So, uh, you know, he’s, he’s being upfront with people in that regard, saying, you know, don’t do that.
[00:02:37] Cameron: It’s, you’re gonna be disappointed.
[00:02:40] Steve: but every app that I see, I’m like, yeah, that’s just gonna be in Open AI and gonna be better, or it’s gonna be in Gemini because most of them are just those little enhancements on video and voice and really
[00:02:50] Steve: just tweaking the edges, bells and whistles is a good way to put it. Um, and one other thing, I did a keynote yesterday for a solar company.
[00:02:58] Steve: One thing that I’m noticing, I do a couple of demos of how you can use GPTs, and the demos that I do are usually.
[00:03:05] Steve: How you put stuff in to get something out. This is one little thing that I don’t think people are doing much: people have this habit of the internet, where the internet has been, for the longest time, a giant filing cabinet.
[00:03:17] Steve: You pull things out of it, right? You’ve gotta hope that someone has made it, and you ask it and it delivers that thing. And the shift is that the internet now is a giant brain. But what people forget, I think, with OpenAI, and this is wider society, they’re going to the AIs to ask it to do things, and very, very few people are actually putting something in so that it knows more, to pull out the thing that you want.
[00:03:42] Steve: And I just think that that putting an input in there to make it work harder and more specifically on your particular problem is something that most people aren’t doing. So that was of note.
[00:03:50] Cameron: What do you mean by putting something in? Like in what way?
[00:03:53] Steve: So up, uploading your data, so your own PDF or your, other than video and image, but I mean just actually adding to the database, you know, in the immediate thing that you’re doing. I know that you and I do that, but it seems that most people are like, oh wow, so you can put your PDF in and get it to analyze this and your financial data.
[00:04:12] Steve: And these are business audiences, and I just thought they’d be further along. They just seem to be going to

[00:04:19] Steve: it and asking and requesting, but not so much saying, oh, how can I add to this database with what I’m working on, or what’s in my, you know, corporate enclave of knowledge or my personal cloud?
[00:04:30] Steve: It doesn’t seem like many people are doing that at this point. I just thought that was interesting.
[00:04:35] Cameron: Well, of course, you know, we are not able to upload stuff that goes into the training database, but you can give it your own documents. And depending on what you’re using, if you are running your own model, if you’re using GPT4All or something like that and you’re building and training your own LLM locally, which I’ve spent a bit of time doing, you can give it a ton of documents.

[00:04:57] Cameron: But even that process isn’t that easy. You have to chunk the documents down, you need to strip all of the unnecessary metadata out of them. It’s quite a big exercise. Um, but yeah, I think people have still got their training wheels on with this stuff, and that will be true probably for quite some time.
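The chunking step Cameron describes can be sketched roughly like this. This is a minimal illustration only, not GPT4All’s actual ingestion code, and the chunk size and overlap values are arbitrary assumptions:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for local indexing.

    Overlap keeps a little shared context between neighbouring chunks so a
    sentence cut at a boundary still appears whole in one of them.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():  # skip empty tails
            chunks.append(chunk)
    return chunks
```

Real pipelines usually also strip boilerplate and metadata before chunking, which is the “quite a big exercise” part.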
[00:05:16] Cameron: Well, this week, uh, well, since we last did a show, which was a couple of weeks ago, uh, I’ve spent a lot of time writing more iOS shortcuts using GPT’s help. Um, I have one, you know, I weigh myself every morning, as you know. I watch my diet and my weight and my exercise and all that pretty carefully. I have to, because if I don’t, I just, you know, eat shit and put on weight.
[00:05:38] Cameron: And, um, I built one where I can just, I have an iOS shortcut on my phone. I can just open it up, type in what my weight is when I get off the scales in the morning, and it will update it both in the Health app on my iPhone and in a Google Sheet where I track all of this stuff. It’ll automatically update the Google Sheet.
[00:05:57] Cameron: So I have the data stored in separate places and I, I’ve, I’ve got one that logs my kung fu workouts. When I get out of kung fu, I can pull up a shortcut, tell it which kind of workout I did. Some are more, uh, high intensity than others. Some go for an hour, some go for 90 minutes, some go for several hours.
[00:06:16] Cameron: And it will again update my calories and, uh, put that into both the Health app and the spreadsheets and track it. So, just learning how to, you know, automate more things, track more data, using ChatGPT to help me build this stuff.
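Cameron’s shortcut is built in iOS Shortcuts, but the logging step it performs looks roughly like this in plain Python. The CSV file here is a hypothetical stand-in for the Health app and Google Sheet, and the file name is made up:

```python
import csv
import datetime
import pathlib


def log_weight(weight_kg: float, path: str = "weight_log.csv") -> None:
    """Append today's date and a weight reading to a CSV log."""
    file = pathlib.Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "weight_kg"])  # header on first run
        writer.writerow([datetime.date.today().isoformat(), weight_kg])
```

The same append-a-row idea is what the shortcut does against the Google Sheets and HealthKit APIs.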
[00:06:32] Cameron: It’s kind of nerdy stuff, but it, it’s sort of an example for me of how we’re all gonna be living a couple of years from now where you’ll be able to use these tools to automate a bunch of things for you.
[00:06:43] Cameron: Track stuff that’s important for you personally. Personalized software, or secret software. I read, I think it was a Cory Doctorow post, a couple of weeks ago. He was talking about when the whole idea of life hacks was launched in the early two thousands at a conference.

[00:07:02] Cameron: The guy who did the talk on it referred to “secret software”, which was software that coders and developers write for themselves. It never sees the light of day. It’s just software that runs on their devices that they’ve written to automate functions for themselves. They call it secret software. So I’ve started writing a lot of secret software over the last, uh, couple of months, which is fun.
[00:07:25] Cameron: Well, getting into the top three, uh, news stories, Steve. The big one that dropped this morning seems to have blown everybody away, including me. I think it’s probably the biggest jaw-dropping moment since ChatGPT first dropped publicly, uh, a little bit over a year ago, and that is OpenAI’s new Sora.
[00:07:57] Cameron: S-O-R-A. Text to video. We’ve seen a lot of text-to-video tools come out in the last six months. Usually they will produce a second or a couple of seconds of, uh, nasty-looking video, but from text, and that’s still a huge achievement. The preview that OpenAI dropped today of Sora will generate 60 seconds of jaw-droppingly good video from a text prompt.

[00:08:28] Cameron: And here’s the cool thing. I dunno how much you drilled down into this on the website where they did the preview. They’ve got a bunch of videos that they’re showing. It’s not open to the public yet. They’re red-teaming it. So they’ve got a group of people that they’re trying to use to find bugs and problems and think about ethical issues, et cetera, et cetera.
[00:08:52] Cameron: They’re giving it to a small group of researchers to play with. But what Sam Altman’s been doing today on Twitter is saying to people, hey,
[00:09:00] Cameron: gimme a text prompt and I’ll make a video for you. And so people are just feeding him
[00:09:05] Steve: What a genius promotion that is. Genius
[00:09:09] Cameron: I can’t give you direct access, but you just give me your prompt and I’ll make the video for you. And so on his
[00:09:14] Steve: Now I’m,
[00:09:15] Cameron: @sama feed, if people aren’t already on it. I mean, he was just churning them out this morning, giving people demos based on their text prompts. I was like, that’s smart. That’s cool.
[00:09:29] Steve: Here we go, here’s one that I saw: a bicycle race on an ocean with different animals and athletes riding on bicycles with their own camera view.
[00:09:36] Cameron: a dolphin and a seal with
[00:09:38] Steve: today. It’s actually, it’s actually interesting.
[00:09:41] Steve: There you go.
[00:09:42] Cameron: What did you think when you saw the video that this is producing, Steve?
[00:09:47] Steve: It looked really great. I mean, some of them looked a little bit green-screeny, like the one of a car driving down a mountainside. That one looked a little bit green-screeny. The one that blew me away was the historical footage of the Gold Rush. It was as good as any movie set.

[00:10:05] Steve: It was like a drone view as well. And the girl walking in Tokyo. I mean, I thought it was extraordinary.
[00:10:12] Steve: The first thing I thought when I saw it is, how quick can I use this? Because there’s so much content that I produce that is verbal, right? It starts written and verbal, and if I could just add a little video on top of that, create visuals from it, it’s pretty extraordinary.
[00:10:28] Steve: It really is. And the other thing I thought straight away was the first thing that I mentioned, which is the big players are just gonna eat up everyone who’s fringe-dwelling on a little bell and whistle. I mean, this is a classic example, right? There’s quite a few out there, and then they’ve just gone, yeah, let’s 10x what you guys have done with the three-second little video you can generate.
[00:10:46] Cameron: I mean, we’ve talked about this on the show over the last year or so, but, um, you know, I’ve been, uh, predicting for a long time the impact that generative AI is gonna have on social media like TikTok and Instagram and those sorts of video platforms, but also the cinema and TV industries. When you can just do text-to-video of a very high quality, I think it’s gonna have a huge impact on people’s ability to create programming at a very low cost. And it was, um, uh, what’s-his-face, the head of DreamWorks, who came out recently and said it was going to reduce animation costs by like 90% in the next five years.
[00:11:35] Steve: Definitely. Yeah. We talked early on about how you used to make a lot of bedtime stories for Fox using ChatGPT. And, you know, this is the multimodal element coming in here. So all of a sudden you’ve got a young kid. Not only can you write them a story, you can get the animation, some visuals.
[00:11:50] Steve: It’s not hard to see where this will go, where you have a “make me a five-minute bedtime story with a unicorn, a talking horse that saves the world from climate change”, with, you know, Laura Sammartino. It’s like, there you have it.
[00:12:05] Cameron: Yeah, and to think that a little bit over a year ago we were all, uh, picking our jaws up off the ground because we had a tool that was generating text. Here we are a little bit over a year later, and they’re generating 60 seconds’ worth of jaw-droppingly good cinematic-quality video from text. It’s like, wow, where are we gonna be a year from now?
[00:12:29] Cameron: Now, you mentioned bedtime stories. One of the other things that happened in the last week is we’ve all been given access to Gemini Advanced. Google’s Bard generative AI tool is now called Gemini, and Gemini Advanced became available. Uh, I think they’re giving a two-month free trial if you sign up to Advanced, but then it’s 30 bucks Aussie a month for the premium level.
[00:12:58] Cameron: I signed up to it, and I read this great, uh, Reddit post where somebody did a breakdown very quickly, like within 24 hours after it came out, uh, testing Gemini Advanced against GPT-4 Turbo. One of the things they said they thought Gemini Advanced was better at was creative writing.
[00:13:24] Cameron: So I, uh, did go back into bedtime story mode, uh, gave both GPT-4 and Gemini Advanced the same very, very simple prompt to write a bedtime story. And then I posted both of them to Facebook and asked people what they thought, uh, who they thought did the better job. And there was a range of views.

[00:13:46] Cameron: Uh, both of them are still not great, but I thought Gemini Advanced was probably, you know, less cliched. GPT still, when it writes these stories, they’re very flowery, very cliched.
[00:14:05] Cameron: Gemini’s was much more,

[00:14:07] Steve: And just like that as well, GPT is very MBA-speak on its business stuff too. It actually lacks nuance in a lot of those areas. So the stories are cliched, and the business stuff it gives you back is MBA-wash, you know.
[00:14:24] Cameron: But you can train it with custom instructions, etc., not to do that with the business speak. But I’ve spent a lot of time over the last year trying to train it how to write creatively, and it’s, uh, pretty much been a waste of time. Like, it doesn’t matter what I tell it, it doesn’t get any better.
[00:14:43] Cameron: Gemini is better in this regard. Still, it’s not gonna win any literary awards, and I don’t wanna take away from the fact that it’s fucking amazing that you can ask a computer to write you an original story and it then does it. That in and of itself is jaw-dropping in terms of the history of computing.
[00:15:02] Cameron: But, uh, yeah, still not, uh, blowing your mind. But, you know, uh, how long before they do blow our minds? I don’t think it’ll be long. Just running down this guy’s post: this is Lord per Maximum on the, uh, singularity subreddit from three days ago. “I made a comparison post before based on the view of non-Reddit people on two models.”
[00:15:22] Cameron: “After testing the two models extensively in the last few days, I feel like I have to share my honest thoughts on this. First and foremost, GPT-4 Turbo is significantly better than GPT-4, so I’ll only include that in the comparison.” He says GPT-4 Turbo is better at reasoning and logical deductions. Gemini Advanced may succeed somewhere where GPT-4 Turbo fails, but still, GPT-4 Turbo is better at the majority of them. “In reality, even Gemini Pro seems a bit better than Advanced Ultra at this. I’m not saying a lot though, because if a reasoning test is not in their training data, all of the models are bad. They really can’t generalize.” He says GPT-4 Turbo is better at coding, and GPT-4 Turbo hallucinates less than Gemini. He gave Gemini Advanced the win for creative writing and for general conversations, and says it’s about two to three times faster than GPT-4 Turbo and has no message cap. It refuses to do tasks more than GPT-4 does, though, so he gave GPT-4 Turbo the win for that. Um, GPT-4 does better with, uh, Code Interpreter and more, uh, languages.

[00:16:40] Cameron: So basically he still had GPT-4 Turbo, uh, winning five out of the nine tests that he put it up against. Five versus four. But you know, that’s a big thing, for Google to be catching up, uh, on some metrics there. So that’s one person’s tests, you know, take it for what it is.
[00:17:03] Steve: It brings me to, you know, the business side of this, which I think is gonna be incredibly interesting. Um, the one thing that I’m kind of happy about is the internet might turn towards a subscription model instead of an advertising-based model. I mean, I really do hope that, for a whole lot of reasons, ’cause I think that’s the original sin of the internet, the attention economy that spawned under it, and surveillance capitalism.
[00:17:25] Steve: But what it does do is it makes me think about streaming and the business model with streaming. And as you know, subscriptions in many areas of life are just growing 10x versus the S&P 500. I wonder who’s gonna be the Netflix, like the one that you just go, this is the one that I am always subscribed to.
[00:17:48] Steve: And then who are the others? Like who wins the subscription battle? Because it feels like you might be subscribed to one and not the other, and you would get better at using one versus the other. And I, I really gravitated towards what the emergent business models are. Do you end up with these, this battle of AI subscriptions?
[00:18:09] Steve: How many do we end up having? You know, at home we’ve got Netflix and Disney, but at one point we had Amazon and Apple and Binge, and then you end up turning ’em off and just having one or two. So I’m really interested in the business model that emerges from this, and who you end up subscribing to, or whether they gravitate down towards, now let’s just make this free and make it available to everyone, you know, as the new models emerge. I dunno, what do you think?
[00:18:40] Cameron: I’ll get to what I think in a moment, but now we need to pause for an ad from Manscaped. I’m kidding. Um, we,
[00:18:49] Steve: I thought that was real. I’m like, who knew?
[00:18:52] Cameron: uh, yeah, look, it’s gonna
[00:18:55] Steve: made you say that? ‘
[00:18:57] Cameron: cause you were dissing advertising. I thought it’d be funny if I threw to an ad.
[00:19:01] Steve: No. Okay. They’re good. Yeah. Good, good,
[00:19:03] Cameron: Uh, yeah. Look, I, I, I agree with you. Advertising, uh, has turned the internet into a shit show. Um. I, I also wonder who’s gonna be the top dog. You know, I think that it’s funny when, when I wanted to test Gemini and I’ve been testing, you know, I test PI and I’ve test perplexity and I test Gemini and I go in and out.
[00:19:26] Cameron: I always come
[00:19:26] Cameron: back to chat GPT, partly because I think it’s the best experience, but also partly because
[00:19:34] Cameron: I’ve kind of built up a bit of a product loyalty. I’ve got, yeah, I’ve
[00:19:39] Cameron: got conversations, I’ve got exports. I think, ah, if I start
[00:19:42] Cameron: spreading my conversations
[00:19:44] Cameron: around, I won’t know where was
[00:19:45] Cameron: that conversation and uh, how do I find a backup?
[00:19:49] Cameron: How do I, you know, where do I go to get that,
[00:19:52] Cameron: uh, you know.
[00:19:53] Steve: It’s not just loyalty. It’s a little bit like when social media was big, you know, 15 years ago. I found that, um, people would say, which is the best one? And I used to say to them, ’cause I was big on blogging and Twitter, they were the two main ones for me, um, you know, I got a lot of business value out of them.
[00:20:10] Steve: I was like, just choose one and get good at it, because there are nuances in being good at it. ’Cause they’re all a little bit different and have different personalities and user experiences. And if you get good at it, you’ll get more out of it, given that they’re all reasonably good. It feels like that again, to an extent.
[00:20:25] Cameron: I think you’re right.

[00:20:26] Steve: You sort of get good at it. Yeah.
[00:20:29] Cameron: But, you know, I think I’m waiting for Apple to come out with one. I’ll probably, uh, have a hard time not using an Apple one if it’s integrated into all of my Apple devices. I also think that the one that has integration with my email, my calendar, my documents, my messages, all my, you know, all my life, is probably gonna be the one that gets my loyalty. And that’s probably gonna be Apple or Google, I guess, with the ecosystem that I live in.
[00:21:07] Steve: Yeah, and that’s the thing too. The ones that look like they’ll do better in the long run, it’s gonna be one of big tech again, which is one of the arguments from, you know, the antitrust heroes that are out there fighting the battles against big tech. It’s like, whichever one you choose, guess what? There’s a super high chance it’s gonna be one of the five big technology companies floating around.
[00:21:28] Cameron: Mm, mm. Well, moving on. 1X robots, Steve. They, uh, posted a video in the last week of their robots. I hadn’t heard of these guys before.
[00:21:43] Steve: Oh man, me either.
[00:21:44] Cameron: Uh, it’s a pretty good video. I mean, it’s, uh, nearly three minutes long, a one-shot video. Uh, these are humanoid robots with
[00:21:59] Steve: little wheels
[00:22:00] Cameron: Yeah, they’re on wheels. They’re not walking. They’re like on, um, a Segway, that’s what I was gonna say. Yeah. With faces that look like they’re out of an episode of Doctor Who. They’re just, like, LED eyes and a smiley mouth. Quite creepy. If I saw one of those in a Doctor Who episode, I’d expect it to start killing people a minute later, they look so creepy.
[00:22:26] Steve: Or rolling towards you in a dark alleyway in Melbourne after you’ve had a few brewskis. You really wouldn’t walk up towards it and go to shake its hand, would you?
[00:22:35] Cameron: No, no, it’s very terrifying-looking. Eric Jang from 1X says: “Here’s our latest software update on 1X AI. Every behavior you see in this video is controlled from pixels to actions with a single neural net architecture. No teleop, no scripted replay or task-specific code. No CGI. All in one continuous video shot.”
[00:23:01] Cameron: Obviously, we’ve seen different situations where a lot of these videos have been faked. Elon Musk famously had the Optimus launch, where a person dressed up in a robot suit came out and did a dance on stage. Um, Google’s, uh, Gemini video was heavily edited, yeah, a couple of weeks ago. Um, so this is an, I mean, this is an interesting video.
[00:23:27] Cameron: Part of me is unimpressed at how slowly these robots are moving while doing all of these jobs of sorting things and moving cubes into a box, et cetera, et cetera. Although, I mean, I should be highly impressed that we have humanoid robots doing anything. What did you think of this video when you saw it?
[00:23:49] Steve: For me, it really showed how quick this is gonna change. And I’ll just pick up on the point where you say you’re unimpressed that they’re moving so slowly. But you know, they’re doing things like picking up boxes and stacking them. It looks like they were doing envelopes in almost like a mail-sorting kind of setting, or an office-warehouse sort of setting. I mean, the first thing I thought is, if they can do this now, and it’s no code, it’s pixels straight to actions and it’s not gamed, I’m like, imagine where they’ll be in 12 and 24 months. And the first thing I thought of, you know, is the ability to, I dunno if you can teach it visually. Um, some of the bots that I’ve seen before, you can actually show it what to do physically, and then it copies you and it knows how to do that.
[00:24:34] Steve: Um, I just think that the ability for these things to do physical labor in warehouse, office and factory settings, um, is gonna be extraordinary in two to three years. You just think about that, there’s almost nowhere that you couldn’t use it. Um, so I was impressed, and I just gravitated to: remember, exponential, Steve. It doesn’t get a little bit better.
[00:24:58] Steve: It gets twice as good, and then twice as good as that, and then twice as good as that. And it almost doubles back to where we were, you know, just getting super excited about text that made sense, you know, 18 months ago on the first ChatGPT, and now we’re talking video, and here’s a rolling robot. I mean, Kurzweil is right. It just feels like, imagine where we’ll be in 2029.
[00:25:21] Cameron: I agree. I think it’s, uh, you know, very impressive at that level. But I watched an interesting video. So there was this thing, I think it was in the UAE, the World Government Summit, held this week. Uh, there was a lot of discussion about AI. Sam Altman appeared over a video link and gave a talk, which was interesting.
[00:25:42] Cameron: And again, he talked about, uh, how much smarter GPT-5 is going to be and how they’re teaching it reasoning, and it’s just gonna blow everyone away. But then Yann LeCun, the chief AI scientist of Meta, was on stage as part of a panel and an interview. He made some interesting points about robotics and generative AI and some of the hurdles that we’re facing.
[00:26:11] Cameron: He said at one point that a human baby that is 10 months old has an intuitive understanding of how gravity works. They understand that if an object is dropped from a height and it’s unsupported, it’ll fall to the ground. But we still don’t have robots that can understand that. Um, and he was talking about how you can only train robots and AI using text up to a certain level, and that we’ve run out of text. So, you know, we’re already training these LLMs on the entire internet. It’s not like we can get a hundred more internets to train them on. We’ve trained them on the entire publicly available internet. He’s talking about how we need to train them on video.

[00:27:00] Cameron: He said, like, I think a four-year-old child has seen 16,000 hours of video, and that’s about what gets uploaded to YouTube in 30 minutes: 30 minutes of YouTube uploads equals about 16,000 hours. So we’ve got a ton of video that we can start training AIs on, but as he was saying, it doesn’t work. The way that we build generative AI on text doesn’t work on video.
[00:27:28] Cameron: So we need some huge breakthroughs in how we train AI models, so we can train them on multimodal stuff more effectively, particularly video, and therefore be able to train robots far more effectively. So he’s pouring a lot of cold water on how quickly this is going to move into, you know, a much more highly advanced state. He’s basically saying, look, you can’t just throw more hardware at this and expect it to be much better than it already is, because, you know, there’s not more data to train it on. More hardware isn’t gonna make it significantly better.
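LeCun’s back-of-the-envelope numbers roughly check out. Both inputs here are our assumptions, not his: the waking hours per day for a small child, and the often-quoted (but approximate) YouTube upload rate of around 500 hours of video per minute:

```python
# A four-year-old's visual experience, assuming ~11 waking hours a day.
waking_hours_per_day = 11
hours_seen_by_age_4 = 4 * 365 * waking_hours_per_day
print(hours_seen_by_age_4)  # 16060 -- about 16,000 hours

# YouTube uploads, assuming ~500 hours of video uploaded per minute.
upload_rate_hours_per_minute = 500
uploaded_in_30_min = upload_rate_hours_per_minute * 30
print(uploaded_in_30_min)   # 15000 -- the same order of magnitude
```

So a child’s entire visual diet to age four is roughly half an hour of YouTube’s intake, which is the point about how much video data is available for training.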
[00:28:10] Steve: Oh, two things came to mind when I read the article. In some ways, if there is a physical limit, that might be good. It might be good that we get to a certain limit where there’s a certain capability and things top out, because we have this view that things will consistently get better. And I always come back to, you know, aircraft. You know, in 1950 it was like, you’ll fly in an hour to London. And, you know, even beyond supersonic, there was all of this “it’ll just continue to get better forever”. And it didn’t. We just moved to making the price better, more efficient, and better human, you know, UX, let’s say, in the planes and so on. It’s gone the other way in some areas. But that’s kind of where that industrial and flying technology went to.
[00:28:53] Steve: And that might be a good thing that happens with robotics. But the one thing that I see all the time with AI experts, when they make comparisons to biological beings, you know, not as smart as a cat, doesn’t have as many neurons and therefore doesn’t have a view of the world: it’s almost like they forget it doesn’t need the number of neurons that we need to do extraordinary and even dangerous things. I mean, any animal needs to know how to gather food, how to breathe, how to do all of these things just to keep its body alive. Bots don’t need that. They only need a certain sliver, and they only need to be exponentially good in a certain area to become dangerous. And, you know, of course cats and dogs and birds have their own objectives and they’re all self-directed, we’ve discussed this before, and while bots may not possess that, if they get super smart in a certain sliver, that might be enough for things to be dangerous. So there are two things that I thought about. And I do think that sometimes technologists, and it’s the same with lawyers and engineers and marketers, get too close to what they do to have a broad view or perspective of risk and how things might emanate.
[00:30:01] Steve: And we’ve seen that with big tech, you know, denying, well, they’re probably just denying it, but sort of saying, no, it’s not this, it’s not that. And we’re like, well, actually it is. And there’s always a danger with someone who works in the technology who just cannot see the forest for the trees with the way that they look at things because they are so deep inside a particular context that they themselves, ironically, lack the worldview.
[00:30:23] Cameron: Sure, I take your point, but I think Yann LeCun probably has a good sense of what the limitations are in terms of training AI models. Um, and I’ve been saying this for a long time myself on the show: I do think people, uh, tend to think that the LLM approach to generative AI is the be-all and end-all solution for the artificial intelligence journey. I don’t. I think we do need, uh, different systems: expert systems, systems that use symbolic logic. But I do think the LLMs are gonna help us get there, that they will be able to help us, uh, magnify the research and development process that might get us to the next level.
[00:31:14] Steve: Yeah, the next curve jump usually invents the next technology, and then the next curve jump. I mean, the thing that I just keep coming back to is, it’s nowhere near human-level intelligence, but it doesn’t need human-level intelligence for it to be significant and/or dangerous.
[00:31:28] Cameron: Particularly when you just need to train one robot how to do something, and then you can download that model into another billion robots in a minute and then they can all do it. Right.
[00:31:38] Steve: Exactly. Exactly.
[00:31:39] Cameron: I know we’ve done our three stories, but one that I didn’t put in the notes we have to mention briefly: the Vision Pro. Apple’s Vision Pro actually hit the market in the US.
[00:31:48] Cameron: One of my, well, both of my older boys are in LA at the moment, and, uh, one of them, Taylor, went and had a 10-minute demo of the Vision Pro at an Apple store. Um, and yeah, he said it was amazing, like mind-blowingly good. All the usual stuff that we’ve been hearing if you’re following the people on Twitter or Reddit or YouTube, um, Marques, uh, et cetera, who have been testing these things out. Very heavy on the head.

[00:32:20] Cameron: Scoble was talking about how his gives him a headache because of how much his eyes are working and he’s not blinking. You know, there are a lot of, um, practical problems with using them. That said, everyone, including David Letterman, there’s a great video of David Letterman testing one out, has just had their mind blown by what a huge leap the, uh, experience is in terms of interacting with the technology and the possibilities of it.
[00:32:51] Cameron: The panoramic views, the 3D views. Taylor was saying he watched some 3D movies, and he said the 3D experience on it is just mind-blowing. So it’s out. I agree with the perspective that it’s probably designed to be an early-adopter developer research tool while they keep working on it over the next couple of generations, before it becomes more accessible, more useful, uh, cheaper for everybody.

[00:33:21] Cameron: But it seems to be a step in the right direction for fully immersive computing.
[00:33:26] Steve: Um, yeah, I haven’t tried it. Uh, the one thing that was interesting, I did wonder how many of the people, you know, skateboarding down New York and on the subway using it were set up, you know, as, uh, some guerrilla marketing by, uh, the agency for Apple. I mean, I saw the same images go round and round in circles.
[00:33:45] Steve: Look, I liked some of the feedback I heard about, you know, watching movies, and that immersive experience seemed really extraordinary. I don’t know that me having goggles on my head and having a hundred screens open feels like anything I’d want to use, because if it’s anything like my browser now, which has 646,000 tabs open, the last thing I need is more tabs open on different screens all around me. I’ll just be in a worse state than I’m in at this point. So I dunno that I wanna spend three and a half grand US to have the dismay that is my browser tabs extended all around me, wearing something on my head to remind me how unorganized I am. I dunno if I want that. I mean, that’s kind of where I get to, but until I’ve tried it, I really can’t make a comment.
[00:34:34] Steve: One thing really quickly on it: a lot of the naysayers are saying, oh, this is a terrible launch, this is no good, gee, they’ve only sold 200,000, they’ll only do 500,000. Let me just remind everyone of how powerful this organization is. Here’s a launch which is getting dissed, that people are saying is no good, and it’s got US$700 million in sales already. That is like bigger than the top 100 companies in the US, just with one “crappy” product, according to everyone else. Just think that through for a moment.
[00:35:04] Cameron: Mm. All right, I wanna move on to the deep dive, Steve. Uh, my suggestion for this week is to talk about the breakthrough-medicine stories that just seem to be coming faster and faster now. Just two of the ones that have hit my inbox in the last week or two are, uh, you know, so full of potential. And look, that said, there’s always a lot of breakthrough-medicine stories that we get that never see the light of day. There’s always gonna be a cure for cancer, or there’s gonna be this, or there’s gonna be that. We know that the research process is long and hard, and that, you know, the PR departments for universities or publicly listed research firms are always trying to get these stories out to raise more funding, or to pass legislation, or whatever it is.
[00:35:53] Cameron: That said, these stories seem to be of a different caliber. This one from New Scientist, 31st of January 2024: CRISPR gene therapy seems to cure dangerous inflammatory condition. Nine out of 10 people who received a new version of a CRISPR-based treatment for a potentially life-threatening inflammatory condition seemed to have been cured.
[00:36:19] Cameron: Nine people with a rare genetic condition that causes life-threatening inflammatory reactions appear to have been cured after taking part in the first trial of a new version of a CRISPR-based gene therapy. The condition called hereditary angioedema causes people to have sudden episodes of tissue swelling that affects body parts such as the face or throat, similar to aspects of an allergic reaction, although they can’t be treated with anti-allergy medicines.
[00:36:47] Cameron: 10 people who had the one-off gene treatment, which was given directly into the body, saw their number of swelling attacks fall by ninety-five percent in the first six months as the therapy took
[00:36:58] Cameron: effect. Since then, all but one have had no further episodes
[00:37:03] Cameron: for at least a further year, while one person, who had the lowest dose of the treatment, had one mild attack.
[00:37:10] Cameron: This is potentially a cure, said Padmalal Gurugama at Cambridge University Hospitals in the UK, who worked on the
[00:37:18] Cameron: new approach. So.
[00:37:27] Steve: I’ve heard stories, and I’ve read some stuff and listened to podcasts of experts in that realm talk about CRISPR, where they’re like, you can turn a cat into a dog. You can make someone six foot tall who was five
[00:37:39] Steve: foot three, like, you know, Cam. We can make your hair a different color. I don’t know if people who are listening know, but Cam’s just silver foxing it to
[00:37:47] Cameron: Why would I want it to be a different color? It’s
[00:37:49] Steve: Exactly, exactly. I would
[00:37:52] Cameron: you know, boss level hair color,
[00:37:54] Steve: boss level gray. Yeah, it is boss level. I hate my gray hair. But what I’m interested to know, and I’ve never seen this, is how do they do the gene therapy? And I think the listeners would know that CRISPR is like gene editing, where they come in and they cut out a gene which is gonna cause problems.
[00:38:12] Steve: How does the therapy get done? Like, I don’t
[00:38:15] Steve: actually know the answer to that. Please tell me, you know,
[00:38:17] Steve: Cam,
[00:38:18] Cameron: Look, I’ve gone down the CRISPR path. You know, it broke in 2012, and I’ve read about it.
[00:38:26] Cameron: I’ve listened to stuff about it. I can’t remember the specific details, but essentially, as I recall it, it’s a tool
[00:38:33] Cameron: that was developed, uh,
[00:38:34] Cameron: sorry, discovered existing in nature. It was, uh,
[00:38:39] Cameron: something that a virus or a bacteria already did, and we figured out how to
[00:38:43] Cameron: replicate that. It basically enables you to find a specific piece of DNA
[00:38:49] Cameron: inside of a cell and then go in and alter that
[00:38:53] Cameron: piece of DNA using.
[00:38:55] Steve: it out. Like cut and
[00:38:56] Cameron: Yeah, cut it out and replace it with something else. So, um,
[00:39:01] Cameron: and
[00:39:01] Steve: how do you get that back into the body, and then how does it replicate across the entire DNA map? I’m just, I just don’t know how they physically do the therapy in inverted commas. Like how do they physically do it? You know? You know when you have an operation, you cut the person open and you take out the old heart and you put in a heart transplant.
[00:39:19] Steve: How? How do they do it? I’m just, I don’t know. I’m so curious. I need to dig in.
[00:39:25] Cameron: Yeah, how it goes from, okay, we’re gonna go change some cells
[00:39:28] Cameron: to propagating throughout the body, uh, fast enough to cure a disease. Yeah, I dunno the answer to that, Steve, but, you know, uh, it, it’s an incredible
[00:39:39] Cameron: piece of technology. As I said, like it first broke around about 12 years ago, became, uh, quite hot.
[00:39:48] Cameron: Here we are a little over a decade later, using it to apparently solve a fatal disease that some people
[00:39:57] Steve: And almost any disease. You know, I’ve read stuff that says that once we get down the track of it, it could potentially cure any disease that’s genetic in nature, and diseases that you have a higher
[00:40:11] Steve: probability of getting because of a certain genome. I mean, we just think about technology curve jumps.
[00:40:16] Steve: This would have
[00:40:16] Steve: to be the biggest curve jump of all time, to get around pharmaceutical and a whole lot of other industries. Just think about it, you know, we’re cutting
[00:40:22] Steve: and pasting our bodies to, uh, get us to the Gattaca level of however we want to be. I’m thrilled. I can just be the person I’ve always wanted to be.
[00:40:32] Steve: Cam.
[00:40:33] Cameron: I just wanna be
[00:40:35] Steve: That’s right. That’s, that’s right, that’s right. Boss level.
[00:40:39] Cameron: boss level hair colour. Yeah, and it’s relatively cheap, too, to produce cures using CRISPR, and to deploy them, which is the other big thing. The other story I saw, this, uh, actually broke, uh, in December, but I only just came across it in the last couple of weeks. This is, uh, Moderna’s mRNA cancer vaccine.
[00:41:02] Cameron: Works even better than thought. Adding Moderna’s in-development cancer vaccine to a standard treatment for melanoma dramatically reduces cancer survivors’ risk of death or recurrence, according to newly shared trial data. The challenge: to treat melanoma, the deadliest type of skin cancer, doctors typically start by surgically removing as much of the cancer as possible.
[00:41:26] Cameron: They might then administer another treatment, such as chemo or radiation therapy, to kill any cancer cells they missed. But of course, we know that the cancers can come back. They recur, and that’s usually what gets you. When it comes back, there’s a fairly high risk of recurrence, even if you’ve had it cut out or you’ve had chemo. So people are obviously familiar with mRNA, primarily from the way the vaccines were developed extremely quickly during Covid. mRNA hadn’t really been used, uh, at scale for human application before the Covid vaccines were developed, and Moderna developed one of the first vaccines using mRNA.
[00:42:08] Cameron: So what they’re able to do in this trial is, if you’ve had a particular form of cancer, they can take a cancer cell from your body and then build a specific vaccine for you based on the kind of cancer that you’ve got, using the same mRNA approach that they used for Covid. Now, people may recall, basically the way that works is they would take certain indicators, in the Covid case from the spike protein, um, and put that into your body in a way that it’s not dangerous to your body.
[00:42:49] Cameron: It’s not the virus they’re putting back in, it’s just certain indicators from it. And your body would then develop antibodies that would look for those indicators and then target any cell that has them. They’re doing the same thing in this, but with melanoma cancers. So you’ve already had it cut out, and then you’re building, uh, basically, uh, they’re called neoantigens.
[00:43:13] Cameron: The vaccine works by instructing the body to make up to thirty-four neoantigens. These are proteins found only on the cancer cells, and Moderna personalises the vaccine for each recipient so that it carries instructions for the neoantigens on their cancer cells. The idea behind the vaccine is that by prompting the body to make these proteins, it could prepare the immune system to quickly identify and attack any new cancer cells bearing them, preventing recurrence.
[00:43:41] Cameron: Now in 2022, they reported that the combo therapy reduced high-risk patients’ risk of recurrence or death by forty-four per
[00:43:50] Cameron: cent compared to just the standard treatment in the two
[00:43:55] Cameron: years after treatment. They’ve now announced that people who received both therapies were forty-nine per cent less likely to experience recurrence or death at a median of three years after the treatment, compared to people in the Keytruda-only group.
[00:44:10] Cameron: So yeah.
[00:44:12] Steve: bespoke medicine, you know, based on mRNA. I mean, that’s kind of the holy grail, isn’t it? Because what we’ve had all along is one size fits all. And even with different medicines that I’ve had, you know, it’s
[00:44:24] Steve: like the side effects are different for different people.
[00:44:27] Steve: This idea of, you know, a medicine
[00:44:30] Steve: designed for the person is absolutely extraordinary. And, you know, I think we’ve been so focused on digital lately, and technology that is consumer and information based, that it’s interesting and cool that some of these things are just ticking along, uh, happening within the healthcare realm. You know, it probably doesn’t get enough attention really, does it?
[00:44:54] Cameron: Uh, yeah. Well, it’s hard, like there’s so much noise in the space of health stuff that it’s hard to tell the real stuff from the fake stuff. But these two,
[00:45:04] Steve: yeah. Wellness. Wellness And vitamins. Wellness and vitamins.
[00:45:08] Cameron: I wanna talk, uh, when we get to the futures forecast a little bit about what these sorts of personalized medicine stories may mean for the future of humanity.
[00:45:16] Cameron: Before we get to that technology time warp, Steve.
[00:45:20] Steve: Oh, I love this one. Oh my God. When I saw this, I was so excited, so excited.
[00:45:25] Cameron: It’s the 40th anniversary this week of the Super Bowl Macintosh ad, the 1984 ad. And, uh, you know, it was a big deal then. It’s still a big deal now. The New York Times did a big,
[00:45:43] Cameron: uh, article on it, which is a good read. Uh, you know, not only was it a big deal for launching the Macintosh, uh, the ad was directed by Ridley Scott, who I still haven’t forgiven for his Napoleon film.
[00:45:58] Cameron: But, um, you know, this was, uh, one of the highlights of his career. Obviously, famously, it didn’t show the product, because they didn’t have a product ready to show.
[00:46:12] Steve: Is that the reason? I didn’t know that. I knew that it didn’t show it.
[00:46:15] Cameron: Yeah. It wasn’t really ready. And the story, according to the New York Times, is the product was gonna come out in 1984, but it wasn’t ready in January, when the Super Bowl happens. And they all thought, you know, the ad agency Chiat Day knew that a lot of people were probably gonna use George Orwell’s Nineteen Eighty-Four to advertise things in 1984.
[00:46:42] Cameron: So they wanted to
[00:46:44] Cameron: beat everyone to the punch.
[00:46:46] Steve: Yeah, yeah,
[00:46:47] Cameron: So they, uh, put this together even though the product wasn’t ready and they couldn’t demonstrate anything, just to get out ahead of everybody else and steal their thunder. But it’s a great story. I mean, okay, so not only was it a great ad, but it started the whole Super Bowl ad phenomenon.
[00:47:05] Cameron: It was the first really big Super Bowl ad.
[00:47:09] Steve: Yeah, I didn’t know that it was the first one. And, you know, the Super Bowl’s
[00:47:13] Steve: extraordinary now, I mean, for me, I look forward to it each year for the
[00:47:16] Steve: advertising, because I think it, um, puts a stake in the ground for the zeitgeist
[00:47:21] Steve: on, you know, which products are doing well during the
[00:47:23] Steve: dot-com boom, you know, every venture-capital-backed startup,
[00:47:26] Steve: pets.com and everyone, had an ad. It’s like, yeah, I’m making a Super Bowl ad too. It was like, why did I do this startup? To make a Super Bowl ad. Look, the technology is irrelevant. I don’t care about this startup. All I care about is I’ve got enough money to make a Super Bowl ad and I’m gonna do it. And so, yeah, crypto was three years ago.
[00:47:44] Steve: It was all about the cryptos, it was FTX and all of that. So it really is a moment that tells you where we are economically and socially. But I didn’t know that that was the first big one. One other, um, note about the 1984 ad that I thought was really cool: it was only ever shown once, and that was the only time it ever got aired.
[00:48:05] Steve: Every other time anyone has ever seen it since has been in pop culture and media referring to it. And so it was incredibly cost effective. Yeah, here we are, what, 40 years later, and we’re still, um, talking about it. So
[00:48:18] Cameron: Do you know why it was only shown once?
[00:48:22] Steve: I don’t know why. You better tell me, Cam.
[00:48:25] Cameron: Here’s the story, um, from the New York Times. John Sculley was the CEO of Apple at the time. Famously,
[00:48:32] Steve: in a can.
[00:48:34] Cameron: what?
[00:48:35] Steve: Sugar in a can guy.
[00:48:36] Cameron: in a can. Yeah. Steve Jobs hired him by saying, do you wanna change the world or do you wanna sell sugar in a can? Or something like that.
[00:48:44] Steve: Such a good line. So
[00:48:46] Cameron: And then Sculley fired Steve Jobs. But anyway, before that happened, Sculley says, before the commercial ran, we had to take it to the board of directors.
[00:48:54] Cameron: The board sees the commercial, then there’s just dead silence in the boardroom. They turn and look at me, and a board member says, you’re not really gonna run that thing, are you? Another guy says, as the closing credits scrolled up, the chairman, Mike Markkula, put his head in his hands and kind of folded over the conference table.
[00:49:13] Cameron: Then slowly straightened up and proposed hiring a different ad agency. Ridley Scott says, I made it. I thought it was pretty good, but I was thinking, really, they’re gonna run this on the Super Bowl and we dunno what it’s for? Another guy says, I had them do a theater test. We get back the results and it’s the worst business commercial that they’ve ever tested in terms of persuasiveness.
[00:49:36] Cameron: Sculley says, the board said, we don’t think you should run it. Try to sell the time. So basically, they’d bought this time to show it, and the board says you have to sell it. You can’t run it. Get rid of the ad time. Then Chiat Day, the agency, dragged their feet when they were told to sell off the time on the Super Bowl, until it was too late to sell it off.
[00:50:06] Cameron: No one could prepare an ad in time, so they had no option but to run with it. And then it says, every news show had clips of it. The commercial kept running and running for days after that. It ran for free over and
[00:50:22] Cameron: over again. The
[00:50:22] Cameron: value of the offshoot publicity is
[00:50:24] Cameron: what many advertisers see as the bigger benefit, really.
[00:50:27] Cameron: Scott says, I think the Super Bowl frenzy started there. Then it was about
[00:50:31] Cameron: a million dollars a minute. Now it’s about $7 million a minute. And then the New York Times says, actually, the
[00:50:37] Cameron: average cost for a Super Bowl ad this year is twice that: $7 million for a
[00:50:41] Cameron: thirty-second spot.
[00:50:46] Steve: One of the things that it did was the cut-through. I mean, and this is one of the great things in advertising. I mean, two things come to mind. A lot of great marketing that we look back on historically was irrational and courageous at the time. And for every courageous one that succeeds, there’s ninety-nine people living under bridges saying, yeah, I invented New Coke.
[00:51:05] Steve: Like from that scene in the Simpsons, you know. It’s, uh, you know, courage gets you there, but ninety-nine percent of those courageous moves end up being disasters. So we’ve gotta keep that in mind. But the cut-through was extraordinary. And the ad that stood out the most for me during the Super Bowl, I haven’t seen all of them, but I’ve seen highlight reels, was, um, I thought that what Kanye did was incredibly interesting.
[00:51:30] Steve: And I know that he’s, uh, a weird and interesting person, has done a lot of kind of crazy stuff lately, but I dunno if you saw it. He had a, uh, an ad that he put on the Super Bowl, which was him, like, driving in a car, holding up the phone, talking to it. And the first thing he says is, hey, it’s Ye here.
[00:51:51] Steve: Um, here’s my Super Bowl ad. Look, I haven’t spent any money on production because I spent all the money on buying the ad. So he spent his 7 million bucks to get his 30 seconds. And it is literally him holding up a phone. I don’t know if this is him being ill and not a well person, or some sort of genius, uh, Kanye marketing stunt, but he said, so the ad is, go to yeezy.com, we’ve got a whole lot of shoes and things for sale there.
[00:52:17] Steve: It’s gonna come up on the screen, and then it’s got, like, this little weird yeezy.com. It’s like an infomercial from 1987 at three in the morning. And he just holds up the phone, and that’s the ad. And my TikTok was filled with it. I mean, so he has had a lot of, let’s call ’em reruns. Man, you gotta look at this thing. In my freaking mind, that won the Super Bowl, you know, low production values.
[00:52:39] Steve: It was the antithesis: cut-through again, in the opposite direction.
[00:52:43] Cameron: It’s clever. Well, I mean, the thing is, I was talking to Chrissy about this being the 40th anniversary in the car on the way through the other day, and then Fox started asking me what the Macintosh was, and I explained. We got home that night and he wanted to see the ad. So I showed him the ad, and then he wanted to see videos about the Macintosh, and we went on this whole YouTube rabbit hole of the original Macintosh.
[00:53:08] Cameron: And then he built one out of cardboard. He wanted to make his own Macintosh and he sort of became really deeply interested in the history of the Macintosh and the history of computing. So that was kind of cool. The ad is still inspiring kids generations later into understanding more about technology.
[00:53:28] Cameron: And I was talking about why it was such a big deal when it came out. I never had one. And, you know, I didn’t even know anybody who had a Macintosh
[00:53:38] Cameron: in
[00:53:38] Steve: No one had computers back then. But YouTube, it’s a good point you raised with
[00:53:42] Steve: Fox, is that I really love the idea, and I do this with my kids a lot, where I say, and I’ve got
[00:53:48] Steve: this parlance, I say, let’s go to
[00:53:49] Steve: the tapes, guys. That’s what I tell ’em. We don’t go to the internet or Google it.
[00:53:52] Steve: I say,
[00:53:52] Steve: let’s go to the tapes. I want to get that out in modern
[00:53:55] Steve: society, listeners: whenever you wanna show someone something, don’t
[00:53:58] Steve: say, let’s go to the internet, let’s Google it. Say, let’s go to the tapes. And we go on these little historical journeys with a whole lot of
[00:54:03] Steve: things, and it’s an incredible resource to take you back to what was at a certain time, whether it’s TV, advertising, history, politics. An incredible resource.
[00:54:12] Steve: And it can really be enlightening for kids if you take that time to give them some historical perspective. Because I think that having story arcs of where we’ve come to and where we are really creates an educational ballast. I really like that.
[00:54:25] Cameron: I agree. Alright, I know you’ve got a hard out in a few minutes. I just wanted to finish on a bit of a forecast about transhumanism, Steve. Reading these medical stories and thinking about it, life extension, um, you know, what happens in the next 20 years when we have medicines that can solve or cure almost any disease: aging, cancer, all the big killers, Alzheimer’s, uh, et cetera, et cetera.
[00:54:56] Cameron: Um, and people can live, uh, to a hundred, a hundred and twenty-five, 150, healthy, productive lives. When we have, you know, as we talked about in our last episode, potentially an economy that’s run by AI and robotics and nanofabs, where you don’t need to work. There’s this whole idea of, you know, reinventing what it means
[00:55:26] Cameron: to be human. Human existence has obviously been, what’s the old saying, nasty, brutish and short. Um, what happens when we don’t have to worry about cancer? We don’t have to worry about Alzheimer’s. We’re probably not going to stop people from dying from an embolism, or getting hit by a truck, or committing suicide, or getting shot, or those sorts of things anytime soon. But those big diseases, the things that really
[00:56:02] Cameron: take most of our friends and family in the West. You know, it’s different if you’re living in Gaza at the moment and you’re getting bombed by Israel. But for those of us in the West that worry about friends and family and ourselves dying of cancer, et cetera, et cetera, when that goes away and we don’t have to worry about those things anymore, what does it mean to be human in that kind of a future?
[00:56:24] Cameron: Steve, how do we reinvent ourselves?
[00:56:27] Steve: I mean, well, this is the first one. It’s like you can renovate a car, where you can replace every part or,
[00:56:35] Steve: you know, buff every part so that it
[00:56:38] Steve: doesn’t get ill. I mean, are you still the same person? Does transhumanism mean that we’re just extending our lives?
[00:56:46] Steve: Or are we sort of transitioning ourselves into another,
[00:56:50] Steve: not portal, sort of another, what,
[00:56:52] Steve: what’s the word for it?
[00:56:52] Steve: Like another body of sorts? I just wonder whether or not we’re going to stay in the same form or whether we kind of become cloud beings to some extent. You know, that’s what I wonder about. But if it is that everything can be made and done for us, I think it’s gonna cause a lot of psychological problems.
[00:57:14] Steve: I really do. I just reckon that there was something about the simplicity of life where, you know, you’re so busy doing and keeping the lights on that it just has this purity to it. Like, that pain of just being able to keep on keeping on with life and, uh, being able to feed your family and get it done.
[00:57:34] Steve: You haven’t got time to become neurotic. And I feel like society is so neurotic because so many things are done for us and our heads are just filled with so much stuff. I don’t know. Do you want to live forever? I mean, do people wanna live forever? I don’t know.
[00:57:50] Cameron: I mean, I don’t know about forever, but.
[00:57:52] Steve: I don’t want to die. I mean, of course
[00:57:54] Cameron: that’s a mighty long time. But I’m here to tell you, there’s something
[00:57:57] Cameron: else: the afterworld, a world of never-ending happiness. You can always see the sun, day or night. So when you call up that shrink in Beverly Hills, you know the one, Dr. Everything’ll Be
[00:58:04] Cameron: Alright, instead of asking him how much of your time is left, ask him how much of your
[00:58:05] Cameron: mind, baby. Because in this
[00:58:11] Cameron: life things are much harder than the afterworld. This life, you’re on your own. Bum-bum, bum-bum. And if the elevator tries to bring you down, go
[00:58:21] Cameron: crazy. Punch a higher floor.
[00:58:25] Steve: Can you send me that song? What is it?
[00:58:29] Cameron: It’s the first track off of Purple Rain, man. Let’s go crazy. The opening
[00:58:33] Cameron: monologue.
[00:58:34] Steve: There you go. I love
[00:58:35] Cameron: 1984, baby. You had to be
[00:58:36] Steve: Look, I think the medical stuff and transhumanism and life extension is gonna be the next really big thing for sure, especially as robotics is gonna take over a lot of the production, you know, nanofabs, all of that. It’s funny, ’cause my wife works
[00:58:51] Steve: for, um, a wellness company, and they’re talking about not just life extension, but they’re talking about health span, you know, not just lifespan.
[00:58:58] Steve: And I
[00:58:59] Steve: think that’s the key thing. If you just live a lot longer and you’re kind of infirm for the next
[00:59:04] Steve: 30 years, I don’t
[00:59:05] Steve: think anyone wants that. But if your health span is increasing, you can still do karate and go surfing. Like, seriously, no one is gonna say no to
[00:59:14] Cameron: If you’ve still got your marbles and you’ve still got your health and your mobility, then what do you do with the next hundred years of your life? What is your
[00:59:23] Steve: Maslow’s hierarchy: self-actualization. You do the things that turn you on, you do the things that you’re interested in. You know, if I was healthy, I would just want to get as good at languages and surfing as I could, and do all of those things that I love, and grow vegetables and cook, just all of those really enjoyable things, all the time, you know?
[00:59:43] Steve: And
[00:59:43] Steve: you might have time to.
[00:59:44] Cameron: I was saying to Tony on our show the other day that I’ve just gone back into Gogol and Dostoevsky, and there’s all these classic books that I’m still
[00:59:52] Cameron: working my way through in my fifties, that I haven’t got to yet. There’s probably hundreds of books that I really wanna read that I
[00:59:58] Cameron: haven’t had time to get to yet.
[01:00:00] Cameron: Films to see, music to listen to, you know? All right. This has, uh, been Futuristic
[01:00:07] Cameron: episode 21. Thank you Steve Sammartino. Follow us on
[01:00:10] Cameron: Twitter, TikTok, um, MySpace, GeoCities, and, uh, Friendster.
[01:00:20] Steve: Yep. And we’ll see, we’ll be logging into The WELL tonight at 11:30.
[01:00:23] Cameron: Wow.
[01:00:25] Steve: The WELL.
[01:00:26] Cameron: If you dunno where to find us, look us up on Archie and Veronica and, uh,
[01:00:32] Cameron: what were the other search engines back in the day?
[01:00:34] Cameron: Um,
[01:00:36] Steve: Anzwers. With a Z.
[01:00:37] Cameron: Anzwers, that was the Australian
[01:00:39] Steve: Anzwers. That was the Australian one. That was the, uh, Aussie mail
[01:00:43] Cameron: I worked at OzEmail
[01:00:44] Cameron: when we had Anzwers. Yeah,
[01:00:45] Cameron: GeoCities. Yeah.
[01:00:47] Steve: DOCities and Booksmart.
[01:00:49] Cameron: Yeah. Yeah. Oh. Um, what was the one named after? Uh,
[01:00:53] Cameron: the butler. Um, Ask Jeeves. Look
[01:00:57] Cameron: us
[01:00:57] Steve: I feel like ChatGPT missed a great opportunity to call it Ask Jeeves. Finally, there would’ve been a Jeeves.
[01:01:02] Cameron: Sam keeps talking about how it’s the worst brand ever. Like, really, they didn’t plan for it to be a product and now they’re stuck with it.
[01:01:09] Cameron: It’s like a really bad brand
[01:01:12] Steve: That’s good, right? I mean, a brand, here, let me just remind everyone: a brand is a cognitive shortcut from which to make informed decisions.
[01:01:19] Cameron: Nice. With that bit
[01:01:21] Steve: Copyright Steve Sammartino, 2024.
[01:01:24] Cameron: Talk to you next time buddy.
[01:01:26] Steve: See you mate.