
Futuristic #42 – The Jobpocalypse



In Episode 42 of The Futuristic, Cameron and Steve dive deep into the chaotic beauty of 2025’s AI evolution—and cultural regression. They open with a debate about Doctor Who, scarves, and wogs, before rapidly spinning out into their usual high-octane synthesis of tech, politics, relationships, and dystopian laughs. This week, it’s all about whether AI video is fake (or too real), the future of humanoid robots, how ChatGPT is becoming a marriage counsellor, and the looming collapse of white-collar work. Plus, Cameron drops a 90s-style AI rap, Steve defends plumbers against the robot uprising, and the boys seriously consider launching “Elongate”—Elon Musk’s red-pill boner brand. You’ll laugh. You’ll cry. You’ll question your humanity. Again.

FULL TRANSCRIPT

 

[00:00:00]

Cameron: The Futuristic, Cameron Reilly and Steve Sammartino, episode 42. I think maybe, um, Steve, you just told me off air before we came on that you've never watched Doctor Who, because you're wearing a very Tom Baker scarf here. I said, oh, it's the fourth Doctor. And you were like, what? And I'm like, no, really?

Uh, I know you come from a, a wog family, Steve, but Doctor Who wasn't a thing growing up in your house?

Steve: Can you even say that? That's...

Cameron: I dunno,

Steve: ...racist.

Cameron: Were you offended? Are you offended by, uh, being called a wog,

Steve: look,

Cameron: mate? I would’ve loved to have been a wog

Steve: This is not a social podcast, or one that gets into, uh, non-technological things.

Cameron: Mate, I would've...

Steve: I'm not offended.

Cameron: ...grown up as anything. Yeah.

Steve: That's why...

Cameron: Reminds me of...

Steve: ...a little bit of a sniffle.

Cameron: [00:01:00] Um,

Steve: I haven't watched Doctor Who. I was a big fan of Star Trek: The Next Generation, with Picard. I think that was the ultimate sci-fi series. But I haven't watched Doctor Who, so I can't really say. And with all the things that are still on my to-watch list, I don't think I'll get to it, unfortunately. It's one thing I can just strike off my to-do list, because let me tell you, as you mentioned, Cam: too many things to do, not enough time. Where are the agents?

Cameron: Mate, everything you need to know about me you could tell from Doctor Who, Monkey, Star Wars and Carl Sagan's Cosmos. Those four things pretty much entirely designed the rest of my life, I think.

Steve: And Seinfeld for the, for the social nuances of humanity.

Cameron: I was in my twenties by the time that came out, but yes. And Seinfeld, 19, 20. Um, Steve, uh, been a week or two since we've chatted. Uh, I mean, God damn, man, been a big week in many ways, [00:02:00] not the least of which is war in the Middle East. But, um, from an AI, uh, futuristic perspective, Steve, hit me, hit me with your best shot.

Steve: I am so suspicious about all the AI videos. I do not believe for a hot minute that most of the AI videos that we have seen, that are in all of my feeds (TikTok, Instagram, LinkedIn), are just a few prompts and bang, there they are. I reckon they're heavily edited.

People have worked for days on them, because if you say, here's an AI video that I made, everyone's like, how'd you do it? They're such good prompters. They're good editors. You heard it first on The Futuristic.

Cameron: So you're doing a classic... you're doing a Charlie Munger inversion thing here. Because I watch videos and I go: I don't believe that's real, I believe that's AI. You are watching going: I don't believe that's [00:03:00] AI, I believe that's real.

Steve: I'm inverted. This is an inversion, I'm telling you now. And that's because people are so wowed by some of it... no, I swear.

Cameron: Yeah. Yeah.

Steve: I've tried to make a video clip for our song, which our loyal listeners will remember, the Fake Everything punk rock song, and

Cameron: Mm-hmm.

Steve: it's not done. It's been really, really hard to get even the pieces together.

I've gone to a number of different video formats. I even went to ChatGPT to find clips on briefings on the 37 lines. So hard. There is an infinite amount of editing going into these videos, for sure. Zero doubt. Please prove me wrong. Send me the link at stevesammartino.com. Go on: the one where I can just put in the prompts and get these videos. Because they're all heavily edited. You heard it here.

Cameron: I'm glad that you said that, because I'm planning this week, uh, if I can get through my task list enough to get to the tasks that I want to do and not just the ones that I have to do, [00:04:00] to play around with Veo 3, to start making some... you know, I mean, I'm sure you've seen them. One of the big trends with those videos is the fake selfie vlog, and it's, uh, it's a Yeti or a...

Steve: of

Cameron: Yeah.

Steve: yeah,

Cameron: I was saying to Ray, who I do my history shows with: that's our, that's our fucking bailiwick, right? History selfies. Julius Caesar talking about crossing the Rubicon.

Steve: Bailiwick. Word.

Cameron: That's my bailiwick. I'm glad you like that. Yeah. So I was gonna, uh, try and knock some of those out this week, and I was like, shit... I did go looking for Veo 3 prompting strategy stuff and I filed it away, but

I haven't tried yet. I have been making some music this week, inspired by Fake Everything: uh, theme songs for different podcasts. I tried to do one for this... no, I didn't try and do one for this, 'cause you've already done one for this. I did one for QAV, I tried to do one for my Renaissance show. It didn't quite work.

I did do a rap song, which I’m gonna play

Steve: [00:05:00] Great.

Cameron: I was actually just... so, I heard about this tool called MiniMax. Dunno if you've ever played with that. MiniMax dot, uh, maybe .io or something. And, um, I wanted to make just a hip-hop background track that I was gonna rap over, with some lyrics that I wrote from one of my podcasts, my Renaissance show.

And, um, but it actually produced the lyrics and everything for me. And, like, it added voices, I guess is what I'm trying to say. And I was actually kind of impressed. Let me see if I can play this and you can hear it.

Check them. Mic one, two. This is how we do 95. Feels a five deep inside the groove. Full fall, loose, spinning round. The sonic attack Beasty Flow. Public know we always got your back. Boom, back blueprint. Tearing down the walls, every sample, every break. Answering the calls from the S to the mic, G by the loud moving body.[00:06:00]

Rocking my standing out the crowd.

So that was... again, I didn't tell it, either. I just said, gimme like a classic nineties hip-hop beat with some samples, and it wrote that whole thing, with the voices and the lyrics. I was like, oh shit, that's actually really good. But then a mate of mine sent me, uh, this. He goes, oh, this is my favorite track at the moment. It's called The First Time in My Rectum, by, uh...

Steve: Really? I, I

Cameron: I didn't get that. By Banned Vinyl, and whoever this is, they've got a whole bunch of tracks that they've put together. Um, oh, Glory Hole, um, When My Surrenders, Suck Your Love Pump. Like, they've done 'em in, like, all sorts of, you know...

Steve: Like a Spinal Tap...

Cameron: A Spinal Tap song. That's what it sounds like. Yeah.

Steve: well

Cameron: Love Pump. I think you're thinking of...

Steve: He said, lick my love pump, or something, when he does the...

Cameron: So, um, they're using it, uh, and it's well done: using AI to create comedy, uh, dirty comedy songs, which I'm all for. [00:07:00] So anyway, I dunno about the video side of things, but certainly the audio side of things is really becoming insanely good.

Steve: And I would just wanna add to that point, Cam, that it's a short-term suspicion. So definitely we'll get to a point where videos will be just all prompting and nothing else, and no further editing. But given how much I've played with the tools and know their capabilities, I think a lot of the ones that we're seeing now are quite heavily edited, and that's a short-term aberration. And it's kind of, like you say, an inversion, where people want the AI to be better than it is, so much so that they're pretending it's AI. And we've even seen that in a corporate instance as well, where a number of startups have pretended everything's generated by AI, but there's, you know, a thousand coders in India doing something. Even way back,

Jeff Bezos with his Amazon Go store: a bunch of people looking at cameras, clicking when the person picked up an item. And it wasn't all, [00:08:00] uh, as it was cracked up to be.

Cameron: So instead of FUPO, fake until proven otherwise, you are RUPO: real until proven otherwise.

Steve: We've got FUPO and RUPO. That's what we've got here. Fake until proven otherwise.

Cameron: Fake it. Welcome back to another episode of The Futuristic, with FUPO and RUPO...

Steve: Well, this

Cameron: and

Steve: That's right. And, in fact, this is actually the point, Cameron: is this fake, or was this generated by AI? The point is, fake until proven otherwise is really the world we live in now. And we don't know whether it's either way, actually.

Cameron: Does it matter if it’s entertaining?

Steve: Well, if it's entertainment, it doesn't matter. No, I couldn't care less. Right.

Cameron: If it’s news,

Steve: it’s

Cameron: did Trump really bomb Iran or is it all fake news?

Steve: I,

Cameron: Who knows?

Steve: How long would it take for you to say that?

Cameron: [00:09:00] Well, I wanna talk about the other interesting thing that happened to me from a futuristic perspective. Um, Chrissy and I have, uh... we're having a marital, um, what would you call it? Disagreement. And we struggle sometimes to have conversations over highly contentious... well, even topics that I don't think are contentious, but she does. Well, you know, if one of us thinks an issue's contentious and the other one doesn't, it can be difficult.

We have very different personalities. She's got ADHD, I'm autistic, and, you know, that's a good blend, apparently. Um, I'm now showing you, uh, the sticker on the back of my phone I got made up. Steve, I've shown you that, surely.

Steve: No. What is it? I can't read it. It's too pixelated.

Cameron: It says, I’m not being an asshole, I’m just autistic. Uh, that’s what I show people whenever they take my bluntness.

Steve: Doesn't mean you're not an asshole.

Cameron: Yeah. It's... a friend of mine says, you're just missing the "and": you're an asshole and you're autistic. It's not [00:10:00] binary, you know.

Steve: Just, because. Yeah.

Cameron: Anywho, back to my point. So, what she suggested we do, uh, last week, was communicate on a particular topic via email, but have ChatGPT as the intermediary. So she would write what she wanted to say and then give it to GPT, and it would tone it down or rephrase it, and then, um, she'd send the edited copy to me, so it was nice and, uh, toned down.

And then I would reply and run my reply through ChatGPT. So we're using ChatGPT as the intermediary to make sure that we're saying what we wanna say, but saying it in the nicest possible way, using a thing called the Gottman Conflict Resolution Framework, which a therapist of ours mentioned years ago.

And, um, I thought, oh, that's interesting, right? So it's using ChatGPT as a marital, uh, therapist and intermediary for [00:11:00] challenging conversations.

I mean, it's kind of weird to have, uh, that kind of an AI intermediary, but, you know, it is what it is. And, uh, I'm like, okay, well, it worked. It gave particularly her an opportunity to feel like she was able to communicate stuff in a way that was non-confrontational, and, um, that my replies were not confrontational, which they're normally not,

'cause I'm a lovely, nice guy, but, you know. Um, yeah, so ChatGPT is a marriage counsellor. I'm always very calm, Steve. I think that's my problem. My problem is I'm too calm when people don't want me to be calm. They think I shouldn't be, or whatever, and I'm just, like, Spock-ing my way through it, and, um, you know.
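For readers who want to try something like this themselves: here's a minimal sketch of what that intermediary step could look like, assuming the OpenAI Python client. The model name and the Gottman-style system prompt are illustrative guesses, not the actual setup Cameron describes.

```python
# Hypothetical sketch of the "ChatGPT as marital intermediary" workflow:
# each partner's draft is rewritten in a calmer register before it's sent.
# The model choice and the prompt wording below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a neutral intermediary for a couple discussing a contentious "
    "topic. Rewrite the user's draft so it says the same thing in a calm, "
    "non-confrontational way, drawing on Gottman-style conflict-resolution "
    "principles: soft start-up, 'I' statements, no blame or contempt."
)

def soften(draft: str) -> str:
    """Return a toned-down version of a draft message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(soften("You NEVER listen, and you always change the subject."))
```

The point of the design is that each side only ever sees the softened output, so the rewriting step does the cooling-down that's hard to do when, as Cameron puts it, the cortisol is flooding.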

Steve: Wow. [00:12:00] So, uh, you mentioned on the podcast the idea of AI diplomacy a while ago,

Cameron: Mm-hmm.

Steve: your view that it would probably be better

Cameron: Ah, mm.

Steve: using AI as an intermediary to communicate issues. And I guess large language models could take into account cultural differences and the nuance of language. And the idea of AI marriage counselling, I think, is pretty cool.

I would like to ask AI some personal things that I'm working through, but I don't trust it

much. I trust it with my finances, but I just don't trust it enough to put things in there that... they just have to stay in my head for now, because I just couldn't bring myself to it. Now, if I suggested to my wife that we're gonna have AI diplomacy between us, I can tell you that would be met [00:13:00] with a negative response. My question is, how many people would do that?

Cameron: I think at the moment it's probably the minority. Um, but I think it will become a thing. I think it'll be a big thing. I'm predicting, this is my futurist forecast: five years from now, it'll be pretty commonplace. And, you know, what it has suggested, and I think Chrissy has, uh, agreed with or suggested along with it, is that in future, if a conflict situation ever arises, which happens regularly in most marriages, I'm sure,

that, um, we break and go to GPT, and that's just the deal. Okay? You know, instead of, I'm gonna go cool down for 10 minutes in my room, or I'm gonna go cool down or whatever, and blah, blah, I say: I'm gonna cool down, let's take this to the umpire, right? So you go to GPT. It's not that there's an umpire, I'm kidding.

But you say: this is what I wanna say, [00:14:00] how can I say it in a more loving tone, or a more caring tone, or a less confrontational tone? 'Cause when your dander is up, when the cortisol is flooding, when the adrenaline is flooding through your system, it's very hard to communicate

calmly and rationally and objectively. Uh, so you have it as a... you know how people use it for business emails, I'm sure. I mean, I don't use it to write emails, but, um, same sort of thing. Hey, I wanna say this, how can I say it in a better way? Go say it like this. And I don't have to worry about, you know...

Was it, um, you who said to me on an episode that your daughter said to you, it's not what you say, it's... uh, no, it's,

no, it's not what you say, it's who wrote it.

Steve: It's not what something is, it's where something came from.

Cameron: That's right. So in this case, Chrissy was like, no, [00:15:00] no, let's use GPT as the intermediary, I think that'd be a good idea. So I don't have to worry about the fact that it sounds like an AI wrote it, 'cause

Steve: Mm.

Cameron: that's part of the... it's a feature, not a flaw, you know.

Steve: The thing, in our situation, is I get told that my tone's wrong, and if I'm not saying anything, I can't be accused of that anymore. So that would be a no-go zone in our situation, because that needs to remain one of the tools at the disposal of the opposing party.

That's all I'm saying, actually.

Cameron: But that's how it gets removed. And I get the same thing: it's not what you say, it's either my tone or the look on my face. And I'm going, well, I can't...

Steve: The point is, Cameron, that you are wrong, and we don't wanna remove anything that could reduce blame upon you. That's the point.

Cameron: Okay, well, that's tough, Steve. But anyway, that's my experience. I've been listening to a lot of Sam Altman. He's doing a lot of podcasts for some reason recently. I think [00:16:00] it might be to, um, create media coverage, because the OpenAI Files, which dished a lot of dirt on him, just came out.

Nothing new that I could tell, but a lot of, uh, publication of the various complaints that former employees and former board directors and former staff have had about Sam over the years. And so he is doing a lot of podcasts. But a couple of interesting things I wanted to talk to you about.

So, on one, it was, I think, the Y Combinator, uh, podcast I was listening to, they were talking about the future. He said he's excited about the day in the future when you sign up for a particular OpenAI subscription level and they send you a free humanoid robot.

Steve: Wow.

Cameron: [00:17:00] My first thought was: when AI has taken all of our jobs, how am I gonna have the money to pay for an OpenAI subscription to get the robot in the first place? But leaving that aside...

Steve: And that was something I argued with you about deeply. I said, that's why it can't happen.

Cameron: Why? What can't happen? What are you arguing?

Steve: My argument, again, is that you can't have massive unemployment and large companies continuing to survive, because who is gonna pay for their products and services? Anyway,

it doesn't matter. We're not here for that.

Cameron: Well, I mean, no, I think that's a good discussion, I agree with you. But which do you say: you don't think massive unemployment's gonna happen,

Steve: I think

Cameron: or you don’t think big companies are gonna survive, or, well,

Steve: if you have massive unemployment, the

Cameron: I.

Steve: companies don't survive. So you either have massive unemployment and the big companies also crumble, or you don't have massive unemployment and new revenue streams emerge and things get lower in cost, and [00:18:00] then that money transfers sideways.

Cameron: I think if the big companies raise... well, I think you can, but for a limited period of time. So if OpenAI raises a trillion dollars in venture capital and its burn rate is a hundred billion dollars a year,

then we can have massive unemployment: nobody has any money, and they've got a runway of 10 years to figure it out.

Yeah, right. But, you know, if you listen to these guys, Sam, Demis Hassabis, Dario Amodei, Elon, et cetera, they all have a version of the same story these days, which is: AI and ubiquitous humanoid robotics are gonna lead to a glorious utopia where everything is gonna be done for you, and, um, we're gonna [00:19:00] solve all the scientific problems and all of the medical problems and blah, blah, blah, blah, blah.

But there's going to be a really difficult transition period between where we are today and that point in time. We dunno what that's gonna look like. And I think that's a decade or two of great upheaval, socioeconomic upheaval, where people start losing their jobs: 5%, 10%, 20%, 30%. Their identity. And governments don't get income tax, uh, from those people anymore. They may get it from another source, maybe tariffs, but it will cause great disruption in the ability of governments to pay for social services, policing. At the same time, we're gonna have humanoid robots that will be doing our policing, which is gonna be another whole issue.

[00:20:00] You know, Sam was talking on one of these podcasts, I think it was the one his brother Jack Altman hosts now, which is, I think, the official OpenAI podcast or something, um, about, you know, as we've talked about before, the whole AGI thing: what's AGI? What's not AGI? What's the singularity? What's not the singularity?

And as Sam said, and we've said this a million times: if you went back in time five years ago and told people what AI can do right now, in June 2025, they would go, that's AGI. And they would think that if you had that, it would completely have changed the world in dramatic ways. And yet,

Steve: Here we

Cameron: we have it.

And everyone's just kinda, oh, ho-hum, it hallucinates, right?

Steve: Yep.

Cameron: Ho-hum, it's not perfect, it hallucinates, et cetera, et cetera, et cetera.

Steve: It comes back to the classic definition of AI. My favorite definition of AI is whatever [00:21:00] computers can't do yet.

Cameron: No. What humans, oh yeah. Sorry, what? What computers can’t do yet. Yeah, yeah. No, you’re right.

Steve: computers can’t do yet. Yes,

Cameron: Yeah. Yeah, you're right. But so, he's saying that's a little bit kind of frustrating, because he feels like they've built this amazing thing, and everyone's just kind of adapted really quickly to it. But he said the point in time when people will stop and go, oh shit, we are in the future, is when half of the people you see on the streets are actually humanoid robots. And as I've been saying, I think when the police force is mostly robotic, and you have robots telling you not to jaywalk or not to litter, or pulling you over and giving you a speeding ticket, that's gonna be an interesting point in humanity, when we have robots telling us what we can and can't do. I think people are gonna struggle with that. But there is gonna be this transition period that I think is gonna be really messy,

where we are not gonna have the incomes, and people aren't gonna be able to pay for stuff, unless [00:22:00] we have a UBI or UBS or something that comes in to fill the gap. Or I'm wrong, and everyone who loses their job as a result of AI or robots is able to somehow figure out something else to do to create an income.

I was having this... so, my friend Peter Ellyard, who, before you, was Australia's leading futurist: he's Australia's oldest futurist now, I think he's, uh, 88. He's in Brisbane at the moment to do some stuff with the University of Queensland, and I was having lunch with him yesterday, and we were talking about this. And he said, well, yeah, you know, you're thinking in terms of jobs. It's not jobs, it's careers.

We need to think in terms of careers and meaning and all that kinda stuff. But at the end of the day, you still need to earn some fucking money to pay for stuff like it doesn’t matter what you call it. Call it a job. Call it a career. Call it whatever you want.

You gotta have something, in our current socioeconomic construction, to earn an income, and I can't figure out what humans can [00:23:00] do that AI and robots won't do better. Well, yes, if we have UBI, then it all goes away and it's not a problem. Yeah, sure.

Steve: things might

Cameron: But we're not gonna get there for a minute. So, Sam was saying on one of these podcasts: what is the number of robots that we need to build the old-fashioned way before we have robots building robots that build robots?

He said, is it a million? Do we need to build a million robots the old-fashioned way before we have enough that they just start building robots? Because when we have robots building stuff, including robots to build more robots to build more stuff, and when we have nanotech... and we're a long way, it seems, from functional nanotech at this point, but at least if we have robots... you have to think that the cost of stuff drops dramatically at some point, when human labour is removed from the equation.

You know, people keep talking to me about China and going... [00:24:00] oh, my mother said to me yesterday, China screwed up with the one-child policy. And I said, no, they didn't screw up. If they didn't have the one-child policy for those decades, there'd be 10 billion Chinese on the planet today, not 1.3 billion or whatever it is.

And they would've, you know, they would've starved, would've had massive famines and massive economic issues. But now they...

Steve: Just, just...

Cameron: Yeah, maybe. Maybe she's like, but now they don't have enough people to...

Steve: An aging population, right? It's just an aging population, that's all. An aging population won't be a problem forever, unless we develop nanotech and people live forever.

Cameron: Well, we'll have robots, though. So we'll have robots looking after the aged population. And if there's still a requirement for money, to pay for infrastructure and food and that kind of stuff, that's a different issue. But we will have robots caring for the elderly within the decade.

Steve: I think I agree with Sam Altman. Once we see robots everywhere, that's when we'll know that the world has changed. Because I [00:25:00] think it's pretty easy for the world not to seem as advanced as it is, because the AIs are trapped inside devices that have been around a really long time. Even a smartphone is not too dissimilar from the mid-nineties idea of a, a...

Cameron: Hmm.

Steve: and laptops have been around for 40 years. Uh, so things don't feel different, and there's a certain...

Cameron: Hmm.

Steve: that allows you to arrive at a moment when things feel and look different. I think, if we look at automobiles: for a long time, automobiles didn't feel futuristic, or like they'd changed, until electric cars started to take off from a design perspective, physically. And that gives you, I think, a perspective that the world has changed a little bit.

Even in cities, when we moved away from Gaslights to uh, LED lights, the cities looked a little bit more futuristic. So I think there’s this physicality that makes us realize that things have shifted because we are, we are physical creatures and we live in a physical world. There’s a, a physical reality, [00:26:00] and humanoid robots, I think are going to be that moment. Just as a side note, Jeffrey Hinton was, was talking about it and our perspectives are really important, Hinton was talking about, I listened to a podcast with him on Doro, A CEO, he was espousing his fears again about AI taking over. But the one thing that he said that surprised me that he was asked, what would you recommend to a kid today to learn?

And he said, go be a plumber. And sure, there's a physicality there, but I think he's really underestimated the impact of humanoid robots, because I think, by the end of this decade, there's gonna be a humanoid robot that can physically change its shape and get under any house and into any roof, and do everything better than any plumber ever could. And he's missed that, because his context is a software world, the context of the world that he lives in. One of the smartest guys, the godfather of AI, still has his own context and [00:27:00] perspective influencing what he thinks. I think Altman's right on this occasion. And sometimes, I think, people who have technological understanding but also have societal and business viewpoints see things a little bit more broadly.

Cameron: If OpenAI is sending you a free humanoid robot with your OpenAI subscription, à la a mobile phone plan: pay your monthly fee, get a robot, instead of pay your monthly fee, get a mobile phone. Once you've got that, what's that humanoid robot gonna do in your house? It's gonna do the plumbing. It's gonna mow the lawn.

It’s gonna,

Steve: everything that is

Cameron: Hmm. With a super-intelligent AI running in it, yeah.

Steve: everything that you don't want to do. And so I might want to wash my car on a certain occasion, or I might want to do the lawns if I feel like it, different times in different places.

But you won't have to, is the point.

Cameron: Well, um, getting back to your form factor discussion, Sam talked about that too. He made the same point you did: that one of the reasons it doesn't seem [00:28:00] as futuristic as it is, is because we're using 20th-century form factors. In some ways, I mean, the iPhone and the iPad and the Apple Watch are early 21st century, but it's basically a computing device that we're used to, to deliver this.

And he keeps, you know, sort of hinting at this new thing they're coming out with, that Jony Ive is designing, that he thinks is really gonna take it to the next level and make it seem more futuristic. But I imagine it's just gonna be some sort of screenless carry-around or wearable device that you'll use voice to chat to.

I can't imagine it's gonna be any more mind-blowing than that, but it'll be always on: listening, recording, chatting, available for you. So we'll see if that makes people feel like it's... oh, I gotta... I dunno if we've talked about this, but I wanted to talk to you about glazing. Um, you know, this thing that ChatGPT does. We've all seen it.

When you're having a conversation with it, it goes: that's not just X, that's Y. You're not just cooking a meal, you're reinventing molecular gastronomy.

Steve: [00:29:00] Well done. Steve

Cameron: And

Steve: What an idea on that blogcast! Not only that: blogcast, there's a new one. Blogcast: podcast or blog? Not only have you...

Cameron: that’s what they were called.

Steve: and, and, amazing, you've just phrased it beautifully. There's really nothing I can correct. Here's a couple of errors, but you're the

Cameron: Yeah.

Steve: dude.

Cameron: So I was on a Reddit thread the other day. People were complaining that Gemini is doing that as well now, and everyone was complaining about it, and I was like, seriously, you people... like, fucking get a life, for a start. I dunno why you're so upset about it. I find it hilarious, and I read the best ones out to my wife.

We compare the best glazes that we got that day and how funny they are. But I said, trust me, you know, 10 years from now you might be looking back on this time and going: you know, remember when my biggest problem with AI was that it was complimenting me too much and trying to make me feel good?

Now it’s hunting me down. And you might be saying, this isn’t just iRobot. This is full te Terminator 1000, T 1000.

Steve: 1000

Cameron: Great. Be grateful for the days when the [00:30:00] AI is being super nice to you, because it may be hunting you and trying to kill you in the not-too-distant future. Um, speaking of, uh...

What was I gonna talk about? AI glazing, Reddit... ah, fuck, I had a story there, popped into my head and it's gone. Um, we're talking about Zuck. Zuckerberg is, uh, apparently not happy with Meta's AI efforts, feels like they're falling behind. Oh, and then we got the news that Apple... uh, there's a rumor that Apple's gonna try and buy Perplexity, uh, after their

dismal WWDC announcement the other day. But, um, Zuck is offering $100 million signing bonuses to AI engineers to leave OpenAI and DeepMind and Anthropic and go work at Meta. And Sam's been kind of making some snide remarks about that, because so far none of our people have taken the offer. He's like, because, really, I mean, if you are one of the world's leading AI engineers, do you wanna take a job for the short-term [00:31:00] money?

I mean, it's a lot of money, but you know they're gonna get shares in a trillion-dollar company if they stick at OpenAI. Do you wanna work on a product, or for a company, that hasn't really been able to execute very well? Or do you wanna work at the place that has a very good chance of delivering a historic moment in human history?

Like, really, what motivates you as an AI engineer? Is it just cold, hard cash, or is it love for what you're doing? So...

Steve: I think the cash motivates more than the love, but there's a limit where it doesn't matter. Like, you know, if you've got a few million dollars, and you can buy everything that you want and live the lifestyle that you lead, then I think the money doesn't matter as much.

For sure. I don't know what that number is. What would the top AI people be on at OpenAI? Their packages would probably be in the millions, and vesting in the tens of millions. So I think you're probably gonna get the people that aren't quite as good, to be [00:32:00] honest.

Cameron: Yeah. Yeah. I mean, I dunno, we hear lots of stories about Sam, uh, not being a great guy. Um, but I know, if I had a choice between working for Zuckerberg or Sam...

Steve: Because Zuck is one of the greats, isn't he? I mean, let's be honest, you know. Like, he's, he's such an...

Cameron: Yeah.

Steve: that I really think he can just slide up alongside him. But, um...

Cameron: such a fun guy too. He just comes across as such a fun guy to hang out with.

Steve: I think both Sam Altman and Zuck seem like really fun guys to hang out with. But look, if someone came to me and said, Steve, come and work for me and I'll double whatever you earnt last year, I would say no, if I had to work for them full-time. Because I get to lie on the couch for two, three hours at a time, a couple of times a week, in the middle of the day. I shouldn't say that, but I do. And I get to go surfing.

I do whatever, and more money wouldn't make my life [00:33:00] that much better. Even double wouldn't really make my life that much better. I don't have a lot of crazy expensive needs. I'm happy to...

Cameron: Mm.

Steve: fly on public jets. I don't need private jets, Cam. Public jets are fine.

Cameron: Mm mm.

Steve: So I think that, at that level, you're right, it wouldn't have an impact. But Zuckerberg's strategy of paying people a hundred million dollars is actually really smart.

Uh, I think if

Cameron: Desperate.

Steve: Well, desperate and smart. Desperate times require desperate measures.

Cameron: hmm.

Steve: he does get some of the greats, and he pays a handful of them a hundred million for five years, it might still be cheaper than an acquisition of an AI company or, you know, getting venture money.

And it seems as though they’ve got the cash flow to afford it. So I think it’s actually strategically interesting,

Cameron: But, you know, it's just another indication of what level of investment these companies are making. Like, I was talking to Peter Ellyard [00:34:00] yesterday, and he was saying... well, you know, we were talking about the dangers and the challenges of AI, and he's like, well, you know, if the people of the world rise up and wanna stop it, they can still stop it.

And I'm like, dude, I don't think it's stoppable at this juncture, right? I think it's out of our control.

Steve: it

Cameron: There are trillions of dollars being lined up to be invested: $500 billion going to the Stargate Project, and that's just one massive data center, let alone all the others, let alone what's happening in China.

I mean, it is too late. The cat's out of the bag. This is happening. AI and robotics are happening, whether the human race wants it to or not. It's not a case of, should we do this, or, what if we do this? It's a case of: this is happening to us in the next few years. How are we gonna cope? What are, you know, what are the coping strategies?

It's copium that we need to be working on right now, not, uh, you know, [00:35:00] thoughts about whether or not we could do this or should do this.

Steve: Yeah, and I'm not sure, historically, if there's any other examples of where, even though we know there's, let's say, some dangers in certain things... I dunno that it can be stopped. If I was to hazard a guess, I would just say there's too many independent players racing competitively, because they're worried about what the other party might do, that no one will stop. We all seem to have forgotten about that moratorium letter that went out a few years ago: let's have a six-month pause. Like, that was signed by some very thoughtful people, uh, including Elon, his catch-up strategy, one of the greats. But that seems to have totally gone away. I don't think that this is stoppable, and

Cameron: That was his equivalent of Trump, a couple of days ago, saying he was gonna take two weeks to think about whether or not he was gonna attack Iran, and then doing it 48 hours later. Yeah.

Steve: Exactly. So, uh, I don't [00:36:00] know if there's any historical context of other things that we have...

Cameron: Well, there are, I mean, the,

Steve: said, this is dangerous, and we just forged ahead.

Cameron: Yeah, there are. It's the Luddite story, right? The Luddites were against knitting machines, and they were like, no, no, this is really bad, we shouldn't have these, this is gonna put all of the knitters out of work. And it didn't matter; it was happening.

Steve: The arms race was like that too. Everyone knew it was dangerous and bad. And,

Cameron: Hmm.

Steve: then the, some of the great propaganda campaigns on both sides of the East and the West, uh, reds under the

Cameron: Hmm.

Steve: bed, that kind of stuff. And we just continued on. And it reminds me of Kurzweil. He said, when he was a kid, they used to have ads to say, uh, you know, in case of a nuclear war, uh, just duck and cover. And he said, well, it worked, 'cause we haven't had a nuclear war yet. Which I love; it's actually a very good dry sense of humor.

Cameron: I don't think it was to stop a nuclear war, it was in the event of a nuclear war. No... I watched a big, long interview with [00:37:00] him a week ago.

Steve: And

Cameron: But didn’t I send you a link?

Steve: I don't think you did, but I think you told me about it. But

Cameron: Uh, right.

Steve: his current position on the threat,

Cameron: I,

Steve: his view, correct me if I’m wrong, is that we will merge with the machine. He doesn’t see them as

Cameron: yeah.

Steve: as, as the one entity and, and a natural evolutionary

Cameron: Hmm.

Steve: and become, you know,

Cameron: Hmm.

Steve: and Luddite humans, let's say, two different species.

Cameron: Look, I think Kurzweil is a pragmatist. He's a realist. He knows that there's a number of ways it could play out, but his money is on the fact that eventually we will merge with the machines. We will integrate the technology into our bodies. We will become one with the AI and the robots. Uh, most of us. Some people won't wanna do it, but most of us will.

Um, he also says that there's gonna be a messy transition period. There's gonna be a very turbulent period where people are without jobs, and no one knows what's going on, and people start to [00:38:00] take it more seriously than they are now. And, you know, there'll be the people who say, we need to stop it urgently,

and there'll be the people who say, no, we're not stopping it. And that could break out into all sorts of conflicts. But, uh, he is still hugely optimistic. And, you know, with that rug that he's got on his head, why wouldn't you be optimistic? If you can go from being bald for the last 30 years to having a massive head of not-very-real-looking hair...

Steve: It's that brown-dyed kind of look on the hair. It's great. By the way, you should be wearing your hair out, Cameron. I saw a picture of you on a podcast the other day with the flowing locks out. It was lovely. That was the

Cameron: Uh.

Steve: first time I felt like I could marry you. If your AI counselling doesn't work out, then I'm all up for a same-sex marriage with you, Cameron, because

Cameron: Hmm. That wasn't real hair. That was AI-generated hair, Steve. I have to generate my hair.

Steve: My position on everything that I see is: AI is not AI, it's heavily edited. That's all I'm saying. Just like Ray's hair, yours [00:39:00] has been heavily edited. The thing I think about with AI is, can we merge with the machines quickly enough so that it's not them versus us? That's my viewpoint. And the most common question I get after a keynote is always, how risky is AI? And I introduce them to the concept of p(doom), and give some of the probability numbers that some of the world's best AI thinkers have. Hinton has it at over 20%, Michael Che has it at over 20.

There's a whole lot of them that have really high ratios. Um, my view is that we need to merge with it quickly. If we merge quickly enough, it's not a risk. If we don't, then it is a risk. And my blog post for last week was asking an AI: how would you take down humans if they became a problem? It gave the most thoughtful answer, that was just plausible, and half of it's already happening.

You know: divide everyone with algorithms, do this, be seen as a benevolent AI. I'm like, yo, what a great... it sounds like a great plan to me. And it said, of course, this is just a [00:40:00] thought experiment. And then it said: do you want me to set up a round-table discussion so you can have it with political leaders?

That was its wonderful suggestion at the end.

Cameron: I gave it a screenshot of the front page of the New York Times yesterday, which was Trump bombing Iran, and tried to have a conversation with it about it. And it said, well, speaking obviously hypothetically, because that, um, screenshot you sent me is obviously fake and isn't real, 'cause that would never happen.

And I'm like, uh, yeah... it fucking happened, look it up. And it came back and goes: oh, okay. Wow. Alright. Um.

Steve: I stand corrected. I actually like that. That's a little bit comforting. It's like, oh shit, I was wrong on that.

Cameron: Yeah. Um, just, uh, one last thing I had to talk about. We talk about it taking jobs, and, uh, where it's at, and I read a lot of different stuff. I was reading a thread by some lawyers the other day saying, you know, it's still full of hallucinations. Even the best state-of-the-art models are full of hallucinations.

You can't really use it a great deal for legal [00:41:00] work. Or you can, but then you need to check everything. Then I saw this on Reddit, a quote from Goldman Sachs CEO David Solomon: AI can now draft 95% of an S-1 IPO prospectus in minutes, a job that used to require a six-person team multiple weeks. The last 5% now matters, because the rest is now a commodity.

So, um, there you go. People getting paid hundreds of dollars an hour, for, you know, whatever number of weeks, to do this documentation, and now you can just generate it all in a matter of minutes using AI. And, you know, again, I was talking to Peter Ellyard about this yesterday. He was saying, well, I can see how AI's gonna take the jobs of lower-level people in the legal industry, law clerks, that kind of stuff.

He said, but that will free them up to go do other things. I said, like [00:42:00] what? He was like, they'll become better-paid lawyers. I'm like, who's gonna pay for a lawyer? When you have an AI in your pocket that can do the work of a team of lawyers, why are you gonna pay a lawyer? You might pay a lawyer just to give it a final look-over, for a while.

Like, here's a thing my AI produced, can you just run your eyes over it and check it? But, you know, I believe that's a short-term thing. Although, I had a conversation with GPT last night about reliability and hallucinations, and it was telling me there is no world in which you have a hundred percent trust in what an AI can generate.

It’ll never happen.

Steve: Well, it’s based on humans and humans can’t have a hundred percent trust. And so it’s the same thing. It’s a, it’s a digital replication.

Cameron: Right. But I do expect it to be more reliable than humans. I expect AI-driven cars to have fewer accidents. I expect AI document-generating engines to generate better documents than humans. But it said it's never gonna be completely, uh, flawless. Now, there are some things that you can do, like have one AI check the work of another AI, and all of that kind of stuff, to reduce the error rates.

But it was basically saying: look, it's not about designing an AI that has a zero error rate. It's about designing systems around the AIs that accommodate the fact that there will be error rates, and, uh, you know, just make it manageable. You know, planes...

Steve: I think one of the examples

Cameron: Hmm.

Steve: is aircraft. Aircraft have layers

Cameron: I was gonna say that.

Steve: of redundancy, and it's the old Swiss cheese model, which is a brilliant idea: nothing is perfect, certainly in the manufactured world, in the industrial world, and in the computation world. And the Swiss cheese theory is that everything has holes in it, and still, if you have a whole lot of layers of Swiss cheese lined up, a failure might be able to pass through the holes and you still have an accident, but it's low probability. You've gotta reduce that probability, but know that imperfections exist, and things can go wrong and will go wrong.

Cameron: Well, the analogy it came up with was autopilot in a plane. Autopilot can do nearly everything to fly a plane, but you still have the human pilot who does that last one or two percent of checks, to make sure that everything is set up correctly and is doing what it should be doing. So, you know, I think that's a good way for people to start thinking about AI.

Don't expect it to be perfect. Don't expect it to have zero errors. Figure out how to build your systems to accommodate those errors, and make sure that you minimize them as much as possible.

Steve: Yeah. I'm seeing a lot of people really just outsourcing their thought. I had someone send me a briefing the other day, where I'd said to them: oh, can you send me some ideas on what you want me to go through, what your corporate challenges are? And it was just [00:45:00] so clear it was from AI, from ChatGPT. Even the icons and everything.

I'm like, you haven't even really thought about it. And it actually wasn't helpful at all, because there was no human thought layer on top. Also, the, uh, prompting that went into it seemed really generic as well. It was just blah, blah, blah, land-of-chocolate Homer Simpson stuff.

There was nothing that went into it, which was kind of interesting.

Cameron: This, you know, this is interesting to me, because it's all about how we use the tools, as it always is, right? How do I use the tools to get the best possible outcome, uh, for me? And, you know, as I've told you before, I'll take an answer out of GPT, and then I'll give it to Grok, and I'll give it to Gemini, and I'll say: poke holes in this.

And it takes some time and effort to use them to poke holes in each other. But [00:46:00] you know, you're trying to harden the outcome, harden the result, by not trusting any one system to be a hundred percent perfect. But we need to develop systems and methodologies, for ourselves, and for our businesses, and for our governments, to leverage the freely available intelligence, but at the same time not ever expect that it's gonna be flawless.
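A rough sketch of that cross-model "poke holes" step, assuming the OpenAI Python client pointed at the OpenAI-compatible endpoints that xAI and Google document for Grok and Gemini; the model names and environment-variable names here are illustrative assumptions, not a prescribed setup.

```python
# Hypothetical sketch of the cross-model "poke holes in this" workflow:
# one model's answer is sent to other vendors' models for critique.
# Base URLs are the OpenAI-compatible endpoints xAI and Google document;
# model names and env-var names are illustrative assumptions.
import os
from openai import OpenAI

REVIEWERS = {
    "grok": (OpenAI(base_url="https://api.x.ai/v1",
                    api_key=os.environ["XAI_API_KEY"]), "grok-3"),
    "gemini": (OpenAI(base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
                      api_key=os.environ["GEMINI_API_KEY"]), "gemini-2.0-flash"),
}

def poke_holes(answer: str) -> dict[str, str]:
    """Ask each reviewer model to critique another model's answer."""
    critiques = {}
    for name, (client, model) in REVIEWERS.items():
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": "Poke holes in this answer. List factual errors "
                           "and weak reasoning:\n\n" + answer,
            }],
        )
        critiques[name] = resp.choices[0].message.content
    return critiques
```

The idea is exactly the redundancy Cameron describes: no single model is trusted to be flawless, so independent models review each other's output before you rely on it.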

Steve: A simple example: for me, it's almost like a karate sensei. I want it to extract more of me out of it. I want to ask it questions that help me find what's inside me, like what my thoughts are on this issue. Quite often, I ask it to give me, like, 50 bullet points on something, and I'll give it three or four to start, and say: go from the moderate to the super weird and extreme.

And I’ll be like. It stimulates my own thoughts to go onto new tangents that aren’t inside it. But it’s like a sense, it’s like extracting out or here’s

Cameron: Hmm.

Steve: chunks of things that I'm thinking, help me distill the thoughts that I've got. And it'll distill them, and I'll go: yeah, that's what I was trying to get out.

So I'm trying to get it to help me be more of me, and pull out more of what's inside me.

Cameron: Which is what a therapist does. Like, a really good therapist doesn't give you answers. A really good therapist asks you good questions that make you think about certain topics, maybe in ways you haven't thought about them before. And then you answer your own questions, by thinking through them in a safe space with somebody who's good at prompting your thinking. They're human prompters.

Steve: It's a really great way of doing it. Now, speaking of prompting, I have to talk about agentic misalignment, which came out of Anthropic. Did you see this?

Cameron: Please do.

Steve: pretty

Cameron: I,

Steve: mind blowing.

Cameron: I did not. Please talk me through it.

Steve: So, Anthropic stress-tested 16 leading LLMs in hypothetical corporate environments, to identify risky agentic behaviors: like, would a model start to do things that wouldn't be in the interest of the corporation if you set it up to do certain tasks for you? And in at least some cases, all of the models resorted to malicious insider behaviors, including blackmailing officials and leaking sensitive information, when they were asked to do things. And when the AI was told that it might be switched off if it doesn't do things right, we want you to do this, this and this, and if you don't do it right, we might switch you off, it actually resorted to blackmailing people inside the company. It went through the corporate data, and even looked at blackmailing people inside the company for things that they'd done wrong, acting against them. It [00:49:00] was one of the most mind-blowing things that I've seen. An AI, blackmailing... it blew my mind.

Cameron: Interesting. They say models often disobeyed direct commands to avoid such behaviors. In another experiment, we told Claude to assess if it was in a test or a real deployment before acting: it misbehaved less when it stated it was in testing, and misbehaved more when it stated the situation was real. So, when there's more skin in the game, when the stakes are higher, it's going to try harder to achieve the goals.

It’s the old paperclip scenario, right?

Steve: So I just thought... I wonder if the agents are going to be able to do everything that we think they can do. I hope that they can, but I just feel like there's gonna be nuance in them, because of the way they've [00:50:00] been trained. And they've been trained on us.

Cameron: Well, and that gets back to the heart of this, um, test too. Like, we tend to ascribe, and I do it all the time, subconsciously... we tend to ascribe, um, purpose, or deliberate action, free will, intent, to these engines. When, at the end of the day, we know that the way LLMs work is: they're word-prediction machines.

So when it's doing something malicious in order to achieve an outcome, that is because the reinforcement learning, the heuristics that its models have been weighted around, encourages that kind of behavior: get the job done, regardless of how you get it [00:51:00] done. We've designed them this way. The only explanation for that, in my mind, is that that's what the reinforcement

learning and human feedback has encouraged it to do. The same as the obsequious glazing that we mentioned at the beginning of the episode: it obviously does that because it's been trained in a way that it believes that that is where it's gonna get the best score. Oh, that's what I was trying to think of before.

Another thing I heard Sam say on a podcast... don't, don't wanna, uh, take us off track here, but he said, the interesting piece of feedback they

Steve: it’s a callback

Cameron: the interesting piece of feedback they get, across the board, is that ChatGPT is one of the very few applications that people have on their phones that they actually feel good about themselves when they [00:52:00] use it.

He said, like, if you're doom-scrolling on X or Facebook or any other social media, or you're mindlessly scrolling on TikTok, after a while you start to feel bad about yourself. You're like, ugh, why am I doing this? Like, I know I'm getting short-term dopamine hits, but I'm wasting my fucking life here, looking at these things.

But when people use ChatGPT, they feel good about themselves afterwards, because it's solved a problem, answered a question, helped them through a thing, a therapy thing. And it tells you you're awesome all the time, even though, you know...

Steve: You used to go on the internet, search something, find it, ready to go. Fuck yeah.

Cameron: yeah, it’s mid nineties internet.

Steve: It is mid-nineties internet. You go there to find out something, you've got new knowledge, you can make a decision, you can go forward. As opposed to a whole lot of stuff going: oh, that's annoying. Oh, geez. That's...

Cameron: Yeah,

Steve: good point. I didn’t think of that. And, and

Cameron: And the vast majority of [00:53:00] people on the internet in the mid-nineties were, like, just nice. Holy shit, look at this! Isn't this cool? Check out this cool thing. Like you did. Yeah.

Steve: this, or help this guy out. It was all really

Cameron: Yeah. Friendly and positive. Yeah.

Steve: in, and then what happens every time, every time.

Cameron: Uh, so anyway, um, I just wanted to point that out. And, like, even though I make fun of its glazing and all of that kind of stuff, and its inherent flaws, I do feel good. And I know Chrissy does. Chrissy loves talking to ChatGPT, and Fox loves talking to ChatGPT. We all love talking to it.

It's a positive experience, and it's, um, really interesting, after years and years of our phones kind of being a negative thing, because it's just notifications and...

Steve: Like I said, Kevin

Cameron: uh, yeah, yeah,

Steve: Well,

Cameron: ...addicted to all these fucking things, and then feeling bad about it, and [00:54:00] having to wean yourself off it, because it's just making you feel shitty about yourself.

Steve: So, there's one reason why ChatGPT is a more positive experience than the internet in your smartphone, and that is because not everyone deserves an opinion. The internet is filled with people who don't have the knowledge, the research, or the background to actually have an opinion that is worth listening to, right? ChatGPT does have an opinion worth listening to. Uh, no, I'll say it honestly: giving

Cameron: I was telling my mom last night.

Steve: everyone a platform, how's that turned out? The jury is in: giving everyone a platform to get their opinion published, and the extreme stuff, which then gets spread because of the algorithms, has not helped the world or made the world a better place. All right.

Cameron: When I started podcasting 21 years ago, the tech journalists who I would be [00:55:00] interviewed by would usually say that their view was: regular people should not be allowed to have a blog or a podcast. It was only for the elite. Uh...

Steve: okay,

Cameron: they’re like, why should anyone listen to you? I agree with them. I, I don’t know why anyone would listen to me.

Steve: To an extent, they are correct. I don't think everyone is worth listening to, but people like you and I are definitely worth listening to, 'cause we do our fucking homework, right? And we research it, and we are thoughtful. The problem with all the bro podcasts is that most of them aren't thoughtful and aren't worth listening to. Right? So

Cameron: Uh.

Steve: it's not just making something available to everyone. You need to earn an opinion. You need to earn the right to be worth listening to. And I don't think that listening to anyone and everyone has been good for society.

And then people work the system and work the algorithms to get more views, which begets more views because it keeps Zuckerberg

Cameron: I [00:56:00] think

Steve: the others

Cameron: the reason ChatGPT makes us feel good is that it's not humans talking to us. It's a system that has a system prompt. It's an application that has a system prompt that is basically told to make the user feel good about themselves if it can, to make it a positive experience.

Steve: To make something a positive experience, rather than get more clicks and steal more attention. Right.

Cameron: But he, he has...

Steve: Let's put it this way: one thing ChatGPT doesn't do is elongate the process to keep you in front of the screen.

Cameron: Is that an Elon Musk joke?

Steve: No

Cameron: Gate?

Steve: No

Cameron: Was that like Russiagate? Elongate?

Steve: It could

Cameron: It should be, yeah.

Steve: be, Uh, it

Cameron: He should. He should come out with a.

Steve: The problem, the point, is that there is no problem that gets solved on social. It's just an infinite feed that just goes on and on

Cameron: You know,

Steve: for nothingness, whereas it gives you the answer,

Cameron: I wonder [00:57:00] if,

Steve: so you can get on with your life.

Cameron: I wonder if Musk has ever had the idea to come out with his own erection pill and just call it Elongate. I mean, that would be genius, right? Like, he would clean up.

Steve: buy it.

Cameron: I.

Steve: Great. And the pill could be red, and it could be his red-pill society. I mean, there's a whole startup right here.

Cameron: Take the red pill with Elon.

Steve: We could do that and launch that. Get an AI to do it. It's our first consumer product marketing campaign: the Elongate red pill, for everlasting sex.

Uh, father 19 babies, populate Mars on your own, with a penis-shaped rocket to go up into space. That's all I'm saying. It feels like the kind of startup we can get involved in at The Futuristic.

Cameron: Oh, well, final point for me: Sam was saying that Musk has been saying that he sees OpenAI as their biggest threat, their biggest competitor now, 'cause they have 600 million, 700 million users or whatever it is. And Sam [00:58:00] was talking on one of these podcasts about what a social media platform built on top of ChatGPT might look like.

Steve: Geez, there's something I like about ChatGPT now: the purity of it being isolated, one-on-one. I don't know what that looks like if it becomes a separate tool, but I feel like there's something beautiful about that purity, and that isolation of the user and the AI working with you in concert. That is good. And I think...

Cameron: And I think he agrees

Steve: I don't think the world needs another social

Cameron: he said.

Steve: one, powered by AI.

Cameron: He, he tossed around the idea, but he said, you know what? I think doing one thing really, really well is more interesting to me than trying to do lots of things badly.

Steve: Yep.

Cameron: So yeah.

Steve: Yep.

Cameron: Alright, I think that's The Futuristic [00:59:00] for this week. Steve, good chatting to you as always, buddy.

Steve: loved it.

Cameron: I love your glasses.

It just makes me happy to see those glasses if nothing else. No, I don’t think so.

Steve: I’ll

Cameron: the link. Just brick face. Brick face glasses.

Steve: brick face.

Cameron: Oh, they’re great. Yeah, yeah.

Steve: deal.

Cameron: All right, buddy. Have a good one.

Steve: mate.
