Futuristic

Futuristic #44 – AI Agents, Robots and Car Sales



In episode 44 of Futuristic, Cameron and Steve dive into the dawn of the embodiment era of AI. Steve reveals he’s purchased a $16,000 humanoid robot — the K-Bot — to be delivered in December, marking his entry into the personal robotics revolution. The conversation expands into the future of open-source robotics, the potential for a robot skill-sharing app economy, and the economic implications of humanoid automation. Cameron shares his experiment with OpenAI’s new GPT Agents and explains where they fall short. They also explore a future where AI personal assistants act as bullshit detectors during major purchases like buying a car. Finally, Cameron reads an NPR-style retrospective on IBM’s Watson, drawing a direct line from symbolic AI to today’s LLMs and speculating on a future hybrid model. Also – Cameron launches his AI consulting business, Intelletto

FULL TRANSCRIPT

 

Cameron: [00:00:00] Okay, go. Gimme your intro.

Steve: Steve Austin. You gonna say, gimme an intro again? Wait a minute. Wait a

Cameron: intro.

Steve: Okay, I’ve got it. You ready?

Cameron: Yeah.

Steve: We

Cameron: Give it to me, Steve.

Steve: Oh, we have the technology. Steve Austin. A $6 million man who is better than every human in every way. In 1975, $6 million was required to build a humanoid robot who was the future.

Now it barely gets you a wooden house with two bedrooms in a city in Australia. We’ve had a great reversal. First a humanoid was 6 million. Now it’s a house. Things are back to front. If you think you can predict the future of technology and sociology, then fucking think again. Welcome to the Futuristic.[00:01:00]

Cameron: This is, uh, The Futuristic, episode 44, recording this on the 4th of August, 2025, pre-GPT-5, the days before GPT-5, which we’re expecting to hit any day now. But there’s been a big few weeks since we last talked on the show. Steve, you and I have talked off the show, off air, but on air it’s been a while.

Steve: private

Cameron: What’s, yeah. Tell me you’ve got big news, Steve. Tell me your big news. Let’s, let’s start with that.

Steve: I bought a robot. It wasn’t a $6 million man, Cameron. It was a $16,000 humanoid. Sexless as far as I know, but I could be surprised when it arrives. We don’t know if it’s gonna have genitalia. We don’t know. But I bought myself a K-Bot, [00:02:00] which is going to be delivered in December. Now, hey, still waiting on, uh, Elon’s robotaxis, well, they’re there now, so we don’t know. But, uh, it’s your first personal robot, open source, full stack, in your hands. They’ve actually just, uh, got their second batch now. They’re 11,000. Mine was 16,000 with a few upgrades; it was an $18,500 deposit. Tom, my co-founder at Macro 3D, and I want to code it to do trades work. Uh, that’s the main thing. And also mow the lawns and do the dishes and fold the washing. So the robotics revolution, I call it the embodiment era of AI, is upon us. Wow. We,

Cameron: Wow. And you’re gonna take it on the road with you, get it up on stage,

Steve: I want

Cameron: do a bit of an Abbott and Costello routine.

Steve: would really love to do [00:03:00] that. I put on LinkedIn a picture of me taking it through the airport and sitting next to me on a plane and lying down and going through the, uh, metal detector. They’re gonna detect a lot of metal in there. I dunno if that’ll wipe its memory or what’ll happen or whether it’ll glitch out, it’ll be interesting.

And I think they are coming. Unitree also announced a,

Cameron: to buy a seat.

Steve: well, I

Cameron: Do you have to buy a seat for it on a plane, or do you put it in cargo?

Steve: well, you wouldn’t be able to put it in cargo because of the lithium-ion batteries. So I imagine you’re gonna have to

Cameron: Hmm

Steve: Otherwise, that’d be

Cameron: hmm.

Steve: we don’t want that.

Cameron: That’ll be, I, I can’t wait to see that. You’re the first person in Australia to have a robot sitting beside you in first class.

Steve: First class. Okay.

Cameron: Come on. Surely you only travel first class. You’re Steve Sammartino, Australia’s leading futurist.

Steve: I, I,

Cameron: is too good

Steve: travel business [00:04:00] occasionally. It just depends on the client really, to be honest with you.

Cameron: on the gig. Hmm.

Steve: occasional upgrade.

Cameron: so let’s, um, talk about this in more detail. So you say it’s open source. What is open source? The software or the hardware or both?

Steve: I think both. Uh, again, this remains to be seen, but it comes with standard fittings and capabilities, but you

Cameron: Shouldn’t you, you know, know before you bought it what you’re actually getting?

Steve: The way the world works is that you just buy things first and you cross your fingers, and they make promises they often don’t keep, whether it’s autonomous vehicles or, you know, any other elements. But it’s codeable and you can teach it things, and you can also teach it through code, but you can also do it verbally and visually, uh, which is gonna be the killer app on robots.

I mean, one of the things that I really hope for is an open source movement within robotics. [00:05:00] Uh, we don’t wanna have closed source. You need to be able to train it in your way, and maybe even share the skills that you’ve trained your robot on. I think that’s a great idea. Yours might be the best car washer or the best on a work site or the best warehouse worker. I think that’s really important, to get an extension of our skills based on what we teach it. You could have the Stevie skill that, uh, for gardening goes around the world. I, I just downloaded the Stevie Gardening, uh, app for my humanoid robot and it’s the best one. And we might get like a whole new app economy where human skills, which are incredibly varied across the whole gamut of things that we do physically, I think that’s a really big opportunity.

But it can only happen in an open source world.

Cameron: I think that seems to be the model that Jensen Huang at Nvidia is pushing them into, to make the software free and open source, [00:06:00] uh, because he wants to sell the chipset that runs the robots. So I do think that will be one of the models that’s out there. There might be some that are closed and some that are open.

Uh, well that’s very exciting, Steve. That’s, uh, that’s really huge. Do you know of anyone else in Australia that has a humanoid robot?

Steve: I’ve seen a few out at gigs, but all of them seem to be pre-programmed. I’ve seen quite a few of the Boston Dynamics ones, the dogs and the Atlas,

to do certain things. Uh, Unitree just launched a new one, which looks absolutely incredible. It was under 20,000 as well, which is very aligned with Jensen Huang. Two years ago we spoke about him saying, by the end of this decade, they’ll cost less than a small car and we’ll all have them. That seems to be very on track. Capability, we don’t know.

Cameron: Unitree are a Chinese operation, aren’t they?

Steve: K-Bot is American.

Which, which,

Cameron: yeah.

Steve: is, is [00:07:00] rare and good. I

Cameron: Hmm.

Steve: have

Cameron: it.

Steve: countries, well, I think you wanna have as many countries as possible with this capability. I think,

Cameron: Right.

Steve: no, this is not a

Cameron: So,

Steve: It’s just you want as many as possible.

Cameron: right. Yes. That’ll be the two main countries producing them, I imagine. Uh, be interesting to see how it plays out in the tariff wars.

Steve: That’s

Cameron: Too. Getting a robot from China versus a robot from the US.

Steve: And, and,

Cameron: Well,

Steve: yeah, I, I think that Trump, for other reasons, has tapped into something that’s gonna happen, which is deglobalization and reshoring and onshoring. All of, all of my clients are talking about securing up their supply chain, to nearshore and reshore, with the ability of AI and robotics: not just the robots that we’re making, but the ones that we make can actually help production locally as well.

Cameron: Hmm. But China’s gonna be the dominant manufacturer [00:08:00] of humanoid robots, I imagine. So, uh,

Steve: I think it’s not because they’re more capable. I think it’s because America and other Western markets have systematically eroded their own supply chain of all the, I’m gonna call it bits and bobs, that went into cars and washing machines and all of that stuff, where they no longer have it.

And it’s not that we don’t have the capability to design and make them work, it’s that we don’t have all of the small pieces that go into any form of machinery in our local markets like we did in the seventies and the eighties.

Cameron: I think China is making a massive commitment to leading the world in AI and robotics, though, at a governmental level. They’re gonna throw everything behind it, and I don’t think the US is gonna be able to compete, quite frankly. But we’ll see how it plays out. Well, I haven’t bought a robot, Steve, but I did do my first experiment with Chat[00:09:00]GPT’s new agent that came out a few weeks ago.

Now, at the end of last year, when we predicted what the big story for 2025 would be, we both said AI agents would be the big thing. Didn’t make us that original; everyone in the industry was predicting that. But I did get my ChatGPT agent, uh, up and running and did a project which I’ve tried to code over the last year or so, several times, unsuccessfully.

And this was a, uh, a project involved with my investing podcast, QAV, um, to basically go out and find a list of companies on the ASX, look up the investor relations page on their website, download their most recent annual report or half-yearly report, find the independent auditor’s report in that document, and read it and see whether [00:10:00] or not the company has a qualified audit.

Now, for people that aren’t investors, a qualified audit means an audit where the auditor’s gone, you know, this company has some problems, so we’re qualifying, um, the green tick that we’re giving the audit. And for us as investors, that’s an issue. If the auditor’s picked out that there are some serious fiscal concerns with the company, we wanna know about that before we invest in it.

I’ve tried to code it; it hasn’t worked. Too complicated. Got the agent to do it; it seemed to work. Uh, and then I asked it to create a spreadsheet of the results: give me the name of the company, whether or not it has a qualified audit, and then a link to the most recent financial report, so I could check it.

When I gave it a list of companies that I knew had a qualified audit, it gave them all a clean bill of health. Um, and then I went and double-checked it and found that a couple of them did not get a clean bill of health, and [00:11:00] said to the agent, hey, how come you gave this one a clean bill of health? And it went, ah, yeah, sorry.

In retrospect, rereading that, I shouldn’t have done that. So that was completely useless, uh, but it at least managed to get 90% of the way there; it just hallucinated on the important bit. So that was a fail. But anyway, uh, it’s a good first step. You know, it was, uh, easy to get it up and running as an agent.
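For what it’s worth, the step the agent hallucinated on, classifying the auditor’s opinion, is arguably the easiest part to make deterministic, because auditor’s reports use fairly standardized wording. Here’s a minimal sketch in Python; the phrase patterns are illustrative only (real AASB/ISA opinion wording varies, and you’d still need to fetch and extract the report text separately):

```python
import re

# Standard phrases that signal each opinion type. Illustrative only;
# real auditor's reports vary and these lists are not exhaustive.
QUALIFIED_PATTERNS = [
    r"(?<!un)qualified opinion",  # lookbehind avoids matching "unqualified opinion"
    r"except for the (?:possible )?effects? of",
]
CLEAN_PATTERNS = [
    r"unqualified opinion",
    r"unmodified opinion",
    r"opinion is not modified",
]


def audit_opinion(report_text: str) -> str:
    """Classify an independent auditor's report as 'qualified',
    'clean', or 'unclear' from its standard opinion wording."""
    text = report_text.lower()
    if any(re.search(p, text) for p in QUALIFIED_PATTERNS):
        return "qualified"
    if any(re.search(p, text) for p in CLEAN_PATTERNS):
        return "clean"
    return "unclear"


if __name__ == "__main__":
    sample = ("Basis for Qualified Opinion: except for the effects of "
              "the matter described above, the report complies with...")
    print(audit_opinion(sample))  # qualified
```

The split of labor would be to let the agent do the fuzzy work, finding the investor relations page and the right PDF, then hand the extracted auditor’s report text to a deterministic check like this, so the one answer that matters can’t be hallucinated.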

It could go out, find the websites, get the report. At least the links to the reports were right; it just didn’t do a very good job of reading and analyzing them. But I think, you know, we’re getting there. It’s a step in the right direction. The other thing I wanted to mention is you mentioned to me a Kevin Kelly book, The Inevitable, from 2016, which I downloaded and started reading over the last couple of days.

Um, well, yes, and it’s hilarious. Now, Kevin Kelly, a guy [00:12:00] that we both admire. Been fans of Kevin Kelly for 30 years. Talked about him a lot on this show. My entire podcasting business model was based on stuff that he wrote about in the early two thousands. Um, and in this book, in the first couple of chapters, he’s predicting the future of AI and what it would possibly look like and how it would roll out.

This is in 2016.

Steve: Mm.

Cameron: Nine years ago this book came out. Hilarious how outdated it already is. And this guy,

Steve: it, but I remember it,

Cameron: oh man.

Steve: and I was thinking of the last chapter, but tell me,

Cameron: I haven’t, I haven’t got to the last chapter yet. Well, I will by the end of the show, or when we get to, um, technology time warp. He talks a lot about IBM’s Watson and, and I was like, oh my God, whatever happened to Watson? Haven’t heard of Watson for years. So [00:13:00] when we get to the, uh, technology time warp, I’ll talk about IBM Watson and what happened to them.

But, you know, he talks about AI and sort of talks about it happening 20 years from when he wrote it in 2016, you know, sort of thinking of a 2035, 2040 timeline, uh, and what it might look like. And it was built around the idea of, you know, what we call GOFAI now, good old-fashioned AI, the symbolic AI approach that they took with IBM Watson, which has now become really, uh, clunky and out of date. However, I think it’s gonna make a comeback too, with a hybrid between LLMs and GOFAI, which I’ll talk about later on in the show.

Steve: just on that point, and in relation to your use of the agent, a really clever friend of mine, Nick Hodges, who I might have mentioned on the

Cameron: Hey, I know Nick,

Steve: Super

Cameron: friend of mine, old Microsoft [00:14:00] guy.

Steve: Uh, was he a Microsoft guy? Uh, this Nick wasn’t, but maybe he

Cameron: Oh, different Nick Hodges. Okay. Could be a different Nick Hodges.

Steve: all the Nicks out there. Uh, a friend of mine, David Brown, started a Twitter group with everyone called David Brown

Cameron: Oh,

David Brown. No.

Steve: Yeah, dunno which one. He wrote a post a few weeks ago saying the challenge, because of the probabilistic nature of connectionist AI, which is the opposite of symbolic AI, is what he calls the takeoff and landing problem. And so when you’re at cruising altitude and you’re working on something, the AI is amazing, but getting it to start, it obviously can’t start itself.

It needs a lot of direction and nurturing, but also, bringing the project to completion, finishing off those rounded edges, it needs the human, uh, attachment as well. And it’s a really great analogy, the takeoff and landing problem. And I do [00:15:00] wonder if that’s at all solvable, and I’m starting to get suspicious that it won’t be. The nature of the model’s probability means that how you start and end something is really, really different to the middle of a project. And so it can’t quite get that learning, and the guesses that it needs to take at those edges is why you might need symbolic code, and get that hybrid model to come in, uh, around it to make it work. And I’ve found that same problem when I’ve been doing some vibe coding with some AI tools for clients, where it hasn’t really worked in that way.

So, uh, that is, uh, I agree with you. We’ll get to that.

Cameron: Vibe coding.

Steve: I like the word, I just want to use it as much as possible.

Cameron: People call it vibe coding. For me, it’s just coding, you know? Just, you know, it’s coding with AI. Right. Anyway.

Steve: vibrating and vibe listening to music?

Cameron: [00:16:00] Yeah, right. It’s ridiculous. Um, alright, so let’s get into some of the big news stories, Steve. Um, I think ChatGPT Agent (they, they launched that on the 18th of July, sort of two and a bit weeks ago) is probably the biggest story. GPT-5 is due out this month. Sam did a sneaky little Twitter post, or X post, today with a screenshot of him having a conversation in GPT-5.

Uh, but, uh, Agent has been the biggest release that they’ve come out with since our last show. Have you played around with the agent much? Had any success? Tell me about your agent experience.

Steve: I, I found the takeoff and landing really hard. I’ve asked it to do a few things for me, and it’s, it’s been so much effort to get it off the ground, starting to do the thing that I wanted it to do and setting the parameters. I’ve found that it’s not too dissimilar from giving detailed briefing and instructions inside the traditional prompt framework so [00:17:00] far.

Cameron: Let me read from OpenAI’s blog post from the 18th of July. ChatGPT now thinks and acts, proactively choosing from a toolbox of agentic skills to complete tasks for you using its own computer. ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish. You can now ask ChatGPT to handle requests like: look at my calendar and brief me on upcoming client meetings based on recent news; plan and buy ingredients to make Japanese breakfast for four;

and analyze three competitors and create a slide deck. ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings. At the core of this new capability is a [00:18:00] unified agentic system.

It brings together three strengths of earlier breakthroughs: Operator’s ability to interact with websites, deep research’s skill in synthesizing information, and ChatGPT’s intelligence and conversational fluency. ChatGPT carries out these tasks using its own virtual computer, fluidly shifting between reasoning and action to handle complex workflows from start to finish, based on your instructions.

Nice, in theory. Um, I’ve only done the one experiment, which was not entirely successful. I know my boys Hunter and Taylor have been playing with it quite a bit. Taylor upgraded to a Plus subscription. No, a Pro subscription, the couple-of-hundred-dollars-a-month one, so we could really run it through. ’Cause you only get a certain amount of credits on the, uh, normal subscription.

And they also found it useful in ways, but also flaky in ways, and I think they’ve, uh, terminated their [00:19:00] experiments with it. Uh, I haven’t seen a lot of people excited about what it can do in the subreddits and online. Like, you can get it to log in and research people on LinkedIn and get their email addresses and send them emails, log into your emails, and that kind of stuff.

But again, it’s a step in the right direction. And later on in the show I’ll take you through a scenario that I wrote over the weekend about what I think the process of buying a car might look like a couple of years from now, which is based largely around agents doing a lot of the work for you. So I think this is a step towards that future, but it has some ways to go before it’s really, uh, that useful as a tool.

Steve: I think that Sam Altman’s quote that you read out before is not too dissimilar from what we already have just via the prompting process inside the search capability. I agree that the ability to write code, the ability to surf websites and do all of [00:20:00] those things is there, but if you just went in and said, book me a holiday, I don’t think it has enough access to what your preferences are, or the memory, or your internal files, to actually be able to come back with something that you would sign off on. So, so

Cameron: Well, it, it asks you questions first.

Steve: Right. And it can do that in the non-agent mode already. It already makes suggestions at the end of a post.

It sort of says, do you want me to put this into a PowerPoint that you can work with? It does all of those things now. So I feel like, again, the takeoff and landing isn’t there.

Cameron: Yeah, like now it can in theory log into websites for you, like log into your calendar, log into your email, you know, create things, do things, which it wasn’t very good at doing before. But, uh, yeah, you know, like I think this is, um, a hint at where we’re gonna be a couple of years from now, but, uh, there’s a lot of work to be done to make it [00:21:00] reliable.

I mean, with the stuff I was doing, it’s completely useless if the answers it’s giving back to me are hallucinated. So, uh, and I think that’s true with a lot of this stuff now. And while it’s still full of holes and needs human checking, it’s, um, kind of pointless, um,

Steve: Yeah.

Cameron: to just get humans to do it.

Steve: Well, and often with some things, if a human needs to check it, that’s the same as doing it. Not with administrative work in a corporate setting, but in a technical setting, which I’ve been working on with clients, using AI to develop some stuff, specifications, which you can’t have an error in.

95% is the same as zero, ’cause if you’ve gotta check it all, a hallucination could end up with a bad calculation, and this is where symbolic code’s really important. You can’t have errors.
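Steve’s “95% is the same as zero” point is the usual argument for wrapping the probabilistic step in deterministic checks: the LLM extracts, symbolic code verifies. A toy sketch of that pattern (the field names and source format are hypothetical, not from any real client system):

```python
def check_extraction(extracted: dict, source_text: str) -> list:
    """Deterministically verify an LLM's extracted figures: every
    number must appear verbatim in the source, and the stated
    profit must actually equal revenue minus costs."""
    errors = []
    for field in ("revenue", "costs", "profit"):
        if str(extracted[field]) not in source_text:
            errors.append(f"{field}={extracted[field]} not found in source")
    if extracted["revenue"] - extracted["costs"] != extracted["profit"]:
        errors.append("profit does not equal revenue minus costs")
    return errors


if __name__ == "__main__":
    source = "Revenue of 500, costs of 300, for a profit of 200."
    # A faithful extraction passes with no errors reported.
    print(check_extraction({"revenue": 500, "costs": 300, "profit": 200}, source))  # []
```

If the model hallucinates a profit of 250, both checks fire, and the pipeline can reject or retry instead of shipping a bad number. The symbolic layer never “mostly” works; it either passes or it fails.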

Cameron: Steve, I saw you did a blog post recently about, uh, whether or not AI’s gonna take all of our jobs. And you said you weren’t that [00:22:00] worried about it; you’re more worried about AI just taking over humanity as the dominant species. But Microsoft released a study about a week ago called “Working with AI: Measuring the Occupational Implications of Generative AI.”

I saw, I also saw in the Financial Review this morning, the federal treasurer, Jim Chalmers, had a post, an article, talking about all of the great stuff the federal government’s doing with AI and how he doesn’t think it’s a problem, uh, for jobs. We’re gonna take the middle path, like we have any control over what path we take.

Steve: Hmm.

Cameron: But, uh, Microsoft has come up with this list of the 40 jobs most at risk of being replaced by AI and the 40 jobs least at risk of being replaced by AI. The top 40 occupations with the highest AI applicability score, most at risk, sorted alphabetically: advertising sales agents,

Steve: Yep.

Cameron: broadcast announcers and radio [00:23:00] DJs.

Glad they left podcasters outta that. I’m safe, hey. Brokerage clerks,

business teachers (post-secondary), CNC tool programmers, concierges, counter and rental clerks.

Steve: uh,

Cameron: Customer service.

Steve: a concierge kind of at a hotel, human in greeting someone doing whatever. I totally disagree with that, but

Cameron: Yeah, customer service representatives, data scientists, demonstrators and product promoters, economics teachers (post-secondary),

Steve: I thought you meant the ones on the street waving flags and,

Cameron: like the hundred thousand in Sydney on the weekend. Yeah. Editors, farm and home management educators, geographers, historians, hosts and hostesses, interpreters and translators, library science [00:24:00] teachers (post-secondary), management analysts, market research analysts, mathematicians. Just straight up mathematicians.

You’re all outta work. Models, because we can just create AI models now.

Steve: was at

Cameron: new accounts clerks,

Steve: was hot. The one at Wimbledon, wasn’t she?

Cameron: I dunno what you’re talking about.

Steve: a model went viral ’cause it wasn’t a real model and she was all cruising around Wimbledon.

Cameron: News analysts, reporters, journalists, passenger attendants, personal financial advisors, political scientists, proofreaders and copy markers, public relations specialists, public safety telecommunicators, sales representatives of services, statistical assistants, switchboard operators, technical writers, telemarketers, telephone operators, ticket agents and travel clerks.

Web developers. And writers and [00:25:00] authors.

Steve: you read out all of them?

Cameron: That’s the top 40.

Steve: Okay, so can I just cut straight to it?

Cameron: You can.

Steve: Yes. The top 40: most of those are at risk, but I think they were at risk before we had the generative AI boom. Most of the things on there, software could already do in most capacities, I think, a large majority of those. Google Translate’s been killer for a really long time. Uh, editing, yeah, it’s a bit better now.

You can throw it into GPT and get a better version and prompt it, but I think a lot of those were already at risk, and I don’t think a lot of those are driven by generative AI.

Cameron: Right.

Okay.

Steve: I feel like generative AI has been attached to a general movement which was already well underway, [00:26:00] pre-generative AI.

Cameron: I’m gonna point out what I think are some of the gaps here. Like, they mentioned specific teachers: economics teachers, library science teachers, et cetera, et cetera. I think just teachers in general are at risk, you know. I’ve written a couple of posts about this recently. I think when kids have the greatest teacher of every subject ever on their phone or their laptop or their iPad, it’s not going to get rid of schools as such.

You still need a place to send your kid, and they’ll still need adult supervision. But I’m not sure what role teachers, as we think of them, are really gonna play when the AI is a better teacher than the human. More patient, understands the child better, has access to the kid’s

emails, chat messages, listens to all of its conversations with its friends, understands its [00:27:00] neurodiversity and its preferred learning modalities, knows what movies it’s watching, music it’s listening to, books it’s reading, podcasts it’s listening to, in a way that a human teacher could never hope to do.

It’s gonna be able to customize teaching to every kid’s requirements. Um, I just dunno how humans are gonna compete. But anyway. Oh, and we’ve got a guest coming on the show in a few weeks. Um, old friend of mine, Nick, who’s the principal of a large private school here in Queensland, and very progressive guy, very tech savvy.

Um, science-savvy guy. He’s gonna come on and we’re gonna talk about it from his view as the principal of a private school, um, where he thinks AI is gonna lead. But anywho, uh, what do you think about the list of occupations with the lowest AI applicability score? Automotive glass installers and repairers, bridge and lock tenders, cement [00:28:00] masons and concrete finishers, dishwashers.

I mean, to me, these are all easy robot replacements, right? Maybe not AI software, but robots with AI. Floor sanders and finishers: robot.

Steve: Robot,

Cameron: mold and core makers: robot. Gas pumping station operators: robot. Painters, plasterers, production workers, roofers: robots. Industrial truck and tractor operators. Yeah.

Yeah, right.

Steve: Everyone, including Geoffrey Hinton, who said go out and be a plumber in a recent interview. I wrote a post about why the Godfather of AI was wrong and I’m of course right. Cameron, I was correct and he’s wrong because I think everyone is missing the embodiment moment, when AI gets an embodiment where the visual, verbal, contextual learning can be taught to something that has a physicality to it. All of those tasks that everyone says, oh well, you know, it’s a one-off and it’s nuanced and every task changes. [00:29:00] I think they’re all missing the robotic part of this, and I think they’re overestimating the tasks where people want humans to do things with them. Where they involve communication and human nuance, we actually want it. And my view is that the most important jobs are gonna be the ones we pay for because a human is doing it, not because an AI can’t. And I think with

Cameron: Yeah.

Steve: tasks where you’re not really interacting with someone, you’re interacting with stuff. You’re a linesperson or a plumber or whatever; no one even sees or cares what you’re doing.

They just want it done. Honestly, I think the blue collar trade and technical stuff is gonna get humanoids far quicker than a lot of the office stuff is. And there’s another reason: people like building empires inside corporations and having people under their control. That’s a big part of what happens in corporations.

People like shaking hands and being in charge and impressing other humans, and I’m convinced that [00:30:00] 80% of what happens in large corporations now is already unnecessary and we don’t need it. It’s all bullshit and layers of bullshit of people talking to each other on presentations and meetings about meetings, and none of that’s based on efficiency or requirements.

That’s all just based on power and control and subservience. So that’ll continue, whereas a lot of the technical tasks will be outsourced to humanoid robots, which are getting very close to that capability. They already understand it technically, verbally, visually. All you need to do is put that into a humanoid that has a large battery life and dexterity, and we’re very close to that.

Cameron: Here’s my, my pushback on the corporate bureaucracy kind of thing: shareholders. I’m talking about your large, you know, activist shareholder groups. They’re gonna be saying to publicly listed corporations, uh, why aren’t you using AI more to [00:31:00] reduce your workforce? We’re already getting stories.

Atlassian just laid off a bunch of people; Microsoft, Amazon, Google, they’re all laying off thousands and thousands of employees and upfront saying, we’re replacing ’em with AI. It’s happening across the board; it’s happening in Australia as well as overseas already. There was an article about McKinsey I read this morning in the Wall Street Journal saying that, um, AI is an existential risk to them and all consultants; they’re laying off people as well,

And replacing them with AI agents. Uh, shareholders will be saying to boards, to executive teams: why aren’t you laying off more people and replacing them with AI? So these little fiefdoms inside corporations aren’t gonna be safe from shareholders. It’s like, why aren’t you replacing half of your employees with AI? is an easy thing for shareholders to say. Why aren’t you [00:32:00] replacing them with a custom-built software system?

Not so easy to say as an activist shareholder, because, you know, you don’t necessarily understand what’s available and how hard it is to build a customized software system, et cetera, et cetera. But why aren’t you replacing these people with AI and saving us, you know, a gajillion dollars in costs and putting it through to the bottom line and dividends or capital reimbursement, et cetera, is an easy thing for shareholders to push on boards.

And it’s gonna be very difficult, I think, increasingly for boards to, uh, you know, avoid those sorts of conversations.

The shareholders don’t give a crap about the bureaucracies. They want the money.

Steve: Of course they don’t. But it’s harder to do than it is to say. I think a lot of the idea that you can just remove people and just [00:33:00] put in an AI depends on the edge cases being solvable, the same way that agentic AI needs to do stuff self-directed right from the start.

Unless that can happen, I don’t think it will. If agentic AI gets to the point where it does what it’s meant to do, where it interprets the requirements before the project has started and can do it, then that can happen. Until that agentic AI is really working, we won’t see the mass replacement of full-time jobs. I

Cameron: But I give that a couple... Yeah, I agree with you. But I give that a couple of years, you know, a couple of years from where we are now, based on the progress.

And

Steve: It might happen that that 5% might never be solved. I’m wondering if

Cameron: it might not, yeah.

Steve: it’s a little bit like an airplane, where apparently, Cameron, you should be able to fly to London in one hour in a supersonic plane. And then we just got to a point where there was a maxed-out [00:34:00] limit of what was economic, and we’ll end up

Cameron: What people are willing to pay for. Yeah.

Steve: Also,

Cameron: see.

Steve: there’s a chance that this is a productivity unlock. So one of the thoughts that I have is, it is better to have a fiefdom where we finally get some productivity enhancements, which the Western world hasn’t done very well in the last couple of decades. Australia’s got very low, uh, productivity per person. If all of a sudden you can get through three weeks of work in one week, or whatever the ratio happens to be, that can enhance employability, because the productivity in getting things done might be a lot quicker. If people are more productive and there’s more output for the company, then it almost becomes an accelerator of having more people who can control an AI that becomes like their staff, doing a number of things, and they’re the orchestrators of AIs doing things, with the onboarding and offboarding, the takeoff and landing element. [00:35:00] So there’s a chance that that happens. But technology implementation in corporations is so slow. I’m working with these guys every week and they’re still talking about what they can do and why and where. I always harken back to the idea that we could have been doing a million things with video that we didn’t do until COVID. The classic example is seeing the doctor; there was a 10-year lag between capability and implementation.

Cameron: Yeah, no, you make good points. There are economic and bureaucratic hurdles always in the way of rolling out this sort of stuff. People get in the way. Well, uh, that’s a couple of news stories for the week. Steve, do you wanna move into deep dive and time warp?

Steve: let’s do it.

Cameron: So, um, I’m in the process of launching my AI consulting business, Intelletto, uh, [00:36:00] which is Italian for intellect. And part of what I’ve been doing over the last couple of weeks is thinking through, I mean, the sort of questions that I wanna be asking clients. And, you know, I think the biggest question in my mind for most organizations, whether they’re businesses or government organizations or any other kind of organization, is: what does the world look like a few years from now?

When your customers, your employees, your suppliers, your partners, your competitors have unlimited intelligence available at their fingertips. And particularly starting with customers: what does it look like when your customer knows as much about your product and service as you do, and can see through all of the sales and marketing and PR bullshit [00:37:00] immediately?

So I’ve been exploring some scenarios and writing some stuff about it, uh, for the website. One of the ones that I was working on over the weekend was: what does buying a car look like in that world? I was thinking about big-ticket items. So what led to this? I was at the supermarket on the weekend.

I’m talking about grocery shopping at Coles, and I was looking at some Nescafé instant espresso liquid in a bottle thing that was on sale. And so I pulled out GPT and I said, look, have a look at this thing. Got the camera on it, right? Have a look at this. Um, is this more cost effective than buying a bag of beans and doing my own grinding and blah, blah, blah?

And it talked through the cost effectiveness of it with me. And then I said, look up the reviews for this Nescafé product. And it said, yeah, the reviews are pretty shit. Basically it says it doesn’t taste like real coffee, it’s weak, it’s, you know, not as flavorful, et cetera, et cetera. [00:38:00] Cheaper, faster, but not as good.

Um, then I was gonna buy a bag of peanuts too, ’cause I’d been making my own peanut butter for a while, and it was 20 bucks a kilo at Coles for peanuts, just peanuts in a bag, versus buying pre-made peanut butter. So again, I was like, okay, a 750-gram jar of peanut butter is like 10 bucks, and buying raw peanuts is 20 bucks a kilo?

Is there any justification for making my own peanut butter? GPT’s taking me through the cost-benefit analysis and said, no, that’s ridiculous. That’s a ridiculous price. And it said, in fact, don’t go to Coles. Don’t buy your peanut butter at Coles. Go to Aldi. It’s like a fraction of the price at Aldi, right?

So it’s talking me through, and

Steve: because legally peanut butter

Cameron: So

Steve: has to be peanuts. Otherwise it’s not peanut butter.

Cameron: Is that right? Or did you just pull that outta your ass?

Steve: Kraft peanut butter

Cameron: That’s right.

Steve: and

Cameron: 30 years ago. [00:39:00] So, you know, I just had it on in the shopping aisle. I’m asking it about all these products that I’m looking at. What about this? What do you think about that brand versus this brand? It’s talking me through my grocery list,

but then I thought,

Steve: That’s a video idea. You get Taylor to film that, you doing that as your launch, and you go

Cameron: yeah,

Steve: shopping with an AI agent. I’m telling you now, that is a million views, and I know a thing or two about

Cameron: Taylor’s in... Taylor lives in LA now, but yeah, yeah. No, I am gonna do that,

Steve: You

Cameron: but

Steve: huge. ’Cause if you don’t, someone will steal it,

Cameron: I’ll do it. Um.

Steve: Good.

Cameron: But then I was thinking about big-ticket items, like buying a car, buying a house. How does AI play into that? So here’s the scenario. I’ll just talk you through it; I won’t read the whole thing. Imagine you’ve been thinking about buying a new car. Let’s say it’s 2, [00:40:00] 3, 4 years in the future.

You put on your AI-enabled glasses one Saturday morning and you say, hey, I’ve decided to press the button on the new car. It already knows you, it knows what you’ve been talking about, and it says, okay, I’ll put together a shortlist for you. And it comes up with one, based on your budget, based on your requirements, the size of your family, et cetera.

It says, look, I found a new one at a dealership that I can book an appointment for you to go and have a test drive today. I’ve also found a secondhand vehicle that looks pretty good; we can go take a look at that. You go, yeah, set up the meeting. So it contacts the dealership, it contacts the private seller, and it sets up appointments for you, slots it into your diary.

You go to the dealership and you sit down with the salesman after you take a test drive, and you say to him: hey, listen, um, before we get started, I just wanna let you know that my AI assistant is gonna be sitting in on this. Are you okay with that? And he’s like, okay, [00:41:00] I guess.

He just nods, so you go: can you verbally consent to my AI sitting in on this call? Which he does. It’s in your glasses, and it’s listening to all of the claims he’s making about the car, the pricing, the financing, the whole deal. And it’s running as a live bullshit detector for everything that he’s saying.

You could have it on the desk, but that might be a little bit confronting, so it’s in your ears. You’ve got your Meta-style glasses; it’s coming up on the screen, it’s talking in your ears. It’s going: no, no, that’s bullshit, don’t listen to that. Ask him for more details on that, because that doesn’t check out.

He’s trying to jack up different value-added things, and you go: no, no, no, I can see another car at another dealership five minutes away where I can go get it without all that kind of bullshit, and I could just walk. So it’s acting as a real-time bullshit filter on the car salesman, verifying all of his [00:42:00] claims, not letting him get away with any nonsense.

Then you go to the private seller. You have your glasses on, and when you’re inspecting the car, it’s looking for any signs of damage. It’s looking at the engine. It’s already checking the VIN, it’s checking the history of the car. You pull out the service book and it says: ask the seller if it’s okay for me to contact his service center to verify the service records. You ask him.

He says, yeah, sure, that’s fine. Your AI contacts the AI customer service front end at the car service center and says: my owner is looking at buying this car. Service records indicate you’ve been servicing it for the last couple of years. Would I be able to confirm the service history with you? The AI on the service center’s end says: well, due to privacy concerns, I’ll need to get approval from the owner of the car.

Uh, tell him [00:43:00] to check his phone, I’m about to send him a message. You tell the guy, the message pops up: will you approve that I can reveal this information? He clicks yes, and it confirms the service record of the car. Also, they say: listen, um, we’re willing to back up this car, so if there’s anything that isn’t contained within the service record that we’re about to send to you, we’ll cover it free of charge for the next six months, or something like that.

You renegotiate the price with him based on some stuff that you found in the service record. It does up a new contract for you, sends it to the guy. It’s already in the pre-negotiation stage with eight finance companies to get you the best deal. Once you’ve decided to go ahead, it locks in the best deal with the best finance company.

It registers the vehicle, it changes the details with your insurance company, and it’s done and dusted, and it’s all AI-driven. So this idea of having an AI [00:44:00] assistant with you when you’re doing these big-ticket item purchases, I think, is gonna revolutionize that side of buying and selling big-ticket items, uh, within a few years.

Steve: That’s a brilliant synopsis, Cam. What I’m hopeful for is that this is something that can be augmented by someone who’s been in car sales for a lot of years, or in purchasing, who can work with the GPT to write the code and the software and direct the pathways, like an architecture of what that could look like, and be able to automate that process. And potentially have a whole, let’s call it, GPT economy, which hasn’t quite spawned yet, even though OpenAI’s tried to do it; there are new forms of AIs that can do that. And I want your view on this: is this something that your personal AI just does intuitively, [00:45:00] because it’s a multimodal AI that gets you, gets your situation and knows what to do?

And we actually have a general AI that just does everything, and you don’t need those specific ones anymore.

Cameron: Yeah, well, when I was writing this article, I said to GPT: what are the five most common ways that car dealers rip people off, stitch people up in the process? And it gave me a list.

Steve: The first thing is when you see the guy in a brown sort of plaid suit with a mustache, like the movie The Big Steal, for any old Australian listeners. You come in there and there’s a different motor in the car than the one he sells you, and then when he delivers it, it’s all changed up.

Cameron: So I think the AIs will be smart enough to know, because they’re reading everything that’s published out there about these sorts of things. So it can give you the things to look out for, and it can be looking out for them for you. There may be opportunities for customized symbolic [00:46:00] rules in there, but I think LLMs are gonna be able to do a lot of that straight out of the box, you know?

Steve: I think you’ll have your personal AI, which can help you with looking for a house, a car, a university to study at, cooking, all of those things. But this trajectory is one that we’ve been on for a long time, Cam. A friend of mine was a car sales guy about 20 years ago, when carsales.com.au and the equivalents around the world arrived, where you could see cars online and research them. And he said that it got to a point where he used to know more about all of the cars on the yard, but someone would study that specific car, every detail on it, and there was no way that he could possibly know more than a consumer about that car. And they learnt all of the tricks. Uh, so they were doing a human version of what you’ve just described.

But this would put it on steroids. If you had an AI,

Cameron: Yeah.

Steve: it would just do your research, which is still better than the person selling you the [00:47:00] car, because you’ve really drilled down into your needs, and they have to be across the whole car yard. And I think it does remove some of that complexity and chicanery of buying things that you don’t buy all that often.

Cameron: All the stuff I see, you know, when I’m reading AI posts, and there’s these AI conferences happening everywhere, and they’re talking about how it’s gonna be so great for salespeople, for sales and marketing, ’cause you’re gonna be able to get all of this intel on your customers, and you’ll be able to segment your markets, and you’re gonna be able to do all this kind of stuff on a new level.

And I’m calling bullshit on all of that. I think AI is gonna cut through all of the bullshit of sales and marketing, and it was interesting to see that Microsoft had put down sales and marketing people as one of the jobs most at risk from AI, because the customers are just gonna be able to see through all of the sales and marketing bullshit. And the agents are gonna be able to research everything. Like, imagine even buying white goods. You need a new [00:48:00] fridge, you need a new dishwasher, you need a new, you know, clothes dryer. Just to be able to say to your AI: based on my family and what we need, tell me which one I should get, and find me the best price, and get it delivered for me.

Steve: But there is a delineation, Cam. Yes, AI will be able to filter through the bullshit that comes from marketing and sales guys. But there’s a two-speed economy. One of those is rational purchases, and when it comes to rationality, I think the AI will be able to do that. But often, and often in the more profitable areas, they’re emotional purchases. We’re actually looking for a reason from a human to justify spending an inordinate amount of money on a premium car that isn’t as

Cameron: That’s when your AI will step in and go, Hey, don’t do that. You can’t afford it.

Steve: but you don’t care. Like you’ve already got your brother and your sister and your dad telling you you can’t afford it. You don’t need it. And yet we buy things we [00:49:00] don’t need because we are irrational beings. Emotional purchases.

Emotional purchases, AI isn’t gonna solve that problem. You’re actually looking for a reason to justify the decision

Cameron: Why, why...

Steve: reasons.

Cameron: it’s why people invest in Bitcoin.

Steve: right? I make a lot of irrational

Cameron: no rational reason to buy Bitcoin.

Steve: no rational reason to buy the very large majority of the things that we do, but we’re irrational beings.

Cameron: Yeah,

Steve: Why, and why a human is doing something, is gonna be more important than what the human is doing. And often I think we will want humans to do things, and the important thing is that a human is doing it, even if it could be done by a machine.

Cameron: Yeah. I mean, I can see an element of that playing out. I’m not sure how much of the marketplace is gonna care that much, but we’ll see.

Steve: In [00:50:00] certain areas. I would love

Cameron: I...

Steve: to know what percentage of purchases. It even happens in supermarkets, where you think it’s a highly rational place, but there’s a lot of emotional purchases that happen in a supermarket, where you buy premium goods and pay more for ice cream than you otherwise would.

Does it really taste better? I don’t know. Some foods are better and more premium, but a lot of things aren’t. And so I’d like to know what percentage of the economy is emotional purchases and what percentage is rational, and that would be different with different people. One thing

Cameron: So today when I’m at, sorry, go.

Steve: I was gonna say the one

Cameron: No, I thought you were finished.

Steve: we can explore next time is the idea of what I’m calling the robot economy. Like, what will we spend on because robots are there, and do robots need certain things to serve them? And I’m thinking about humanoid robots: what does that build? And the thing that I’m harkening back to is that there was really no nighttime economy before the early 1900s, when [00:51:00] electricity became commonplace.

There were only local communities where you’d go to an inn or a pub, and there wasn’t much of a

Cameron: Right,

Steve: It was really hard to get around and get to places, and to find places to be warm and have electricity. And that’s an extraordinary part of our economy today. And I really am

Cameron: I love it.

Steve: curious about that element.

I wanna explore that idea of what the robot economy looks like, what things arrive in support of that new ecosystem.

Cameron: Yeah. Okay, well, let’s

Steve: Do that.

Cameron: schedule that for next time. Yeah. Um, I was gonna say, back to the supermarket story. Like, when I go to the supermarket now, I have my phone out, or I’m just talking to it on my AirPods: hey, I’m looking at this, I’m looking at that. I’m thinking about a world where we’re wearing AI-enhanced glasses, uh, with a camera.

So it’s seeing everything that I’m looking at. So [00:52:00] you’re talking about buying the premium ice cream.

Steve: Yes.

Cameron: It’ll be seeing what you’re picking up, and it’ll be going: hmm, yeah, you want my advice on that? Don’t get that one. It’s overpriced, the reviews are shit. Uh, get the other one that’s a door down.

It’s, uh, just as good, half the price, less sugar. You know, it’ll be talking in your ear. Some people won’t care, some people will. But increasingly I think people are going to be using their AI to save them money, particularly when they’re losing their job to AI. They’re gonna...

Steve: Let’s go to taste. Much of taste is the perception you have in your mind while you’re consuming it: because you paid more, it means more to you. This is commonplace with champagne and wine and ice cream and many other products, where even in

Cameron: Cigars.

Steve: scenarios, you will convince yourself that it is better because you paid more.

And so then we open up two [00:53:00] other ideas on this supermarket: private and public consumption. Some of the products we know we’re getting ripped off on, but we want to serve them to others or have others see us wearing them. Brands are a classic example: wearing the branded jacket or what have you, or I’m consuming it because it’s a display of success and my position in society and which cohort I move with.

I’m a surfer, I’m a skateboarder, I’m a whatever. That still exists as well. So you’re gonna have emotional consumption, rational consumption; then you’re going to have private consumption, public consumption. But even in some private consumption, you’re competing with your own mind, which wants you to believe that spending more is deserved, because you work hard and you’re looking for those moments of joy and little dopamine hits.

So I think that the rational AI helping us will be there and will be an element, but I think this is far more complex than we think.

Cameron: You make a lot of great points, but getting back [00:54:00] to the questions that organizations need to be asking themselves right now: what kind of impact is this gonna have on my business two, three years from now, when my customers, employees, partners, suppliers and competitors have access to unlimited intelligence?

Steve: Right. And so they’ll need to ask themselves: where do we strip out the bullshit, ’cause that game is up, and where do we lean

Cameron: Yeah.

Steve: into the emotional and irrational side of the consumption pattern? Because that’s something

Cameron: Yeah.

Steve: that a perfect, all-knowing AI isn’t necessarily gonna sway a decision on.

So then you get a new decision template within corporations to understand where they sit: private and public consumption, irrational and rational, in the person’s mind and in others’. So you get a whole new consumer dynamic.

Cameron: Well, with that, let me, uh, get to the last segment of the show, Steve, which is IBM and Watson, speaking of emotional [00:55:00] decisions. Um, and I thought I’d do something different with this. I’m going to go into NPR mode. Um, I’ve written this as a narrative rather than just a ramble, so, um, I’m gonna put on my NPR voice, Steve.

Steve: Love.

Cameron: uh, uh, and or whatever the Australian equivalent is.

PBS maybe. Hello, boys and girls. Let’s go back to the mid-nineties. You are sitting in front of a bulky CRT monitor. The internet is new. The future feels digital, but not quite real yet. And then this headline hits: IBM Supercomputer Defeats World Chess Champion. That champion: Garry Kasparov, one of the most brilliant minds of his generation.

The machine: Deep Blue, a hulking IBM computer that could evaluate 200 million chess [00:56:00] positions per second. In 1997, it became the first machine to beat a reigning world champion in a full match. Now, I remember this clearly. People always said computers would never beat a human at chess, and then one did. Not just a human, not just a grandmaster, but Kasparov, at the time thought to be perhaps the greatest chess player who ever lived.

It felt like a turning point. After his loss, Kasparov didn’t just walk away quietly. He was furious, and suspicious. He claimed that some of Deep Blue’s moves, especially in game two, were too creative and too human to be the result of brute-force calculation alone.

He suspected that IBM’s team may have had human grandmasters feeding moves to the machine during the match, violating the agreed-upon rules. He said it was an incredible and extremely deep combination [00:57:00] that no machine should be able to see. He demanded access to Deep Blue’s logs and inner workings, and IBM refused.

Then, not long after the match, IBM dismantled Deep Blue and never allowed a rematch. That only fueled the conspiracy theory. Kasparov famously said: I lost to a machine, but not to a computer. He believed he’d lost to a team of humans hiding behind the machine, not the machine itself. However, in 2017 he wrote in his book Deep Thinking: I was fighting the last war.

Deep Blue was not intelligent, but it was fast, accurate, and didn’t get tired or scared. The age of machine intelligence was dawning, or so we thought. Because what Deep Blue actually represented [00:58:00] wasn’t intelligence. As Kasparov later said, Deep Blue was intelligent the way your alarm clock is intelligent.

This was symbolic AI: logic, rules and raw computational force, crafted by teams of grandmasters and engineers. Deep Blue didn’t understand chess. It just calculated faster than any human could. And yet, in the eyes of the public, it was the birth of thinking machines. Fast forward to 2011. IBM did it again.

The battlefield this time wasn’t a chessboard; it was a television quiz show. Jeopardy is an American game show that’s been on the air since the 1960s. It’s famous, and weird. Contestants are given answers and must respond in the form of a question. The host might say: this US state’s name is derived from a Native American word meaning Great River.

The contestant would have [00:59:00] to answer: What is Mississippi? But it’s more than trivia. Jeopardy tests pun recognition, obscure references, wordplay and buzzer speed. It’s fast. It’s human. And champions like Ken Jennings aren’t just smart; they’re quick-witted and fluent in nuance. So when IBM introduced Watson, a computer designed to beat them at Jeopardy, it wasn’t just another AI stunt. It was a public demonstration that machines could now process language, context, jokes and ambiguity.

And in 2011, Watson did just that. It destroyed its human opponents. For IBM, this was the sequel to Deep Blue, only this time the stakes weren’t chess. They were everything. After Jeopardy, IBM promised that Watson wasn’t just a game show novelty. It was the future of work, medicine and decision-making. They rolled out glossy ads and corporate demos.

[01:00:00] Doctors would use Watson to treat cancer. Lawyers would use it to scan cases. Customer service would be handled by intelligent chatbots. Sound familiar? This wasn’t just AI. This was practical AI, applied real-world intelligent assistance for professionals. But behind the scenes, Watson wasn’t magic. Watson was a mix of technologies.

It used natural language processing to parse questions, large databases of structured and unstructured information, confidence scoring to choose the best answer, and a lot of human-tuned logic to make it all come together. Every new domain Watson entered required teams of engineers to manually train it.

You couldn’t just install Watson at a hospital. You hired IBM to build you a custom Watson, like commissioning bespoke software from scratch. This wasn’t scalable. It wasn’t plug-and-[01:01:00]play. It was more of a consulting gig than a product. Still, IBM went big on one domain in particular: healthcare. IBM partnered with top hospitals like Memorial Sloan Kettering to build Watson for Oncology.

The pitch: Watson would read every cancer study ever written and help doctors select the best treatment plans, faster and more accurately than a human could. It sounded revolutionary. But leaked internal documents painted a darker picture. Watson was mostly parroting suggestions from a narrow team of doctors.

It wasn’t actually learning from new data, and in some cases it gave dangerously bad advice. The hospitals quietly pulled back. The media stopped covering it. By 2022, Watson Health was shut down and sold off. IBM never delivered on the promise, and by the time they tried to pivot, it was too late. While IBM was busy branding [01:02:00] everything Watson, Watson Assistant, Watson Analytics, Watson Ads, the real AI revolution was happening elsewhere: deep learning, neural networks. And then, in 2017, the invention of the transformer model, developed at Google, not IBM. By the time GPT-3 dropped in 2020, Watson already looked obsolete. By the time ChatGPT hit the scene in 2022, Watson wasn’t even in the conversation.

IBM’s AI was still tied to old-school logic, symbolic AI, while OpenAI, Google and Anthropic were shipping black-box language models that could write essays, generate code, pass legal exams, simulate conversations and scale across domains instantly. Watson had a brand, but the future had moved on. Symbolic AI, or GOFAI, good old-fashioned [01:03:00] AI, is how we used to think machines would reason. You hand them rules, definitions, logical structures.

You want an AI to know what a cat is? You define it: if X has fur, purrs and chases mice, X is a cat. It worked in narrow domains: medical expert systems, legal logic, chess. But symbolic AI is brittle. It can’t handle uncertainty, ambiguity or messy inputs, and it doesn’t learn. It’s frozen in the rules you give it.

That’s what Watson was: an advanced symbolic system with some machine learning bolted on. It couldn’t evolve. And when deep learning exploded, it got left behind. But here’s the twist. The same LLMs that replaced symbolic AI are now being used to rebuild it. Today you can ask an LLM to write an expert system, build a logic engine, translate a medical guideline into a [01:04:00] rule-based system, create transparent, explainable AI for regulated industries.

LLMs are black boxes, but they can generate white-box systems. Suddenly, the brittle logic of GOFAI can be spun up on demand. And that matters, because in sectors like law, medicine and government, black-box AI isn’t trusted. You need auditability, transparency, a trail of logic you can verify. We might be entering a hybrid future: using LLMs to handle messy data, language and creativity, and using symbolic systems to encode values, rules, compliance and reasoning.

Neural nets do the thinking. Symbolic logic explains the answer. The consensus in the AI research community is that the future of advanced AI lies in the successful fusion of neural and symbolic approaches. While purely data-driven LLMs have demonstrated [01:05:00] impressive capabilities, their integration with symbolic reasoning is seen as the crucial step towards creating more robust, trustworthy, and intelligent systems.

Watson had the funding, the brainpower, the brand. It beat champions. It had the world’s attention. It could have led the AI revolution. But IBM made a fatal error. They confused a public demo with a product. They mistook symbolic logic for intelligence, and they failed to pivot when the ground shifted. So yes, Watson won Jeopardy, but it never made it to Final Jeopardy.

The real game was still to come and the next generation of machines rewrote the rules.
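The GOFAI-style cat rule from the retrospective ("if X has fur, purrs and chases mice, X is a cat") can be sketched as a toy rule-based classifier. This is a minimal illustration only; the attribute names are invented, and it is not code from any IBM system:

```python
# Toy symbolic-AI (GOFAI) classifier: hand-written rules, no learning.
# Attribute names are invented for illustration.

def is_cat(entity: dict) -> bool:
    """Rule: if X has fur, purrs and chases mice, X is a cat."""
    return (entity.get("has_fur", False)
            and entity.get("purrs", False)
            and entity.get("chases_mice", False))

tabby = {"has_fur": True, "purrs": True, "chases_mice": True}
sphynx = {"has_fur": False, "purrs": True, "chases_mice": True}  # hairless cat

print(is_cat(tabby))   # True
print(is_cat(sphynx))  # False: the rule is frozen, so a hairless cat fails it
```

The second result is the brittleness the piece describes: the rule can't handle an input its authors didn't anticipate, and it never learns.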

Steve: That was

Cameron: And it reminds me, as an ex-Microsoft guy, of the last time IBM missed the boat, which was the PC revolution.

Steve: Yeah, that’s

Cameron: [01:06:00] Twice they’ve missed the boat now in 40 years.

Steve: I don’t even know... IBM sold off their computer division and they’re sort of a quasi-consulting firm now, aren’t they? Am I misinterpreting that?

Cameron: Yeah. No, they sold off, um, to Lenovo, yeah, that side of the business.

Steve: Really well written, Cam, and it really explains a lot, and it was a perfect way to finish off where we started. And I think that hybrid model is really the future. The idea of using a black box to build a white-box system that we can see and understand is gonna be really important. The combination of symbolic and neural network, or LLM, models, uh, to solve that takeoff and landing problem is really, uh, great. And where you started: they were fast, accurate and didn’t get scared. That’s a really interesting thing for the corporate world [01:07:00] and where we are with AI overlapping work and what the future looks like.

You know, who’s gonna get scared, who’s gonna go fast and make mistakes, and who isn’t? And the idea that a symbolic system is frozen in the rules that you give it... I think humans face the same problem. Many of us are frozen in the rules that we were given. And all of those rules, in life and in technology, are about to change.


Futuristic, by Cameron Reilly
