
“We’re in this process where we should be discovering what’s possible… That’s what I mean by AI-native — just go figure out what the AI can do that makes something so much easier or so much better.”
– Tim O’Reilly
Tim O’Reilly is the founder, CEO, and Chairman of leading technical publisher O’Reilly Media, and a partner at early stage venture firm O’Reilly AlphaTech Ventures. He has played a central role in shaping the technology landscape, including in open source software, web 2.0, and the Maker movement. He is author of numerous books including WTF? What’s the Future and Why It’s Up to Us.
Website:
www.oreilly.com
LinkedIn Profile:
Tim O’Reilly
X Profile:
Tim O’Reilly
Articles:
AI First Puts Humans First
An Architecture of Participation for AI?
AI and Programming: The Beginning of a New Era
Redefining AI-native beyond automation
Tracing the arc of human-computer communication
Resisting the enshittification of tech platforms
Designing for participation, not control
Embracing group dynamics in AI architecture
Unlocking new learning through experimentation
Prioritizing value creation over financial hype
Ross Dawson: Tim, it is fantastic to have you on the show. You were my very first guest on the show three years ago, and it’s wonderful to have you back.
Tim O’Reilly: Well, thanks for having me again.
Ross: So you have seen technology waves over decades and been right in there forming some of those. And so I’d love to get your perspectives on AI today.
Tim: Well, I think, first off, it’s the real deal. It’s a major transformation, but I like to put it in context. The history of computing is the history of making it easier and easier for people to communicate with machines.
I mean literally in the beginning, they had to actually wire physical circuits into a particular calculation, and then they came up with the stored program computer. And then you could actually input a program one bit at a time, first with switches on the front of the computer. And then, wow, punch cards.
And we got slightly higher-level languages. First it was assembly programming, and then higher-level languages like Fortran, and that whole generation.
Then we had GUIs. I mean, first we had command lines. Literally the CRT was this huge thing. You could literally type and have a screen.
And I guess the point is, each time that we had an advance in the ease of communication, more people used computers. They did more things with them, and the market grew.
And I think I have a lot of disdain for this idea that AI is just going to take away jobs. Yes, it will be disruptive. There's been a lot of disruption in the history of computing. I mean, hey, if you were a programmer, you used to have to know how to use an oscilloscope to debug your program.
And a lot of that old sort of analog hardware that was sort of looking at the waveforms and stuff — not needed anymore, right?
I remember stepping through programs one instruction at a time. There’s all kinds of skills that went away. And so maybe programming in a language like Python or Java goes away, although I don’t think we’re there yet, because of course it is simply the intermediate code that the AIs themselves are generating, and we have to look at it and inspect it.
So we have a long way before we’re at the point that some people are talking about — evanescent programs that just get generated and disappear, that are generated on demand because the AI is so good at it. It just — you ask it to do something, and yeah, it generates code, just like maybe a compiler generates code.
But I think that’s a bit of a wish list, because these machines are not deterministic in the way that previous computers were.
And I love this framework that there’s really — we now have two different kinds of computers. Wonderful post — trying to think who, name’s escaping me at the moment — but it was called “LLMs Are Weird Computers.” And it made the point that you have, effectively, one machine that we’re working with that can write a sonnet but really struggles to do math repeatedly. And you have another type of machine that can come up with the same answer every single time but couldn’t write a sonnet to save its life.
So we have to get the best of both of these things. And I really love that as a framework. It’s a big expansion of capability.
But returning back to this idea of more — the greater ease of use expanding the market — just think back to literacy. There was a time when there was a priesthood. They were the only people who could read and write. And they actually even read and wrote in a dead language — Latin — that nobody else even spoke. So it was this real secret, and it was a source of great power.
And it was subversive when they first, for example, printed the Bible in English. The printed book was the equivalent of our current "Oh my God, social media turbocharged with AI" social disruption.
There was 100 years of war after the dissemination of movable type, because suddenly the Bible and other books were available in English. And it was all this mass communication, and people fought for 100 years.
Now, hopefully we won't fight for 100 years. Disruption does happen, and it's not pretty. But the millennialist version, where this is somehow terminal, is just wrong.
I mean, we will evolve. We will figure out how to coexist with the machines. We’ll figure out new things to do with them. And I think we need to get on with it.
But I guess, back to this post I wrote called "AI First Puts Humans First," there's a lot of pressure from various companies. They're saying you must use AI, and they've been talking about AI first as, "Try to do it with AI first, because we want to get rid of the people."
And I think of AI first — or what I prefer, the term AI native — as a way of noticing: no, we want to figure out what the capabilities of this machine are. So try it first, and then build with it.
And in particular, I think of the right way to think about it as a lot like the term “mobile first.” It didn’t mean that you didn’t have other applications anymore. It just meant, when companies started talking about mobile first, it meant we don’t want it to be an afterthought.
And I think we need to think that way about AI. How can we reinvent the things that we’re doing using AI? And anybody who thinks it’s just about replacing people is missing the point.
Ross: Yeah, well, that goes back to your main point around the ease of communication: the layers through which our intent flows into what the computers do.
What struck me from the beginning of LLMs is that what is distinctive about humans is our intention, our intent to achieve something. So now, as you're saying, the gap between what we intend and what we can achieve is getting narrower and faster.
It also democratizes things, in the sense that more is available to more people, in various guises and to different degrees, so you can manifest your intention in software and technology.
So there are ways in which this is akin to the printing press, because it democratizes the ability not just to understand, but also to achieve, to do, and to connect.
Tim: Yeah, there is an issue that I do think we need to confront as an industry and as a society, and that is what Cory Doctorow calls “enshittification.”
This idea — actually, I had a different version of it, but let’s talk about Cory’s version first. The platforms first are really good to their users. They create these wonderful experiences. Then they use the mass of users that they’ve collected to attract businesses, such as advertisers, and they’re really good to the advertisers but they’re increasingly bad to the users.
Then, as the market reaches a certain saturation point, they go, “Well, we have to be bad to everybody, because we need the money first. We need to keep growing.”
I did a version of this. I wrote a paper called Rising Tide Rents and Robber Baron Rents, where I used the language of economic rents. We have this notion of Schumpeterian rents — or Schumpeterian profits — where a company has innovated, they get ahead of the competition, and they have outsized profits because they are ahead.
But in the theory, those rents are supposed to be competed away as knowledge diffuses. What we’ve seen in practice is companies put up all kinds of moats and try to keep the knowledge from diffusing. They try to lock in their users and so on. Eventually, the market stagnates, and they start preying on their users.
We’re in that stage in many ways as an industry. So, coming to AI, this is what typically happens. Companies stagnate. They become less innovative. They become protective of their profits. They try to keep growing with, effectively, the robber baron rents as opposed to the innovation rents.
New competition comes along, but here we have a problem — the amount of capital that’s had to go into AI means that none of these companies are profitable. So they’re actually enshittified from the beginning, or the enshittification cycle will go much, much more quickly, because the investors need their money.
I worry about that.
This has really been happening since the financial crisis made capital really cheap. We saw this with companies like Lyft and Uber and WeWork — that whole generation of technology companies — where the market didn’t choose the winner. Capital chose the winner.
The guy who actually invented all of that technology for on-demand cars was Sunil Paul with Sidecar. Believe it or not, he raised the same amount of money that Google raised — which was $35 million.
Uber and Lyft copied his innovations. Their ventures had been doing something completely different: Uber was black cars summoned by SMS, and Lyft was a web app for people trying to find other people to share rides between cities.
They pivoted to do what Sunil Paul had invented, and they threw billions at it, and they bought the market.
Sure enough, the companies go public, unprofitable. Eventually, after the investors have taken out their money — it’s all great — then they have to start raising prices. They have to make the service worse.
Suddenly, you’re not getting a car in a minute. You’re getting a car in 10 minutes. They’re telling you it’s coming in five, and it’s actually coming in 15.
So it’s — and I think that we have some of that with AI. We’re basically having these subsidized services that are really great. At some point, that’s going to shake out.
I think there’s also a way that the current model of AI is fundamentally — it’s kind of colonialism in a certain way. It’s like, we’re going to take all this value because we need it to make our business possible. So we’re going to take all the content that we need. We’re not going to compensate people. We’re going to make these marvelous new services, and therefore we deserve it.
I think they’re not thinking holistically.
Because this capital has bought so much market share, we’re not having that kind of process of discovery that we had in previous generations. I mean, there’s still a lot of competition and a lot of innovation, and it may work out.
Ross: I’m just very interested in that point. There’s been a massive amount of capital. There’s this thesis that there is a winner-takes-most economy — so if you’re in, you have a chance of getting it all.
But overlaid on that (and I think there's almost nobody better to ask) is open source, where of course you've got closed commercial models, you've got commercial open source, and quite a bit in between.
I’d love to hear your views on the degree to which open source will be competitive against the closed models in how it plays out coming up.
Tim: I think that people have always misunderstood open source, because I don’t think that it is necessarily the availability of source code or the license. It’s what I call an architecture of participation.
This is something where I kind of had a falling out with all of the license weenies back in the late ’90s and early 2000s, because — see, my first exposure to what we now call open source was with Berkeley Unix, which grew up in the shadow of the AT&T System V license. That was a proprietary license, and yet all this stuff was happening — this community, this worldwide community of people sharing code.
It was because of the architecture of Unix, which allowed you to add to it. It was small: a small kernel and a set of utilities that all spoke the same protocol, i.e., you read and wrote ASCII into a stream, which could go into a file.
There were all these really powerful concepts for network-based computing.
Then, of course, the internet came along, and it also had an architecture of participation. I still remember the old battle — Netscape was the OpenAI of its day. They were going to wrest control from Microsoft, in just the same way that OpenAI now wants to wrest control from Google and be the big kahuna.
The internet’s architecture of participation — it was really Apache that broke it open more than Linux, in some ways. Apache was just like, “Hey, you just download this thing, you build your own website.”
But it wasn’t just that anybody could build a website. It was also that Apache itself didn’t try to Borg everything.
I remember there was this point in time when everybody was saying Apache is not keeping up — Internet Information Server and Netscape Server are adding all these new features — and Apache was like, “Yeah, we’re a web server, but we have this extension layer, and all these people can add things on top of it.”
It had an architecture of participation.
The same thing happened with things like OpenOffice and the GIMP, which were like, “Okay, we’re going to do Microsoft Office, we’re going to do Photoshop.”
They didn’t work, despite having the license, despite making the source code available — because they started with a big hairball of code. It didn’t have an architecture of participation. You couldn’t actually build a community around it.
So I think — my question here with AI is: Where is the architecture of participation?
Ross: I would argue that it's arXiv, in the sense of the degree of sharing now, where you get your Stability and your Google and your DeepSeek and everyone else just putting it out on arXiv in real detail.
Tim: Yeah, I think that’s absolutely right. There is totally an architecture of participation in arXiv.
But I think there’s also a question of models. I guess the thing I would say is yes — the fact that there are many, many models and we can build services — but we have to think about specialized models and how they cooperate. That’s why I’m pretty excited about MCP and other protocols.
Because the initial idea — the winner-takes-all model — is: here we are, we’re OpenAI, you call our APIs, we’re the platform. Just like Windows was. That was literally how Microsoft became so dominant.
You called the Windows API. It abstracted — it hid all the complexity of the underlying hardware. They took on a bunch of hard problems, and developers went, “Oh, it’s much easier to write my applications to the Windows API than to support 30 different devices, or 100 different devices.” It was perfect.
Then Java tried to do a network version of that — remember, “Write once, run anywhere” was their slogan. And in some sense, we’re replaying that with MCP.
But I want to go back to this idea I’ve been playing with — it’s an early Unix idea — and I’ve actually got a piece that I’m writing right now, and it’s about groups. Because part of an architecture of participation is: what’s the unit of participation?
I’ve been thinking a lot about one of the key ideas of the Unix file system, which was that every file had, by default, a set of permissions. And I think we really need to come up with that for AI.
I don't know why people haven't picked up on it. If you compare it to things like robots.txt and so on, it's a pretty simple model. Let me explain for people who might not remember this; most developers will know something about it.
You had a variable called umask, which you set, and it set the default permissions for every file you created. There was also a little command called chmod that would let you change the permissions.
Basically, it was read, write, or execute — and it was for three levels of permission: the user, the group, and the world (everyone) right?
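(As a quick refresher on the mechanics Tim is describing, here is a minimal sketch in Python of the user/group/world permission model, with umask setting defaults and chmod changing them. The file path and the specific modes are illustrative, not anything from the conversation.)

```python
import os
import stat

# umask clears bits from the default mode of every file you create.
# 0o022 removes group-write and world-write, so a new file comes out
# as rw-r--r-- (read/write/execute flags for user, group, and world).
old_mask = os.umask(0o022)

path = "example.txt"  # illustrative path
with open(path, "w") as f:
    f.write("hello\n")

# chmod sets the permissions explicitly afterwards: here the user can
# read and write, the group can read, and the world gets nothing.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

# Inspect the resulting bits, e.g. "-rw-r-----".
print(stat.filemode(os.stat(path).st_mode))

os.umask(old_mask)  # restore the previous umask
```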
So here we are with AI, saying, “We, OpenAI,” or “We, Grok,” or whoever, “are going to be world,” right? “We’re going to Borg everything, and you’re going to be in our world. Then you’ll depend on us.”
Then some people — like Apple maybe — are saying, or even other companies are saying, “Well, we’ll give you permission to have your own little corner of the world.” That’s user. “We’ll let you own your data.”
But people have forgotten the middle — which is group.
If you look at the history of the last 20 years, it’s people rediscovering — and then forgetting — group. Think about what was the original promise of Twitter, or the Facebook feed. It was: I can curate a group of people that I want to follow, that I want to be part of.
Then they basically went, “No, no, actually that doesn’t really work for us. We’re going to actually override your group with our algorithmic suggestions.”
The algorithmically generated group was a really fabulous idea. Google tried to do a manual version of that when they did — originally Buzz — and then, was it called Circles? Which was from Andy Hertzfeld, and was a great thing.
But what happens? Facebook shuts it off. Twitter shuts it off.
And guess what? Where is it all happening now? WhatsApp groups, Signal groups, Discord groups. People are reinventing group again and again and again.
So my question for the AI community is: Where is group in your thinking?
How do we define it? A group can be a company. It can be a set of people with similar beliefs.
There's a little bit of this already, in the sense that, even though each aspires to be at the world level, you could say Anthropic is, let's call it, the "woke group," and Grok is the "right" group.
But where’s the French group? The French have always been famously protective. So I guess Mistral is the French group.
But how do people assert that groupness?
A company is a group.
So the question I have is, for example: how do we have an architecture of participation that says, “My company has valuable data that it can build services on, and your company has valuable data. How do we cooperate?”
That’s again where I’m excited — at least the MCP is the beginning of that. Saying: you can make a set of MCP endpoints anywhere.
It’s a lot like HTTP that way. “Oh, I call you to get the information that I want. Oh, I call you over here for this other information.”
That’s a much more participatory, dynamic world than one where one big company licenses all the valuable data — or just takes all the valuable data and says, “We will have it all.”
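(To make that concrete, here is a minimal sketch of what an MCP endpoint can look like, written against the FastMCP helper in the Model Context Protocol Python SDK. The server name, the tool, and its stub data are all hypothetical examples, not anything discussed in the episode.)

```python
# pip install mcp   (the Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

# A hypothetical company exposing one slice of its data as a tool
# that any cooperating agent or model can call.
mcp = FastMCP("acme-catalog")

@mcp.tool()
def lookup_product(sku: str) -> str:
    """Return a short description for a product SKU (stub data)."""
    catalog = {"A-100": "Industrial widget, 10-pack"}  # stand-in for real data
    return catalog.get(sku, "unknown SKU")

if __name__ == "__main__":
    # Runs the server (stdio transport by default); clients call the tool
    # much as they would call an HTTP endpoint for a specific resource.
    mcp.run()
```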
Ross: That's one of the advantages of the agentic world: if you have the right foundations, the governance, the security, and all of the other layers like teams, payments, and so on, then you can get an entire economy of participating agents.
But I want to step back from what you were saying around groups and come back to the point about companies and "AI first" or "AI native," or whatever it may be. I think we both believe in augmenting humans.
So what do you see as possible now if we look at an organization that has some great humans in it, and we’ve got AI that changes the nature of the organization? It’s not just tacking on AI to make each person more productive. I think we become creative humans-plus-AI organizations.
So what does that look like at its best? What should we be aspiring to?
Tim: Well, the first thing — and again, I’m just thinking out loud from my own process — the first thing is, there’s all kinds of things that we always wished we could do at O’Reilly, but we just didn’t have the resources for, right?
And so that’s the first layer. The example I always use is, there are people who would like to consume our products in many parts of the world where they don’t speak English. And we always translated a subset of our content into a subset of languages.
Now, with AI, we can make versions that may not be as good, but they’re good enough for many, many more people. So — vast expansion of the market there, just by going, “Okay, here’s this thing we always wished we could do, but could not afford to do.”
Second is: okay, is there a new, AI-native way to do things?
O’Reilly is a learning platform, and I’m looking a lot at — yeah, we have a bunch of corporate customers who are saying, “How do you do assessments? We need to see verified skills assessment.” In other words, test people: do they actually know this thing?
And I go — wow — in an AI-native world, testing is a pretty boneheaded idea, right? Because you could just have the AI watch people.
I was getting a demo from one startup who was showing me something in this territory. They had this great example where the AI was just watching someone do a set of tasks. And it said, “I noticed that you spent a lot more time and you asked a lot more questions in the section that required use of regular expressions. You should spend some time improving your skills there.”
The AI can see things like that.
Then I did kind of a demo for my team. I said, “Okay, let me just show you what I think AI-native assessment looks like.” I basically found some person on GitHub with an open repository.
I said, “Based on this repository, can you give me an assessment of this developer’s skills — not just the technical skills, but also how organized they are, how good they are at documentation, their communication skills?”
It did a great write-up on this person just by observing the code.
Then I pointed to a posted job description for an engineer working on Sora at OpenAI and said, “How good of a match is this person for that job?”
And it kind of went through: “Here are all the skills that they have. Here are all the skills that they need.”
And I go — this is AI-native. It’s something that we do, and we’re doing it in probably a 19th-century way — not even a 20th-century way — and you have completely new ways to do it.
Now, obviously that needs to be worked on. It needs to be made reliable. But it’s what I mean by AI-native — just go figure out what the AI can do that makes something so much easier or so much better.
That’s the point.
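(As an illustration of the kind of AI-native assessment Tim describes, here is a rough Python sketch that hands a developer's code and a job posting to an LLM and asks for a skills write-up. The model name, file paths, and prompt are assumptions made for the example, not details from the episode.)

```python
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def collect_code(repo_dir: str, max_chars: int = 20_000) -> str:
    """Concatenate source files from a local clone, truncated to fit a prompt."""
    chunks = []
    for path in sorted(Path(repo_dir).rglob("*.py")):
        chunks.append(f"\n# file: {path}\n{path.read_text(errors='ignore')}")
    return "".join(chunks)[:max_chars]

repo_text = collect_code("./candidate-repo")            # hypothetical local clone
job_posting = Path("job_description.txt").read_text()   # hypothetical posting

prompt = (
    "From the code below, assess this developer's skills: technical ability, "
    "how organized they are, documentation quality, and communication. "
    "Then rate how well they match the job description.\n\n"
    f"JOB DESCRIPTION:\n{job_posting}\n\nCODE:\n{repo_text}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```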
And that’s why it drives me nuts when I hear people talk about the “efficiencies” to be gained from AI.
The efficiencies are there. Like, yeah — it was a heck of a lot more efficient to use a steam engine to bring the coal out of the mine than to have a bunch of people do it. Or to drive a train. I mean, yeah, there’s efficiency there.
But it’s more that the capability lets you do more.
So we’re in this process where we should be discovering what’s possible.
In this way, I’m very influenced by a book by a guy named James Bessen. It’s called Learning by Doing, and he studied the Industrial Revolution in Lowell, Massachusetts, when they were bringing cotton mills and textile mills to New England.
He basically found that the narrative, that unskilled labor had replaced skilled labor, wasn't quite right. There were these skilled weavers, and then these unskilled factory workers. He looked at pay records and found it took just as long for the new workers to reach full pay as it had for the old workers.
So they were just differently skilled.
And I think “differently skilled” is a really powerful idea.
And he said okay, why did it take so long for this to show up in productivity statistics — 20, 30 years? And he said, because you need a community.
Again — this is an architectural part. You need people to fix the machines. You need people to figure out how to make them work better. So there’s this whole community of practice that’s discovering, thinking, sharing.
And we’re in that ferment right now.
That’s what we need to be doing — and what we are doing. There’s this huge ferment where people are in fact discovering and sharing.
And back to your question about open source: it's really less about source code than it is about the open sharing of knowledge, and the places where people do that.
That goes back to O’Reilly. What we do — we describe our mission as being “changing the world by spreading the knowledge of innovators.”
We used to do it almost entirely through books. Then we did it through books and conferences. Now we have this online learning platform, which still includes books but has a big live training component.
We’re always looking for people who know something and want to teach it to other people.
Then the question is, what do people need to know now that will give them leverage, advantage, and make them — and their company — better?
Ross: So just to round out, I mean, you’ve already — well, more than touched on this idea of learning.
So part of it is, as you say, there are some new skills which you need to learn. There’s new capabilities. We want to go away from the old job description because we want people to evolve into how they can add value in various ways.
And so, what are the ways? What are the architectures of learning?
I suppose, as you say, that is a community. It’s not just about delivering content or interacting. There’s a community aspect.
So what are the architectures of learning that will allow organizations to grow into what they can be as AI-native organizations?
Tim: I think the architecture of learning that’s probably most important is for companies to give people freedom to explore.
There’s so many ideas and so much opportunity to try things in a new way. And I worry too much that companies are looking for — they’re trying to guide the innovation top-down.
I have another story that sort of goes back to — it’s kind of a fun story about open source.
So, yeah, one of the top guys at Microsoft is a guy named Scott Guthrie. So Scott and one of his coworkers, Mark Anders, were engineers at Microsoft, and they had basically this idea back in the early — this is 20-plus years ago — and they basically were trying to figure out how to make Windows better fitted for the web.
And they did a project by themselves over Christmas, just for the hell of it. And it spread within Microsoft. It was eventually what became ASP.NET, which was a very big Microsoft technology — I guess it was in the early 2000s.
It kind of spread like an open source project, just within Microsoft — which, of course, had tens of thousands of employees.
Eventually, Bill Gates heard about it and called them into his office. And they’re like, “Oh shit, we’re gonna get fired.”
And he’s like, “This is great.” He elevated them, and they became a Microsoft product.
But it literally grew like an open source project.
And that’s what you really want to have happen. You want to have people scratching their own itch.
It reminds me of another really great developer story. I was once doing a little bit of — I’d been called into a group at SAP where they wanted to get my advice on things. And they had also reached out to the Head of Developer Relations at Google.
And he asked — and we were kind of trying to — I forget what the name of their technology was. And this guy asked a really perfect question. He said, “Do any of your engineers play with this after hours?”
And they said, “No.”
And he said, “You’re fucked. It’s not going to work.”
So that — that play,
Ross: Yeah. Right?
Tim: Encourage and allow that play. Let people be curious. Let them find out. Let them invent. And let them reinvent your business.
Ross: That’s fantastic.
Tim: Because that’s — that will, that will — their learning will be your learning, and their reinvention of themselves will be your reinvention.
Ross: So, any final messages to everyone out there who is thick in the AI revolution?
Tim: I think it’s to try to forget the overheated financing environment.
You know, we talked at the very beginning about these various revolutions that I’ve seen. And the most interesting ones have always been when money was off the table.
It was like — everybody had kind of given up on search when Google came along, for example. It was just like, “This is a dead end.” And it wasn’t.
And open source — it was sort of like Microsoft was ruling the world and there was nothing left for developers to do. So they just went and worked on their own fun projects.
Right now, everybody’s going after the main chance. And — I mean, obviously not everybody — there are people who are going out and trying to really create value.
But there are too many companies — too many investors in particular — who are really trying to create financial instruments. Their model is just, “Value go up.” Versus a company that’s saying, “Yeah, we want value for our users to go up. We’re not even worried about that [financial outcome] right now.”
It's so interesting: there was a story in The Information recently about Surge AI, which didn't raise any money from investors, actually growing faster than Scale (scale.ai), which Meta just put all this money into, because they were just focused on getting the job done.
So I guess my point is: try to create value for others, and it will come to you if you do that.
Ross: Absolutely agree. That’s a wonderful message to end on.
So thank you so much for all of your work over the years and your leadership in helping us frame this AI as a positive boon for all of us.
Tim: Right. Well, thank you very much.
And it’s an amazing, fun time to be in the industry. We should all rejoice — challenging but fun.
The post Tim O’Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11) appeared first on Humans + AI.