


This week Dan and I tackle a question that's been bugging both of us since Christmas: what if hallucinations—those supposedly broken outputs that make AI unreliable—are actually just creativity in disguise? It's the kind of reframe that changes how you work with these systems entirely. I open with my custom scheduling system that beats a $4 billion ERP, and from there we tumble into the deep end of practical AI deployment, architectural thinking, and the future of work itself.
We dig into what Dan calls the "recursive loop"—the idea that you don't have to trust AI's first output. Instead, you throw it back at the system five times with different lenses: "Check this section. Now verify this assumption. Now fact-check the whole thing." By the time you've cycled through, the hallucinations have been wrung out and you've got something real. This is less about building perfect AI and more about building a partnership with a system that wants to help you.
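To make the loop concrete, here's a minimal sketch of the idea in Python. It is not Dan's actual setup: ask_llm(prompt) is a hypothetical helper standing in for whatever model you call, and the lenses are just paraphrases of the prompts quoted above.

```python
# Minimal sketch of the "recursive loop": re-submit a draft through several
# verification passes instead of trusting the first output.
# ask_llm() is a hypothetical helper standing in for your model client.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to whatever LLM client you use")

VERIFICATION_LENSES = [
    "Check this section for internal contradictions and fix any you find.",
    "Verify every assumption stated here; flag anything you cannot support.",
    "Fact-check the whole thing and rewrite any claim that does not hold up.",
]

def recursive_loop(draft: str, passes: int = 5) -> str:
    """Cycle the draft through different lenses; each pass wrings out more hallucination."""
    for i in range(passes):
        lens = VERIFICATION_LENSES[i % len(VERIFICATION_LENSES)]
        draft = ask_llm(f"{lens}\n\n---\n{draft}")
    return draft
```

The exact prompts matter less than the habit: verification is cheap enough to run several times before you believe anything.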
Then we turn to OpenClaw and Dan's autonomous agent running on a spare machine that's basically become his personal coach, business analyst, and productivity engine. It manages his daily revenue reports, trade ideas, emails, nutrition tracking, and evening reflections. And here's the thing: it's not magic. It's just someone asking good questions and building the right file structure (claude.md, memory.md, context.md) to help the agent remember what matters.
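The file names (claude.md, memory.md, context.md) are straight from the episode; everything else below is an assumption, a rough sketch of how an agent might stitch those files into a working context and write the evening reflection back into memory.

```python
# Hypothetical sketch: assemble an agent's working context from the files
# mentioned in the episode (claude.md, memory.md, context.md) and append a
# daily note back to memory.md so tomorrow's run remembers what mattered today.
from datetime import date
from pathlib import Path

AGENT_DIR = Path("~/agent").expanduser()  # assumed location, not from the episode
FILES = ["claude.md", "memory.md", "context.md"]

def build_context() -> str:
    """Concatenate the instruction, memory, and context files into one prompt preamble."""
    parts = []
    for name in FILES:
        path = AGENT_DIR / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)

def append_reflection(note: str) -> None:
    """Log an evening reflection so it persists across sessions."""
    memory = AGENT_DIR / "memory.md"
    with memory.open("a") as f:
        f.write(f"\n### {date.today().isoformat()}\n{note}\n")
```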
We also touch on the 100X engineer (who's also product, marketing, and engineering), Google's antitrust handcuffs, why running three machines is becoming normal again, and Sean's philosophy that you should want your employees to automate themselves into better work. There's real anxiety about displacement here, but also genuine excitement about what opens up when you're freed from the paper cuts of your day.
This episode is technical and it gets into the weeds, but it's also about how a slight shift in thinking can make you exponentially more capable. If you've been curious about using AI beyond "ask it a question," this one's for you.
Cheers, Sean
Per Sean's in-episode request at 13:14 — this week's fact-check is written in the style of Charles Bukowski. You asked for it.
look, they said some things. most people do. the difference is these two actually meant some of it.
🟢 = nailed it | 🟡 = close enough | 🔴 = whiffed it
🟢 Twitter laid off about 90% of staff
Dan said Twitter got rid of 90% of all staff and they did fine. and he's right, more or less. Musk walked in and fired somewhere between 80 and 90 percent of the building. the tweets kept tweeting. the servers kept serving. whether "fine" is the right word depends on how you feel about the place now, but the lights stayed on. that part's true. sometimes the bar stays open even after you fire the bartender.
🟢 Google's 20% Time
Sean said Google had 20% time on Fridays for a long time. they did. one day a week, go build whatever you want. Gmail came out of that. Google News too. it was the kind of policy that made you think maybe corporations had souls. they quietly killed it, of course. but for a while there, Fridays meant something.
🟢 Temperature controls randomness/creativity in LLMs
Dan said you crank the temperature up for stories and down for code. he's right. temperature is the knob between chaos and precision. turn it up and the machine starts to dream. turn it down and it becomes an accountant. most of us live somewhere in the middle, but nobody writes poems about the middle.
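if you want to see the knob itself, here's a minimal sketch using the OpenAI Python client; the model name and the exact values are assumptions, not something from the episode.

```python
# Sketch: the same client, two very different jobs.
# High temperature for the dream, low temperature for the accountant.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name, not from the episode

story = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Write a short story about a machine that dreams."}],
    temperature=1.2,  # more randomness: looser, more surprising word choices
)

code = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Write a Python function that parses an ISO date."}],
    temperature=0.1,  # less randomness: precise, repeatable output
)

print(story.choices[0].message.content)
print(code.choices[0].message.content)
```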
🟡 The Giver plot summary
Dan described a society where knowledge was compartmentalized, one old man carrying the weight of every memory so nobody else had to feel anything. that's Lois Lowry's book, more or less. Dan said he wasn't close enough to it anymore to remember the whole structure. fair enough. most of us aren't close enough to anything anymore. the analogy landed. the details were soft around the edges. partial credit, which is what life mostly is.
🟢 ERP limitations on custom scheduling
Dan said their $4 billion ERP couldn't handle their scheduling because the process had nuances that sat just outside what the system could do. anyone who's ever worked inside a corporation just nodded. you spend the GDP of a small country on software and it still can't do the one thing you actually need it to do. that's not a claim that needs verification. that's just Tuesday.
🟢 Google's antitrust exposure constrained their AI moves
Sean said Google had to wait for OpenAI to enter AI search before Google could go there, because moving first would look like leveraging their search monopoly into an adjacent market. he was working it out on the fly and even said "I'm gonna scrub this from the record." but the instinct was right. the legal concept is called monopoly leveraging: using dominance in one market to foreclose competition in another. Section 2 of the Sherman Act. the FTC's tying doctrine. real stuff. there's no specific ruling that says "Google must wait for a competitor to go first," but in August 2024 a federal court found Google maintained an illegal monopoly in search, and the September 2025 remedies banned their exclusive distribution deals for Search, Chrome, and Gemini. so yeah—Google's legal team absolutely would have known that charging into AI search unprovoked was handing the DOJ another exhibit. sometimes the smartest move a monopolist can make is to let somebody else walk through the door first. Sean got there. he just didn't trust himself enough to leave it in.
Final Score: 5 green, 1 yellow, 0 red
not bad for two guys talking into microphones about machines that dream. the facts held up. the stories were better. that's usually how it goes with the good ones.
By Sean Filipow and Daniel Hatke