
Graeme Rudd spent years taking emerging technology into austere environments and watching it fail.
He assessed over 350 technologies for a Department of Defense lab. The pattern was consistent: engineers solved technical problems brilliantly. Lawyers checked compliance boxes. Leadership approved budgets. And the technology failed because no one was looking at how humans would actually use it.
So he went to law school—not to practice law, but because solving technology problems requires understanding legal systems, human behavior, operational constraints, and business incentives simultaneously.
Now he works on AI governance, and the stakes are higher. "Ship it and patch later" becomes catastrophic when AI sits on top of your data and can manipulate it. You need engineers, lawyers, operators, and the people who'll actually use the system—especially junior employees who spot failure points leadership never sees—in the room together before you deploy.
This conversation is about why single-discipline thinking fails when technology intersects with humans under stress.
Why pre-mortems with your most junior people matter more than post-mortems with experts.
Why the multidisciplinary approach isn't just nice to have—it's the only way to answer the question that matters:
Does it work when a human being needs to operate it under conditions you didn't anticipate?
By BKBT Productions
