
Graeme Rudd spent years taking emerging technology into austere environments and watching it fail.
He assessed over 350 technologies for a Department of Defense lab. The pattern was consistent: engineers solved technical problems brilliantly. Lawyers checked compliance boxes. Leadership approved budgets. And the technology failed because no one was looking at how humans would actually use it.
So he went to law school—not to practice law, but because solving technology problems requires understanding legal systems, human behavior, operational constraints, and business incentives simultaneously.
Now he works on AI governance, and the stakes are higher. "Ship it and patch later" becomes catastrophic when AI sits on top of your data and can manipulate it. You need engineers, lawyers, operators, and the people who'll actually use the system—especially junior employees who spot failure points leadership never sees—in the room together before you deploy.
This conversation is about why single-discipline thinking fails when technology intersects with humans under stress.
Why pre-mortems with your most junior people matter more than post-mortems with experts.
Why the multidisciplinary approach isn't just nice to have—it's the only way to answer the question that matters:
Does it work when a human being needs to operate it under conditions you didn't anticipate?
By BKBT Productions
