


EPISODE DESCRIPTION
At a laid-back campus event, students are invited to put their questions about AI governance to Taiye Lambo, founder of the Holistic Information Security Practitioner Institute (HISPI), and to Dr. Tuboise Floyd of Human Signal, an AI governance researcher and podcast host. The speakers emphasize that AI literacy is a civic and professional survival skill: employers expect workers to critically evaluate AI outputs, so they frame AI literacy as risk awareness and urge students to focus on asking the right questions rather than becoming data scientists. The discussion covers deepfakes and short-form media, overreliance on AI (including a lawyer citing fabricated ChatGPT case law), the principle "never blindly trust, always verify," and the need for continuous auditing, accountability, and an "honest human in the loop," especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.
⏱️ Chapters
00:00 Welcome and Setup
00:52 Meet the Experts
01:57 Taiye on Governance Focus
02:53 Dr. Floyd Background and Podcast
04:39 Open Forum Begins
05:02 AI Literacy for Careers
07:23 Threat or Opportunity Poll
10:01 AI Literacy Beyond STEM
10:49 Spotting Deepfakes in Shorts
15:35 Using AI Without Replacing Learning
16:14 Lawyer Case and Overtrusting AI
18:08 Never Blindly Trust — Verify
19:06 Wikipedia Analogy and Real Risks
20:31 Business Ethics Reality Check
21:06 Continuous Audits in Clinics
21:28 Human in the Loop Matters
22:04 Environmental AI Data Gaps
23:13 Public Trust and Accountability
23:33 Honest Human Oversight
25:28 Tokens and Hallucinations
26:51 Bias in Training Data
27:56 Interviewing in the AI Era
30:28 AI Disruption and Generational Shift
33:21 High-Stakes AI Blind Spots
36:02 Rapid Fire Career Advice
41:03 Closing and Next Steps
GUEST
Taiye Lambo, Founder & Chief Artificial Intelligence Officer, Holistic Information Security Practitioner Institute (HISPI)
🔗 https://www.hispi.org
🔗 https://projectcerebellum.com
TAIMScore™ Assessor Workshop 🔗 https://humansignal.io/taimscore_assessor_workshop
SUBSCRIBE & SUPPORT
Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.
Support Human Signal — help fuel six months of new episodes, visual briefs, and honest playbooks. 🔗 https://humansignal.io/support
Every contribution sustains the signal.
ABOUT THE HOST
Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.
PRODUCTION NOTES
Host & Producer: Dr. Tuboise Floyd
Creative Director: Jeremy Jarvis
Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.
CONNECT
LinkedIn: linkedin.com/in/tuboise
Email: [email protected]
TRANSCRIPT
Full transcript available upon request at [email protected]
TAGS
AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership
#AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum
LEGAL
© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.
By Dr. Tuboise Floyd