In this episode of Everything AI and Law, Tolulope sits down with Monica D. Higgins to tackle an increasingly urgent question: when AI messes up, who's to blame?
From biased algorithms to deepfakes, from legal personhood to fears of job loss, this conversation goes deep. Monica shares honest, practical insights from her work at Future Proof, where she helps companies and individuals make AI genuinely useful in the real world. It's smart, eye-opening, and packed with gems, whether you're into tech, law, or just trying to figure out where you fit in the AI age.
🧭 Timestamps & Topics:
00:00 – The biggest ethical blind spot in AI adoption
02:00 – OpenAI’s Sora & systemic bias in AI-generated content
04:56 – Should AI be granted legal personhood?
07:58 – Rights vs. responsibilities: Who is truly accountable?
09:09 – Agentic AI: Excitement, fear & future implications
10:05 – Upskilling: The antidote to AI-induced job displacement
12:30 – How Future Proof designs role-specific AI training
14:37 – Deepfakes: Real concerns in a synthetic world
17:03 – A universal code of ethics for AI?
18:33 – Transparency, bias & the ethics checklist
19:13 – AI companionship: Support or silent danger?
21:02 – Negligence & product liability in AI deployment
23:00 – Should AI liability be a new legal category?
23:53 – AI in hiring: The case for algorithmic bias audits
24:49 – Should agency be granted to AI?
25:09 – What’s next for Future Proof & AI innovation
27:35 – Final thoughts: Upskilling, value, and the future of work
28:50 – Monica’s vision: “Let technology live up to its hype”
Connect with Monica D. Higgins & FuturProof:
🔗 LinkedIn
🌐 FuturProof Website
Connect with Tolulope Awoyomi (Host):
🔗 LinkedIn
Like what you heard? Hit subscribe, leave a rating/review, and share it with someone who needs to hear this.
Tag us with your thoughts using #EverythingAIAndLaw