The AI Fundamentalists

Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall


Listen Later

Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and the risks associated with it. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making.

Show notes

Governance, model explainability, and high-risk applications 00:00:03 

  • Intro to Patrick
  • His latest book: Machine Learning for High-Risk Applications: Approaches to Responsible AI (2023)


The benefits of NIST AI Risk Management Framework 00:04:01 

  • NIST does not have a profit motive, which avoids potential conflicts of interest when providing guidance on responsible AI. 
  • Solicits, adjudicates, and incorporates feedback from the public and other stakeholders.
  • The NIST AI RMF is not law; however, its recommendations set companies up for outcome-based reviews by regulators.


Accountability challenges in "blame-free" cultures 00:10:24 

  • Patrick notes that these cultures have the hardest time with the framework's recommendations
  • Practices like documentation and fair model reviews need accountability and objectivity
  • If everyone's responsible, no one's responsible.


The value of explainable models vs black-box models 00:15:00 

  • Concerns about replacing explainable models with LLMs simply for the sake of using LLMs 
  • Why generative AI is bad for decision-making 


AI and its impact on students 00:21:49 

  • Students are a strong indicator of where the hype and the market are today
  • Teaching them to work through choosing the best model for the job despite the hype


AI incidents and contextual failures 00:26:17 

  • AI Incident Database 
  • AI, as it currently stands, is a memorizing and calculating technology. It lacks the ability to incorporate real-world context.
  • McDonald's AI Drive-Thru debacle is a warning to us all


Generative AI and homogenization problems 00:34:30

Recommended resources from Patrick:

  • Ed Zitron “Better Offline” 
  • NIST ARIA 
  • AI Safety Is a Narrative Problem


What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

The AI Fundamentalists, by Dr. Andrew Clark & Sid Mangalik
