


In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on how to prevent PII leaks, retrieve data securely, and evaluate safety within real-world limits. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether.
• Why guardrails matter for PII, secrets, and access control
• Where to place controls across prompt, training, and output
• Prompt injection, jailbreaks, and adversarial handling
• RAG design with vector DB separation and permissions
• Evaluation methods, risk scoring, and cost trade-offs
• AWS Bedrock guardrails vs open-source customization
• Domain-adapted safety models and policy matching
• When deterministic systems beat LLM complexity
This episode is part of our "AI in Practice" series, where we invite guests to talk about the reality of their work in AI. From hands-on development to scientific research, be sure to check out other episodes under this heading in our listings.
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics.
By Dr. Andrew Clark & Sid Mangalik