Two Voice Devs

Episode 188 - Building Responsible AI with Gemini



As large language models (LLMs) become increasingly powerful, ensuring their responsible use is crucial. In this episode of Two Voice Devs, Allen and Mark delve into Google's Gemini LLM, specifically its built-in safety features designed to prevent harmful outputs like harassment, hate speech, sexually explicit content, and dangerous information.


Join them as they discuss:

(00:01:55) The importance of safety features in LLMs and Google's approach to responsible AI.

(00:03:08) A walkthrough of Gemini's safety settings in AI Studio, including the four categories of evaluation and developer control options.

(00:06:51) Examples of how Gemini flags potentially harmful prompts and responses, and how developers can adjust settings to control output.

(00:08:55) A deep dive into the API, exploring the parameters and responses related to safety features.

(00:19:38) The challenges of handling incomplete responses due to safety violations and the need for better recovery strategies.

(00:26:47) The importance of industry standards and finer-grained control for responsible AI development.

(00:29:00) A call to action for developers and conversation designers to discuss and collaborate on best practices for handling safety issues in LLMs.
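As context for the safety-settings discussion above, here is a minimal, hypothetical sketch of the mechanism described: each response candidate carries a probability rating per safety category, and the developer sets a per-category block threshold. The category names mirror the four mentioned in the episode, but the enum names, scale, and `is_blocked` helper are illustrative assumptions, not the real Gemini API.

```python
from enum import IntEnum

# Illustrative sketch (NOT the real Gemini SDK): models how per-category
# block thresholds might gate a model response based on safety ratings.

class Probability(IntEnum):
    """Rated likelihood that content falls into a safety category."""
    NEGLIGIBLE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class Threshold(IntEnum):
    """Minimum probability at which a category triggers a block."""
    BLOCK_LOW_AND_ABOVE = 1
    BLOCK_MEDIUM_AND_ABOVE = 2
    BLOCK_ONLY_HIGH = 3
    BLOCK_NONE = 4  # higher than any probability, so never blocks

# The four evaluation categories discussed in the episode.
CATEGORIES = ["harassment", "hate_speech", "sexually_explicit", "dangerous_content"]

def is_blocked(ratings: dict, settings: dict) -> bool:
    """Return True if any category's rating meets or exceeds its threshold."""
    for category in CATEGORIES:
        prob = ratings.get(category, Probability.NEGLIGIBLE)
        threshold = settings.get(category, Threshold.BLOCK_MEDIUM_AND_ABOVE)
        if prob >= threshold:
            return True
    return False

# A MEDIUM harassment rating is blocked under default settings,
# but passes if the developer relaxes that category's threshold.
ratings = {"harassment": Probability.MEDIUM}
relaxed = {"harassment": Threshold.BLOCK_ONLY_HIGH}
print(is_blocked(ratings, {}))       # blocked at the default threshold
print(is_blocked(ratings, relaxed))  # allowed with the relaxed threshold
```

This mirrors the developer-control point from the episode: tightening or loosening a single category's threshold changes whether the same rated content is returned or suppressed.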


This episode offers valuable insights for developers working with LLMs and anyone interested in the future of responsible AI. Tune in and share your thoughts on how we can build safer and more ethical AI systems!


Two Voice Devs, by Mark and Allen


