Allison Pickens' Podcast

Preventing Catastrophes from Generative AI



Generative AI has sparked our imaginations. It has also terrified us. Bing’s chatbot declaring its love for a New York Times columnist, CNET’s saga with AI-generated content, and many other incidents have shown that AI can produce horrific outcomes. Moreover, the nature of these adverse outcomes can be entirely unpredictable.

Randol Aikin helped build safety systems for self-driving cars at Uber and elsewhere. He was particularly affected by a fatality caused by a self-driving car and began to envision how he could prevent generative AI from causing catastrophes of its own. That’s now the mission of his new startup: to create guardrails for generative AI more broadly.

In this conversation, we discussed:

* How are safety systems for self-driving cars instructive for preventing adverse outcomes from generative AI?

* What are some examples of adverse outcomes that would be hard to predict?

* What are best practices for avoiding them?

* What should be the role of boards in monitoring generative AI used by their companies?

You can listen to the podcast or read the lightly edited transcript. Let’s dive in!


