The Future of Everything

The future of AI Chat: Foundation models and responsible innovation



Guest Percy Liang is an authority on AI who says that we are undergoing a paradigm shift powered by foundation models: general-purpose models trained at immense scale, such as the one behind ChatGPT. In this episode of Stanford Engineering’s The Future of Everything podcast, Liang tells host Russ Altman how foundation models are built, how to evaluate them, and why there are growing concerns over their lack of openness and transparency.

Connect With Us:

  • Episode Transcripts >>> The Future of Everything Website
  • Connect with Russ >>> Threads / Bluesky / Mastodon
  • Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:

(00:00:00) Introduction
Host Russ Altman introduces Percy Liang, who directs the Stanford Center for Research on Foundation Models.

(00:02:26) Defining Foundation Models

Percy Liang explains the concept of foundation models and the paradigm shift they represent. 

(00:04:22) How are Foundation Models Built & Trained?

An overview of training data sources and scale (models are trained on trillions of words), along with details on the network architecture, parameters, and the objective function.
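
As a rough illustration of the objective function mentioned above (a toy sketch, not the training setup discussed in the episode): foundation models are trained to predict the next word, maximizing the log-probability of each word given the words before it. The bigram counts below stand in for what a large neural network would learn.

```python
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Toy "model": bigram counts standing in for a neural network's predictions.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def prob(prev, nxt):
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][nxt] / total

# The training objective: average negative log-likelihood (cross-entropy)
# of predicting each next word from its context.
nll = -sum(math.log(prob(p, n)) for p, n in zip(corpus, corpus[1:])) / (len(corpus) - 1)
print(f"cross-entropy (nats per token): {nll:.3f}")
```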

(00:10:36) Context Length & Predictive Capabilities

Discussion of context length and its role in prediction, with examples illustrating how context length influences predictive accuracy.
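
A minimal illustration (toy corpus and code, not from the episode) of why a longer context can make the next-word prediction far less ambiguous:

```python
from collections import Counter

corpus = "the bank raised interest rates . the river bank flooded".split()

def next_word_distribution(context):
    """Count what follows the given context window anywhere in the corpus."""
    n = len(context)
    return Counter(
        corpus[i + n]
        for i in range(len(corpus) - n)
        if corpus[i:i + n] == list(context)
    )

print(next_word_distribution(("bank",)))          # ambiguous: {'raised': 1, 'flooded': 1}
print(next_word_distribution(("river", "bank")))  # longer context resolves it: {'flooded': 1}
```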

(00:12:28) Understanding Hallucination

Percy Liang explains why foundation models “hallucinate” and how they must balance factual accuracy against creative tasks that call for making things up.

(00:15:19) Alignment and Reinforcement in Training

The role of alignment and reinforcement learning from human feedback in controlling model outputs. 
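
A hypothetical sketch of the reward-modeling step that reinforcement learning from human feedback builds on (the feature vectors and numbers below are made up; this is not any lab's actual implementation): a small reward model is fit so that human-preferred responses score higher than rejected ones, and the language model would then be tuned toward outputs the reward model scores highly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors standing in for embeddings of (prompt, response) pairs;
# each row i pairs a human-preferred response with a rejected one.
preferred = rng.normal(1.0, 1.0, size=(100, 4))
rejected = rng.normal(0.0, 1.0, size=(100, 4))

w = np.zeros(4)   # reward model parameters
lr = 0.1

for _ in range(200):
    # reward difference between preferred and rejected responses
    diff = (preferred - rejected) @ w
    # gradient of -log sigmoid(diff), the Bradley-Terry pairwise preference loss
    grad = -((1 - 1 / (1 + np.exp(-diff)))[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad

agree = ((preferred @ w) > (rejected @ w)).mean()
print(f"reward model ranks the human-preferred response higher {agree:.0%} of the time")
```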

(00:18:14) Evaluating Foundation Models

The shift from task-specific evaluations to comprehensive model evaluations, the introduction of HELM, and the challenges of evaluating these models.
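
A rough sketch of the idea behind a holistic evaluation framework such as HELM (the scenario names, metrics, and scoring function here are placeholders, not HELM's real ones): each model is scored across many scenarios and several metrics rather than on a single task, producing a grid of results.

```python
scenarios = ["question_answering", "summarization", "toxicity_probe"]
metrics = ["accuracy", "robustness", "calibration"]
models = ["model_a", "model_b"]

def evaluate(model, scenario, metric):
    # Placeholder standing in for actually running the model on a scenario.
    return (sum(ord(c) for c in model + scenario + metric) % 100) / 100

for model in models:
    print(model)
    for scenario in scenarios:
        scores = {m: evaluate(model, scenario, m) for m in metrics}
        print(f"  {scenario:22s} {scores}")
```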

(00:25:09) Foundation Model Transparency Index

Percy Liang details the Foundation Model Transparency Index, its initial results, and the reactions of the companies it evaluated.

(00:29:42) Open vs. Closed AI Models: Benefits & Risks

The spectrum between open and closed AI models, and the benefits and security implications of each.



