


It’s the AI Episode. DC AI policy guy Brian Chau convinces me, an AI skeptic, that machine learning actually is all it’s cracked up to be.
The key lay in Brian’s description of gradient descent, the so-called “workhorse behind machine learning.”
Basically, gradient descent is how you teach a computer, through mathematics, to climb or descend a hill of error one small step at a time. It's sort of how you give the computer the ability to "see" in the right direction. The computer responds to right and wrong inputs by building up its own vast calculus that no human could reproduce by hand; in that sense it really does form its own "impressions" and "ideas" about how to respond in certain scenarios. It is thus more than "glorified autocorrect" or a "stochastic parrot" (a mere probabilistic word/image calculator), which is what I had thought beforehand based on articles like "ChatGPT Is a Blurry JPEG of the Web."
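To make the hill metaphor concrete, here's a minimal toy sketch in Python (my own illustration, not code from the episode): it repeatedly nudges a guess downhill along the slope of a simple bowl-shaped error curve until it settles at the bottom. Real machine learning does the same thing, just over millions of parameters at once.

```python
# Toy gradient descent: find the bottom of a bowl-shaped error curve.

def loss(x):
    # A simple "hill" of error with its lowest point at x = 3.
    return (x - 3) ** 2

def gradient(x):
    # Slope of the loss at x: positive means "uphill is to the right".
    return 2 * (x - 3)

x = 0.0              # arbitrary starting guess
learning_rate = 0.1  # size of each downhill step

for step in range(50):
    x = x - learning_rate * gradient(x)  # step opposite the slope

print(f"x = {x:.4f}, loss = {loss(x):.6f}")  # x ends up very close to 3
```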
Here are some of the other articles discussed:
A Visual Example of Interpretability: https://openai.com/research/microscope…
Brian’s article explaining transformers:
FTC statement: https://ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services…
Biden executive order: https://whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
Brian’s AI policy think tank Alliance for the Future will launch next month.
The Carousel is a reader-supported publication.
By Isaac Simpson