

It’s the AI Episode. DC AI policy guy Brian Chau convinces me, an AI skeptic, that machine learning actually is all it’s cracked up to be.
The key lay in Brian’s description of gradient descent, the so-called “workhorse behind machine learning.”
Basically, gradient descent is how you teach a computer, through mathematics, to walk up or down a hill of error: it follows the slope toward better answers, sort of like giving the computer the ability to "see" in the right direction. By responding to right and wrong inputs, the computer builds up its own vast calculus that humans can't reproduce; in that sense it really does form its own "impressions" and "ideas" about how to respond in certain scenarios. It is thus more than "glorified autocorrect" or a "stochastic parrot" (a mere probabilistic word/image calculator), which is what I had thought beforehand based on articles like "ChatGPT Is a Blurry JPEG of the Web."
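To make the hill idea concrete, here is a minimal Python sketch of gradient descent on a toy one-variable loss. The loss function, learning rate, and starting point are illustrative assumptions for this post, not anything from the episode; real models do the same thing over millions or billions of parameters at once.

```python
# A minimal sketch of gradient descent on a one-variable loss function.
# (Illustrative values only: the loss, learning rate, and start point are assumptions.)

def loss(w):
    # A simple bowl-shaped "hill" of error: lowest at w = 3.
    return (w - 3) ** 2

def gradient(w):
    # The derivative of the loss tells us which way is downhill.
    return 2 * (w - 3)

w = 0.0              # start somewhere on the hill
learning_rate = 0.1

for step in range(50):
    w -= learning_rate * gradient(w)   # take a small step downhill

print(w)  # converges toward 3, the bottom of the hill
```

Training a neural network repeats this downhill step for every weight in the model, using examples of right and wrong outputs to compute the slope.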
Here are some of the other articles discussed:
A Visual Example of Interpretability: https://openai.com/research/microscope…
Brian’s article explaining transformers:
FTC statement: https://ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services…
Biden executive order: https://whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
Brian’s AI policy think tank Alliance for the Future will launch next month.
The Carousel is a reader-supported publication.
By Isaac Simpson
