It’s the AI Episode. DC AI policy guy Brian Chau convinces me, an AI skeptic, that machine learning actually is all it’s cracked up to be.
The key lay in Brian’s description of gradient descent, the so-called “workhorse behind machine learning.”
Basically, gradient descent is how you teach a computer, through mathematics, to climb or descend a hill of error, one small step in the steepest direction at a time. It’s sort of how you give the computer the ability to “see” in the right direction. The computer responds to right and wrong inputs by building its own vast calculus that humans can’t reproduce—in that sense it really does form its own “impressions” and “ideas” about how to respond in certain scenarios. It is thus more than just “glorified autocorrect” or a “stochastic parrot”—a mere probabilistic word/image calculator—which is what I had thought beforehand based on articles like “ChatGPT Is a Blurry JPEG of the Web.”
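For readers who want to see the idea in code, here is a minimal sketch of gradient descent on a toy one-variable function. The function, starting point, and learning rate are illustrative assumptions, not anything from the episode; real machine learning does the same thing over millions or billions of parameters at once.

```python
# Minimal gradient-descent sketch: minimize f(x) = (x - 3)^2.
# The "hill" is the error curve; the gradient says which way is downhill.

def f(x):
    return (x - 3) ** 2

def grad_f(x):
    # Derivative of f: direction and steepness of the slope at x.
    return 2 * (x - 3)

x = 0.0             # arbitrary starting guess
learning_rate = 0.1  # size of each downhill step

for step in range(50):
    x -= learning_rate * grad_f(x)  # step a little way downhill

print(x)  # converges toward 3.0, the bottom of the hill
```

A neural network is trained the same way, except the single number x is replaced by a huge set of weights and the error is measured against training data, which is why the resulting “calculus” is too vast for a human to follow by hand.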
Here are some of the other articles discussed:
A Visual Example of Interpretability: https://openai.com/research/microscope…
Brian’s article explaining transformers:
FTC statement: https://ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services…
Biden executive order: https://whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
Brian’s AI policy think tank Alliance for the Future will launch next month.
The Carousel is a reader-supported publication.