One Knight in Product

Andriy Burkov - The TRUTH About Large Language Models and Agentic AI (with Andriy Burkov, Author of "The Hundred-Page Language Models Book")



Andriy Burkov is a renowned machine learning expert and leader. He's also the author of (so far) three books on machine learning, including the recently released "The Hundred-Page Language Models Book", which takes curious readers from the very basics of language models all the way up to building their own LLM. Andriy is also a formidable online presence who is never afraid to call BS on over-the-top claims about AI capabilities via his punchy social media posts.

Episode highlights:
1. Large Language Models are neither magic nor conscious

LLMs boil down to relatively simple mathematics at an unfathomably large scale. Humans are terrible at visualising big numbers and cannot comprehend the size of the datasets or the number of GPUs used to create the models. You can train the same LLM on a handful of records and get garbage results, or throw millions of dollars at it and get good results, but the fundamentals are identical, and there's no consciousness hiding between the equations. We see good-looking output, and we think it's talking to us. It isn't.
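To make "simple mathematics" concrete, here is a toy sketch of the core operation: scoring every word in a vocabulary and turning those scores into a probability distribution over the next token. The vocabulary and scores below are invented for illustration; a real model computes its scores with billions of learned parameters, but the arithmetic at the end is this simple.

```python
import math

# Invented toy vocabulary and raw next-token scores (logits).
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, -1.0]

def softmax(scores):
    """Turn arbitrary scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# The "prediction" is just the highest-probability token.
prediction = vocab[probs.index(max(probs))]
print(prediction)  # "the" has the largest score, so it wins
```

Scaling this up does not change what it is: arithmetic producing a distribution, with no understanding required anywhere in the loop.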

2. As soon as we saw it was possible to do mathematics on words, LLMs were inevitable

There were language models before LLMs, but the invention of the transformer architecture truly accelerated everything. That said, the fundamentals trace further back to "simpler" algorithms such as word2vec, which proved that linguistic information could be encoded numerically: the vast majority of a word's meaning can be captured in an embedding, and once words are vectors, you can run equations on language. After that, it was just a matter of time before these models were scaled out.
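The word2vec insight can be sketched with the classic "king - man + woman ≈ queen" analogy. The 2-dimensional vectors below are hand-invented for illustration (dimension 0 loosely "royalty", dimension 1 loosely "gender"); real embeddings have hundreds of learned dimensions, but the arithmetic on meaning works the same way.

```python
import math

# Invented toy embeddings: dim 0 ~ "royalty", dim 1 ~ "gender".
embeddings = {
    "king":  [0.9,  0.7],
    "queen": [0.9, -0.7],
    "man":   [0.1,  0.7],
    "woman": [0.1, -0.7],
}

def cosine(a, b):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman ...
target = [k - m + w for k, m, w in zip(embeddings["king"],
                                       embeddings["man"],
                                       embeddings["woman"])]

# ... is closest to "queen" among the remaining words.
nearest = max((w for w in embeddings if w not in ("king", "man", "woman")),
              key=lambda w: cosine(embeddings[w], target))
print(nearest)  # queen
```

Once relationships between words become directions in a vector space like this, running equations on language stops being a metaphor.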

3. LLMs look intelligent because people generally ask about things they already know about

The best way to be disappointed by an LLM's results is to ask detailed questions about something you know deeply. It will quite likely give good results to start with, because most people's knowledge is so unoriginal that, somewhere in the LLM's training data, there are documents covering the thing you asked about. But the quality will degrade as you push deeper, and the model will confidently keep writing even when it doesn't know the answer. These are not easily solvable problems; they are fundamental to the design of an LLM.
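The "confidently keeps writing" behaviour follows from how decoding works: the model always emits some token, even when its probability distribution is nearly uniform, i.e. when it effectively has no idea. The vocabulary and probabilities below are invented to illustrate this; there is no built-in "I don't know" path.

```python
import random

# Invented toy answer vocabulary.
vocab = ["Paris", "London", "Berlin", "Madrid"]

# A near-uniform distribution: the model is maximally unsure...
probs = [0.26, 0.25, 0.25, 0.24]

random.seed(0)  # fixed seed for reproducibility
# ...but sampling still returns exactly one answer, stated as
# plain fluent text with no attached uncertainty.
answer = random.choices(vocab, weights=probs, k=1)[0]
print(f"The capital is {answer}.")  # always produces *an* answer
```

Nothing in the output signals that the four options were nearly tied, which is why the text reads just as confident whether the model knows the answer or not.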

4. Agentic AI relies on unreliable actors with no true sense of agency

The concept of agents is not new; people have been talking about them for years. The key missing piece in today's AI agents is genuine self-motivation: they are told to have goals and then simulate the desire to achieve them, rather than having goals of their own. That's not to say agents can't be useful in their own right, but fully autonomous agentic systems are a long way off, and the problem may not even be solvable.

5. LLMs represent the most incredible technical advance since the personal computer, but people should quit it with the egregious claims

LLMs are an incredible tool and can open up whole new worlds for people who learn to get the best out of them. There are limits to their utility, and some of their shortcomings are likely unsolvable, but we should not minimise their impact. However, there are unethical people out there making completely unsubstantiated claims based on zero evidence and a fundamental misunderstanding of how these models work. They are scaring people and encouraging terrible decision-making from the gullible. We need to see through the hype.

Buy "The Hundred-Page Language Models Book"

"Large language models (LLMs) have fundamentally transformed how machines process and generate information. They are reshaping white-collar jobs at a pace comparable only to the revolutionary impact of personal computers. Understanding the mathematical foundations and inner workings of language models has become crucial for maintaining relevance and competitiveness in an increasingly automated workforce. This book guides you through the evolution of language models, starting from machine learning fundamentals. Rather than presenting transformers right away, which can feel overwhelming, we build understanding of language models step by step—from simple count-based methods through recurrent neural networks to modern architectures. Each concept is grounded in clear mathematical foundations and illustrated with working Python code."

Check it out on the book's website: https://thelmbook.com/.

You can also check out Machine Learning Engineering: https://www.mlebook.com and The Hundred-Page Machine Learning Book: https://www.themlbook.com/.

Follow Andriy

You can catch up with Andriy here:

  • LinkedIn: https://www.linkedin.com/in/andriyburkov/
  • Twitter/"X": https://twitter.com/burkov
  • True Positive Newsletter: https://aiweekly.substack.com/
Podcast rating: 4.7 (14 ratings)

