Data Science at Home

What if I train a neural network with random data? (with Stanisław Jastrzębski) (Ep. 87)


What happens to a neural network trained with random data?

Are massive neural networks just lookup tables or do they truly learn something? 

Today’s episode will be about memorisation and generalisation in deep learning, with Stanisław Jastrzębski from New York University.

Stan spent two summers as a visiting student with Prof. Yoshua Bengio and has been working on:

  • Understanding and improving how deep networks generalise
  • Representation Learning
  • Natural Language Processing
  • Computer Aided Drug Design

What makes deep learning unique?

I asked him a few questions to which I had long been looking for an answer. For instance, what does deep learning bring to the table that other methods don't, or are not capable of?

Stan believes that the one thing that makes deep learning special is representation learning. Competing methods, be they kernel machines or random forests, do not have this capability. Moreover, optimisation (SGD) lies at the heart of representation learning, in the sense that it is what allows finding good representations.
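
As a rough illustration of this point, here is a minimal sketch (assuming PyTorch; the architecture and the task are made up for illustration, not taken from the episode) of what representation learning means in practice: the hidden layers of a network define a feature map that is itself learned by SGD, whereas a kernel machine or a random forest works on top of a fixed input representation.

```python
# A toy encoder/classifier split, assuming PyTorch. The point is only that the
# intermediate activations are a learned representation that can be extracted
# and reused, unlike the fixed feature map of a kernel machine.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
)
classifier = nn.Linear(64, 10)
model = nn.Sequential(encoder, classifier)

# In practice `model` would first be trained end-to-end with SGD on some task;
# this sketch only shows where the learned representation lives.
x = torch.randn(5, 32)
with torch.no_grad():
    features = encoder(x)  # the (learned) representation of x, shape (5, 64)
    logits = classifier(features)
print(features.shape, logits.shape)
```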

     

What really improves the training quality of a neural network?

We discussed how the accuracy of a neural network depends largely on how good Stochastic Gradient Descent is at finding minima of the loss function. What influences such minima?

Stan's answer revealed that training-set accuracy, or the loss value itself, is actually not that interesting. It is relatively easy to overfit the data (i.e. achieve the lowest loss possible), provided a large enough network and a large enough computational budget. However, the shape of the minima and the performance on validation sets are influenced by optimisation in quite a fascinating way: optimisation at the beginning of the trajectory steers that trajectory towards minima with properties that go much further than just training accuracy.
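
The memorisation point is easy to reproduce. Here is a minimal sketch (assuming PyTorch; the model and hyper-parameters are illustrative, not taken from the episode) that trains an over-parameterised network on purely random inputs and labels: the training loss can be driven close to zero, while accuracy on held-out random data stays around chance.

```python
# Training on random data, assuming PyTorch: an over-parameterised network can
# memorise labels that carry no signal, which is why training accuracy alone
# says little about generalisation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Random inputs and random binary labels: there is nothing to learn.
X_train, y_train = torch.randn(512, 32), torch.randint(0, 2, (512,))
X_test, y_test = torch.randn(256, 32), torch.randint(0, 2, (256,))

# A network with far more parameters than training examples.
model = nn.Sequential(
    nn.Linear(32, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 2),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(300):
    # Plain mini-batch SGD over a reshuffled copy of the training set.
    for idx in torch.randperm(len(X_train)).split(64):
        opt.zero_grad()
        loss = loss_fn(model(X_train[idx]), y_train[idx])
        loss.backward()
        opt.step()

with torch.no_grad():
    train_acc = (model(X_train).argmax(dim=1) == y_train).float().mean()
    test_acc = (model(X_test).argmax(dim=1) == y_test).float().mean()

# Expected pattern: training accuracy climbs towards 100% (pure memorisation),
# while accuracy on the held-out random labels stays near 50%, i.e. chance.
print(f"train accuracy: {train_acc:.2f}  test accuracy: {test_acc:.2f}")
```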

As always, we spoke about the future of AI and the role deep learning will play.

I hope you enjoy the show!

Don't forget to join the conversation on our new Discord channel. See you there!

     

References

     

Homepage of Stanisław Jastrzębski: https://kudkudak.github.io/

A Closer Look at Memorization in Deep Networks: https://arxiv.org/abs/1706.05394

Three Factors Influencing Minima in SGD: https://arxiv.org/abs/1711.04623

Don't Decay the Learning Rate, Increase the Batch Size: https://arxiv.org/abs/1711.00489

Stiffness: A New Perspective on Generalization in Neural Networks: https://arxiv.org/abs/1901.09491
