A Theory of Deep Learning

(elonlit.com)

48 points | by elonlit 1 day ago

7 comments

  • ks2048 17 minutes ago
    The relevant paper: "A Theory of Generalization in Deep Learning". https://arxiv.org/abs/2605.01172
  • smokel 17 minutes ago
    This essay seems to be related to the paper "There Will Be a Scientific Theory of Deep Learning" [1] which was discussed here recently [2].

    [1] https://arxiv.org/pdf/2604.21691

    [2] https://news.ycombinator.com/item?id=47893779

  • xdavidliu 9 minutes ago
    I'm on my phone so I can only skim, but on first glance this article reminded me of "toward a quantum hermeneutics". Maybe I'm wrong though; I could only glance at it for around 10 seconds.
  • prideout 45 minutes ago
    This is a fascinating mathematical framework, but the post title might be a bit of an overreach. I often wonder if "a theory of deep learning" could exist that could be stated succinctly and that could predict (1) scaling laws and (2) the surprising reliability of gradient descent.

    Note that I said "predict" not "describe". It feels like we're still in the era of Kepler, not Newton.

  • jdw64 40 minutes ago
    Does anyone happen to know what font this site is using? It looks really elegant.
  • airza 50 minutes ago
    A very fascinating read.

    As a fellow Tufte CSS enjoyer: why is user-select turned off on the sidenotes? I would quite badly like to be able to copy-paste them.

  • refulgentis 55 minutes ago
    This is a beautifully written way of saying "Some parts of what the network memorizes affect test behavior, and some don't." But that's not a theory of deep learning; a grand unified theory would have to explain why.

    We're given a signal channel and a reservoir. Signal lives in the channel, noise lives in the reservoir, and the reservoir supposedly doesn’t show up at test time.

    Okay, but then the question remains: why would SGD put the right things in the right bucket?

    If the answer is “because the reservoir is defined as the stuff that doesn’t transfer to test,” then this is close to circular.

    The Borges/Lavoisier stuff is a tell. "We have unified the field" rhetoric should come after nontrivial predictions and results. Claiming to solve benign overfitting, double descent, grokking, implicit bias, the gap between training and population risk, how to avoid a validation set, and, last but not least, skipping training by analytically jumping to the end would amount to six theory papers, three NeurIPS award winners, and a $10B startup. Let's get some results before we tell everyone we unified the field. :) I hope you're right.

    • dwrodri 50 minutes ago
      Admittedly there's probably some grandiose boasting here, but I think empirical verification of that Adam modification alone would be a meaningful contribution, unless that's prior work?