62 minutes | May 17, 2021

BI 105 Sanjeev Arora: Off the Convex Path

Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given prior assumptions that it wouldn’t or shouldn’t work as well as it does. Deep learning poses a challenge for mathematics because its methods aren’t rooted in mathematical theory and are therefore a “black box” for math to open. We discuss why Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that result from different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions; see the sketch after the timestamps). We also discuss his past focus on computational complexity and why he doesn’t share the current neuroscience optimism about comparing brains to deep nets.

Sanjeev’s website.
His research group website.
His blog: Off The Convex Path.

Papers we discuss:
On Exact Computation with an Infinitely Wide Neural Net
An Exponential Learning Rate Schedule for Deep Learning

Timestamps:
0:00 – Intro
7:32 – Computational complexity
12:25 – Algorithms
13:45 – Deep learning vs. traditional optimization
17:01 – Evolving view of deep learning
18:33 – Reproducibility crisis in AI?
21:12 – Surprising effectiveness of deep learning
27:50 – “Optimization” isn’t the right framework
30:08 – Infinitely wide nets
35:41 – Exponential learning rates
42:39 – Data as the next frontier
44:12 – Neuroscience and AI differences
47:13 – Focus on algorithms, architecture, and objective functions
55:50 – Advice for deep learning theorists
58:05 – Decoding minds
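As a rough illustration of the exponential learning rate idea discussed around 35:41, the sketch below simply increases the learning rate by a constant factor each epoch using PyTorch’s ExponentialLR with gamma > 1 on a toy model. The model, data, and growth factor are placeholder choices; the paper’s actual analysis ties exponentially growing rates to networks trained with batch normalization and weight decay, so this is only meant to show what an exponentially increasing schedule looks like in code, not the paper’s method.

```python
# Toy sketch (not the paper's method): an exponentially *increasing* learning rate.
# The model, data, and growth factor gamma are arbitrary placeholder choices.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# ExponentialLR multiplies the learning rate by gamma at every scheduler.step();
# gamma > 1 makes it grow exponentially instead of decay.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1.1)

loss_fn = nn.MSELoss()
x, y = torch.randn(256, 10), torch.randn(256, 1)  # random regression data

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # learning rate after epoch t is 0.1 * 1.1**(t + 1)
    print(f"epoch {epoch:2d}  lr={scheduler.get_last_lr()[0]:.4f}  loss={loss.item():.4f}")
```

The point of the toy run is only that the learning rate climbs by 10% per epoch rather than shrinking, which is the counterintuitive regime Sanjeev describes in the episode.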