Title: Gradients in Deep Neural Networks at Initialization
Speaker: Boris Hanin
Speaker Info: Texas A&M
Brief Description:
Special Note:
Abstract:
Neural networks are finite- (but typically high-) dimensional families of non-linear functions whose practical use and theoretical analysis form a field now called deep learning. I will give a brief introduction to deep learning and will focus on some probabilistic questions about the gradients of functions computed by neural networks.
Date: Tuesday, April 03, 2018