Why Deep Learning Works: Implicit Self-Regularization in DNNs, Michael W. Mahoney 20190225

Michael W. Mahoney, UC Berkeley

Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production-quality, pre-trained models and smaller models trained from scratch. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of self-regularization, implicitly sculpting a more regularized energy or penalty landscape. In particular, the empirical spectral density (ESD) of DNN layer matrices …
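To make the central object concrete: the ESD of a layer weight matrix W is the distribution of eigenvalues of the correlation matrix X = Wᵀ W / N. The sketch below (an illustration of the general RMT setup, not code from the talk) computes the ESD of a randomly initialized Gaussian layer and checks that it falls within the Marchenko-Pastur bulk, the baseline against which trained layers show heavier-tailed, "self-regularized" deviations. All names and dimensions here are hypothetical choices for illustration.

```python
import numpy as np

def esd(W):
    """Empirical spectral density of a weight matrix W (N x M):
    the eigenvalues of the correlation matrix X = W^T W / N."""
    N, M = W.shape
    X = W.T @ W / N
    return np.linalg.eigvalsh(X)  # real eigenvalues of the symmetric PSD matrix

# A Gaussian-initialized (untrained) layer: its ESD should follow
# the Marchenko-Pastur law, the RMT baseline for "no learned structure".
rng = np.random.default_rng(0)
N, M = 1000, 400                      # hypothetical layer dimensions
W = rng.normal(0.0, 1.0, size=(N, M))
evals = esd(W)

# Marchenko-Pastur bulk edges for aspect ratio Q = N/M and unit variance:
# lambda_± = (1 ± 1/sqrt(Q))^2
Q = N / M
lam_plus = (1 + 1 / np.sqrt(Q)) ** 2
lam_minus = (1 - 1 / np.sqrt(Q)) ** 2

# For a random matrix, essentially all eigenvalues lie inside the bulk;
# trained DNN layers instead develop heavy tails extending past lam_plus.
frac_in_bulk = np.mean((evals >= lam_minus - 0.1) & (evals <= lam_plus + 0.1))
```

Running this, `frac_in_bulk` is close to 1 for the random matrix; the talk's diagnostic is precisely how far a trained layer's ESD departs from this Marchenko-Pastur shape.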
