
Stanford ICME Lecture on Why Deep Learning Works. Jan 2020


Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including production-quality, pre-trained models and smaller models trained from scratch. Empirical and theoretical results indicate that the DNN training process itself implements a form of self-regularization, evident in the empirical spectral density (ESD) of DNN layer matrices. To understand this, we provide a phenomenology to identify 5+1 Phases of Training, corresponding to increasing amounts of implicit self-regularization.
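As a minimal sketch of the ESD analysis the abstract describes: for a layer weight matrix W, one computes the eigenvalues of the correlation matrix X = Wᵀ W and compares their distribution to the Marchenko-Pastur (MP) law that RMT predicts for an i.i.d. random matrix. The random Gaussian W below is a stand-in assumption for a real pre-trained layer; deviations of a trained layer's ESD from the MP bulk are what the phase phenomenology classifies.

```python
import numpy as np

# Stand-in for a DNN layer weight matrix: i.i.d. Gaussian entries with
# variance 1/N, so the ESD should follow the Marchenko-Pastur law.
rng = np.random.default_rng(0)
N, M = 1000, 500  # illustrative layer dimensions, N >= M
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, M))

# Empirical spectral density (ESD): eigenvalues of X = W^T W (M x M).
X = W.T @ W
eigs = np.linalg.eigvalsh(X)

# Marchenko-Pastur bulk edges for aspect ratio q = N / M.
q = N / M
lam_minus = (1 - np.sqrt(1 / q)) ** 2
lam_plus = (1 + np.sqrt(1 / q)) ** 2
print(f"min eigenvalue: {eigs.min():.3f}  (MP lower edge ~ {lam_minus:.3f})")
print(f"max eigenvalue: {eigs.max():.3f}  (MP upper edge ~ {lam_plus:.3f})")
```

For an untrained (random) layer, essentially all eigenvalues fall inside the MP bulk [λ₋, λ₊]; in trained layers, the lecture's 5+1 phases track how eigenvalue mass progressively escapes the bulk, from a few spikes to a heavy power-law tail.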
