Theoretical Deep Learning #2: Worst-case bounds. Part 3

We present a different approach to bounding the test-train risk difference. This approach naturally leads to the notion of Rademacher complexity, which we upper-bound using covering numbers. These covering numbers turn out to be computable for deep ReLU nets with upper-bounded weight norms. Combining these results, we obtain a bound on the test-train risk difference that depends on the Lipschitz constant of the learned network. Find all relevant info on the GitHub page: Our open-source framework
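
For reference, a standard formulation of the two bounds sketched above (the exact constants vary by reference and may differ from those used in the lecture). First, the Rademacher-complexity generalization bound: for a loss class $\mathcal{F}$ with values in $[0, 1]$ and an i.i.d. sample of size $n$, with probability at least $1 - \delta$,

$$\sup_{f \in \mathcal{F}} \left( R(f) - \hat{R}_n(f) \right) \le 2 \mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2n}},$$

where $R(f)$ is the test risk, $\hat{R}_n(f)$ the train risk, and $\mathfrak{R}_n(\mathcal{F})$ the Rademacher complexity. Second, Dudley's entropy integral, which controls the Rademacher complexity via covering numbers $\mathcal{N}(\mathcal{F}, \varepsilon, \|\cdot\|)$ (here with respect to the empirical $L_2$ norm on the sample):

$$\mathfrak{R}_n(\mathcal{F}) \le \inf_{\alpha > 0} \left( 4\alpha + \frac{12}{\sqrt{n}} \int_{\alpha}^{\infty} \sqrt{\log \mathcal{N}(\mathcal{F}, \varepsilon, \|\cdot\|)} \, d\varepsilon \right).$$

Plugging a covering-number estimate for deep ReLU nets with bounded weight norms into the second bound, and the result into the first, yields the Lipschitz-dependent test-train risk bound described above.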
