Computing Testing Error without a Testing Set


We define a methodology that directly measures how well a deep neural network generalizes to unseen samples, without the need for a testing dataset. To do this, consider a network that maps an input variable x to an output variable y. For each specific input, the network uses a subset of its parameters to compute the corresponding output. We show that if this subset of parameters is similar across most inputs, then the network has learned to generalize, whereas if the subset of parameters used is highly dependent on the inputs, then the network has used local parameters to memorize samples. These functional differences can be readily computed using algebraic topology, yielding an approach that accurately estimates the testing error of any given network. We report results on several computer vision problems, including object recognition, facial expression analysis, and image segmentation.
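The core intuition above — that a network generalizes when most inputs activate a similar subset of its parameters — can be illustrated with a toy sketch. The code below is an assumption-laden simplification, not the paper's method: instead of computing topological summaries with algebraic topology, it measures pairwise Jaccard similarity between the binary ReLU activation masks that different inputs induce in a small random network. High mean similarity would play the role of the "shared parameter subset" signal described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network with random weights. This is a
# hypothetical stand-in; the paper analyzes trained networks.
W1 = rng.normal(size=(20, 10))

def active_mask(x):
    """Binary mask of hidden units that fire (ReLU > 0) for input x."""
    return (W1 @ x) > 0

# Sample inputs and record which subset of the network each input uses.
X = rng.normal(size=(100, 10))
masks = np.array([active_mask(x) for x in X])

def jaccard(a, b):
    """Jaccard similarity between two binary activation masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# If most inputs use a similar subset of units, mean similarity is high
# (a generalization signal in the spirit of the abstract); strongly
# input-dependent subsets would instead suggest memorization.
n = len(masks)
sims = [jaccard(masks[i], masks[j]) for i in range(n) for j in range(i + 1, n)]
print(f"mean activation-mask similarity: {np.mean(sims):.3f}")
```

This only captures the first-order idea; the paper's actual estimator builds topological descriptors of the network's functional graph, which a single similarity score cannot replicate.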
