We study the learning of fully connected neural networks for binary classification. For the networks of interest, we assume that the L1-norm of the incoming weights of every neuron is bounded by a constant. We further assume that there exists a neural network that separates the positive and negative samples by a constant margin. Under these assumptions, we present an efficient algorithm that learns a neural network with arbitrarily small generalization error ε > 0. The algorithm's sample complexity and time complexity …
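Stated formally, the two assumptions can be written as follows; the notation is our own, since the abstract fixes no symbols (w_j for the incoming weight vector of neuron j, B for the norm bound, f* for the separating network, and γ for the margin):

\|w_j\|_1 \le B \quad \text{for every neuron } j,
\qquad
y_i \, f^{*}(x_i) \ge \gamma > 0 \quad \text{for every training sample } (x_i, y_i).

Intuitively, the L1 bound limits each neuron's sensitivity to its inputs, while the margin condition rules out datasets that are arbitrarily hard to separate.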