Subutai Ahmad walks through the framework and results of a study he conducted on dimensionality and sparsity in deep learning networks. Using the Google Speech Commands (GSC) dataset, he explores how dimensionality relates to the size and accuracy of sparse networks. He also asks whether scaling laws for sparsity exist in deep learning, analogous to the mathematics of sparse distributed representations in the brain. The talk builds on the paper “How Can We Be So Dense? The Benefits of Using Highly Sparse Representations.”