0:00 Welcome by Rina Panigrahy
11:40 Workshop's goal
16:23 talk on "How to Augment Supervised Learning with Reasoning" by Leslie Valiant
39:40 talk on "Language in the brain and word representations" by Christos Papadimitriou
1:02:15 talk on "What Do Our Models Really Learn?" by Aleksander Madry
1:27:08 talk on "Implicit Symbolic Representation and Reasoning in Deep Networks" by Jacob Andreas
1:48:16 Panel Discussion on "Is there a Mathematical model for the Mind?"
2:47:09 talk on "Deep Reinforcement Learning and Distributional Shift" by Sergey Levine
3:10:53 talk on "Towards a Representation Learning framework for Reinforcement Learning" by Alekh Agarwal
3:32:37 talk on "Principles for Tackling Distribution Shift: Pessimism, Adaptation, and Anticipation" by Chelsea Finn
4:20:27 talk on "Can human brain recordings help us design better AI models?" by Leila Wehbe
4:38:52 talk on "The benefits of unified frameworks for language understanding" by Colin Raffel
4:54:26 talk on "Are Transformers Universal Approx