State-of-the-art deep learning models do not truly embody understanding. This panel will focus on AGI transparency, auditability, and explainability; the differences between causal understanding and prediction; and the surrounding practical, systemic, and ethical issues. Panelists include AI experts Joscha Bach, Ben Goertzel, and Monica Anderson!

0:00 Intro
1:00 Monica Anderson on what is understanding?
2:00 Joscha Bach answers what is understanding?
3:12 Discrepancies in descriptions of understanding
3:54 Joscha on creativity vs. deciding and understanding
7:19 Context-free models vs. context-containing models (Monica)
10:18 Modelling & embodiment (Joscha)
14:35 Language models (Monica & Joscha)
17:17 Are causal models required for understanding?
18:26 Can imitation become understanding?
20:26 Can we share understanding?
21:08 Systematic differences between people driving cars vs. self-driving cars
22:45 Embodiment and symbol grounding. Do people share symbol grounding?