For more information about Stanford's Artificial Intelligence professional and graduate programs visit:

This lecture covers:
1. The course (10 min)
2. Human language and word meaning (15 min)
3. Word2vec algorithm introduction (15 min)
4. Word2vec objective function gradients (25 min) (the objective and its gradient are sketched after the timestamps below)
5. Optimization basics (5 min)
6. Looking at word vectors (10 min or less)

Key learning: The (really surprising!) result that word meaning can be represented rather well by a large vector of real numbers (a toy illustration follows the timestamps below).

This course will teach:
1. The foundations of effective modern methods for deep learning applied to NLP: basics first, then key methods used in NLP (recurrent networks, attention, transformers, etc.)
2. A big-picture understanding of human languages and the difficulties in understanding and producing them
3. An understanding of, and the ability to build, systems (in PyTorch) for some of the major problems in NLP: word meaning, dependency parsing, machine translation, question answering

To learn more about this course visit:
To follow along with the course schedule and syllabus visit:

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)

0:00 Introduction
1:43 Goals
3:10 Human Language
10:07 Google Translate
10:43 GPT
14:13 Meaning
16:19 WordNet
19:11 Word Relationships
20:27 Distributional Semantics
23:33 Word Embeddings
27:31 Word2vec
37:55 How to minimize loss
39:55 Interactive whiteboard
41:10 Gradient
48:50 Chain Rule
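A toy illustration of the key learning above: a minimal sketch using NumPy with made-up 3-dimensional "word vectors" (the values and words below are invented for illustration; real embeddings such as word2vec or GloVe are learned from corpus statistics and have hundreds of dimensions). The point is that similarity of meaning shows up as similarity of direction in the vector space.

```python
import numpy as np

# Hypothetical 3-d "word vectors", invented for illustration only;
# real word vectors are learned and much higher-dimensional.
vectors = {
    "king":   np.array([0.80, 0.65, 0.10]),
    "queen":  np.array([0.75, 0.70, 0.12]),
    "banana": np.array([-0.30, 0.20, 0.90]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: near 1 for similar directions, near 0 for unrelated ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words with related meanings get high similarity; unrelated words do not.
print(cosine(vectors["king"], vectors["queen"]))   # ~0.998
print(cosine(vectors["king"], vectors["banana"]))  # ~-0.02
```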
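For the objective-and-gradient portion of the outline (items 3 and 4, and the Gradient and Chain Rule chapters), here is a brief sketch of the skip-gram formulation as it is usually written; this is standard notation, not a quote from the lecture slides:

```latex
% Skip-gram objective: average negative log-likelihood over a corpus
% w_1, ..., w_T, using a context window of radius m around each position.
J(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j \ne 0}} \log P(w_{t+j} \mid w_t; \theta)

% Probability of an outside word o given a center word c: a softmax over
% dot products of outside vectors u and center vectors v, over vocabulary V.
P(o \mid c) = \frac{\exp(u_o^{\top} v_c)}{\sum_{w \in V} \exp(u_w^{\top} v_c)}

% Gradient with respect to the center vector, obtained via the chain rule:
% the observed outside vector minus the model's expected outside vector.
\frac{\partial}{\partial v_c} \log P(o \mid c) = u_o - \sum_{x \in V} P(x \mid c)\, u_x
```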