How Neural Networks Learned to Talk | ChatGPT: A 30 Year History

Uploaded By: Myvideo
This video explores the journey of language models, from their modest beginnings through the development of OpenAI's GPT models, with hints at Q* and Google Gemini. Our journey takes us through the key moments in neural network research on next-word prediction. We delve into the early experiments with tiny language models in the 1980s, highlighting significant contributions by researchers such as Michael I. Jordan, who introduced recurrent neural networks, and Jeffrey Elman, whose work on learning word boundaries revolutionized our understanding of language processing.

Featuring Noam Chomsky, Douglas Hofstadter, Michael I. Jordan, Jeffrey Elman, Geoffrey Hinton, Ilya Sutskever, Andrej Karpathy, Yann LeCun, Sam Altman, and more.

My script, references & visualizations here: Consider joining my channel as a YouTube member: This is the last video in the series “
