Geoffrey Hinton | Will digital intelligence replace biological intelligence?

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

Timestamps:
0:00:00 - 0:07:20  Opening remarks and introduction
0:07:21 - 0:08:43  Overview
0:08:44 - 0:20:08  Two different ways to do computation
0:20:09 - 0:30:11  Do large language models really understand what they are saying?
0:30:12 - 0:49:50  The first neural net language model and how it works
0:49:51 - 0:57:24  Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 - 1:03:18  Does digital intelligence have subjective experience?
1:03:19 - 1:55:36  Q&A
1:55:37 - 1:58:37  Closing remarks

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low-power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal: when the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us. The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
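
The abstract's claim that identical digital copies can share what they learn "by averaging their weight changes" can be made concrete with a minimal sketch. The Python below is an illustrative toy under stated assumptions (made-up names and shapes, a faked gradient step in place of real training), not code from the talk.

import numpy as np

# Toy sketch: identical digital copies pool knowledge by averaging weight
# changes; analog ("mortal") hardware cannot, because each device's weights
# only have meaning on that device. The "gradient" here is random noise
# standing in for a real training step on a private data shard.

rng = np.random.default_rng(0)
n_copies, dim = 4, 8

def local_update(weights, lr=0.01):
    # Stand-in for one training step; a real system would compute the
    # gradient of a loss on that copy's own data.
    fake_grad = rng.normal(size=weights.shape)
    return weights - lr * fake_grad

shared = rng.normal(size=dim)   # every copy starts from identical weights

# Each copy trains on its own data, then the copies share what they learned
# in a single step by averaging their weight changes.
deltas = [local_update(shared.copy()) - shared for _ in range(n_copies)]
shared = shared + np.mean(deltas, axis=0)

# An analog computer cannot average weights with another device; it can only
# transfer knowledge by having a student mimic its outputs (distillation),
# which the abstract characterizes as slow and painful by comparison.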

About Geoffrey Hinton

Geoffrey Hinton received his PhD in artificial intelligence from Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an emeritus professor. In 2013, Google acquired Hinton’s neural networks startup, DNNresearch, which developed out of his research at U of T. Subsequently, Hinton was a Vice President and Engineering Fellow at Google until 2023. He is a founder of the Vector Institute for Artificial Intelligence, where he continues to serve as Chief Scientific Adviser.

Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification. Hinton is among the most widely cited computer scientists in the world.

Hinton is a fellow of the UK Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence, and a foreign member of the US National Academy of Engineering and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart Prize, the IJCAI Award for Research Excellence, the Killam Prize for Engineering, the IEEE Frank Rosenblatt Medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold Medal, the NEC C&C Award, the BBVA Award, the Honda Prize, and, most notably, the ACM A.M. Turing Award.
