For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Transcript:
Apple Podcasts:
Spotify:
Follow me on Twitter:

Timestamps:
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society’s response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?