After we explored attacking LLMs, in this video we finally talk about defending against prompt injections. Is it even possible?

Buy my shitty font (advertisement):
Watch the complete AI series:

Language Models are Few-Shot Learners:
A Holistic Approach to Undesired Content Detection in the Real World:

Chapters:
00:00 - Intro
00:43 - AI Threat Model?
01:51 - Inherently Vulnerable to Prompt Injections
03:00 - It's not a Bug, it's a Feature!
04:49 - Don't Trust User Input
06:29 - Change the Prompt Design
08:07 - User Isolation
09:45 - Focus LLM on a Task
10:42 - Few-Shot Prompt
11:45 - Fine-Tuning Model
13:07 - Restrict Input Length
13:31 - Temperature 0
14:35 - Redundancy in Critical Systems
15:29 - Conclusion
16:21 - Checkout LiveOverfont

Hip Hop Rap Instrumental (Crying Over You) by christophermorrow, CC BY 3.0
Free Download / Stream:
Music promoted by Audio Library

=[ ❤️ Support ]=
→ per Video:
→ per Month:
2nd Channel:

=[ 🐕 Social ]=
→ Twitter:
→ Streaming:
→ TikTok: @liveoverflow_
→ Instagram:
→ Blog:
→ Subreddit:
→ Facebook: