We'll show that prompt injections are more than a novelty or nuisance- in fact, a whole new generation of malware and manipulation can now run entirely inside of large language models like ChatGPT. As companies race to integrate them with applications of all kinds we will highlight the need to think thoroughly about the security of these new systems. You'll find out how your personal assistant of the future might be compromised and what consequences could ensue. By: Sahar Abdelnabi , Christoph Endres , Mario Fritz , Kai Greshake , Shailesh Mishra Full Abstract and Presentation Materials: #compromising-llms-the-advent-of-ai-malware-33075
Hide player controls
Hide resume playing