In this video we talk about the technology used to create deepfakes, which can be used to impersonate someone else.

Sources:
1. Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe. First Order Motion Model for Image Animation
2. Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros. "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", in IEEE International Conference on Computer Vision (ICCV), 2017
3. Steven M. Seitz and Ira Kemelmacher-Shlizerman, University of Washington. Synthesizing Obama: Learning Lip Sync from Audio
4. Luisa Verdoliva. Media Forensics and DeepFakes: An Overview
5. Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner. FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces, and FaceForensics++: Learning to Detect Manipulated Facial Images
6. Yuval Nirkin, Yosi Keller, Tal Hassner. FSGAN: Subject Agnostic Face Swapping and Reenactment
7. Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner. Face2Face: Real-Time Face Capture and Reenactment of RGB Videos, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
8. Ye Jia, Yu Zhang, Ron J. Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, Yonghui Wu. Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis
9. Justus Thies, Mohamed Elgharib, Ayush Tewari, Christian Theobalt, Matthias Nießner. Neural Voice Puppetry: Audio-driven Facial Reenactment
10. Tero Karras, Samuli Laine, Timo Aila. A Style-Based Generator Architecture for Generative Adversarial Networks
11. Artsiom Sanakoyeu, Dmytro Kotovenko, Sabine Lang, Björn Ommer. A Style-Aware Content Loss for Real-time HD Style Transfer
12. Christoph Bregler, Michele Covell, Malcolm Slaney, Interval Research Corporation. Video Rewrite: Driving Visual Speech with Audio

Video materials used:

From research papers:
- Synthesizing Obama: Learning Lip Sync from Audio
- Neural Voice Puppetry: Audio-driven Facial Reenactment
- [ICCV 2019] FSGAN: Subject Agnostic Face Swapping and Reenactment
- FaceForensics++: Learning to Detect Manipulated Facial Images (ICCV 2019)
- FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces
- FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality
- Face2Face: Real-time Face Capture and Reenactment of RGB Videos (CVPR 2016 Oral)
- A Style-Based Generator Architecture for Generative Adversarial Networks
- First Order Motion Model for Image Animation
- Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

Videos used to create the deepfakes:
- Andrzej Duda Hot16Challenge2
- UWAGA! Dawid Myśliwiec przez cztery godziny MASAKRUJE pseudonaukowy bełkot. Z jedną przerwą w środku
- Grey czy gray? | Po Cudzemu 199
- Zjadłam MIĘSO Z PROBÓWKI!

Deepfakes used:
- Nick Cage DeepFakes Movie Compilation
- You Won't Believe What Obama Says In This Video!
- The Shining starring Jim Carrey: Episode 2 - The Bat [DeepFake]
- Willem Dafoe as Hannibal Lecter [DeepFake]
- Once Upon a Time in The Room [DeepFake]
- Home Stallone [DeepFake]
- Deepfake AI facial replacement - ZAO App example
- Videos from
- At the end, the clips accompanying the research paper Video Rewrite: Driving Visual Speech with Audio

Other:
- In the days before Photoshop (1984) | Retro
- AE Face Tools - Face Application for After Effects
- Insanely Realistic Creepy Computer Faces Are Here

#emcepcja