How Do Vision Transformers Work? Paper explained | multi-head self-attention & convolutions

It turns out that multi-head self-attention and convolutions are complementary. So, what makes multi-head self-attention different from convolutions? How and why do Vision Transformers work? In this video, we find out by explaining the paper "How Do Vision Transformers Work?" by Park & Kim, 2022.

SPONSOR: Weights & Biases 👉

⏩ Vision Transformers explained playlist:
📺 ViT: An image is worth 16x16 pixels:
📺 Swin Transformer:
📺 ConvNext:
📺 DeiT:
📺 Adversarial attacks:

❓ Check out our daily #MachineLearning Quiz Questions: ►

Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏 Don Ro
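As a rough illustration of the contrast the video sets up (a minimal sketch assuming PyTorch, not code from the paper or the channel): a convolution mixes only a local pixel neighborhood, while multi-head self-attention lets every patch attend to every other patch.

# Minimal sketch (PyTorch assumed): local convolution vs. global multi-head self-attention.
import torch
import torch.nn as nn

batch, channels, height, width = 1, 64, 14, 14
image = torch.randn(batch, channels, height, width)

# Convolution: each output pixel depends only on a 3x3 local neighborhood.
conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
conv_out = conv(image)                      # (1, 64, 14, 14)

# Multi-head self-attention: every token attends to every other token,
# so each output depends on the whole sequence of patches.
msa = nn.MultiheadAttention(embed_dim=channels, num_heads=8, batch_first=True)
tokens = image.flatten(2).transpose(1, 2)   # (1, 196, 64): one token per spatial position
msa_out, attn_weights = msa(tokens, tokens, tokens)

print(conv_out.shape)       # torch.Size([1, 64, 14, 14])
print(msa_out.shape)        # torch.Size([1, 196, 64])
print(attn_weights.shape)   # torch.Size([1, 196, 196]): global mixing across all positions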
