Exploring Simple Siamese Representation Learning

Uploaded By: Myvideo
What makes contrastive learning work so well? This paper highlights the contribution of the Siamese architecture as a complement to data augmentation and shows that a Siamese network with a stop-gradient operation on one of its two branches is all you need for strong self-supervised learning results. The paper also presents an interesting k-means-style explanation of the optimization problem that contrastive self-supervised learning solves. Thanks for watching! Please subscribe! Paper links: SimSiam: https://ar
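To make the stop-gradient idea concrete, here is a minimal, framework-free sketch of SimSiam's symmetrized loss: the predictor output of one branch is compared against the encoder output of the other, and the comparison target is treated as a constant (in an autodiff framework this would be an explicit detach). Function names and the plain-Python style are illustrative, not the paper's actual implementation.

```python
import math

def l2_normalize(v):
    # Scale a vector to unit length (guard against the zero vector).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def neg_cosine(p, z):
    # D(p, z) = -cosine similarity; z stands for stopgrad(z):
    # it is treated as a fixed target, so no gradient would flow into it.
    p, z = l2_normalize(p), l2_normalize(z)
    return -sum(a * b for a, b in zip(p, z))

def simsiam_loss(p1, z1, p2, z2):
    # Symmetrized loss: L = D(p1, stopgrad(z2))/2 + D(p2, stopgrad(z1))/2,
    # where (p1, z1) and (p2, z2) come from two augmented views.
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)

# Perfectly aligned predictions and targets reach the minimum loss of -1.
print(simsiam_loss([1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0]))  # -1.0
```

In a real training loop only the predictor-side branch receives gradients from each term; the paper argues this asymmetry, not negative pairs or a momentum encoder, is what prevents representational collapse.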
