representation learning (8) · ssl (8) · vision (8): all eight papers below carry all three tags.
I-JEPA: Image-based Joint Embedding Predictive Architecture
MaskFeat: Masked Feature Prediction for Self-Supervised Visual Pre-Training
BYOL: Bootstrap Your Own Latent
DINO: Self-Distillation with No Labels
MAE: Masked Autoencoders Are Scalable Vision Learners
SwAV: Swapping Assignments between Views
MoCo: Momentum Contrast for Unsupervised Visual Representation Learning
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
joint embedding (6)
I-JEPA: Image-based Joint Embedding Predictive Architecture
BYOL: Bootstrap Your Own Latent
DINO: Self-Distillation with No Labels
SwAV: Swapping Assignments between Views
MoCo: Momentum Contrast for Unsupervised Visual Representation Learning
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations