Qiongjie's Notes

Tag: SSL


I-JEPA: Image-based Joint Embedding Predictive Architecture

 Posted on October 9, 2025  |  113 words

A non-generative, self-supervised framework predicting high-level feature representations of masked regions from visible context, enabling scalable and efficient visual pretraining. [Read More]
SSL  Vision  Representation Learning  Joint Embedding  Masked Image Modeling 
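
A rough illustration of the objective (not the post's code): toy linear layers stand in for the ViT context encoder, EMA target encoder, and predictor, and all names, shapes, and the masking scheme are made up for brevity.

```python
# Toy sketch of the I-JEPA idea: regress the *latent* features of masked patches
# from visible context, never reconstructing pixels. Encoders here are stand-in
# linear layers; the real method uses ViTs and an EMA-updated target encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N, D = 2, 16, 32                      # batch, patches, embedding dim
patches = torch.randn(B, N, D)           # pretend patch embeddings
target_idx = torch.tensor([3, 7, 11])    # patches whose latents must be predicted

context_encoder = nn.Linear(D, D)        # trained online
target_encoder  = nn.Linear(D, D)        # in practice: EMA copy of the context encoder
predictor       = nn.Linear(D, D)        # narrow predictor head

with torch.no_grad():                    # stop-gradient on the target branch
    targets = target_encoder(patches)[:, target_idx]

context = patches.clone()
context[:, target_idx] = 0               # hide target blocks from the context branch
preds = predictor(context_encoder(context))[:, target_idx]

loss = F.smooth_l1_loss(preds, targets)  # feature-space regression, not pixel loss
loss.backward()
```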

MaskFeat: Masked Feature Prediction for Self-Supervised Visual Pre-Training

 Posted on October 9, 2025  |  1026 words

Predict handcrafted features (e.g., HOG) of masked regions instead of raw pixels. [Read More]
SSL  Vision  Representation Learning  Masked Image Modeling 
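
A toy sketch of the idea, with a crude per-patch gradient magnitude standing in for the HOG descriptor; every module name and shape below is illustrative, not the post's implementation.

```python
# Toy sketch of the MaskFeat objective: regress a handcrafted feature of the
# masked patches (HOG in the paper; a simple gradient magnitude here) instead
# of their raw pixel values.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N, P = 2, 16, 8 * 8                   # batch, patches, pixels per patch
pixels = torch.randn(B, N, P)            # flattened patch pixels
mask = torch.zeros(B, N, dtype=torch.bool)
mask[:, ::4] = True                      # mask every 4th patch

def handcrafted_feature(x):              # stand-in for the HOG descriptor
    return x.diff(dim=-1).abs()          # crude "gradient magnitude" per patch

encoder = nn.Linear(P, 64)
head = nn.Linear(64, P - 1)              # predicts the handcrafted target

visible = pixels.masked_fill(mask.unsqueeze(-1), 0.0)    # hide masked patches
pred = head(encoder(visible))
target = handcrafted_feature(pixels)

loss = F.mse_loss(pred[mask], target[mask])   # loss only on masked positions
loss.backward()
```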

BYOL: Bootstrap Your Own Latent

 Posted on October 8, 2025  |  952 words

Learn representations by predicting one network’s output from another’s, without using negative samples. [Read More]
SSL  Vision  Representation Learning  Joint Embedding  Distillation Methods 
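
A minimal sketch of the loss and the momentum (EMA) update, with linear layers in place of the real encoder/projector/predictor stack; the momentum value and shapes are placeholders.

```python
# Toy sketch of the BYOL objective: an online network predicts the target
# network's projection of another view, with no negative pairs at all.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 32
online_encoder = nn.Linear(D, D)
predictor      = nn.Linear(D, D)         # only the online branch has a predictor
target_encoder = nn.Linear(D, D)         # in practice: EMA copy of the online encoder

view1, view2 = torch.randn(8, D), torch.randn(8, D)   # two augmented views

p = F.normalize(predictor(online_encoder(view1)), dim=-1)
with torch.no_grad():                    # stop-gradient: the target gives no gradients
    z = F.normalize(target_encoder(view2), dim=-1)

loss = (2 - 2 * (p * z).sum(dim=-1)).mean()   # equivalent to a normalized MSE
loss.backward()

# Momentum (EMA) update of the target encoder, with tau close to 1:
tau = 0.99
with torch.no_grad():
    for t, o in zip(target_encoder.parameters(), online_encoder.parameters()):
        t.mul_(tau).add_((1 - tau) * o)
```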

DINO: Self-Distillation with No Labels

 Posted on October 8, 2025  |  1257 words

A student network learns from a teacher network using self-distillation, producing emergent semantic attention maps. [Read More]
SSL  Vision  Representation Learning  Joint Embedding  Distillation Methods 
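
A minimal sketch of the self-distillation loss with teacher centering and sharpening; the linear layers, temperatures, and update rates below are placeholders rather than the actual recipe.

```python
# Toy sketch of the DINO loss: cross-entropy between the student's distribution
# and the (centered, sharpened) teacher's distribution over two views, with a
# stop-gradient on the teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, K = 32, 64                            # feature dim, number of output "prototypes"
student = nn.Linear(D, K)
teacher = nn.Linear(D, K)                # in practice: EMA copy of the student
center = torch.zeros(K)                  # running center that prevents collapse

view1, view2 = torch.randn(8, D), torch.randn(8, D)

s_logits = student(view1) / 0.1          # student temperature
with torch.no_grad():
    t_probs = F.softmax((teacher(view2) - center) / 0.04, dim=-1)  # sharper teacher

loss = -(t_probs * F.log_softmax(s_logits, dim=-1)).sum(dim=-1).mean()
loss.backward()

with torch.no_grad():                    # update the center with a running mean
    center = 0.9 * center + 0.1 * teacher(view2).mean(dim=0)
```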

MAE: Masked Autoencoders Are Scalable Vision Learners

 Posted on October 8, 2025  |  1330 words

Randomly mask image patches and reconstruct the missing ones to learn context-aware visual representations. [Read More]
SSL  Vision  Representation Learning  Masked Image Modeling 
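
A toy sketch of the masking-and-reconstruction loop, assuming linear stand-ins for the ViT encoder and decoder; the real method uses transformer blocks and roughly 75% masking.

```python
# Toy sketch of the MAE recipe: encode only the visible patches, then have a
# light decoder reconstruct the raw pixels of the masked ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N, P, D = 2, 16, 8 * 8, 32            # batch, patches, pixels per patch, latent dim
pixels = torch.randn(B, N, P)            # flattened image patches

perm = torch.rand(B, N).argsort(dim=1)   # random patch order per sample
keep_idx = perm[:, : N // 4]             # keep 25% of patches, mask the rest

encoder = nn.Linear(P, D)
decoder = nn.Linear(D, P)                # lightweight decoder back to pixel space
mask_token = nn.Parameter(torch.zeros(D))

visible = torch.gather(pixels, 1, keep_idx.unsqueeze(-1).expand(-1, -1, P))
latents = encoder(visible)               # the encoder never sees masked patches

full = mask_token.expand(B, N, D).clone()                            # mask tokens everywhere...
full.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, D), latents)  # ...visible latents filled in

masked = torch.ones(B, N, dtype=torch.bool)
masked[torch.arange(B).unsqueeze(1), keep_idx] = False   # visible positions are not masked

recon = decoder(full)                                    # predict pixels for every patch
loss = F.mse_loss(recon[masked], pixels[masked])         # loss only on masked patches
loss.backward()
```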

SwAV: Swapping Assignments between Views

 Posted on October 8, 2025  |  1276 words

Simultaneously cluster the data and learn visual representations by enforcing consistency between cluster assignments, or ‘codes’, generated from different augmented views of the same image. [Read More]
SSL  Vision  Representation Learning  Joint Embedding  Clustering Methods 
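
A minimal sketch of the swapped-prediction loss; note that the real method computes the codes with the Sinkhorn-Knopp algorithm, for which a sharp softmax merely stands in here, and all modules and temperatures are illustrative.

```python
# Toy sketch of SwAV's swapped prediction: cluster codes computed from one view
# must be predicted from the features of the other view.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, K = 32, 16                            # feature dim, number of prototypes
encoder = nn.Linear(D, D)
prototypes = nn.Linear(D, K, bias=False) # trainable cluster prototypes

x1, x2 = torch.randn(8, D), torch.randn(8, D)   # two augmented views
z1 = F.normalize(encoder(x1), dim=-1)
z2 = F.normalize(encoder(x2), dim=-1)
s1, s2 = prototypes(z1), prototypes(z2)  # similarity of features to each prototype

with torch.no_grad():                    # codes are targets, so no gradient
    q1 = F.softmax(s1 / 0.05, dim=-1)    # stand-in for Sinkhorn equipartition codes
    q2 = F.softmax(s2 / 0.05, dim=-1)

# Swapped prediction: view 1 must predict view 2's code and vice versa.
loss_12 = -(q2 * F.log_softmax(s1 / 0.1, dim=-1)).sum(dim=-1).mean()
loss_21 = -(q1 * F.log_softmax(s2 / 0.1, dim=-1)).sum(dim=-1).mean()
loss = loss_12 + loss_21
loss.backward()
```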

MoCo: Momentum Contrast for Unsupervised Visual Representation Learning

 Posted on October 7, 2025  |  1055 words

Stabilize and scale contrastive learning by maintaining a dynamic dictionary of keys with momentum-based encoder updates; a cornerstone of modern SSL methods. [Read More]
SSL  Vision  Representation Learning  Joint Embedding  Contrastive Methods 
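
A toy sketch of the queue-based InfoNCE loss together with the momentum update of the key encoder; the queue size, momentum, temperature, and linear "encoders" are all illustrative.

```python
# Toy sketch of MoCo: a query encoder is contrasted against keys from a
# momentum-updated key encoder, with extra negatives drawn from a queue.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, QUEUE = 32, 256
encoder_q = nn.Linear(D, D)
encoder_k = nn.Linear(D, D)              # in practice: momentum (EMA) copy of encoder_q
queue = F.normalize(torch.randn(QUEUE, D), dim=-1)   # dictionary of old keys

x_q, x_k = torch.randn(8, D), torch.randn(8, D)      # two views of the same images
q = F.normalize(encoder_q(x_q), dim=-1)
with torch.no_grad():                    # keys carry no gradient
    k = F.normalize(encoder_k(x_k), dim=-1)

l_pos = (q * k).sum(dim=-1, keepdim=True)            # positive logits (B, 1)
l_neg = q @ queue.t()                                 # negative logits (B, QUEUE)
logits = torch.cat([l_pos, l_neg], dim=1) / 0.07      # temperature
labels = torch.zeros(len(q), dtype=torch.long)        # the positive sits at index 0
loss = F.cross_entropy(logits, labels)
loss.backward()

# Momentum update of the key encoder, then enqueue the new keys (FIFO):
m = 0.999
with torch.no_grad():
    for pk, pq in zip(encoder_k.parameters(), encoder_q.parameters()):
        pk.mul_(m).add_((1 - m) * pq)
queue = torch.cat([k, queue], dim=0)[:QUEUE]
```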

SimCLR: A Simple Framework for Contrastive Learning of Visual Representations

 Posted on October 7, 2025  |  1029 words  • Other languages: CH

Learn invariant representations by maximizing similarity between augmented views of the same image while contrasting with others. [Read More]
SSL  Vision  Representation Learning  Joint Embedding  Contrastive Methods 
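
A minimal sketch of the NT-Xent objective under toy assumptions (a linear encoder and projection head, temperature 0.5, batch of 8); shapes and modules are placeholders, not the post's code.

```python
# Toy sketch of the SimCLR (NT-Xent) loss: pull the two augmented views of each
# image together and push them away from every other sample in the batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, n = 32, 8                             # feature dim, batch size
encoder = nn.Linear(D, D)
proj = nn.Linear(D, D)                   # projection head, used only for the loss

x1, x2 = torch.randn(n, D), torch.randn(n, D)    # two augmented views of the same images
z = F.normalize(proj(encoder(torch.cat([x1, x2]))), dim=-1)        # (2n, D)

sim = z @ z.t() / 0.5                                              # cosine similarities / temperature
sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # view i is positive with view i+n
loss = F.cross_entropy(sim, targets)                               # InfoNCE over the batch
loss.backward()
```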
