DINO - Emerging properties in self-supervised vision transformers
Today’s paper: Emerging properties in self-supervised vision transformers by Mathilde Caron et al. Let’s get the dinosaur out of the room: the name DINO refers to self-distillation with no labels.
The self-distillation part refers to self-supervised learning in a student-teacher setup, as is often seen in distillation. The catch, however: in contrast to standard distillation setups, where a previously trained teacher network trains a student network, DINO works without labels and without pre-training the teacher.
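To make this concrete, below is a minimal sketch of the core mechanics in PyTorch (my own simplification, not the authors’ code; the function names and the momentum/temperature values are illustrative): the teacher is an exponential-moving-average copy of the student, and the student is trained to match the teacher’s centered, sharpened output distribution. No labels appear anywhere.

```python
import torch
import torch.nn.functional as F

def ema_update(student, teacher, momentum=0.996):
    # The teacher is never trained by backprop; its weights are an
    # exponential moving average (EMA) of the student's weights.
    with torch.no_grad():
        for p_s, p_t in zip(student.parameters(), teacher.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1 - momentum)

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    # Cross-entropy between the teacher's centered, sharpened output
    # distribution and the student's output distribution.
    t = F.softmax((teacher_out - center) / tau_t, dim=-1).detach()
    s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * s).sum(dim=-1).mean()
```

In the full method, student and teacher see different augmented crops of the same image, and the center is itself a running average of teacher outputs; centering and sharpening together keep the networks from collapsing to a trivial constant output.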
Rethinking ‘Batch’ in BatchNorm
Today’s paper: Rethinking ‘Batch’ in BatchNorm by Wu & Johnson

“BatchNorm is a critical building block in modern convolutional neural networks. Its unique property of operating on ‘batches’ instead of individual samples introduces significantly different behaviors from most other operations in deep learning. As a result, it leads to many hidden caveats that can negatively impact model’s performance in subtle ways.”
This is a quote from the paper’s abstract; the emphasis is mine and marks what caught my attention.
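To see what operating on batches means in practice, here is a small PyTorch illustration (my own example, not from the paper): in training mode, a sample’s output depends on which other samples happen to share its batch, whereas in evaluation mode BatchNorm falls back to running statistics and becomes a deterministic per-sample operation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(num_features=3)
x = torch.randn(8, 3)

bn.train()
out_full = bn(x)      # normalized with the mean/var of all 8 samples
out_half = bn(x[:4])  # the same 4 samples, statistics of only these 4
print(torch.allclose(out_full[:4], out_half))  # False: batchmates matter

bn.eval()
print(torch.allclose(bn(x[:4]), bn(x[:4])))    # True: running stats only
```

This train/test inconsistency is precisely the kind of hidden caveat the paper dissects.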
P-Diff: Learning Classifier with Noisy Labels based on Probability Difference Distributions
Label noise in digital pathology
In the field of digital pathology and other health-related deep learning applications, label noise is an important challenge to consider during training.
It’s inherent to the medical field, as the problems are extremely challenging even for trained experts, which leads to high intra- as well as inter-observer variability.
This blog post dives into the idea of the paper P-DIFF: Learning Classifier with Noisy Labels based on Probability Difference Distributions, authored by researchers at Microsoft in China.
Meta-learning from noisy labels
Label noise introduction
Training machine learning models requires a lot of data. Often, it is quite costly to obtain sufficient data for your problem. Sometimes, you might even need domain experts, who don’t have much time and are expensive.
One option you can look into is getting cheaper, lower-quality data, i.e. having less experienced people annotate the data. This usually has the side effect of making your labels noisier.
Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition

Today’s paper: Pyramidal Convolution by Duta et al. This is the third paper in the new series Deep Learning Papers visualized, and it’s about using convolutions in a pyramidal style to capture information at different magnifications of an image. The authors show how a pyramidal convolution can be constructed and apply it to several problems in the visual domain. What’s really interesting is that the number of parameters can be kept the same while performance tends to improve.
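As a rough sketch of the construction (my own PyTorch approximation, not the authors’ implementation; the kernel sizes and group counts are illustrative): several kernel sizes run in parallel over the same input and their outputs are concatenated, with larger kernels using more groups so that each branch costs roughly the same number of parameters.

```python
import torch
import torch.nn as nn

class PyConv2d(nn.Module):
    # Parallel convolutions with growing kernel sizes and group counts.
    # Parameters per branch scale with k*k * in_ch * out_ch / groups,
    # so raising the group count offsets the cost of larger kernels.
    def __init__(self, in_channels, out_channels,
                 kernel_sizes=(3, 5, 7, 9), groups=(1, 4, 8, 16)):
        super().__init__()
        assert out_channels % len(kernel_sizes) == 0
        branch_out = out_channels // len(kernel_sizes)
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, branch_out, kernel_size=k,
                      padding=k // 2, groups=g, bias=False)
            for k, g in zip(kernel_sizes, groups)
        ])

    def forward(self, x):
        # each branch sees the full input at a different receptive field
        return torch.cat([b(x) for b in self.branches], dim=1)

# e.g. used in place of a plain 3x3 convolution with 64 channels
layer = PyConv2d(64, 64)
out = layer(torch.randn(1, 64, 32, 32))  # -> shape (1, 64, 32, 32)
```

Each branch contributes a quarter of the output channels here; because the group count grows with the kernel size, the 9×9 branch ends up not much more expensive than the 3×3 one.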