Frame decisions as bets to improve decision making

An interesting way to think about making decisions is to treat each decision as a bet with yourself about future versions of your life. Annie Duke, a former professional poker player who won substantial prize money and major titles, wrote a fascinating book on this idea called “Thinking in Bets”. I highly recommend reading it, as she brings a lot of fun and instructive examples, but if you just want the gist, this post is for you.
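To make the framing concrete, here is a toy expected-value comparison of two options treated as bets. All probabilities and payoffs are made up purely for illustration:

```python
# Toy example: two life decisions framed as bets.
# All probabilities and payoffs below are made-up illustration values.
p_great, payoff_great = 0.6, 10   # new job turns out great
p_bad, payoff_bad = 0.4, -4       # new job turns out badly
payoff_stay = 2                   # staying put: a safe, known outcome

ev_new_job = p_great * payoff_great + p_bad * payoff_bad  # 4.4
ev_stay = payoff_stay                                     # 2.0
print(f"new job: {ev_new_job}, stay: {ev_stay}")
```

Framing the choice as a bet forces you to write down the odds and the stakes, which is exactly the shift in thinking the book argues for.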
Read more →

How and why stable diffusion works for text-to-image generation

Stable Diffusion is all the rage in the deep learning community at the moment. It’s trending on Twitter under #stablediffusion and attracting attention all over the internet. We’ll take a look at the reasons for all this attention and, more importantly, see how the system works under the hood by walking through the well-written paper “High-Resolution Image Synthesis with Latent Diffusion Models” by Rombach et al., which is its foundation.
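As a teaser for what’s ahead, here is a toy sketch of the latent diffusion sampling loop the paper describes, with randomly initialized stand-ins for the real text encoder, U-Net, and VAE decoder. The modules, shapes, and plain DDPM-style update below are simplifications for illustration, not the actual Stable Diffusion code:

```python
import torch

torch.manual_seed(0)
unet = torch.nn.Conv2d(4, 4, 3, padding=1)      # stand-in noise predictor (the real U-Net
                                                # is conditioned on the timestep and text)
decoder = torch.nn.ConvTranspose2d(4, 3, 8, 8)  # stand-in VAE decoder: 64x64 latent -> 512x512 image

T = 50                                          # number of denoising steps
betas = torch.linspace(1e-4, 0.02, T)           # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

z = torch.randn(1, 4, 64, 64)                   # start from pure noise in latent space
for t in reversed(range(T)):
    eps = unet(z)                               # predict the noise present at step t
    # standard DDPM mean update using the predicted noise
    z = (z - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
    if t > 0:
        z = z + betas[t].sqrt() * torch.randn_like(z)

image = decoder(z)                              # decode the latent into image space
print(image.shape)                              # torch.Size([1, 3, 512, 512])
```

The key point, covered in the post, is that the expensive iterative denoising happens in a small latent space, and only a single decoder pass produces the full-resolution image.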
Read more →

Rethinking Depthwise Separable Convolutions in PyTorch

This is a follow-up to my previous post on depthwise separable convolutions in PyTorch. This article is based on the nice CVPR paper “Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets” by Haase and Amthor. Previously, I looked at depthwise separable convolutions, which are a drop-in replacement for standard convolutions, with a focus on computational and parameter efficiency. Basically, you can achieve similar results with far fewer parameters and FLOPs, which is why they are used in MobileNet-style architectures.
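As a quick refresher on the efficiency argument, here is a minimal PyTorch comparison of a standard 3x3 convolution with its depthwise separable counterpart (the channel sizes are arbitrary example values):

```python
import torch.nn as nn

in_ch, out_ch = 64, 128
standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise: one 3x3 filter per channel
    nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise: 1x1 channel mixing
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))  # 73856 vs. 8960 parameters
```

The factorization into a per-channel spatial filter plus a 1x1 channel mixer is what the paper revisits from the angle of intra-kernel correlations.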
Read more →

Creating Pleasant Plots With Seaborn

Seaborn is an awesome Python library for creating great-looking data plots. It’s a bit higher level than the often-used matplotlib, and this blog entry serves as a self-reminder about the plots I use most frequently. Its way of declaratively specifying what you want to plot, rather than plot details like markers and colors, is refreshing and frees up cognitive space for other tasks.
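For a flavor of that declarative style, here is a minimal example (it uses the “tips” sample dataset, which seaborn fetches on first use):

```python
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme()                    # seaborn's pleasant default styling
tips = sns.load_dataset("tips")    # bundled example dataset (downloaded on first use)

# Declarative: name the columns and the grouping; seaborn picks markers and colors.
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="time")
plt.show()
```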
Read more →

DINO - Emerging properties in self-supervised vision transformers

Today’s paper: “Emerging Properties in Self-Supervised Vision Transformers” by Mathilde Caron et al. Let’s get the dinosaur out of the room: the name DINO refers to self-distillation with no labels. The self-distillation part refers to self-supervised learning in a student-teacher setup, as is often seen in distillation. The catch, however, is that in contrast to normal distillation setups, where a previously trained teacher network trains a student network, here there are no labels and no pre-trained teacher.
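To preview the mechanics, here is a heavily simplified sketch of that student-teacher loop in PyTorch: the student is trained on a cross-entropy loss against sharpened teacher outputs, while the teacher is never backpropagated through and is only updated as an exponential moving average of the student. Real DINO adds multi-crop augmentation, a ViT backbone, and output centering; the shapes and hyperparameters here are illustrative:

```python
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Linear(384, 256)        # stand-in for backbone + projection head
teacher = copy.deepcopy(student)           # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad = False                # the teacher receives no gradients

opt = torch.optim.SGD(student.parameters(), lr=0.1)
momentum, t_temp, s_temp = 0.996, 0.04, 0.1

for step in range(10):
    x1, x2 = torch.randn(8, 384), torch.randn(8, 384)  # two "views" of the same images
    t_out = F.softmax(teacher(x1) / t_temp, dim=-1)    # sharpened teacher targets
    s_out = F.log_softmax(student(x2) / s_temp, dim=-1)
    loss = -(t_out * s_out).sum(dim=-1).mean()         # cross-entropy, no labels anywhere
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                              # teacher = EMA of student weights
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(momentum).add_(sp, alpha=1 - momentum)
```

How this avoids collapsing to a trivial solution, and which properties emerge from it, is what the post digs into.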
Read more →