-
Animated Local Explanations
Explainability is the part of machine learning where we try to demystify the factors that influence a particular model prediction, particularly when the predictive function is complex and nonlinear. In a typical paper or blog post you'll see explanations visualised as a nice static image. I recently read a paper that inspired me to produce animations of explanations instead, along with a few additional insights.
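The post walks through the animations in detail; as a flavour of the idea, here is a minimal sketch of one way to animate a local explanation: perturb each feature of a single instance, measure how the predicted probability moves, and redraw the attribution bars as the perturbation scale grows. The dataset, model, and perturbation scheme below are illustrative assumptions, not the ones used in the post.

```python
# A minimal sketch of a perturbation-based local explanation, animated with
# matplotlib. The model, dataset, and perturbation scheme are illustrative
# choices, not the ones from the post.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                  # the instance to explain
base = model.predict_proba([x])[0, 1]     # its baseline predicted probability

def attribution(x, noise_scale):
    """Mean drop in P(y=1) when each feature is perturbed with Gaussian noise."""
    deltas = np.zeros(len(x))
    for j in range(len(x)):
        perturbed = np.tile(x, (50, 1))
        perturbed[:, j] += np.random.normal(0, noise_scale * X[:, j].std(), 50)
        deltas[j] = base - model.predict_proba(perturbed)[:, 1].mean()
    return deltas

fig, ax = plt.subplots()
bars = ax.bar(range(len(x)), attribution(x, 0.1))
ax.set_xlabel("feature index")
ax.set_ylabel("effect on P(y=1)")

def update(frame):
    # Animate the explanation as the perturbation scale grows.
    for bar, h in zip(bars, attribution(x, 0.1 + 0.05 * frame)):
        bar.set_height(h)
    return bars

anim = FuncAnimation(fig, update, frames=20, blit=False)
plt.show()
```

Animating over the perturbation scale, rather than showing a single static bar chart, makes it easy to see which attributions are stable and which only appear once the perturbations grow large.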
-
Taking the Fourier transform of your cat
There are a few YouTube channels I watch fairly regularly that have recently been making some truly beautiful visualisations of the Fourier transform (I'll link to them in the text). The visualisations are beautiful not only aesthetically but also in how intuitively they show how the Fourier transform works and what it achieves. I was inspired to try to replicate the visualisation procedure, and the code in this post is a Python implementation of the method. The approach itself isn't terribly complicated in truth, but I find the outcome hypnotic!
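For the gist before the full post, here is a minimal sketch of the standard drawing-with-epicycles construction those videos use: sample a closed outline as complex points, take its discrete Fourier transform, and trace the outline back out as a chain of rotating vectors. The heart-shaped curve below is a stand-in for the cat contour, and the point count and number of circles are illustrative choices, not the post's actual code.

```python
# A minimal sketch of the epicycle idea: sample a closed outline as complex
# points, take its DFT, and redraw the path as a sum of rotating vectors.
# A parametric heart stands in here for the cat contour used in the post.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
outline = 16 * np.sin(t) ** 3 + 1j * (
    13 * np.cos(t) - 5 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(4 * t)
)

coeffs = np.fft.fft(outline) / len(outline)        # one coefficient per frequency
freqs = np.fft.fftfreq(len(outline), d=1 / len(outline))
order = np.argsort(-np.abs(coeffs))[:50]           # keep the 50 strongest circles

fig, ax = plt.subplots()
ax.set_aspect("equal")
ax.set_xlim(-20, 20)
ax.set_ylim(-20, 20)
trace, = ax.plot([], [], lw=2)
arms, = ax.plot([], [], lw=0.5, color="gray")
path = []

def update(frame):
    # Chain the rotating vectors tip-to-tail at this instant; the tip of the
    # last vector traces out the original outline.
    time = frame / 200
    tips = np.cumsum([coeffs[k] * np.exp(2j * np.pi * freqs[k] * time)
                      for k in order])
    path.append(tips[-1])
    arms.set_data(np.real(np.r_[0, tips]), np.imag(np.r_[0, tips]))
    trace.set_data(np.real(path), np.imag(path))
    return trace, arms

anim = FuncAnimation(fig, update, frames=200, blit=True)
plt.show()
```

Keeping only the largest coefficients is what makes the animation interesting to watch: with a handful of circles you get a wobbly caricature of the outline, and each added circle sharpens the drawing.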