Seminars for the CV course based on cs231n (ABBYY, 2019)
Seminar 1 (backpropagation)
- learn how backpropagation really works
- implement backward pass for main layers (Linear, ReLU, Sigmoid, Softmax)
- implement backward pass for the most common criterion (cross-entropy)
- implement mini-batch stochastic gradient descent (a minimal NumPy sketch of such a layer API follows this list)
- train your tiny neural network on MNIST
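A minimal NumPy sketch of the kind of layer API these exercises build toward; class and method names here are illustrative, not the assignment's actual interface:

```python
import numpy as np

class Linear:
    """Fully-connected layer: out = X @ W + b (names and API are illustrative)."""
    def __init__(self, in_features, out_features):
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, X):
        self.X = X                      # cache the input for the backward pass
        return X @ self.W + self.b

    def backward(self, grad_out):
        # grad_out is dL/d(out), shape (batch, out_features)
        self.dW = self.X.T @ grad_out   # dL/dW
        self.db = grad_out.sum(axis=0)  # dL/db
        return grad_out @ self.W.T      # dL/dX, passed to the previous layer

def sgd_step(layer, lr=0.1):
    """One mini-batch SGD update on the layer's parameters."""
    layer.W -= lr * layer.dW
    layer.b -= lr * layer.db
```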
Exercises
- why may sigmoid/tanh activations be bad for deep networks?
- can you train a classifier with MSE loss? Why might this not be a good idea?
- the "dead ReLU" problem: give an example of when it can happen
- derive analytically (on paper) the gradients w.r.t. all inputs and parameters for a small network (Linear -> ReLU -> Linear -> Softmax) with NLL loss; the softmax + NLL step is written out after this list
- implement the backward pass for other common layers (Conv, Pool, BatchNorm, Dropout, etc.)
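For the derivation exercise above, the key identity is the gradient of the softmax + NLL (cross-entropy) combination with respect to the logits:

```latex
p_k = \frac{e^{z_k}}{\sum_j e^{z_j}}, \qquad
L = -\log p_y, \qquad
\frac{\partial L}{\partial z_k} = p_k - \mathbf{1}[k = y]
```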
Seminar 2 (training your network with PyTorch)
- PyTorch in 5 minutes
- training your neural net in PyTorch (a minimal training loop is sketched after this list)
- training on GPU
- reproducibility
- explore different activations and loss functions
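A minimal sketch of a PyTorch training loop on MNIST with seeding and device placement; the model, hyperparameters, and data path are illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

torch.manual_seed(0)  # fix the seed for reproducibility (full determinism also needs cuDNN flags)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

model.train()
for images, labels in train_loader:              # one epoch
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)      # forward pass + loss
    loss.backward()                              # backward pass
    optimizer.step()                             # parameter update
```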
Seminar 3 (layers & architectures)
- most confusing layers: Dropout and BatchNorm (their train/eval behaviour is sketched after this list)
- most common architectures: AlexNet, VGG, Inception, ResNet, ResNeXt, DenseNet, SENet
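Most of the confusion around Dropout and BatchNorm comes down to train vs. eval mode; a small sketch:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 16), nn.BatchNorm1d(16), nn.ReLU(), nn.Dropout(p=0.5))
x = torch.randn(8, 16)

model.train()   # Dropout zeroes random activations, BatchNorm uses batch statistics
y_train = model(x)

model.eval()    # Dropout is a no-op, BatchNorm uses its running statistics
with torch.no_grad():
    y_eval = model(x)
```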
Seminar 4 (high-level frameworks)
- PyTorch ecosystem
- augmentations (a typical pipeline is sketched after this list)
- fast.ai vs PyTorch Lightning vs Catalyst
- Catalyst
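A sketch of a typical image-augmentation pipeline using torchvision transforms; the seminar may use a different library, and the exact transforms and values here are illustrative:

```python
from torchvision import transforms

# a common training-time augmentation pipeline for ImageNet-style inputs
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# usage: tensor = train_transform(pil_image)  # applied on the fly inside a Dataset
```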
Seminar 5 (segmentation)
- semantic segmentation with the Catalyst config API (based on the segmentation tutorial); a Dice loss sketch follows
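A common ingredient in segmentation training is the soft Dice loss; a small PyTorch sketch, not the tutorial's exact code:

```python
import torch

def dice_loss(logits, targets, eps=1e-7):
    """Soft Dice loss for binary segmentation.

    logits:  raw model outputs, shape (N, 1, H, W)
    targets: binary masks in {0, 1}, same shape
    """
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice.mean()
```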
Seminar 6 (detection)
- hands-on introduction to object detection
- detectron2, a handy detection framework (Colab notebook); an inference sketch follows this list
- attribution with TorchRay
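A minimal detectron2 inference sketch in the spirit of its Colab tutorial; the config name, score threshold, and image path are illustrative:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# a pretrained Faster R-CNN from the model zoo (config name is one typical example)
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
image = cv2.imread("input.jpg")          # BGR image, path is a placeholder
outputs = predictor(image)
print(outputs["instances"].pred_boxes)   # detected boxes (scores and classes are also in `instances`)
```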