This repository collects major machine learning topics, with detailed analysis through key papers, blogs, links, and YouTube tutorials.
- Covariance matrix
- Linear Regression In depth
- Multicollinearity Analysis 1
- Multicollinearity Analysis 2
- Residual Analysis
- Coefficient of determination
- Outlier Analysis 1
- Outlier Analysis 2
- Local Outlier Factor
- Regression Shrinkage and Selection via the Lasso
- Elastic Net - Regularisation and Variable Selection
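
Putting the shrinkage methods above side by side, here is a minimal sketch comparing Lasso and Elastic Net fits (scikit-learn and the synthetic data are assumptions for illustration, not taken from the linked papers):

```python
# Minimal sketch: Lasso vs. Elastic Net shrinkage (assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, ElasticNet

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)                    # L1 penalty: exact zeros
enet = ElasticNet(alpha=1.0, l1_ratio=0.5).fit(X, y)  # blended L1/L2 penalty

print("Lasso non-zero coefficients:", np.sum(lasso.coef_ != 0))
print("Elastic Net non-zero coefficients:", np.sum(enet.coef_ != 0))
```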
- Decision Trees - Intermediate
- Understanding Random Forests
- Decision Tree from Scratch 1
- Decision Tree from Scratch 2
- Random Subspace Method
- CV vs Bootstrapping
- Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation
- Andrew Ng's percentile method of cross-validation
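
To make the CV-vs-bootstrap comparison above concrete, a minimal sketch of both resampling schemes (scikit-learn and the toy data are assumptions):

```python
# Minimal sketch: k-fold cross-validation vs. bootstrap out-of-bag evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# k-fold CV: every sample is held out exactly once.
cv_scores = cross_val_score(clf, X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))

# Bootstrap: train on a resample with replacement, score on the out-of-bag rows.
rng = np.random.default_rng(0)
idx = rng.integers(0, len(X), size=len(X))
oob = np.setdiff1d(np.arange(len(X)), idx)
clf.fit(X[idx], y[idx])
print("Bootstrap OOB accuracy: %.3f" % clf.score(X[oob], y[oob]))
```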
- Boosting 1
- Boosting 2
- Gradient Boost overview
- Gradient Boost from scratch
- Gradient Boosting Machines
- XGBoost
- XGBoost Supplement
- AdaBoost and the Super Bowl of Classifiers
- AdaBoost Wikipedia
- AdaBoost Tutorial
- LogitBoost Algorithm
- MadaBoost Algorithm
- BrownBoost Algorithm
- LPBoost Algorithm
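
A minimal from-scratch sketch of the core gradient boosting loop covered above, fitting each new tree to the current residuals (scikit-learn supplies the base trees; the data is synthetic):

```python
# Minimal sketch: gradient boosting for squared error, one tree per residual fit.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

lr, trees = 0.1, []
pred = np.full_like(y, y.mean())          # start from the mean prediction
for _ in range(100):
    residual = y - pred                   # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    pred += lr * tree.predict(X)          # small step toward the residuals
    trees.append(tree)

print("Training MSE after boosting: %.4f" % np.mean((y - pred) ** 2))
```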
- Stacked Generalization by Wolpert
- When does it work?
- Issues with Stacked Generalisation
- Stacking from Scratch in Python
- Non-negative least squares coefficient method for classification
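
A compressed sketch of stacked generalisation, using out-of-fold predictions as the meta-learner's features (scikit-learn and the particular base models are assumptions):

```python
# Minimal sketch: stacking with out-of-fold level-0 predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=400, random_state=0)
bases = [RandomForestClassifier(n_estimators=50, random_state=0),
         LogisticRegression(max_iter=1000)]

# Level-0: out-of-fold probabilities, so the meta-learner never sees leaked fits.
meta_X = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1] for m in bases])

# Level-1: a simple meta-learner combines the base models' predictions.
meta = LogisticRegression().fit(meta_X, y)
print("Meta-learner training accuracy: %.3f" % meta.score(meta_X, y))
```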
- Intuitive Explanation of CNNs
- How is a convolutional neural network able to learn invariant features?
- Max vs Average Pooling
- Dropout
- Explaining Xavier Initialisation easily
- Xavier Initialisation
- Batch Normalization
- Understanding the backward pass through Batch Normalization Layer
- Random vs Grid Search
- Random Search for Hyper-Parameter Optimization
- Gradient Descent Optimisation
- Adam Optimisation
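
A minimal sketch contrasting plain gradient descent with the Adam update rule on a toy objective (NumPy only; the hyperparameters follow the defaults given in the Adam paper):

```python
# Minimal sketch: gradient descent vs. Adam on f(w) = w**2.
import numpy as np

grad = lambda w: 2 * w                      # gradient of f(w) = w**2

# Plain gradient descent.
w = 5.0
for _ in range(100):
    w -= 0.1 * grad(w)
print("GD result:", w)

# Adam: bias-corrected first and second moment estimates scale the step.
w, m, v = 5.0, 0.0, 0.0
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8
for t in range(1, 101):
    g = grad(w)
    m = b1 * m + (1 - b1) * g               # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * g * g           # second moment (uncentered variance)
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)  # bias correction
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
print("Adam result:", w)
```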
- Fractional Pooling
- ReLUs
- Data Augmentation
- CNN in Pytorch
- Resnet
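
A minimal PyTorch sketch that wires several of the building blocks above (convolution, batch norm, ReLU, pooling, dropout) into one small network; the layer sizes are illustrative only:

```python
# Minimal sketch: a small CNN with batch norm and dropout (assumes PyTorch).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),               # normalize activations per channel
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),                # regularize the linear head
            nn.Linear(32 * 7 * 7, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(8, 1, 28, 28)                 # a batch of fake grayscale images
print(SmallCNN()(x).shape)                    # torch.Size([8, 10])
```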
- Principal Component Analysis
- Understanding RNN
- Guide to Recurrent Neural Networks: Understanding the Intuition
- Guide to LSTM’s and GRU’s: A step by step explanation
- Neural Turing machines
- HMM
- The magic of LSTM neural networks
- The fall of RNN / LSTM
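
A minimal NumPy sketch of a single LSTM cell step, following the standard gate equations walked through in the guides above (the weights here are random placeholders):

```python
# Minimal sketch: one LSTM cell step in NumPy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step: x is the input, (h, c) the previous hidden and cell state."""
    z = W @ np.concatenate([x, h]) + b        # all four gates in one matmul
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g                     # forget old memory, write candidate
    h_new = o * np.tanh(c_new)                # gated view of the cell state
    return h_new, c_new

n_in, n_hid = 4, 8
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.shape, c.shape)                       # (8,) (8,)
```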
- Pixel CNN
- Blind Spot problem in PixelCNN
- Gated Pixel CNN
- An Introduction to different Types of Convolutions in Deep Learning
- Up-sampling with Transposed Convolution
- Demystifying Transpose Convolution
- Deconvolution and Checkerboard Artifacts
- Convolution arithmetic tutorial
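
To check the transposed-convolution arithmetic above, a minimal sketch verifying the output-size formula o = (i - 1)s - 2p + k against PyTorch (PyTorch here is an assumption; the formula itself comes from the convolution arithmetic material):

```python
# Minimal sketch: transposed convolution output size vs. the closed-form formula.
import torch
import torch.nn as nn

i, k, s, p = 4, 3, 2, 1
up = nn.ConvTranspose2d(1, 1, kernel_size=k, stride=s, padding=p)
out = up(torch.randn(1, 1, i, i))
print(out.shape)                               # torch.Size([1, 1, 7, 7])
print((i - 1) * s - 2 * p + k)                 # 7, matching the formula
```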
- GAN — GAN Series (from the beginning to the end)
- What is wrong with the GAN cost function?
- Explaining and Harnessing Adversarial Examples
- Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
- Differentiable Inference and Generative Models
- PyTorch CycleGAN
- Various GANs implemented in Torch
- Nash Equilibrium
- InfoGAN
- UnrolledGAN
- GAN Numerics
- CycleGAN
- GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
- Improved Techniques for Training GANs
- Are GANs Created Equal? A Large-Scale Study
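
A minimal sketch of the alternating GAN update on a 1-D toy distribution (PyTorch and the tiny architectures are assumptions; the losses follow the standard non-saturating formulation from the introductory material above):

```python
# Minimal sketch: alternating discriminator/generator updates on a toy target.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0          # target distribution: N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator: push real -> 1, fake -> 0 (detach so G is not updated here).
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: non-saturating loss, push D(fake) toward 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("Generated mean: %.2f (target 2.0)" % G(torch.randn(1000, 8)).mean().item())
```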
- Probabilistic Graphical Models Tutorial — Part 1
- Probabilistic Graphical Models Tutorial — Part 2
- Bayesian Directed Networks
- Markov Undirected Networks
- Gaussian Bayesian Networks
- Exponential Families
- Exact Inference - Variable Elimination
- Graphical Models in a Nutshell
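
A minimal sketch of exact inference by variable elimination on a two-edge chain A -> B -> C (NumPy only; the probability tables are made-up illustrative numbers):

```python
# Minimal sketch: variable elimination over binary variables on a chain.
import numpy as np

p_a = np.array([0.6, 0.4])                       # P(A)
p_b_given_a = np.array([[0.7, 0.3],              # P(B | A): rows index A
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.9, 0.1],              # P(C | B): rows index B
                        [0.4, 0.6]])

# Eliminate A: sum_a P(A) P(B|A) gives a factor over B.
phi_b = p_a @ p_b_given_a
# Eliminate B: sum_b phi(B) P(C|B) gives the marginal over C.
p_c = phi_b @ p_c_given_b
print("P(C):", p_c, "sums to", p_c.sum())
```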
- Introduction to RBMs
- Very Good Explanation of RBM I
- Very Good Explanation of RBM II
- Original Paper
- Guide to Training RBMs
- Tensorflow Example
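
A minimal NumPy sketch of one contrastive-divergence (CD-1) weight update for a binary RBM, in the spirit of the training guide above (biases are omitted for brevity):

```python
# Minimal sketch: one CD-1 update for a binary RBM.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 3, 0.1
W = rng.normal(scale=0.01, size=(n_vis, n_hid))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

v0 = rng.integers(0, 2, size=(10, n_vis)).astype(float)   # a fake binary batch

h0_prob = sigmoid(v0 @ W)                                  # positive phase
h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)   # sample hidden units
v1_prob = sigmoid(h0 @ W.T)                                # reconstruct visibles
h1_prob = sigmoid(v1_prob @ W)                             # negative phase

# CD-1 gradient: <v h>_data - <v h>_model.
W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
print("Updated weight matrix shape:", W.shape)
```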
- Adversarial Examples and Adversarial Training
- Explaining and Harnessing Adversarial Examples
- Intriguing properties of neural networks
- Regularisation of Neural Networks by Enforcing Lipschitz Continuity
- Adversarial Diversity and Hard Positive Generation
- DeepFool: a simple and accurate method to fool deep neural networks
- Universal adversarial perturbations
- Towards Evaluating the Robustness of Neural Networks
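
A minimal sketch of the fast gradient sign method (FGSM) from "Explaining and Harnessing Adversarial Examples" (PyTorch and the toy stand-in model are assumptions):

```python
# Minimal sketch: FGSM, x_adv = x + eps * sign(grad_x loss).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3))         # hypothetical classifier
x = torch.randn(1, 4, requires_grad=True)      # input we want to perturb
y = torch.tensor([1])                          # true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

eps = 0.1
x_adv = x + eps * x.grad.sign()                # step in the loss-increasing direction
print("Perturbation L-inf norm:", (x_adv - x).abs().max().item())
```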
- Machine learning cheat sheet
- Probability cheat sheet
- Regularisation
- Error Analysis
- MIT probability course
- MIT Machine learning course
- Deep learning - Information theory & Maximum likelihood
- Exemplar CNNs and Information Maximization
- Dilated Convolutions and Kronecker Factored Convolutions
- Swish Activation Function by Google
- Capsule Networks
- Maximum Likelihood Estimation
- Maximum A Posteriori (MAP)
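
A minimal sketch contrasting MLE and MAP estimates of a Gaussian mean under a Gaussian prior (NumPy only; all numbers are illustrative):

```python
# Minimal sketch: MLE vs. MAP for a Gaussian mean with known variance.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=20)   # true mean 3.0, sigma 1.0

mle = data.mean()                                # MLE: the sample mean

# MAP with prior mu ~ N(mu0, tau^2): precision-weighted blend of prior and data.
mu0, tau2, sigma2, n = 0.0, 1.0, 1.0, len(data)
map_est = (mu0 / tau2 + data.sum() / sigma2) / (1 / tau2 + n / sigma2)

print("MLE: %.3f  MAP: %.3f (pulled toward the prior mean 0)" % (mle, map_est))
```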
- Understanding and Using Principal Component Analysis (PCA)
- How PCA works
- Probabilistic Principal Component Analysis
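
A minimal NumPy sketch of PCA via eigendecomposition of the sample covariance matrix, mirroring the PCA material above:

```python
# Minimal sketch: PCA from the covariance matrix's eigendecomposition.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[2, 0, 0], [0, 1, 0], [0, 0, 0.1]])

Xc = X - X.mean(axis=0)                         # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)                 # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)          # eigh: symmetric, ascending order

# Project onto the top-2 principal components (largest eigenvalues come last).
components = eigvecs[:, ::-1][:, :2]
Z = Xc @ components
print("Explained variance ratio:", (eigvals[::-1][:2] / eigvals.sum()).round(3))
```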
- The Neural Autoregressive Distribution Estimator
- Deep AutoRegressive Networks
- When Bayes, Ockham, and Shannon come together to define machine learning
- No-Free-Lunch and the Minimum Description Length
- No Free Lunch versus Occam’s Razor in Supervised Learning
- No-Free-Lunch and the Problem Description Length
- Demystifying KL Divergence
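
A minimal sketch of discrete KL divergence, D_KL(P||Q) = Σ_i p_i log(p_i / q_i), showing its asymmetry (NumPy only; the distributions are illustrative):

```python
# Minimal sketch: KL divergence between two discrete distributions.
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

kl_pq = np.sum(p * np.log(p / q))
kl_qp = np.sum(q * np.log(q / p))
print("KL(P||Q) = %.4f, KL(Q||P) = %.4f (asymmetric)" % (kl_pq, kl_qp))
```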