- Taught by: Laurence Moroney
- Instructions to use the repository
- My Learnings from the Specialization
- Programming Assignments
- Results
- Clone this repository to use it. It contains all my work for this specialization. Unless specified otherwise, all of the code, screenshots, and images are taken from the TensorFlow: Advanced Techniques Specialization on Coursera.
Note: The solutions uploaded in this repository are for reference only, to help when you get stuck somewhere. Please don't use them to pass the programming assignments.
This specialization from Coursera consists of four courses. Below are my learnings from the individual courses.
- Course 1: Custom Models, Layers, and Loss Functions with TensorFlow
- Built models that produce multiple outputs (including a Siamese network) using the Functional API
- Built custom loss functions, including the contrastive loss function used in a Siamese network (see the sketch after this list)
- Built custom layers from existing standard layers, customized a network layer with a Lambda layer, and explored activation functions for custom layers
- Built custom classes instead of using the Functional or Sequential APIs
- Built models that inherit from the TensorFlow Model class, and built a residual network (ResNet) by defining a custom model class
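A minimal sketch of the kind of custom loss covered in this course: a contrastive loss for a Siamese network, wrapped in a closure so the margin can be configured. The function names and the margin default are illustrative, not the assignment's exact code.

```python
import tensorflow as tf

def contrastive_loss_with_margin(margin=1.0):
    """Contrastive loss for a Siamese network.
    y_true is 1 for similar pairs and 0 for dissimilar pairs;
    y_pred is the Euclidean distance between the two embeddings.
    (Names and the margin default are illustrative.)"""
    def contrastive_loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        similar_term = tf.square(y_pred)                               # pull similar pairs together
        dissimilar_term = tf.square(tf.maximum(margin - y_pred, 0.0))  # push dissimilar pairs beyond the margin
        return tf.reduce_mean(y_true * similar_term + (1.0 - y_true) * dissimilar_term)
    return contrastive_loss

# model.compile(loss=contrastive_loss_with_margin(margin=1.0), optimizer='rmsprop')
```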
- Course 2: Custom and Distributed Training with TensorFlow
- Learned the difference between eager and graph modes in TensorFlow
- Built custom training loops using GradientTape and TensorFlow Datasets to gain more flexibility and visibility into model training (see the sketch after this list)
- Got an overview of various distributed training strategies, and practiced working with a strategy that trains on multiple GPU cores and another that trains on multiple TPU cores
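A minimal sketch of a custom training loop with GradientTape. The model, loss, optimizer, and `train_dataset` here are placeholders, not the assignment's exact setup; the `@tf.function` decorator is what moves the step from eager mode into graph mode.

```python
import tensorflow as tf

# Placeholder model, loss, and optimizer; any Keras model works the same way.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='softmax')])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

@tf.function  # traces the step into a graph; remove it to stay in eager mode
def train_step(x_batch, y_batch):
    with tf.GradientTape() as tape:
        predictions = model(x_batch, training=True)   # forward pass
        loss = loss_fn(y_batch, predictions)          # compute the loss
    grads = tape.gradient(loss, model.trainable_variables)            # back-propagate
    optimizer.apply_gradients(zip(grads, model.trainable_variables))  # update the weights
    return loss

# for epoch in range(num_epochs):                # hypothetical outer loop
#     for x_batch, y_batch in train_dataset:     # train_dataset is a tf.data.Dataset
#         loss = train_step(x_batch, y_batch)
```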
- Course 3: Advanced Computer Vision with TensorFlow
- Built image classification, image segmentation, object localization, and object detection models
- Used object detection models such as R-CNN, customized existing models, and built my own models to detect, localize, and label rubber duck images
- Implemented image segmentation using variations of the fully convolutional network (FCN), including U-Net and Mask R-CNN, to identify and detect numbers, pets, and zombies
- Identified which parts of an image the model uses to make its predictions with class activation maps and saliency maps (see the sketch after this list)
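A minimal sketch of the saliency-map idea: take the gradient of the predicted class score with respect to the input pixels, so bright regions mark the pixels that influenced the prediction the most. The `model` and `image` arguments are assumed to be a trained classifier and a batched image tensor.

```python
import tensorflow as tf

def saliency_map(model, image, class_index):
    """Gradient of the class score w.r.t. the input pixels.
    `model` is assumed to be a trained classifier and `image`
    a batched float tensor of shape (1, H, W, C)."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)                          # track gradients w.r.t. the input itself
        predictions = model(image)
        class_score = predictions[:, class_index]  # score of the class of interest
    grads = tape.gradient(class_score, image)      # d(score) / d(pixel)
    return tf.reduce_max(tf.abs(grads), axis=-1)   # collapse channels into a single heat map
```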
- Course 4: Generative Deep Learning with TensorFlow
- Generated artwork using neural style transfer: extracted the content of an image (e.g. a swan) and the style of a painting (e.g. cubist or impressionist), and combined the content and style into a new image
- Built simple AutoEncoders on the familiar MNIST dataset, and more complex deep and convolutional architectures on the Fashion MNIST dataset
- Identified ways to de-noise noisy images, and built a CNN AutoEncoder in TensorFlow that outputs a clean image from a noisy one (see the sketch after this list)
- Built Variational AutoEncoders (VAEs) to generate entirely new data, and generated anime faces to compare against reference images
- Learned about GANs, the concept of two training phases, and the role of introduced noise, and built GANs to generate faces
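A minimal sketch of a convolutional AutoEncoder of the kind used for de-noising Fashion MNIST images. The layer sizes are illustrative rather than the assignment's exact architecture, and `noisy_images`/`clean_images` are hypothetical training tensors.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_denoising_autoencoder():
    inputs = tf.keras.Input(shape=(28, 28, 1))
    # Encoder: compress the noisy image into a smaller representation
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: reconstruct a clean image from the compressed representation
    x = layers.Conv2DTranspose(64, 3, strides=2, activation='relu', padding='same')(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, activation='relu', padding='same')(x)
    outputs = layers.Conv2D(1, 3, activation='sigmoid', padding='same')(x)
    return tf.keras.Model(inputs, outputs)

# autoencoder = build_denoising_autoencoder()
# autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(noisy_images, clean_images, epochs=10)  # hypothetical training data
```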
Some results from the programming assignments of this specialization