
DeepLearning.AI-TensorFlow-Advanced-Techniques

My assignment and quiz submissions for the DeepLearning.AI TensorFlow: Advanced Techniques Specialization.

Check the Coursera Honor Code before you look at the assignments.

For more information, you can check the course info page.

About This Specialization

In this Specialization, you will expand your knowledge of the Functional API and build exotic non-sequential model types. You will learn how to optimize training in different environments with multiple processors and chip types, and get introduced to advanced computer vision scenarios such as object detection, image segmentation, and interpreting convolutions. You will also explore generative deep learning, including the ways AI can create new content, from style transfer to autoencoders, VAEs, and GANs.

Show Specialization Certificate

Contents

Course 1: Custom Models, Layers, and Loss Functions with TensorFlow

    • Week 1 - Functional APIs: Compare how the Functional API differs from the Sequential API, and see how the Functional API gives you additional flexibility in designing models. Practice using the Functional API and build a Siamese network! (A minimal Functional API sketch follows this course's week list.)

    • Week 2 - Custom Loss Functions: Loss functions help measure how well a model is doing, and are used to help a neural network learn from the training data. Learn how to build custom loss functions, including the contrastive loss function that is used in a Siamese network.

    • Week 3 - Custom Layers: Custom layers give you the flexibility to implement models that use non-standard layers. Practice building off of existing standard layers to create custom layers for your models.

    • Week 4 - Custom Models: You can build off of existing models to add custom functionality. This week, extend the TensorFlow Model class to build a ResNet model!

    • Week 5 - Callbacks: Custom callbacks allow you to customize what your model outputs or how it behaves during training. This week, implement a custom callback to stop training once the callback detects overfitting.

    Show Certificate
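
Not from the course materials themselves: a minimal sketch of the kind of model this course builds, using the Keras Functional API to wire a small Siamese network together with a contrastive loss. The layer sizes, margin, and 28x28 input shape are illustrative assumptions, not the assignment's values.

```python
import tensorflow as tf

def build_embedding_model(input_shape=(28, 28)):
    # Functional API: explicitly wire inputs to outputs instead of stacking layers sequentially.
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Flatten()(inputs)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    outputs = tf.keras.layers.Dense(64)(x)
    return tf.keras.Model(inputs, outputs, name="embedding")

def contrastive_loss(y_true, distance, margin=1.0):
    # y_true is 1 for similar pairs and 0 for dissimilar pairs; margin is an illustrative value.
    y_true = tf.cast(y_true, distance.dtype)
    positive = y_true * tf.square(distance)
    negative = (1.0 - y_true) * tf.square(tf.maximum(margin - distance, 0.0))
    return tf.reduce_mean(positive + negative)

embedding = build_embedding_model()
input_a = tf.keras.Input(shape=(28, 28))
input_b = tf.keras.Input(shape=(28, 28))
emb_a, emb_b = embedding(input_a), embedding(input_b)

# Euclidean distance between the two embeddings of a pair.
distance = tf.keras.layers.Lambda(
    lambda t: tf.sqrt(tf.reduce_sum(tf.square(t[0] - t[1]), axis=-1, keepdims=True) + 1e-7)
)([emb_a, emb_b])

siamese = tf.keras.Model([input_a, input_b], distance)
siamese.compile(optimizer="adam", loss=contrastive_loss)
```

Training would then be a call like siamese.fit([pairs_a, pairs_b], pair_labels, ...) on labeled image pairs.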

Course 2: Custom and Distributed Training with TensorFlow

    • Week 1 - Differentiation and Gradients: You will get a detailed look at the fundamental building blocks of TensorFlow - tensor objects. For example, you will be able to describe the difference between eager mode and graph mode in TensorFlow, and explain why eager mode is very user friendly for you as a developer. You will also use TensorFlow tools to calculate gradients, so that you don’t have to look for your old calculus textbooks next time you need a gradient!

    • Week 2 - Custom Training: You will build custom training loops using GradientTape and TensorFlow Datasets. Being able to write your own training loops will give you more flexibility and visibility into your model training. You will also use GradientTape to compute derivatives automatically, so that you don’t have to reach for your old calculus textbooks to calculate gradients. (A minimal custom training loop sketch follows this course's week list.)

    • Week 3 - Graph Mode: You’ll learn about the benefits of generating code that runs in “graph mode”. You’ll take a peek at what graph code looks like, and you’ll practice generating this more efficient code automatically with TensorFlow’s tools, so that you don’t have to write the graph code yourself!

    • Week 4 - Distributed Training: You will harness the power of distributed training to process more data and train larger models, faster. You’ll get an overview of various distributed training strategies and then practice working with two strategies, one that trains on multiple GPU cores, and the other that trains on multiple TPU cores.

    Show Certificate
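
Again an illustrative sketch rather than the assignment code: a minimal custom training loop of the kind covered in this course, combining tf.GradientTape for gradients with @tf.function to run the step in graph mode. The toy model and synthetic data are placeholders; the distributed-training week extends this same pattern by creating the model and optimizer inside a tf.distribute strategy scope.

```python
import tensorflow as tf

# Placeholder model, optimizer, and loss; the real assignments use their own architectures.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # traces the Python function into a TensorFlow graph for faster execution
def train_step(x, y):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    # Gradients of the loss with respect to every trainable variable.
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

# Synthetic data just to make the loop runnable end to end.
xs = tf.random.normal((256, 10))
ys = tf.reduce_sum(xs, axis=1, keepdims=True)
dataset = tf.data.Dataset.from_tensor_slices((xs, ys)).batch(32)

for epoch in range(3):
    for x_batch, y_batch in dataset:
        loss = train_step(x_batch, y_batch)
    print(f"epoch {epoch}: loss {loss.numpy():.4f}")
```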

Course 3: Advanced Computer Vision with TensorFlow

    • Week 1 - Introduction to Computer Vision: Get a conceptual overview of image classification, object localization, object detection, and image segmentation. You will also learn to describe multi-label classification and to distinguish between semantic segmentation and instance segmentation. In the rest of this course, you will apply TensorFlow to build object detection and image segmentation models.

    • Week 2 - Object Detection: You’ll get an overview of some popular object detection models, such as R-CNN and ResNet-50. You’ll use object detection models that you retrieve from TensorFlow Hub, download your own models and configure them for training, and also build your own models for object detection. Using transfer learning, you will train a model to detect and localize rubber duckies with just five training examples. You’ll also get to manually label your own rubber ducky images!

    • Week 3 - Image Segmentation: This week is all about image segmentation using variations of the fully convolutional neural network. With these networks, you can assign class labels to each pixel, and perform much more detailed identification of objects compared to bounding boxes. You’ll build the fully convolutional neural network, U-Net, and Mask R-CNN this week to identify and detect numbers, pets, and even zombies!

    • Week 4 - Visualization and Interpretability: You’ll learn about the importance of model interpretability, which is the understanding of how your model arrives at its decisions. You’ll implement class activation maps, saliency maps, and gradient-weighted class activation maps to identify which parts of an image your model uses to make its predictions. You’ll also see an example of how visualizing a model’s intermediate layer activations can help improve the design of a famous network, AlexNet. (A minimal saliency-map sketch follows this course's week list.)

    Show Certificate
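
A minimal sketch of one interpretability technique from this course's final week: a saliency map, computed as the gradient of the top predicted class score with respect to the input pixels. MobileNetV2 and the random placeholder image are stand-ins, not the course's model or data.

```python
import tensorflow as tf

# Any Keras image classifier works here; MobileNetV2 is just a convenient pretrained stand-in
# (weights="imagenet" downloads the weights on first use).
model = tf.keras.applications.MobileNetV2(weights="imagenet")
image = tf.random.uniform((1, 224, 224, 3))  # placeholder for a real preprocessed image

with tf.GradientTape() as tape:
    tape.watch(image)                       # the input is a plain tensor, so watch it explicitly
    predictions = model(image)              # shape (1, 1000): class probabilities
    top_class = tf.argmax(predictions[0])
    top_score = tf.gather(predictions[0], top_class)

# Gradient of the winning class score with respect to the input pixels,
# reduced over colour channels to one heat value per pixel.
grads = tape.gradient(top_score, image)
saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]
saliency = saliency / (tf.reduce_max(saliency) + 1e-8)  # normalize to [0, 1] for display
```

The resulting 224x224 map can be overlaid on the original image (for example with matplotlib) to show which pixels drove the prediction.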

Course 4: Generative Deep Learning with TensorFlow

    • Week 1 - Style Transfer: You will learn how to extract the content of an image (such as a swan) and the style of a painting (such as Cubist or Impressionist), and combine the content and style into a new image. This is called neural style transfer, and you'll learn how to extract these kinds of features using transfer learning.

    • Week 2 - AutoEncoders: You’ll get an overview of AutoEncoders and how to build them with TensorFlow. You'll learn how to build a simple AutoEncoder on the familiar MNIST dataset before diving into more complicated deep and convolutional architectures that you'll build on the Fashion MNIST dataset. You'll see the difference in results between the DNN and CNN AutoEncoder models, and then identify ways to denoise noisy images. You'll finish the week building a CNN AutoEncoder using TensorFlow to output a clean image from a noisy one! (A minimal autoencoder sketch follows this course's week list.)

    • Week 3 - Variational AutoEncoders: You will explore Variational AutoEncoders (VAEs) to generate entirely new data. In this week’s assignment, you will generate anime faces and compare them against reference images.

    • Week 4 - GANs: You’ll learn what GANs are, who invented them, their architecture, and how they differ from VAEs. You'll see the roles of the generator and the discriminator within the model, the concept of two training phases, and the role of introduced noise. Then you'll end the week building your own GAN that can generate faces!

    Show Certificate
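
One more illustrative sketch, not the assignment itself: a small fully connected autoencoder trained on Fashion MNIST, the dataset used in this course's AutoEncoder week. The latent size and layer widths are arbitrary choices.

```python
import tensorflow as tf

# Load and normalize Fashion MNIST (labels are not needed for an autoencoder).
(x_train, _), (x_test, _) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Encoder: compress each 28x28 image into a small latent vector.
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
])

# Decoder: reconstruct the image from the latent vector.
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# Autoencoder: input and target are the same image.
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=128,
                validation_data=(x_test, x_test))
```

Calling autoencoder.predict(x_test[:10]) then reconstructs images; feeding noisy inputs while keeping clean targets turns the same setup into the denoising variant built at the end of the week.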

Instructors


Laurence Moroney - Lead AI Advocate, Google

Eddy Shyu - Product Lead, DeepLearning.AI