
These tasks were created as part of Coursera's Deep Learning Specialization course.

The Deep Learning Specialization is a foundational program that teaches the capabilities, challenges, and consequences of deep learning and prepares you to participate in the development of leading-edge AI technology. In this Specialization, I built and trained neural network architectures such as Convolutional Neural Networks, Recurrent Neural Networks, LSTMs, and Transformers, and learned how to improve them with strategies such as Dropout, BatchNorm, Xavier/He initialization, and more. Theoretical concepts and their industry applications are taught using Python and TensorFlow. Real-world cases such as speech recognition, music synthesis, chatbots, machine translation, and natural language processing were tackled.



Assignments
Building a Deep Neural Network Step by Step
Build a deep neural network from scratch: implement all the functions required to build a deep neural network. Use non-linear units like ReLU to improve the model, build a deeper neural network (with more than one hidden layer), and implement an easy-to-use Neural Network class.
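The core building blocks can be sketched in numpy (function names here are illustrative, not the assignment's exact API):

```python
import numpy as np

def relu(Z):
    """ReLU non-linearity: element-wise max(0, Z)."""
    return np.maximum(0, Z)

def linear_forward(A_prev, W, b):
    """One linear layer step: Z = W @ A_prev + b."""
    return W @ A_prev + b

# Tiny forward pass through one hidden layer
rng = np.random.default_rng(0)
A0 = rng.standard_normal((3, 5))   # 3 input features, 5 examples
W1 = rng.standard_normal((4, 3))   # 4 hidden units
b1 = np.zeros((4, 1))
A1 = relu(linear_forward(A0, W1, b1))
print(A1.shape)  # (4, 5)
```

Stacking such linear + activation steps L times gives the deep network's forward pass.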
Optimization methods
notebook | py file
More advanced optimization methods can speed up learning and perhaps even reach a better final value of the cost function. A good optimization algorithm can be the difference between waiting days versus just a few hours for a good result. Covers Stochastic Gradient Descent, Mini-Batch Gradient Descent, Momentum, and Adam.
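As a sketch of the most involved of these, here is one Adam step in numpy (a minimal single-parameter version; the assignment applies it per-layer to dictionaries of weights):

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: momentum plus RMS scaling, with bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (RMS)
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy problem: minimize f(w) = w^2 (gradient is 2w) starting from w = 5
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_update(w, 2 * w, m, v, t, lr=0.01)
print(round(w, 4))  # close to the minimum at 0
```

Setting beta1 = 0 and skipping the second moment recovers plain gradient descent; keeping only the first moment recovers Momentum.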



Regularization
notebook | py file
Deep learning models have so much flexibility and capacity that overfitting can be a serious problem if the training dataset is not big enough: the learned network may do well on the training set but fail to generalize to new examples it has never seen. Covers regularization methods for deep learning models: L2 regularization and Dropout.
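The dropout forward pass can be sketched in a few lines of numpy (this is the "inverted dropout" variant the course uses, where activations are rescaled at training time):

```python
import numpy as np

def dropout_forward(A, keep_prob, rng):
    """Inverted dropout: zero each unit with prob 1 - keep_prob, rescale the rest."""
    mask = (rng.random(A.shape) < keep_prob).astype(A.dtype)
    return A * mask / keep_prob, mask

rng = np.random.default_rng(0)
A = np.ones((4, 1000))
A_drop, mask = dropout_forward(A, keep_prob=0.8, rng=rng)
print(round(A_drop.mean(), 2))  # the rescaling keeps the mean near 1.0
```

Dividing by `keep_prob` keeps the expected value of the activations unchanged, so no rescaling is needed at test time.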
Art Generation with Neural Style Transfer
notebook | py file
This algorithm was created by Gatys et al. (2015). Implement the Neural Style Transfer algorithm and use it to generate novel artistic images. Most algorithms optimize a cost function to get a set of parameter values; in Neural Style Transfer, the algorithm optimizes a cost function to get pixel values.
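Two ingredients of that cost can be sketched in numpy (shapes here are illustrative; the assignment computes these on VGG activations in TensorFlow):

```python
import numpy as np

def content_cost(a_C, a_G):
    """J_content = sum((a_C - a_G)^2) / (4 * n_H * n_W * n_C), per Gatys et al."""
    n_H, n_W, n_C = a_C.shape
    return np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)

def gram_matrix(A):
    """Style ("Gram") matrix: channel-to-channel correlations, A is (n_C, n_H*n_W)."""
    return A @ A.T

rng = np.random.default_rng(1)
a_C = rng.standard_normal((3, 4, 5))  # content-image activations (n_H, n_W, n_C)
a_G = rng.standard_normal((3, 4, 5))  # generated-image activations
print(content_cost(a_C, a_C))  # 0.0 when the activations match exactly
```

The style cost compares Gram matrices of the style and generated images, and the total cost is a weighted sum of content and style terms.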





Character Level Language Model - Dinosaurus Names, Writing like Shakespeare
notebook | py file
Build a character-level language model to generate new dinosaur names. The algorithm learns the different name patterns and randomly generates new names. Store text data for processing using an RNN. Synthesize data by sampling predictions at each time step and passing them to the next RNN cell. Build a character-level text-generation Recurrent Neural Network.
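The sampling loop can be sketched as follows (a toy 4-character vocabulary and a stand-in `probs_fn` replace the trained RNN; the assignment's vocabulary is the 26 letters plus `"\n"`):

```python
import numpy as np

chars = ["\n", "a", "b", "c"]  # toy vocabulary; "\n" marks the end of a name
rng = np.random.default_rng(0)

def sample(probs_fn, max_len=10):
    """Sample one character at a time, feeding each pick back in, until newline."""
    name, ix = "", 1  # start from an arbitrary character (illustrative)
    for _ in range(max_len):
        p = probs_fn(ix)                  # next-character distribution
        ix = rng.choice(len(chars), p=p)  # sample (not argmax) for variety
        if chars[ix] == "\n":
            break
        name += chars[ix]
    return name

# Stand-in "model": uniform over the vocabulary, so a 25% chance of stopping
uniform = lambda ix: np.array([0.25, 0.25, 0.25, 0.25])
print(sample(uniform))
```

In the real assignment, `probs_fn` is one forward step of the trained RNN, and the sampled index is fed to the next cell as its input.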
Face Recognition, Face Verification and Triplet Loss Function
notebook | py file
A face recognition system. Face recognition problems commonly fall into two categories:
- Face Verification: a 1:1 matching problem ("is this the claimed person?"). For example, a mobile phone that unlocks using your face performs face verification.
- Face Recognition: a 1:K matching problem ("who is this person?").
Implement the triplet loss function. Use a pretrained model to map face images into 128-dimensional encodings. Use these encodings to perform face verification and face recognition.
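The triplet loss pushes an anchor's encoding closer to a positive (same person) than to a negative (different person), by at least a margin alpha. A minimal numpy sketch (the assignment implements the same formula in TensorFlow):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """L = sum over batch of max(||a - p||^2 - ||a - n||^2 + alpha, 0)."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)  # distance to same person
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)  # distance to other person
    return np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))

# Toy 2-D "encodings" (the real model produces 128-dimensional ones)
a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # close to the anchor
n = np.array([[1.0, 1.0]])   # far from the anchor
print(triplet_loss(a, p, n))  # 0.0: the margin is already satisfied
```

When the negative is closer than the positive plus the margin, the loss is positive and training pushes the encodings apart.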
Building a Recurrent Neural Network Step by Step
notebook | py file
Implement key components of a Recurrent Neural Network in numpy. Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs x〈t〉 (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a unidirectional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
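A single RNN cell's forward step can be sketched in numpy (parameter names follow the assignment's conventions; the dimensions here are illustrative):

```python
import numpy as np

def softmax(z):
    """Column-wise softmax, numerically stabilized."""
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def rnn_cell_forward(xt, a_prev, Wax, Waa, Wya, ba, by):
    """One time step: new hidden state a<t> and output prediction y<t>."""
    a_next = np.tanh(Waa @ a_prev + Wax @ xt + ba)  # "memory" passed onward
    yt = softmax(Wya @ a_next + by)
    return a_next, yt

rng = np.random.default_rng(0)
n_x, n_a, n_y, m = 3, 5, 2, 4  # input, hidden, output sizes; batch of 4
xt = rng.standard_normal((n_x, m))
a_prev = np.zeros((n_a, m))
shapes = [(n_a, n_x), (n_a, n_a), (n_y, n_a), (n_a, 1), (n_y, 1)]
params = [rng.standard_normal(s) * 0.1 for s in shapes]
a_next, yt = rnn_cell_forward(xt, a_prev, *params)
print(a_next.shape, yt.shape)  # (5, 4) (2, 4)
```

The full forward pass simply loops this cell over the time steps, carrying `a_next` forward as the next step's `a_prev`.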
Convolution Model Application - SIGNS Dataset - Hand Signs Images to Numbers Classifications
notebook | py file
Implement a fully functioning ConvNet using TensorFlow: build and train a ConvNet for a classification problem. Build a model that recognizes sign language; the SIGNS dataset is a collection of six signs representing the numbers 0 to 5.
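A minimal Keras sketch of such a model (the filter counts and pool sizes here are illustrative and may differ from the assignment's exact architecture):

```python
import tensorflow as tf

# Hypothetical CONV -> POOL -> CONV -> POOL -> FLATTEN -> DENSE architecture
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),        # SIGNS images are 64x64 RGB
    tf.keras.layers.Conv2D(8, 4, padding="same", activation="relu"),
    tf.keras.layers.MaxPool2D(8, padding="same"),
    tf.keras.layers.Conv2D(16, 2, padding="same", activation="relu"),
    tf.keras.layers.MaxPool2D(4, padding="same"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation="softmax"),  # six classes: digits 0-5
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training is then a single `model.fit(X_train, Y_train, ...)` call on the dataset.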



Emojify - Word Vector Representations
notebook | py file
Implement a model that takes a sentence as input (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to use with it (⚾️). Start with a baseline model (Emojifier-V1) using word embeddings, then implement a more sophisticated model (Emojifier-V2) that further incorporates an LSTM. Example: rather than writing "Congratulations on the promotion! Let's get coffee and talk. Love you!", the emojifier can automatically turn this into: "Congratulations on the promotion! 👍 Let's get coffee and talk. ☕️ Love you! ❤️"
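The heart of the Emojifier-V1 baseline is averaging the word embeddings of a sentence into a single vector, which is then fed to a softmax classifier. A sketch with toy 2-D embeddings (the assignment uses 50-dimensional GloVe vectors):

```python
import numpy as np

# Toy 2-D "word embeddings"; these values are made up for illustration
word_to_vec = {
    "i": np.array([0.1, 0.0]),
    "love": np.array([0.9, 0.8]),
    "baseball": np.array([0.2, 0.9]),
}

def sentence_to_avg(sentence, word_to_vec):
    """Emojifier-V1 baseline: average the embeddings of a sentence's words."""
    words = sentence.lower().split()
    return np.mean([word_to_vec[w] for w in words], axis=0)

avg = sentence_to_avg("I love baseball", word_to_vec)
print(avg)  # the element-wise mean of the three word vectors
```

Because averaging discards word order, Emojifier-V2 replaces this step with an LSTM that reads the embeddings sequentially.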
