Building your Deep Neural Network: Step by Step


In this notebook, you’ll implement all the building blocks (helper functions) of a deep neural network. In the next notebook, you’ll use these building blocks to construct a two-layer neural network and an L-layer neural network for image classification.

Notation:

  • Superscript \([l]\) denotes a quantity associated with the \(l^{th}\) layer.
    • Example: \(a^{[L]}\) is the \(L^{th}\) layer activation. \(W^{[L]}\) and \(b^{[L]}\) are the \(L^{th}\) layer parameters.
  • Superscript \((i)\) denotes a quantity associated with the \(i^{th}\) example.
    • Example: \(x^{(i)}\) is the \(i^{th}\) training example.
  • Subscript \(i\) denotes the \(i^{th}\) entry of a vector.
    • Example: \(a^{[l]}_i\) denotes the \(i^{th}\) entry of the \(l^{th}\) layer’s activations.

Packages

Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases import *
from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward
from public_tests import *

import copy
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

Outline

Start by implementing several “helper functions.” These helper functions will be used in the next notebook to build a two-layer neural network and an L-layer neural network.

Each small helper function will have detailed instructions to walk you through the necessary steps. Here’s an outline of the steps in this assignment:

  • Initialize the parameters for a two-layer network and for an \(L\)-layer neural network
  • Implement the forward propagation module
    • Complete the LINEAR part of a layer’s forward propagation step (resulting in \(Z^{[l]}\)).
    • The ACTIVATION function is provided for you (relu/sigmoid)
    • Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
    • Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer \(L\)). This gives you a new L_model_forward function.
  • Compute the loss
  • Implement the backward propagation module
    • Complete the LINEAR part of a layer’s backward propagation step
    • The gradient of the ACTIVATION function is provided for you (relu_backward/sigmoid_backward)
    • Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function
    • Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
  • Finally, update the parameters

Note:

For every forward function, there is a corresponding backward function. This is why at every step of your forward module you will be storing some values in a cache. These cached values are useful for computing gradients.

In the backpropagation module, you can then use the cache to calculate the gradients.

Initialization

2-layer Neural Network

initialize_parameters

Create and initialize the parameters of the 2-layer neural network.

Instructions:

  • The model’s structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
  • Use this random initialization for the weight matrices: np.random.randn(d0, d1, ..., dn) * 0.01 with the correct shape (see the documentation for np.random.randn).
  • Use zero initialization for the biases: np.zeros(shape) (see the documentation for np.zeros).
Code
def initialize_parameters(n_x, n_h, n_y):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer
    
    Returns:
    parameters -- python dictionary containing your parameters:
                    W1 -- weight matrix of shape (n_h, n_x)
                    b1 -- bias vector of shape (n_h, 1)
                    W2 -- weight matrix of shape (n_y, n_h)
                    b2 -- bias vector of shape (n_y, 1)
    """
    
    np.random.seed(1)
    W1 = np.random.randn(n_h,n_x) * 0.01
    b1 = np.zeros((n_h,1))
    W2 = np.random.randn(n_y,n_h) * 0.01
    b2 = np.zeros((n_y,1))    
    
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}
    
    return parameters    

L-layer Neural Network

The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep function, you should make sure that your dimensions match between each layer. Recall that \(n^{[l]}\) is the number of units in layer \(l\). For example, if the size of your input \(X\) is \((12288, 209)\) (with \(m=209\) examples) then:

|            | Shape of W                  | Shape of b         | Activation                                       | Shape of Activation  |
|------------|-----------------------------|--------------------|--------------------------------------------------|----------------------|
| Layer 1    | \((n^{[1]}, 12288)\)        | \((n^{[1]}, 1)\)   | \(Z^{[1]} = W^{[1]} X + b^{[1]}\)                | \((n^{[1]}, 209)\)   |
| Layer 2    | \((n^{[2]}, n^{[1]})\)      | \((n^{[2]}, 1)\)   | \(Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}\)          | \((n^{[2]}, 209)\)   |
| \(\vdots\) | \(\vdots\)                  | \(\vdots\)         | \(\vdots\)                                       | \(\vdots\)           |
| Layer L-1  | \((n^{[L-1]}, n^{[L-2]})\)  | \((n^{[L-1]}, 1)\) | \(Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}\)  | \((n^{[L-1]}, 209)\) |
| Layer L    | \((n^{[L]}, n^{[L-1]})\)    | \((n^{[L]}, 1)\)   | \(Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}\)        | \((n^{[L]}, 209)\)   |

Remember that when you compute \(W X + b\) in python, it carries out broadcasting. For example, if:

\[ W = \begin{bmatrix} w_{00} & w_{01} & w_{02} \\ w_{10} & w_{11} & w_{12} \\ w_{20} & w_{21} & w_{22} \end{bmatrix}\;\;\; X = \begin{bmatrix} x_{00} & x_{01} & x_{02} \\ x_{10} & x_{11} & x_{12} \\ x_{20} & x_{21} & x_{22} \end{bmatrix} \;\;\; b =\begin{bmatrix} b_0 \\ b_1 \\ b_2 \end{bmatrix}\tag{2}\]

Then \(WX + b\) will be:

\[ WX + b = \begin{bmatrix} (w_{00}x_{00} + w_{01}x_{10} + w_{02}x_{20}) + b_0 & (w_{00}x_{01} + w_{01}x_{11} + w_{02}x_{21}) + b_0 & \cdots \\ (w_{10}x_{00} + w_{11}x_{10} + w_{12}x_{20}) + b_1 & (w_{10}x_{01} + w_{11}x_{11} + w_{12}x_{21}) + b_1 & \cdots \\ (w_{20}x_{00} + w_{21}x_{10} + w_{22}x_{20}) + b_2 & (w_{20}x_{01} + w_{21}x_{11} + w_{22}x_{21}) + b_2 & \cdots \end{bmatrix}\tag{3} \]
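As a quick illustration of this broadcasting (the numbers below are made-up toy values, not assignment data), you can verify in numpy that the single column b is added to every column of WX:

Code
W = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
X = np.ones((3, 3))                 # three examples, all features equal to 1
b = np.array([[10], [20], [30]])    # shape (3, 1)

Z = np.dot(W, X) + b                # b is broadcast across all three columns
print(Z)
# [[16. 16. 16.]
#  [35. 35. 35.]
#  [54. 54. 54.]]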

initialize_parameters_deep

Implement initialization for an L-layer Neural Network.

Instructions:

  • The model’s structure is [LINEAR -> RELU] \(\times\) (L-1) -> LINEAR -> SIGMOID. I.e., it has \(L-1\) layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
  • Use random initialization for the weight matrices: np.random.randn(d0, d1, ..., dn) * 0.01.
  • Use zeros initialization for the biases: np.zeros(shape).
  • You’ll store \(n^{[l]}\), the number of units in different layers, in a variable layer_dims. For example, the layer_dims for last week’s Planar Data classification model would have been [2,4,1]: there were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1’s shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to \(L\) layers!
  • Here is the implementation for \(L=1\) (one-layer neural network). It should inspire you to implement the general case (L-layer neural network):

    if L == 1:
        parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
        parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
Code

def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network
    
    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
    """
    
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims) # number of layers in the network, including the input layer

    for l in range(1, L):
        parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))      
        
        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

        
    return parameters
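As a quick sanity check (the layer sizes below are arbitrary, not the assignment’s test values), you can print the shapes the function produces:

Code
parameters = initialize_parameters_deep([5, 4, 3])
for key, value in parameters.items():
    print(key, value.shape)
# W1 (4, 5)
# b1 (4, 1)
# W2 (3, 4)
# b2 (3, 1)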

Forward Propagation Module

Linear Forward

Now that you have initialized your parameters, you can implement the forward propagation module. Start with some basic functions that you can reuse later when implementing the model. You’ll complete three functions in this order:

  • LINEAR
  • LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
  • [LINEAR -> RELU] \(\times\) (L-1) -> LINEAR -> SIGMOID (whole model)

The linear forward module (vectorized over all the examples) computes the following equations:

\[Z^{[l]} = W^{[l]}A^{[l-1]} + b^{[l]}\tag{4}\]

where \(A^{[0]} = X\).

linear_forward

Build the linear part of forward propagation.

Reminder: The mathematical representation of this unit is \(Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\). You may also find np.dot() useful. If your dimensions don’t match, printing W.shape may help.

Code
def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter 
    cache -- a python tuple containing "A", "W" and "b" ; stored for computing the backward pass efficiently
    """
    
    Z = np.dot(W,A) + b
    cache = (A, W, b)
    
    return Z, cache

Linear-Activation Forward

In this notebook, we use two activation functions:

  • Sigmoid: \(\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}\). You’ve been provided with the sigmoid function, which returns two items: the activation value “a” and a “cache” that contains “Z” (it’s what we will feed into the corresponding backward function). To use it you could just call:
A, activation_cache = sigmoid(Z)
  • ReLU: The mathematical formula for ReLU is \(A = RELU(Z) = max(0, Z)\). You’ve been provided with the relu function. This function returns two items: the activation value “A” and a “cache” that contains “Z” (it’s what you’ll feed into the corresponding backward function). To use it you could just call:
A, activation_cache = relu(Z)
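For reference, the provided sigmoid and relu helpers behave roughly like the minimal sketch below; the actual implementations live in dnn_utils, so this is only an approximation for intuition:

Code
def sigmoid_sketch(Z):
    # Sigmoid activation: returns the activation A and a cache holding Z
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu_sketch(Z):
    # ReLU activation: returns the activation A and a cache holding Z
    A = np.maximum(0, Z)
    return A, Z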

For added convenience, group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, implement a function that does the LINEAR forward step, followed by an ACTIVATION forward step.

linear_activation_forward

Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: \(A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})\) where the activation “g” can be sigmoid() or relu(). Use linear_forward() and the correct activation function.

Code

def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value 
    cache -- a python tuple containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
    
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)        

    cache = (linear_cache, activation_cache)

    return A, cache

Note: In deep learning, the “[LINEAR->ACTIVATION]” computation is counted as a single layer in the neural network, not two layers.

L-Layer Model

For even more convenience when implementing the \(L\)-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) \(L-1\) times, then follows that with one linear_activation_forward with SIGMOID.

L_model_forward

Implement the forward propagation of the above model.

Instructions: In the code below, the variable AL will denote \(A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})\). (This is sometimes also called Yhat, i.e., this is \(\hat{Y}\).)

Hints:

  • Use the functions you’ve previously written.
  • Use a for loop to replicate [LINEAR->RELU] (L-1) times.
  • Don’t forget to keep track of the caches in the “caches” list. To add a new value c to a list, you can use list.append(c).

Code

def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
    
    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()
    
    Returns:
    AL -- activation value from the output (last) layer
    caches -- list of caches containing:
                every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
    """

    caches = []
    A = X
    L = len(parameters) // 2                  # number of layers in the neural network
 
    for l in range(1, L):
        A_prev = A 
        A, cache = linear_activation_forward(A_prev, parameters["W" + str(l)], parameters["b" + str(l)], activation='relu')
        caches.append(cache)
    AL, cache = linear_activation_forward(A, parameters["W" + str(L)], parameters["b" + str(L)], activation='sigmoid')
    caches.append(cache)
          
    return AL, caches
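A minimal shape check (the layer sizes and number of examples below are arbitrary) confirms that AL is a (1, m) row vector and that one cache is stored per layer:

Code
X_check = np.random.randn(5, 7)                          # 5 input features, 7 examples
params_check = initialize_parameters_deep([5, 4, 3, 1])
AL_check, caches_check = L_model_forward(X_check, params_check)
print(AL_check.shape)       # (1, 7)
print(len(caches_check))    # 3 -- one cache per layer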

Awesome! You’ve now implemented full forward propagation: it takes the input X and outputs a row vector \(A^{[L]}\) containing your predictions, and it records all intermediate values in “caches”. Using \(A^{[L]}\), you can compute the cost of your predictions.

Cost Function

Now that forward propagation is implemented, you need to compute the cost in order to check whether your model is actually learning.

compute_cost

Compute the cross-entropy cost \(J\), using the following formula: \[ J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \left(y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)\right)\tag{7}\]

Code

def compute_cost(AL, Y):
    """
    Implement the cost function defined by equation (7).

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
    
    m = Y.shape[1]
    cost = (-1/m) * (np.dot(Y, np.log(AL).T) + np.dot((1-Y), np.log(1-AL).T))   
    
    cost = np.squeeze(cost)      # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).

    
    return cost
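For instance (toy values, not the assignment’s test case), predicting 0.8 for a positive example and 0.1 for a negative example gives a small cost:

Code
Y_toy  = np.array([[1, 0]])
AL_toy = np.array([[0.8, 0.1]])
print(compute_cost(AL_toy, Y_toy))   # approximately 0.164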

Backward Propagation Module

Just as you did for forward propagation, you’ll implement helper functions for backpropagation. Remember that backpropagation is used to calculate the gradient of the loss function with respect to the parameters.


Now, similarly to forward propagation, you’re going to build the backward propagation in three steps:

  1. LINEAR backward
  2. LINEAR -> ACTIVATION backward, where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
  3. [LINEAR -> RELU] \(\times\) (L-1) -> LINEAR -> SIGMOID backward (whole model)

For the next exercise, you will need to remember that:

  • b is a matrix (np.ndarray) with 1 column and n rows, e.g. b = [[1.0], [2.0]] (remember that b is a constant)
  • np.sum performs a sum over the elements of an ndarray
  • axis=1 gives one sum per row (summing across columns), while axis=0 gives one sum per column (summing across rows)
  • keepdims specifies if the original dimensions of the matrix must be kept.
  • Look at the following example to clarify:
Code
A = np.array([[1, 2], [3, 4]])

print('axis=1 and keepdims=True')
print(np.sum(A, axis=1, keepdims=True))
print('axis=1 and keepdims=False')
print(np.sum(A, axis=1, keepdims=False))
print('axis=0 and keepdims=True')
print(np.sum(A, axis=0, keepdims=True))
print('axis=0 and keepdims=False')
print(np.sum(A, axis=0, keepdims=False))
axis=1 and keepdims=True
[[3]
 [7]]
axis=1 and keepdims=False
[3 7]
axis=0 and keepdims=True
[[4 6]]
axis=0 and keepdims=False
[4 6]

Linear Backward

For layer \(l\), the linear part is: \(Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}\) (followed by an activation).

Suppose you have already calculated the derivative \(dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}\). You want to get \((dW^{[l]}, db^{[l]}, dA^{[l-1]})\).

The three outputs \((dW^{[l]}, db^{[l]}, dA^{[l-1]})\) are computed using the input \(dZ^{[l]}\).

Here are the formulas you need: \[ dW^{[l]} = \frac{\partial \mathcal{J} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}\] \[ db^{[l]} = \frac{\partial \mathcal{J} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}\] \[ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}\]

\(A^{[l-1] T}\) is the transpose of \(A^{[l-1]}\).

linear_backward

Use the 3 formulas above to implement linear_backward().

Hint:

  • In numpy you can get the transpose of an ndarray A using A.T or A.transpose()
Code

def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = np.dot(dZ,A_prev.T)/m
    db = np.sum(dZ, axis=1, keepdims=True)/m
    dA_prev = np.dot(W.T, dZ)
    
    return dA_prev, dW, db

Linear-Activation Backward

Next, you’ll implement linear_activation_backward, a function that merges the linear_backward helper with the backward step for the activation.

To help you implement linear_activation_backward, two backward functions have been provided:

  • sigmoid_backward: implements the backward propagation for a SIGMOID unit. You can call it as follows:
dZ = sigmoid_backward(dA, activation_cache)
  • relu_backward: implements the backward propagation for a RELU unit. You can call it as follows:
dZ = relu_backward(dA, activation_cache)

If \(g(.)\) is the activation function, sigmoid_backward and relu_backward compute \[dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \]
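For intuition, the two provided helpers compute exactly this elementwise product; here is a minimal sketch (the real versions are in dnn_utils and use the same cache convention as the forward helpers):

Code
def relu_backward_sketch(dA, activation_cache):
    # dZ = dA * g'(Z) for ReLU: gradients pass through only where Z > 0
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward_sketch(dA, activation_cache):
    # dZ = dA * g'(Z) for sigmoid, where g'(Z) = s * (1 - s) and s = sigmoid(Z)
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)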

linear_activation_backward

Implement the backpropagation for the LINEAR->ACTIVATION layer.

Code

def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.
    
    Arguments:
    dA -- post-activation gradient for current layer l 
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
    
    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache
    
    if activation == "relu":
        dZ =  relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        
    elif activation == "sigmoid":
        dZ =  sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)       
    
    return dA_prev, dW, db

L-Model Backward

Now you will implement the backward function for the whole network!

Recall that when you implemented the L_model_forward function, at each iteration you stored a cache containing (A_prev, W, b, and Z). In the backpropagation module, you’ll use those variables to compute the gradients. Therefore, in the L_model_backward function, you’ll iterate through all the hidden layers backward, starting from layer \(L\). On each step, you will use the cached values for layer \(l\) to backpropagate through layer \(l\).

Initializing backpropagation:

To backpropagate through this network, you know that the output is: \(A^{[L]} = \sigma(Z^{[L]})\). Your code thus needs to compute dAL \(= \frac{\partial \mathcal{L}}{\partial A^{[L]}}\). To do so, use this formula (derived using calculus which, again, you don’t need in-depth knowledge of!):

dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL

You can then use this post-activation gradient dAL to keep going backward: feed dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function).

After that, use a for loop to iterate through all the other layers using the LINEAR->RELU backward function, storing each dA, dW, and db in the grads dictionary. To do so, use this formula:

\[grads["dW" + str(l)] = dW^{[l]}\tag{15} \]

For example, for \(l=3\) this would store \(dW^{[l]}\) in grads["dW3"].

L_model_backward

Implement backpropagation for the [LINEAR->RELU] \(\times\) (L-1) -> LINEAR -> SIGMOID model.

Code
def L_model_backward(AL, Y, caches):
    """
    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
    
    Arguments:
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
                every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
                the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
    
    Returns:
    grads -- A dictionary with the gradients
             grads["dA" + str(l)] = ... 
             grads["dW" + str(l)] = ...
             grads["db" + str(l)] = ... 
    """
    grads = {}
    L = len(caches) # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL

    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    current_cache = caches[L-1] # Last Layer
    grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, "sigmoid")

    for l in reversed(range(L-1)):
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 1)], current_cache, activation = "relu")
        grads["dA" + str(l)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp

    return grads

Update Parameters

In this section, you’ll update the parameters of the model, using gradient descent:

\[ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}\] \[ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}\]

where \(\alpha\) is the learning rate.

After computing the updated parameters, store them in the parameters dictionary.

update_parameters

Implement update_parameters() to update your parameters using gradient descent.

Instructions: Update parameters using gradient descent on every \(W^{[l]}\) and \(b^{[l]}\) for \(l = 1, 2, ..., L\).

Code
def update_parameters(params, grads, learning_rate):
    """
    Update parameters using gradient descent
    
    Arguments:
    params -- python dictionary containing your parameters 
    grads -- python dictionary containing your gradients, output of L_model_backward
    learning_rate -- the learning rate, scalar
    
    Returns:
    parameters -- python dictionary containing your updated parameters 
                  parameters["W" + str(l)] = ... 
                  parameters["b" + str(l)] = ...
    """
    parameters = copy.deepcopy(params)
    L = len(parameters) // 2 # number of layers in the neural network
    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]       

    return parameters
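To see how the helper functions fit together (this is only an illustrative sketch; the actual two-layer and L-layer models, with their real hyperparameters, are built in the next notebook), one round of training simply chains them like this:

Code
def train_sketch(X, Y, layer_dims, learning_rate=0.0075, num_iterations=100):
    # Illustrative only: layer_dims and hyperparameters are placeholders
    parameters = initialize_parameters_deep(layer_dims)
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)                        # forward pass
        cost = compute_cost(AL, Y)                                         # cross-entropy cost
        grads = L_model_backward(AL, Y, caches)                            # backward pass
        parameters = update_parameters(parameters, grads, learning_rate)   # gradient descent step
    return parameters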

Summary

You’ve now implemented all the functions required for building a deep neural network, including:

  • Using non-linear units to improve your model
  • Building a deeper neural network (with more than 1 hidden layer)
  • Implementing an easy-to-use neural network class

In the next notebook, you’ll be putting all these together to build two models:

  • A two-layer neural network
  • An L-layer neural network

You will in fact use these models to classify cat vs non-cat images.
