9. Deep Learning Exercise: Optimization Methods

This post is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization; many thanks to the course authors.

Course link: https://www.deeplearning.ai/deep-learning-specialization/


目录

1 - Gradient Descent

2 - Mini-Batch Gradient descent

3 - Momentum

4 - Adam

5 - Model with different optimization algorithms

5.1 - Mini-batch Gradient descent

5.2 - Mini-batch gradient descent with momentum

5.3 - Mini-batch with Adam mode


Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.

Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to reach the lowest point of a valley: at each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point.

Notations: As usual, $\frac{\partial J}{\partial a} = da$ for any variable $a$.

To get started, run the following code to import the libraries you will need.

import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

1 - Gradient Descent

A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.

Warm-up exercise: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$:

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]} \tag{1}$$

$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]} \tag{2}$$

where $L$ is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop, while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.

def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters to be updated:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients to update each parameter:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    L = len(parameters) // 2  # number of layers in the neural network

    for l in range(L):
        parameters['W' + str(l+1)] -= learning_rate * grads['dW' + str(l+1)]
        parameters['b' + str(l+1)] -= learning_rate * grads['db' + str(l+1)]

    return parameters
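
As a quick sanity check, here is a minimal call with made-up shapes (the official inputs come from testCases; the dictionaries below are purely illustrative):

np.random.seed(1)
parameters = {'W1': np.random.randn(2, 3), 'b1': np.random.randn(2, 1)}   # hypothetical one-layer parameters
grads = {'dW1': np.random.randn(2, 3), 'db1': np.random.randn(2, 1)}      # hypothetical gradients
parameters = update_parameters_with_gd(parameters, grads, learning_rate=0.01)
print(parameters['W1'].shape, parameters['b1'].shape)   # shapes are preserved: (2, 3) (2, 1)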

A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.

  • (Batch) Gradient Descent:
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    # Forward propagation
    a, caches = forward_propagation(X, parameters)
    # Compute cost
    cost = compute_cost(a, Y)
    # Backward propagation
    grads = backward_propagation(a, caches, parameters)
    # Update parameters
    parameters = update_parameters(parameters, grads)
  • Stochastic Gradient Descent:
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    for j in range(0, m):
        # Forward propagation
        a, caches = forward_propagation(X[:, j], parameters)
        # Compute cost
        cost = compute_cost(a, Y[:, j])
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters
        parameters = update_parameters(parameters, grads)

In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster, but the parameters will "oscillate" toward the minimum rather than converge smoothly.

Note also that implementing SGD requires 3 for-loops in total:

  • Over the number of iterations
  • Over the $m$ training examples
  • Over the layers (to update all parameters, from $(W^{[1]}, b^{[1]})$ to $(W^{[L]}, b^{[L]})$)

In practice, you'll often get faster results if you use neither the whole training set nor only one training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples, as sketched below.
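
In the same pseudocode style as the two loops above, mini-batch gradient descent looks roughly like this (a sketch only; random_mini_batches is implemented in the next section):

X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    minibatches = random_mini_batches(X, Y, mini_batch_size)   # built in Section 2
    for minibatch_X, minibatch_Y in minibatches:
        # Forward propagation
        a, caches = forward_propagation(minibatch_X, parameters)
        # Compute cost
        cost = compute_cost(a, minibatch_Y)
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters
        parameters = update_parameters(parameters, grads)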

**What you should remember**: - The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step. - You have to tune a learning rate hyperparameter $\alpha$. - With a well-tuned mini-batch size, mini-batch gradient descent usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).


2 - Mini-Batch Gradient descent

Let's learn how to build mini-batches from the training set (X, Y).

There are two steps:

  • Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.

  • Partition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size, so the last mini-batch might be smaller than the full mini_batch_size, but you don't need to worry about this.

Exercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the 1st and 2nd mini-batches:

first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...

Note that the last mini-batch might end up smaller than mini_batch_size = 64. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is math.floor(s) in Python). If the total number of examples $m$ is not a multiple of mini_batch_size = 64, then there will be $\lfloor \frac{m}{mini\_batch\_size} \rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size} \rfloor$.

def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    np.random.seed(seed)            # To make your "random" minibatches the same as ours
    m = X.shape[1]                  # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)  # number of mini-batches of size mini_batch_size in your partitioning
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size :]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size :]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches
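
For example (illustrative shapes, not the grader's test case), with m = 148 examples and mini_batch_size = 64 you should get $\lfloor 148/64 \rfloor = 2$ full mini-batches plus a final one of 148 - 2 × 64 = 20 examples:

np.random.seed(0)
X_check = np.random.randn(12288, 148)        # hypothetical input data
Y_check = (np.random.randn(1, 148) < 0.5)    # hypothetical labels
mini_batches = random_mini_batches(X_check, Y_check, mini_batch_size=64)
print([mb_X.shape[1] for mb_X, mb_Y in mini_batches])   # [64, 64, 20]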

**What you should remember**: - Shuffling and Partitioning are the two steps required to build mini-batches - Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.


3 - Momentum

Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.

Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradients on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.

Exercise: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the grads dictionary, that is, for $l = 1, ..., L$:

v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
def initialize_velocity(parameters):
    """
    Initializes the velocity as a python dictionary with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl

    Returns:
    v -- python dictionary containing the current velocity (same shapes as dW, db).
                    v['dW' + str(l)] = velocity of dWl
                    v['db' + str(l)] = velocity of dbl
    """
    L = len(parameters) // 2  # number of layers in the neural network
    v = {}

    # Initialize velocity
    for l in range(L):
        v['dW' + str(l+1)] = np.zeros(parameters['W' + str(l+1)].shape)
        v['db' + str(l+1)] = np.zeros(parameters['b' + str(l+1)].shape)

    return v

Exercise: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$:

$$\begin{cases} v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\ W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}} \end{cases}\tag{3}$$

$$\begin{cases} v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\ b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$

def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """
    Update parameters using Momentum

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameter:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- python dictionary containing the current velocity:
                    v['dW' + str(l)] = ...
                    v['db' + str(l)] = ...
    beta -- the momentum hyperparameter, scalar
    learning_rate -- the learning rate, scalar

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- python dictionary containing your updated velocities
    """
    L = len(parameters) // 2  # number of layers in the neural network

    # Momentum update for each parameter
    for l in range(L):
        v['dW' + str(l+1)] = beta * v['dW' + str(l+1)] + (1 - beta) * grads['dW' + str(l+1)]
        v['db' + str(l+1)] = beta * v['db' + str(l+1)] + (1 - beta) * grads['db' + str(l+1)]
        parameters['W' + str(l+1)] = parameters['W' + str(l+1)] - learning_rate * v['dW' + str(l+1)]
        parameters['b' + str(l+1)] = parameters['b' + str(l+1)] - learning_rate * v['db' + str(l+1)]

    return parameters, v
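
A minimal end-to-end sketch with toy shapes (assumed for illustration, not the official test case):

np.random.seed(2)
parameters = {'W1': np.random.randn(2, 3), 'b1': np.random.randn(2, 1)}   # hypothetical parameters
grads = {'dW1': np.random.randn(2, 3), 'db1': np.random.randn(2, 1)}      # hypothetical gradients
v = initialize_velocity(parameters)
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta=0.9, learning_rate=0.01)
# On the very first step v = (1 - beta) * grads, so the update is 10x smaller
# than a plain gradient descent step with the same learning rate.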

Note that:

  • The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
  • If $\beta = 0$, then this just becomes standard gradient descent without momentum.

How do you choose $\beta$?

  • The larger the momentum $\beta$ is, the smoother the update, because we take the past gradients into account more. But if $\beta$ is too big, it could also smooth out the updates too much.
  • Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
  • Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$ (see the quick illustration below).
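
To make the "averaging window" intuition concrete, here is a toy scalar illustration (not part of the assignment) showing that with $\beta = 0.9$ the exponentially weighted average warms up over roughly $1/(1-\beta) = 10$ steps:

beta = 0.9
v = 0.0
for t in range(1, 51):
    v = beta * v + (1 - beta) * 1.0          # constant 'gradient' of 1.0
    if t in (1, 10, 50):
        print(t, round(v, 3))                # prints 0.1, then ~0.651, then ~0.995

This is also why the velocity "builds up" from zero over the first few iterations, as noted above.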

**What you should remember**: - Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent. - You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.


4 - Adam

Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.

How does Adam work?

  1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
  2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
  3. It updates parameters in a direction based on combining information from "1" and "2".

The update rule is, for $l = 1, ..., L$:

$$\begin{cases} v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\ v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\ s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) \left(\frac{\partial \mathcal{J} }{\partial W^{[l]} }\right)^2 \\ s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\ W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon} \end{cases}$$

where:

  • t counts the number of steps taken of Adam
  • L is the number of layers
  • $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages
  • $\alpha$ is the learning rate
  • $\varepsilon$ is a very small number to avoid dividing by zero
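
The bias-correction terms $1 - (\beta_1)^t$ and $1 - (\beta_2)^t$ matter mainly early in training: since $v$ and $s$ start at zero, the raw moving averages underestimate the true moments for small $t$. A toy scalar illustration (not part of the assignment):

beta1 = 0.9
v = 0.0
for t in range(1, 4):
    v = beta1 * v + (1 - beta1) * 1.0        # constant 'gradient' of 1.0
    print(t, round(v, 3), round(v / (1 - beta1 ** t), 3))
# raw estimate: 0.1, 0.19, 0.271 -- biased toward zero
# corrected estimate: 1.0 at every step, as desired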

As usual, we will store all parameters in the parameters dictionary.

Exercise: Initialize the Adam variables $v, s$ which keep track of the past information.

Instruction: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for grads, that is, for $l = 1, ..., L$:

def initialize_adam(parameters):
    """
    Initializes v and s as two python dictionaries with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters["W" + str(l)] = Wl
                    parameters["b" + str(l)] = bl

    Returns:
    v -- python dictionary that will contain the exponentially weighted average of the gradient.
                    v["dW" + str(l)] = ...
                    v["db" + str(l)] = ...
    s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
                    s["dW" + str(l)] = ...
                    s["db" + str(l)] = ...
    """
    L = len(parameters) // 2  # number of layers in the neural network
    v = {}
    s = {}

    # Initialize v, s. Input: "parameters". Outputs: "v, s".
    for l in range(L):
        v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
        s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)

    return v, s

Exercise: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:

$$\begin{cases} v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\ v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\ s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) \left(\frac{\partial J }{\partial W^{[l]} }\right)^2 \\ s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\ W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}}+\varepsilon} \end{cases}$$

def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
                                beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
    """
    Update parameters using Adam

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameter:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    t -- counter of the number of Adam steps taken (used for bias correction)
    learning_rate -- the learning rate, scalar.
    beta1 -- Exponential decay hyperparameter for the first moment estimates
    beta2 -- Exponential decay hyperparameter for the second moment estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    """
    L = len(parameters) // 2          # number of layers in the neural network
    v_corrected = {}                  # Initializing first moment estimate, python dictionary
    s_corrected = {}                  # Initializing second moment estimate, python dictionary

    # Perform Adam update on all parameters
    for l in range(L):
        # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
        v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads['dW' + str(l+1)]
        v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads['db' + str(l+1)]

        # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
        v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1 ** t)
        v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1 ** t)

        # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
        s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * (grads['dW' + str(l+1)] ** 2)
        s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * (grads['db' + str(l+1)] ** 2)

        # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
        s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2 ** t)
        s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2 ** t)

        # Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / (np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon)
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / (np.sqrt(s_corrected["db" + str(l+1)]) + epsilon)

    return parameters, v, s
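
As before, a minimal call with toy shapes (assumed for illustration, not the official test case):

np.random.seed(3)
parameters = {'W1': np.random.randn(2, 3), 'b1': np.random.randn(2, 1)}   # hypothetical parameters
grads = {'dW1': np.random.randn(2, 3), 'db1': np.random.randn(2, 1)}      # hypothetical gradients
v, s = initialize_adam(parameters)
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t=1, learning_rate=0.01)
# At t = 1 the bias-corrected ratio v_corrected / sqrt(s_corrected) is roughly +/-1
# in each coordinate, so the first step moves each parameter by about learning_rate.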

5 - Model with different optimization algorithms

Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)

train_X, train_Y = load_dataset()

We have already implemented a 3-layer neural network. You will train it with:

  • Mini-batch Gradient Descent: it will call your function:
    • update_parameters_with_gd()
  • Mini-batch Momentum: it will call your functions:
    • initialize_velocity() and update_parameters_with_momentum()
  • Mini-batch Adam: it will call your functions:
    • initialize_adam() and update_parameters_with_adam()

 

def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
          beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
    """
    3-layer neural network model which can be run in different optimizer modes.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    layers_dims -- python list, containing the size of each layer
    optimizer -- the optimizer to use: "gd", "momentum" or "adam"
    learning_rate -- the learning rate, scalar.
    mini_batch_size -- the size of a mini batch
    beta -- Momentum hyperparameter
    beta1 -- Exponential decay hyperparameter for the past gradients estimates
    beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates
    num_epochs -- number of epochs
    print_cost -- True to print the cost every 1000 epochs

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    L = len(layers_dims)             # number of layers in the neural network
    costs = []                       # to keep track of the cost
    t = 0                            # initializing the counter required for Adam update
    seed = 10                        # For grading purposes, so that your "random" minibatches are the same as ours

    # Initialize parameters
    parameters = initialize_parameters(layers_dims)

    # Initialize the optimizer
    if optimizer == "gd":
        pass  # no initialization required for gradient descent
    elif optimizer == "momentum":
        v = initialize_velocity(parameters)
    elif optimizer == "adam":
        v, s = initialize_adam(parameters)

    # Optimization loop
    for i in range(num_epochs):

        # Define the random minibatches. We increment the seed to reshuffle the dataset differently after each epoch
        seed = seed + 1
        minibatches = random_mini_batches(X, Y, mini_batch_size, seed)

        for minibatch in minibatches:

            # Select a minibatch
            (minibatch_X, minibatch_Y) = minibatch

            # Forward propagation
            a3, caches = forward_propagation(minibatch_X, parameters)

            # Compute cost
            cost = compute_cost(a3, minibatch_Y)

            # Backward propagation
            grads = backward_propagation(minibatch_X, minibatch_Y, caches)

            # Update parameters
            if optimizer == "gd":
                parameters = update_parameters_with_gd(parameters, grads, learning_rate)
            elif optimizer == "momentum":
                parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
            elif optimizer == "adam":
                t = t + 1  # Adam counter
                parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
                                                               t, learning_rate, beta1, beta2, epsilon)

        # Print the cost every 1000 epochs
        if print_cost and i % 1000 == 0:
            print("Cost after epoch %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('epochs (per 100)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters

5.1 - Mini-batch Gradient descent

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)


5.2 - Mini-batch gradient descent with momentum

Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

5.3 - Mini-batch with Adam mode

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)


Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.

Adam, on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.

Some advantages of Adam include:

  • Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
  • Usually works well even with little tuning of hyperparameters (except $\alpha$)
