5. Deep Learning Exercise: Deep Neural Network for Image Classification: Application

This article is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization; many thanks to the course authors.

Course link: https://www.deeplearning.ai/deep-learning-specialization/

After this assignment you will be able to:

  • Build and apply a deep neural network to supervised learning.

Contents

1 - Packages

2 - Dataset

3 - Architecture of your model

3.1 - 2-layer neural network

3.2 - L-layer deep neural network

3.3 - General methodology

4 - Two-layer neural network

5 - L-layer Neural Network

6 - Results Analysis

7 - Test with your own image (optional/ungraded exercise)


1 - Packages

Let's first import all the packages that you will need during this assignment.

  • numpy is the fundamental package for scientific computing with Python.
  • matplotlib is a library to plot graphs in Python.
  • h5py is a common package to interact with a dataset that is stored on an H5 file.
  • PIL and scipy are used here to test your model with your own picture at the end.
  • dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

2 - Dataset

You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you built then reached 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform better!

Problem Statement: You are given a dataset ("data.h5") containing:

- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).

Let's get more familiar with the dataset. Load the data by running the cell below.

train_x_orig, train_y, test_x_orig, test_y, classes = load_data()

# Example of a picture
index = 9
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") +  " picture.")
print(train_x_orig.shape)
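Before reshaping, it helps to record the basic dataset dimensions. The short sketch below follows the spirit of the exploration cell in the original notebook; num_px is reused in section 7 when resizing your own image.

m_train = train_x_orig.shape[0]   # number of training examples
num_px = train_x_orig.shape[1]    # height/width of each square image
m_test = test_x_orig.shape[0]     # number of test examples

print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")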

As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.

# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T   # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T

# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.

# For the standard "Cat vs non-Cat" dataset these print (12288, 209) and (12288, 50).
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))


3 - Architecture of your model

Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.

You will build two different models:

  • A 2-layer neural network
  • An L-layer deep neural network

You will then compare the performance of these models, and also try out different values for L.

Let's look at the two architectures.

3.1 - 2-layer neural network

The model can be summarized as: ***INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT***.
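Concretely, with the constants used in section 4 (n_x = 12288, n_h = 7, n_y = 1) and the course's notation, a single forward pass computes:

A^[1] = ReLU(W^[1] X + b^[1]),  where W^[1] has shape (n_h, n_x) = (7, 12288)
Ŷ = A^[2] = sigmoid(W^[2] A^[1] + b^[2]),  where W^[2] has shape (n_y, n_h) = (1, 7)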

3.2 - L-layer deep neural network

The model can be summarized as: ***[LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID***
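More precisely, writing A^[0] = X, each layer l = 1, ..., L-1 computes

Z^[l] = W^[l] A^[l-1] + b^[l],  A^[l] = ReLU(Z^[l])

and the output layer replaces the ReLU with a sigmoid, A^[L] = sigmoid(Z^[L]), which is the predicted probability of the image being a cat.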

3.3 - General methodology

As usual you will follow the Deep Learning methodology to build the model:

1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
   a. Forward propagation
   b. Compute cost function
   c. Backward propagation
   d. Update parameters (using parameters, and grads from backprop)
3. Use trained parameters to predict labels

4 - Two-layer neural network

Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:

def initialize_parameters(n_x, n_h, n_y):
    ...
    return parameters

def linear_activation_forward(A_prev, W, b, activation):
    ...
    return A, cache

def compute_cost(AL, Y):
    ...
    return cost

def linear_activation_backward(dA, cache, activation):
    ...
    return dA_prev, dW, db

def update_parameters(parameters, grads, learning_rate):
    ...
    return parameters
### CONSTANTS DEFINING THE MODEL ###
n_x = 12288     # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)

def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- If set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """
    np.random.seed(1)
    grads = {}
    costs = []                              # to keep track of the cost
    m = X.shape[1]                          # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    parameters = initialize_parameters(n_x, n_h, n_y)

    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
        A1, cache1 = linear_activation_forward(X, W1, b1, 'relu')
        A2, cache2 = linear_activation_forward(A1, W2, b2, 'sigmoid')

        # Compute cost
        cost = compute_cost(A2, Y)

        # Initializing backward propagation
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid')
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu')

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters (use the learning_rate argument, not a hard-coded value)
        parameters = update_parameters(parameters, grads, learning_rate)

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)

predictions_train = predict(train_x, train_y, parameters)
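The original notebook also checks generalization by running the same predict helper on the test set; for this two-layer model the assignment reports roughly 72% test accuracy (versus the 70% of logistic regression):

predictions_test = predict(test_x, test_y, parameters)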

5 - L-layer Neural Network

Question: Use the helper functions you have implemented previously to build an L-layer neural network with the following structure: [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:

def initialize_parameters_deep(layer_dims):
    ...
    return parameters

def L_model_forward(X, parameters):
    ...
    return AL, caches

def compute_cost(AL, Y):
    ...
    return cost

def L_model_backward(AL, Y, caches):
    ...
    return grads

def update_parameters(parameters, grads, learning_rate):
    ...
    return parameters
layers_dims = [12288, 20, 7, 5, 1]  # 4-layer model (L = 4; the input is not counted as a layer)

def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False): # lr was 0.009
    """
    Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- input data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []                         # keep track of cost

    parameters = initialize_parameters_deep(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        AL, caches = L_model_forward(X, parameters)

        # Compute cost.
        cost = compute_cost(AL, Y)

        # Backward propagation.
        grads = L_model_backward(AL, Y, caches)

        # Update parameters (use the learning_rate argument, not a hard-coded value).
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
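Training and evaluation mirror the two-layer case. The calls below follow the original notebook, where this deeper model reaches roughly 80% test accuracy; pred_test is reused in the next section:

parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)

pred_train = predict(train_x, train_y, parameters)
pred_test = predict(test_x, test_y, parameters)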

6 - Results Analysis

First, let's take a look at some images the L-layer model labeled incorrectly; the helper call below displays a few of them.
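A minimal way to display them, assuming the print_mislabeled_images helper shipped with dnn_app_utils_v2 and the pred_test predictions computed in section 5:

print_mislabeled_images(classes, test_x, test_y, pred_test)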

A few types of images the model tends to do poorly on include:

  • Cat body in an unusual position
  • Cat appears against a background of a similar color
  • Unusual cat color and species
  • Camera Angle
  • Brightness of the picture
  • Scale variation (cat is very large or small in image)

7 - Test with your own image (optional/ungraded exercise)

Congratulations on finishing this assignment. You can now use your own image and see the output of your model. To do that, put your image in the images/ folder and run the cell below:

my_image = "my_image.jpg" # change this to the name of your image file 
my_label_y = [] # the true class of your image (1 -> cat, 0 -> non-cat)fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") +  "\" picture.")
