Andrew Ng's Deep Learning Specialization, Course 1, Week 4: Implementing an L-Layer Neural Network and Handling a Bug

Table of Contents

  • I. Implementation
    • 1. Utility functions provided by Andrew Ng
      • sigmoid
      • Derivative of sigmoid
      • relu
      • Derivative of relu
    • 2. Implementation code
      • Imports and configuration
      • Parameter initialization
      • Forward propagation
      • Cost computation
      • Backward propagation
      • Parameter update
      • Assembling the model
    • 3. Problems and thoughts

I. Implementation

1. Utility functions provided by Andrew Ng

These functions are shown here only for reference; they are helpers written by Andrew Ng and are imported in the implementation section below. See the provided attachment for details.

sigmoid

def sigmoid(Z):
    A = 1/(1+np.exp(-Z))
    cache = Z
    return A, cache

Derivative of sigmoid

def sigmoid_backward(dA, cache):
    Z = cache
    s = 1/(1+np.exp(-Z))
    dZ = dA * s * (1-s)
    return dZ

relu

def relu(Z):
    A = np.maximum(0, Z)
    cache = Z
    return A, cache

Derivative of relu

def relu_backward(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ
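As a quick sanity check (my own addition, not part of the course materials), the relu pair can be exercised on a tiny made-up array once numpy and the helpers above are available (they are imported from dnn_utils_v2 in the next section): relu zeroes out negative pre-activations, and relu_backward lets gradients through only where Z > 0.

import numpy as np

# Tiny sanity check of the relu pair (hypothetical input, not from the assignment).
Z = np.array([[-1.0, 0.0, 2.0]])
A, cache = relu(Z)                  # A = [[0., 0., 2.]]
dA = np.ones_like(A)                # pretend the upstream gradient is all ones
dZ = relu_backward(dA, cache)       # dZ = [[0., 0., 1.]] -- gradient blocked where Z <= 0
print(A, dZ)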

2. Implementation code

Imports and configuration

import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

Parameter initialization

def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                  bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)

    for l in range(1, L):
        parameters['W%d' % l] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b%d' % l] = np.zeros((layer_dims[l], 1))

    return parameters
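A short usage sketch (my own example, not one of the course's test cases): for a 5-dimensional input, one hidden layer of 4 units, and a single output unit, layer_dims is [5, 4, 1], and the parameter shapes follow directly from the definition above.

# Hypothetical example: 5 input features, one hidden layer with 4 units, 1 output unit.
parameters = initialize_parameters_deep([5, 4, 1])
print(parameters['W1'].shape)   # (4, 5)
print(parameters['b1'].shape)   # (4, 1)
print(parameters['W2'].shape)   # (1, 4)
print(parameters['b2'].shape)   # (1, 1)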

Forward propagation

def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter
    cache -- a python dictionary containing "A", "W" and "b";
             stored for computing the backward pass efficiently
    """
    Z = np.dot(W, A) + b
    cache = (A, W, b)
    return Z, cache


def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python dictionary containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)

    cache = (linear_cache, activation_cache)
    return A, cache


def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
              every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
              the cache of linear_sigmoid_forward() (there is one, indexed L-1)
    """
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A
        ### START CODE HERE ### (≈ 2 lines of code)
        A, linear_activation_cache = linear_activation_forward(A_prev,
            parameters['W%s' % l], parameters['b%s' % l], activation="relu")
        caches.append(linear_activation_cache)
        ### END CODE HERE ###

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    ### START CODE HERE ### (≈ 2 lines of code)
    AL, linear_activation_cache = linear_activation_forward(A,
        parameters['W%s' % L], parameters['b%s' % L], activation="sigmoid")
    caches.append(linear_activation_cache)
    ### END CODE HERE ###

    return AL, caches
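To check that the shapes line up end to end, a toy forward pass can be run (again my own example, assuming the imports and functions above are already in the notebook); caches should hold one entry per layer.

# Toy shape check (not the course's test case).
np.random.seed(1)
X = np.random.randn(5, 3)                          # 5 features, 3 examples
parameters = initialize_parameters_deep([5, 4, 1])
AL, caches = L_model_forward(X, parameters)
print(AL.shape)       # (1, 3) -- one probability per example
print(len(caches))    # 2 -- one cache per layer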

Cost computation

def compute_cost(AL, Y):
    m = Y.shape[1]

    # Compute loss from aL and y.
    ### START CODE HERE ### (≈ 1 lines of code)
    cost = -1./m * (np.dot(np.log(AL), Y.T) + np.dot(np.log(1-AL), (1-Y).T))
    ### END CODE HERE ###

    cost = np.squeeze(cost)
    return cost
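This is the cross-entropy cost J = -(1/m) * sum( y*log(a) + (1-y)*log(1-a) ), written with dot products instead of an elementwise sum. A hand-checkable example with my own made-up numbers:

# Two examples, each predicted with probability 0.8 for its true class.
AL = np.array([[0.8, 0.2]])
Y  = np.array([[1, 0]])
print(compute_cost(AL, Y))    # ≈ 0.2231, i.e. -log(0.8)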

Backward propagation

def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]

    dA_prev = np.dot(W.T, dZ)
    dW = 1./m * np.dot(dZ, A_prev.T)
    db = 1./m * np.sum(dZ, axis=1, keepdims=True)

    return dA_prev, dW, db
def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache

    # Use the backward function that matches the activation used in the forward pass.
    if activation == "relu":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###
    elif activation == "sigmoid":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###

    return dA_prev, dW, db
def L_model_backward(AL, Y, caches):
    """
    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group

    Arguments:
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
              every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
              the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])

    Returns:
    grads -- A dictionary with the gradients
             grads["dA" + str(l)] = ...
             grads["dW" + str(l)] = ...
             grads["db" + str(l)] = ...
    """
    grads = {}
    L = len(caches)  # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)  # after this line, Y is the same shape as AL

    # Initializing the backpropagation
    ### START CODE HERE ### (1 line of code)
    grads['dA'+str(L)] = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
    ### END CODE HERE ###

    # Lth layer (SIGMOID -> LINEAR) gradients
    layer = L
    grads['dA'+str(layer-1)], grads['dW'+str(layer)], grads['db'+str(layer)] = \
        linear_activation_backward(grads['dA'+str(layer)], caches[layer-1], activation="sigmoid")

    # lth layer (RELU -> LINEAR) gradients
    for l in reversed(range(L - 1)):
        layer = l + 1
        grads['dA'+str(layer-1)], grads['dW'+str(layer)], grads['db'+str(layer)] = \
            linear_activation_backward(grads['dA'+str(layer)], caches[layer-1], activation="relu")

    return grads
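Under my indexing convention, grads['dA' + str(l)] is the gradient with respect to layer l's activation (so grads['dA' + str(L)] is dAL), while dW and db are indexed by the layer they belong to. A toy run (my own example, assuming the functions above) shows which keys end up in grads:

# Toy run to inspect the gradient keys (not the course's test case).
np.random.seed(1)
X = np.random.randn(5, 3)
Y = np.array([[1, 0, 1]])
parameters = initialize_parameters_deep([5, 4, 1])
AL, caches = L_model_forward(X, parameters)
grads = L_model_backward(AL, Y, caches)
print(sorted(grads.keys()))
# ['dA0', 'dA1', 'dA2', 'dW1', 'dW2', 'db1', 'db2'] -- dA2 is dAL under this convention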

Parameter update

def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward

    Returns:
    parameters -- python dictionary containing your updated parameters
                  parameters["W" + str(l)] = ...
                  parameters["b" + str(l)] = ...
    """
    L = len(parameters) // 2  # number of layers in the neural network

    for l in range(1, L+1):
        parameters['W'+str(l)] = parameters['W'+str(l)] - learning_rate * grads['dW'+str(l)]
        parameters['b'+str(l)] = parameters['b'+str(l)] - learning_rate * grads['db'+str(l)]

    return parameters
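Each parameter follows the plain gradient-descent rule W := W - learning_rate * dW. A minimal standalone example with my own toy numbers:

# Minimal standalone example of the update rule (hypothetical numbers).
parameters = {'W1': np.array([[1.0, 2.0]]), 'b1': np.array([[0.5]])}
grads = {'dW1': np.array([[0.1, -0.2]]), 'db1': np.array([[0.3]])}
parameters = update_parameters(parameters, grads, learning_rate=0.1)
print(parameters['W1'])   # [[0.99 2.02]]
print(parameters['b1'])   # [[0.47]]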

Assembling the model

def L_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):  # lr was 0.009
    """
    Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []

    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###

        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###

        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###

        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Print the cost every 100 training examples
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
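For reference, this is roughly how the assignment trains the model on the cat/non-cat dataset. The layer sizes [12288, 20, 7, 5, 1] are the ones used in the course notebook, but load_data() lives in the assignment's separate utilities file, so treat this as a sketch rather than code that runs on its own:

# Sketch of training on the cat dataset (load_data comes from the course's utilities file).
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
train_x = train_x_orig.reshape(train_x_orig.shape[0], -1).T / 255.   # flatten to (12288, m) and scale
layers_dims = [12288, 20, 7, 5, 1]   # 4-layer model
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations=2500, print_cost=True)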

3. Problems and thoughts

Except for the test case for the L-layer backward propagation, every other step and the final result come out correct.
The comparison between my code's output and the test case's expected output is shown below:
[figure: my output compared with the test case's expected output]
As you can see, the two do not match at all!
So I searched for many answers online; the difference between their code and mine always lies in the L-layer backward propagation:
(1) Source
[figure: the first answer's L_model_backward code]
(2) Source
[figure: the second answer's L_model_backward code]
Judging from these two answers, their way of writing it looks wrong to me; yet they match the expected output and I do not. Even after rewriting my code to match theirs, I still could not match it. So the mismatch may lie in my test case? I did not check whether their test cases are the same as mine. In any case, only this one test case fails and everything after it is correct, which suggests my implementation is fine.
As for why their code looks incorrect to me, here is my code:
[figure: my L_model_backward code, with spots marked 1, 2 and 3]
Clearly, the formula at spot 1 gives the derivative with respect to the output layer's (layer L's) activation, while each backward step at spots 2 and 3 should produce the derivative with respect to the previous layer's activation together with the derivatives of the current layer's W and b, as illustrated below (see also the sketch after the figure).
[figure: illustration of the backward-propagation indexing]
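To spell out the bookkeeping I mean, here is a small sketch (my own, for L = 3) of which quantities each backward call consumes and produces under my convention:

# Sketch of the per-layer index bookkeeping in my L_model_backward (L = 3 here).
L = 3
print("dA%d is initialized from AL and Y" % L)
for layer in range(L, 0, -1):
    activation = "sigmoid" if layer == L else "relu"
    print("layer %d (%s): consumes dA%d and caches[%d], produces dA%d, dW%d, db%d"
          % (layer, activation, layer, layer - 1, layer - 1, layer, layer))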
