1. Deep Learning Exercise: Python Basics with Numpy (Optional)

This article is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization, with thanks to the original course.

课程链接:https://www.deeplearning.ai/deep-learning-specialization


目录

1 - Building basic functions with numpy

1.1 - np.exp(), sigmoid function

1.2 - Sigmoid gradient

1.3 - Reshaping arrays (commonly used)

1.4 - Normalizing rows (commonly used)

1.5 - Broadcasting and the softmax function (worth understanding)

2 - Vectorization

2.1 - Implement the L1 and L2 loss functions

3 - Recommended reading


1 - Building basic functions with numpy

1.1 - np.exp(), sigmoid function

import numpy as np

x = np.array([1, 2, 3])
print(np.exp(x))  # elementwise exponential: [ 2.71828183  7.3890561  20.08553692]
print(x + 3)      # broadcasting a scalar over the array: [4 5 6]

Exercise: Implement the sigmoid function using numpy.

Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.

import numpy as np

def sigmoid(x):
    """Compute the sigmoid of x, where x is a scalar or a numpy array."""
    s = 1 / (1 + np.exp(-x))
    return s
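A quick check of the function on a small array (the printed values are rounded by numpy):

x = np.array([1, 2, 3])
print(sigmoid(x))  # [0.73105858 0.88079708 0.95257413]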

1.2 - Sigmoid gradient

Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is:

\text{sigmoid\_derivative}(x) = \sigma'(x) = \sigma(x)(1 - \sigma(x))

You often code this function in two steps:

  1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
  2. Compute the sigmoid gradient as s * (1 - s).
def sigmoid_derivative(x):
    """Compute the gradient of the sigmoid function with respect to its input x."""
    s = 1 / (1 + np.exp(-x))  # step 1: s is the sigmoid of x
    ds = s * (1 - s)          # step 2: the sigmoid gradient
    return ds

x = np.array([1, 2, 3])
print("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))

1.3 - Reshaping arrays (commonly used)

Two common numpy functions used in deep learning are np.shape and np.reshape().

  • X.shape is used to get the shape (dimension) of a matrix/vector X
  • X.reshape(...) is used to reshape X into some other dimension
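A minimal demo of both (the array contents and target shape here are arbitrary illustrations):

a = np.arange(6)        # array([0, 1, 2, 3, 4, 5])
print(a.shape)          # (6,)
b = a.reshape((2, 3))   # rearrange the same 6 elements into 2 rows of 3
print(b.shape)          # (2, 3)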

For example, in computer science, an image is represented by a 3D array of shape (length, height, depth = 3). However, when you read an image as the input of an algorithm you convert it to a vector of shape (length*height*3, 1). In other words, you "unroll", or reshape, the 3D array into a 1D vector.

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
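A sketch of that reshape, assuming v is a numpy array so that v.shape is the tuple (a, b, c):

v = v.reshape((v.shape[0] * v.shape[1], v.shape[2]))  # v.shape[0] = a, v.shape[1] = b, v.shape[2] = c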

def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
    return v
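A quick shape check on a small random array; the (2, 2, 3) shape is an arbitrary stand-in for a real image:

image = np.random.rand(2, 2, 3)    # stand-in for a (length, height, 3) image
print(image2vector(image).shape)   # (12, 1)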

1.4 - Normalizing rows (commonly used)

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to \frac{x}{\| x\|} (dividing each row vector of x by its norm).

Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).

def normalizeRows(x):
    """
    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix.
    """
    # Compute x_norm as the L2 norm of each row. Use np.linalg.norm(..., ord=2, axis=1, keepdims=True).
    x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
    # Divide x by its norm; broadcasting divides each row by its own norm.
    x = x / x_norm
    return x
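A quick check on a small matrix; the first row has norm 5, so it becomes [0, 0.6, 0.8] (values rounded):

x = np.array([[0, 3, 4],
              [1, 6, 4]])
print(normalizeRows(x))
# [[0.         0.6        0.8       ]
#  [0.13736056 0.82416338 0.54944226]]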

1.5 - Broadcasting and the softmax function (worth understanding)

A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
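As a minimal illustration, adding a (1, 3) row vector to a (2, 3) matrix broadcasts the vector over both rows (shapes chosen arbitrarily):

A = np.array([[1, 2, 3],
              [4, 5, 6]])     # shape (2, 3)
b = np.array([[10, 20, 30]])  # shape (1, 3)
print(A + b)                  # b is stretched to (2, 3) before the addition
# [[11 22 33]
#  [14 25 36]]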

Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.

Instructions: for a matrix x with one example per row, exponentiate every entry and then divide each row by its sum, so that every row of the result sums to 1:

\text{softmax}(x)_{ij} = \frac{e^{x_{ij}}}{\sum_{k} e^{x_{ik}}}

def softmax(x):
    """
    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n, m)
    """
    x_exp = np.exp(x)
    # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis=1, keepdims=True).
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    # Broadcasting divides every entry of x_exp by its row's sum.
    s = x_exp / x_sum
    return s

x = np.array([[9, 2, 5, 0, 0],
              [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))

**What you need to remember:**

  • np.exp(x) works for any np.array x and applies the exponential function to every coordinate
  • the sigmoid function and its gradient
  • image2vector is commonly used in deep learning
  • np.reshape is widely used; keeping your matrix/vector dimensions straight goes a long way toward eliminating bugs
  • numpy has efficient built-in functions
  • broadcasting is extremely useful

2 - Vectorization

2.1 - Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

Reminder:

  • The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions \hat{y} are from the true values y. In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
  • L1 loss is defined as:

L_1(\hat{y}, y) = \sum_{i=0}^{m}|y^{(i)} - \hat{y}^{(i)}|

def L1(yhat, y):
    """Compute the L1 loss between the predictions yhat and the true values y."""
    loss = np.sum(np.abs(yhat - y))
    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat, y)))

L2 loss is defined as:

L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2

def L2(yhat, y):
    """Compute the L2 loss as the dot product of the error vector with itself."""
    loss = np.dot(y - yhat, y - yhat)
    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat, y)))

**What to remember:**

  • Vectorization is very important in deep learning. It provides computational efficiency and clarity.
  • You have reviewed the L1 and L2 loss.
  • You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc.

3 - Recommended reading

  • A hands-on Numpy tutorial in Python (1): basics
  • A hands-on Numpy tutorial in Python (2): advanced linear algebra
  • The NumPy reference manual
