12. Deep Learning Exercise: Residual Networks (destined to become a classic)

This article is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization; thanks are due to the course authors.

Course link: https://www.deeplearning.ai/deep-learning-specialization/

目录

1 - The problem of very deep neural networks

2 - Building a Residual Network

2.1 - The identity block

2.2 - The convolutional block

3 - Building your first ResNet model (50 layers)

References


Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by He et al., allow you to train much deeper networks than were previously practically feasible.

In this assignment, you will:

  • Implement the basic building blocks of ResNets.
  • Put together these building blocks to implement and train a state-of-the-art neural network for image classification.

This assignment will be done in Keras.

Before jumping into the problem, let's run the cell below to load the required packages.

import numpy as np
import tensorflow as tf
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

1 - The problem of very deep neural networks

Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.

The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).

During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds.
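
As a rough illustration of this effect (not part of the original assignment), the toy NumPy sketch below repeatedly multiplies a gradient vector by the same deliberately small weight matrix, mimicking backpropagation through many linear layers; the 0.5 scale factor is an illustrative choice, chosen only so that the shrinkage is easy to see.

import numpy as np

np.random.seed(0)
n_layers, dim = 50, 64
W = 0.5 * np.random.randn(dim, dim) / np.sqrt(dim)   # deliberately small weights (illustrative choice)
grad = np.random.randn(dim)

for layer in range(n_layers, 0, -1):
    grad = W.T.dot(grad)                              # one backprop step through a linear layer
    if layer % 10 == 0:
        print("layer %2d: ||grad|| = %.2e" % (layer, np.linalg.norm(grad)))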


2 - Building a Residual Network

In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:

The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.

We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
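
Written out in the lecture's notation (the exact symbols here are my assumption about that notation), the residual block computes roughly the following, which shows why the identity is easy to learn: if the weights and biases of the skipped layers shrink toward zero, the output collapses to g(a^{[l]}) = a^{[l]} for a ReLU applied to non-negative activations.

a^{[l+2]} = g(z^{[l+2]} + a^{[l]}) = g(W^{[l+2]} a^{[l+1]} + b^{[l+2]} + a^{[l]})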

Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.


2.1 - The identity block

The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say a^{[l]}) has the same dimension as the output activation (say a^{[l+2]}). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:

The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!

In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:

Here are the individual steps.

First component of main path:

  • The first CONV2D has F_1 filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2a'. Use 0 as the seed for the random initialization.
  • The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
  • Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:

  • The second CONV2D has F_2 filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be conv_name_base + '2b'. Use 0 as the seed for the random initialization.
  • The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
  • Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:

  • The third CONV2D has F_3 filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2c'. Use 0 as the seed for the random initialization.
  • The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.

Final step:

  • The shortcut and the input are added together.
  • Then apply the ReLU activation function. This has no name and no hyperparameters.

Exercise: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.

  • To implement the Conv2D step: See reference
  • To implement BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the channels axis))
  • For the activation, use: Activation('relu')(X)
  • To add the value passed forward by the shortcut: See reference
def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = layers.add([X, X_shortcut])
    X = Activation('relu')(X)

    return X
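
As a quick sanity check (a sketch under my own assumptions, not the assignment's grading cell), you can wrap the block in a tiny Keras Model and confirm that the output shape matches the input shape, since the identity block preserves dimensions:

# Minimal shape check for identity_block (illustrative filter sizes)
X_in = Input(shape=(4, 4, 6))
X_out = identity_block(X_in, f=2, filters=[2, 4, 6], stage=1, block='a')
check_model = Model(inputs=X_in, outputs=X_out)
print(check_model.predict(np.random.randn(3, 4, 4, 6)).shape)   # expected: (3, 4, 4, 6)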

2.2 - The convolutional block

You've implemented the ResNet identity block. Next, the ResNet "convolutional block" is the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:

The CONV2D layer in the shortcut path is used to resize the input x to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix W_s discussed in lecture.) For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.

The details of the convolutional block are as follows.

First component of main path:

  • The first CONV2D has F_1 filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be conv_name_base + '2a'.
  • The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
  • Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:

  • The second CONV2D has F_2 filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be conv_name_base + '2b'.
  • The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
  • Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:

  • The third CONV2D has F_3 filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2c'.
  • The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.

Shortcut path:

  • The CONV2D has F_3 filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be conv_name_base + '1'.
  • The BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '1'.

Final step:

  • The shortcut and the main path values are added together.
  • Then apply the ReLU activation function. This has no name and no hyperparameters.

Exercise: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.

  • Conv Hint
  • BatchNorm Hint (axis: Integer, the axis that should be normalized (typically the features axis))
  • For the activation, use: Activation('relu')(X)
  • Addition Hint
def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(F1, (1, 1), strides=(s, s), name=conv_name_base + '2a', padding='valid', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component of main path (≈3 lines)
    X = Conv2D(F2, (f, f), strides=(1, 1), name=conv_name_base + '2b', padding='same', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(F3, (1, 1), strides=(1, 1), name=conv_name_base + '2c', padding='valid', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    ##### SHORTCUT PATH #### (≈2 lines)
    X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), name=conv_name_base + '1', padding='valid', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = layers.add([X, X_shortcut])
    X = Activation('relu')(X)

    return X
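
Again as a hedged sanity check (a sketch, not the grading cell), the snippet below confirms that with s=2 the block halves the spatial dimensions and sets the channel count to F3:

# Minimal shape check for convolutional_block (illustrative filter sizes)
X_in = Input(shape=(4, 4, 6))
X_out = convolutional_block(X_in, f=2, filters=[2, 4, 6], stage=1, block='a', s=2)
check_model = Model(inputs=X_in, outputs=X_out)
print(check_model.predict(np.random.randn(3, 4, 4, 6)).shape)   # expected: (3, 2, 2, 6)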

3 - Building your first ResNet model (50 layers)

You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.

The details of this ResNet-50 model are:

  • Zero-padding pads the input with a pad of (3,3)
  • Stage 1:
    • The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
    • BatchNorm is applied to the channels axis of the input.
    • MaxPooling uses a (3,3) window and a (2,2) stride.
  • Stage 2:
    • The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
    • The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
  • Stage 3:
    • The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
    • The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
  • Stage 4:
    • The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
    • The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
  • Stage 5:
    • The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
    • The 2 identity blocks use three sets of filters of size [256, 256, 2048], "f" is 3 and the blocks are "b" and "c".
  • The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
  • The flatten doesn't have any hyperparameters or name.
  • The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be 'fc' + str(classes).

Exercise: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.

You'll need to use this function:

  • Average pooling see reference

Here are some other functions used in the code below:

  • Conv2D: See reference
  • BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the features axis))
  • Zero padding: See reference
  • Max pooling: See reference
  • Fully connected layer: See reference
  • Addition: See reference
def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    ### START CODE HERE ###

    # Stage 3 (≈4 lines): conv block [128,128,512] with s=2, then 3 identity blocks
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')

    # Stage 4 (≈6 lines): conv block [256,256,1024] with s=2, then 5 identity blocks
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='b')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='c')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='d')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='e')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='f')

    # Stage 5 (≈3 lines): conv block, then 2 identity blocks
    # The text lists [256, 256, 2048] for the conv block, but the grader expects [512, 512, 2048].
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, f=3, filters=[256, 256, 2048], stage=5, block='b')
    X = identity_block(X, f=3, filters=[256, 256, 2048], stage=5, block='c')

    # AVGPOOL (≈1 line): a (2,2) window named "avg_pool"
    X = AveragePooling2D(pool_size=(2, 2), name='avg_pool')(X)

    ### END CODE HERE ###

    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X)

    # Create model
    model = Model(inputs=X_input, outputs=X, name='ResNet50')

    return model
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
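
Before training, the assignment loads the SIGNS dataset and one-hot encodes the labels. The sketch below assumes the load_dataset and convert_to_one_hot helpers behave as in the course's resnets_utils module (imported with * above); treat it as a sketch rather than the exact grading cell.

# Load the SIGNS dataset (helpers assumed to come from resnets_utils)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize pixel values to [0, 1]
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.

# Convert labels to one-hot matrices of shape (m, 6)
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))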

Run the following cell to train your model for 2 epochs with a batch size of 32. On a CPU it should take around 5 minutes per epoch (note the training time).

model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
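
After training, you can evaluate the model on the test set; this snippet assumes X_test and Y_test were prepared as in the data-loading sketch above, with accuracy reported because the model was compiled with metrics=['accuracy'].

preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))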

**What you should remember:**

  • Very deep "plain" networks don't work in practice because they are hard to train due to vanishing gradients.
  • The skip connections help address the vanishing gradient problem. They also make it easy for a ResNet block to learn an identity function.
  • There are two main types of blocks: the identity block and the convolutional block.
  • Very deep Residual Networks are built by stacking these blocks together.

References

This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also takes significant inspiration from, and follows the structure of, the GitHub repository of Francois Chollet:

  • Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - Deep Residual Learning for Image Recognition (2015)
  • Francois Chollet's GitHub repository: fchollet/deep-learning-models (resnet50.py), https://github.com/fchollet/deep-learning-models
