16. Deep Learning Exercise: Building your Recurrent Neural Network - Step by Step

This article is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization, with thanks to the course authors. Course link: https://www.deeplearning.ai/deep-learning-specialization/

Building your Recurrent Neural Network - Step by Step

    • 1 - Forward propagation for the basic Recurrent Neural Network
      • 1.1 - RNN cell
      • 1.2 - RNN forward pass
    • 2 - Long Short-Term Memory (LSTM) network
        • - Forget gate
        • - Update gate
        • - Updating the cell
        • - Output gate
      • 2.1 - LSTM cell
      • 2.2 - Forward pass for LSTM

Recurrent Neural Networks (RNNs) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
Notation:

  • Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.

    • Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
  • Superscript $(i)$ denotes an object associated with the $i^{th}$ example.

    • Example: $x^{(i)}$ is the $i^{th}$ training example input.
  • Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.

    • Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ time-step of example $i$.
  • Subscript $i$ denotes the $i^{th}$ entry of a vector.

    • Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.

Let’s first import all the packages that you will need during this assignment.

import numpy as np
from rnn_utils import *
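
The helper functions sigmoid and softmax used in the code below are provided by rnn_utils, whose source is not reproduced in this post. As a rough sketch of what they are assumed to look like (a column-wise softmax over the first axis):

def sigmoid(x):
    # Element-wise logistic function; assumed to match the version provided in rnn_utils
    return 1 / (1 + np.exp(-x))

def softmax(x):
    # Column-wise softmax (normalizes over axis 0); assumed to match the version in rnn_utils
    e_x = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e_x / e_x.sum(axis=0, keepdims=True)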

1 - Forward propagation for the basic Recurrent Neural Network

Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
Figure 1: Basic RNN model
Here’s how you can implement an RNN:
Steps:

  1. Implement the calculations needed for one time-step of the RNN.
  2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time.

1.1 - RNN cell

A Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
Figure 2: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t-1 \rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$, which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$.
Exercise: Implement the RNN-cell described in Figure (2).

Instructions:

  1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
  2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provide you with the function softmax.
  3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache.
  4. Return $a^{\langle t \rangle}$, $y^{\langle t \rangle}$ and cache.

We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x, m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a, m)$.
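
To see how the matrix products line up: $W_{ax}$ has shape $(n_a, n_x)$, so $W_{ax}x^{\langle t \rangle}$ is $(n_a, m)$; $W_{aa}$ has shape $(n_a, n_a)$, so $W_{aa}a^{\langle t-1 \rangle}$ is also $(n_a, m)$; the bias $b_a$ of shape $(n_a, 1)$ broadcasts across the $m$ columns, giving $a^{\langle t \rangle}$ of shape $(n_a, m)$. Likewise $W_{ya}$ has shape $(n_y, n_a)$, so $\hat{y}^{\langle t \rangle}$ has shape $(n_y, m)$. (These shapes match the docstring of the function below.)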

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    # Compute next activation state using the formula given above
    a_next = np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)
    # Compute output of the current cell using the formula given above
    yt_pred = softmax(np.dot(Wya, a_next) + by)

    # Store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)

1.2 - RNN forward pass

You can see an RNN as the repetition of the cell you’ve just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step’s input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.
Figure 3: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$.
Exercise: Code the forward propagation of the RNN described in Figure (3).

Instructions:

  1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
  2. Initialize the “next” hidden state as $a_0$ (the initial hidden state).
  3. Start looping over each time step, your incremental index is $t$:
    • Update the “next” hidden state and the cache by running rnn_cell_forward
    • Store the “next” hidden state in $a$ ($t^{th}$ position)
    • Store the prediction in $y$
    • Add the cache to the list of caches
  4. Return $a$, $y$ and caches
def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """

    # Initialize "caches" which will contain the list of all caches
    caches = []

    # Retrieve dimensions from shapes of x and Wya
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    # Initialize "a" and "y_pred" with zeros (≈2 lines)
    a = np.zeros([n_a, m, T_x])
    y_pred = np.zeros([n_y, m, T_x])

    # Initialize a_next (≈1 line)
    a_next = a0

    # Loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y_pred (≈1 line)
        y_pred[:, :, t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)

    # Store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))

2 - Long Short-Term Memory (LSTM) network

The following figure shows the operations of an LSTM cell.
Figure 4: LSTM cell. It tracks and updates a “cell state” or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$.
Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps.

About the gates

- Forget gate

For the sake of this illustration, let’s assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:

$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1}$$

Here, $W_f$ are weights that govern the forget gate’s behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0), it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information. (A combined numpy sketch of all the gate equations appears right after the output gate below.)

- Update gate

Once we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate:

$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2}$$

Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.

- Updating the cell

To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:

$$\tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3}$$

Finally, the new cell state is:

$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} * c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}\tag{4}$$

- Output gate

To decide which outputs to use, we apply the following two formulas:

$$\Gamma_o^{\langle t \rangle} = \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$
$$a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} * \tanh(c^{\langle t \rangle})\tag{6}$$

In equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that element-wise by the $\tanh$ of the updated cell state $c^{\langle t \rangle}$.
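
Before writing the graded function, here is a minimal numpy sketch of equations (1)-(6) on toy shapes. It assumes random parameters and the sigmoid helper sketched near the top of the post; the variable names (gamma_f, gamma_u, gamma_o, c_tilde) are purely illustrative, while the graded function below uses ft/it/cct/ot and also computes the prediction and the cache.

# Minimal sketch of equations (1)-(6) on toy shapes; not the graded lstm_cell_forward
np.random.seed(0)
n_a, n_x, m = 4, 3, 2
a_prev = np.random.randn(n_a, m)
c_prev = np.random.randn(n_a, m)
xt = np.random.randn(n_x, m)
Wf, Wu, Wc, Wo = (np.random.randn(n_a, n_a + n_x) for _ in range(4))
bf, bu, bc, bo = (np.random.randn(n_a, 1) for _ in range(4))

concat = np.concatenate([a_prev, xt], axis=0)     # stack [a<t-1>, x<t>]
gamma_f = sigmoid(np.dot(Wf, concat) + bf)        # forget gate, eq (1)
gamma_u = sigmoid(np.dot(Wu, concat) + bu)        # update gate, eq (2)
c_tilde = np.tanh(np.dot(Wc, concat) + bc)        # candidate value, eq (3)
c_next = gamma_f * c_prev + gamma_u * c_tilde     # new cell state, eq (4)
gamma_o = sigmoid(np.dot(Wo, concat) + bo)        # output gate, eq (5)
a_next = gamma_o * np.tanh(c_next)                # new hidden state, eq (6)
print(a_next.shape, c_next.shape)                 # (4, 2) (4, 2)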


2.1 - LSTM cell

Exercise: Implement the LSTM cell described in Figure (4).

Instructions:

  1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
  2. Compute all the formulas 1-6. You can use sigmoid() (provided) and np.tanh().
  3. Compute the prediction $y^{\langle t \rangle}$. You can use softmax() (provided).
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell as described in Figure (4)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    c_next -- next memory state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)

    Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
    c stands for the memory value
    """

    # Retrieve parameters from "parameters"
    Wf = parameters["Wf"]
    bf = parameters["bf"]
    Wi = parameters["Wi"]
    bi = parameters["bi"]
    Wc = parameters["Wc"]
    bc = parameters["bc"]
    Wo = parameters["Wo"]
    bo = parameters["bo"]
    Wy = parameters["Wy"]
    by = parameters["by"]

    # Retrieve dimensions from shapes of xt and Wy
    n_x, m = xt.shape
    n_y, n_a = Wy.shape

    # Concatenate a_prev and xt (≈3 lines)
    concat = np.zeros([n_a + n_x, m])
    concat[:n_a, :] = a_prev    # a(t-1)
    concat[n_a:, :] = xt        # x(t)

    # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given in figure (4) (≈6 lines)
    ft = sigmoid(np.dot(Wf, concat) + bf)
    it = sigmoid(np.dot(Wi, concat) + bi)
    cct = np.tanh(np.dot(Wc, concat) + bc)
    c_next = ft * c_prev + it * cct
    ot = sigmoid(np.dot(Wo, concat) + bo)
    a_next = ot * np.tanh(c_next)

    # Compute prediction of the LSTM cell (≈1 line)
    yt_pred = softmax(np.dot(Wy, a_next) + by)

    # Store values needed for backward propagation in cache
    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))

2.2 - Forward pass for LSTM

Now that you have implemented one step of an LSTM, you can iterate it over $T_x$ time-steps using a for-loop to process a sequence of inputs.
Figure 5: LSTM over multiple time-steps.
Exercise: Implement lstm_forward() to run an LSTM over $T_x$ time-steps.
Note: $c^{\langle 0 \rangle}$ is initialized with zeros.

def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (5).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
    """

    # Initialize "caches", which will track the list of all the caches
    caches = []

    # Retrieve dimensions from shapes of x and Wy (≈2 lines)
    n_x, m, T_x = x.shape
    n_y, n_a = parameters['Wy'].shape

    # Initialize "a", "c" and "y" with zeros (≈3 lines)
    a = np.zeros([n_a, m, T_x])
    c = np.zeros([n_a, m, T_x])
    y = np.zeros([n_y, m, T_x])

    # Initialize a_next and c_next (≈2 lines)
    a_next = a0
    c_next = np.zeros([n_a, m])

    # Loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y[:, :, t] = yt
        # Save the value of the next cell state (≈1 line)
        c[:, :, t] = c_next
        # Append the cache into caches (≈1 line)
        caches.append(cache)

    # Store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
