This article is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization, with thanks to the course. Course link: https://www.deeplearning.ai/deep-learning-specialization/
Building your Recurrent Neural Network - Step by Step
- 1 - Forward propagation for the basic Recurrent Neural Network
- 1.1 - RNN cell
- 1.2 - RNN forward pass
- 2 - Long Short-Term Memory (LSTM) network
  - Forget gate
  - Update gate
  - Updating the cell
  - Output gate
- 2.1 - LSTM cell
- 2.2 - Forward pass for LSTM
Recurrent Neural Networks (RNNs) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden-layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
Notation:
Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.

Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example input.

Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.
- Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ time-step of example $i$.

Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.
Let’s first import all the packages that you will need during this assignment.
import numpy as np
from rnn_utils import *
1 - Forward propagation for the basic Recurrent Neural Network
Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
Figure 1: Basic RNN model
Here’s how you can implement an RNN:
Steps:
- Implement the calculations needed for one time-step of the RNN.
- Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time.
1.1 - RNN cell
A recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
Figure 2: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t-1 \rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$, which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$.
Exercise: Implement the RNN-cell described in Figure (2).
Instructions:
- Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
- Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = \mathrm{softmax}(W_{ya} a^{\langle t \rangle} + b_y)$. We provide you a softmax function.
- Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache.
- Return $a^{\langle t \rangle}$, $y^{\langle t \rangle}$ and cache.

We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x, m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a, m)$.
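The implementation below relies on a softmax helper imported from rnn_utils. The contents of rnn_utils are not reproduced in this excerpt, so here is a minimal stand-in sketch (a column-wise softmax, one column per example); the assignment's own version may differ in details.

import numpy as np

def softmax(x):
    # Column-wise softmax: each column of x is one example.
    # Subtracting the per-column max keeps the exponentials numerically stable.
    e_x = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e_x / np.sum(e_x, axis=0, keepdims=True)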
def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    # compute next activation state using the formula given above
    a_next = np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)
    # compute output of the current cell using the formula given above
    yt_pred = softmax(np.dot(Wya, a_next) + by)

    # store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
1.2 - RNN forward pass
You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time-steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.
Figure 3: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time-steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$.
Exercise: Code the forward propagation of the RNN described in Figure (3).
Instructions:
- Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
- Initialize the "next" hidden state as $a_0$ (initial hidden state).
- Start looping over each time-step, where your incremental index is $t$:
  - Update the "next" hidden state and the cache by running rnn_cell_forward
  - Store the "next" hidden state in $a$ ($t^{th}$ position)
  - Store the prediction in y
  - Add the cache to the list of caches
- Return $a$, $y$ and caches
def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """

    # Initialize "caches" which will contain the list of all caches
    caches = []

    # Retrieve dimensions from shapes of x and Wya
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    # initialize "a" and "y_pred" with zeros (≈2 lines)
    a = np.zeros([n_a, m, T_x])
    y_pred = np.zeros([n_y, m, T_x])

    # Initialize a_next (≈1 line)
    a_next = a0

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y_pred[:, :, t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
2 - Long Short-Term Memory (LSTM) network
The following figure shows the operations of an LSTM cell.
Figure 4: LSTM cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$.
Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps.
About the gates
- Forget gate
For the sake of this illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:
$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1}$$
Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0), it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information.
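As a concrete illustration of equation (1), here is a small numpy sketch of just the forget gate, using made-up dimensions (n_a = 5, n_x = 3, m = 10) and random parameters; the full lstm_cell_forward implementation below computes the same quantity under the name ft.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n_a, n_x, m = 5, 3, 10                    # illustrative sizes, not from the assignment
a_prev = np.random.randn(n_a, m)          # previous hidden state a<t-1>
xt = np.random.randn(n_x, m)              # current input x<t>
c_prev = np.random.randn(n_a, m)          # previous cell state (reused in the sketches below)
Wf = np.random.randn(n_a, n_a + n_x)      # forget-gate weights (random, for illustration)
bf = np.random.randn(n_a, 1)

concat = np.concatenate((a_prev, xt), axis=0)   # stack a<t-1> on top of x<t>, shape (n_a + n_x, m)
ft = sigmoid(np.dot(Wf, concat) + bf)           # equation (1): gate values in (0, 1), shape (n_a, m)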
- Update gate
Once we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate:
$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2}$$
Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.
- Updating the cell
To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:
$$\tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3}$$
Finally, the new cell state is:
$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} * c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}\tag{4}$$
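Continuing the forget-gate sketch above (it reuses concat, ft and c_prev defined there, again with random parameters for illustration), equations (2)-(4) become:

Wi = np.random.randn(n_a, n_a + n_x)      # update-gate weights (W_u in equation (2), Wi in the code below)
bi = np.random.randn(n_a, 1)
Wc = np.random.randn(n_a, n_a + n_x)      # candidate-value weights
bc = np.random.randn(n_a, 1)

it = sigmoid(np.dot(Wi, concat) + bi)     # equation (2): update gate
cct = np.tanh(np.dot(Wc, concat) + bc)    # equation (3): candidate value c~<t>
c_next = ft * c_prev + it * cct           # equation (4): forget part of c<t-1>, add part of the candidate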
- Output gate
To decide what to output, we use the following two formulas:
$$\Gamma_o^{\langle t \rangle} = \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$
$$a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} * \tanh(c^{\langle t \rangle})\tag{6}$$
In equation (5) you decide what to output using a sigmoid function, and in equation (6) you multiply that by the $\tanh$ of the newly updated cell state $c^{\langle t \rangle}$.
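Still continuing the same sketch (reusing concat and c_next from above), equations (5)-(6) give the output gate and the new hidden state; these are the same computations that lstm_cell_forward performs below, just written out step by step.

Wo = np.random.randn(n_a, n_a + n_x)      # output-gate weights (random, for illustration)
bo = np.random.randn(n_a, 1)

ot = sigmoid(np.dot(Wo, concat) + bo)     # equation (5): output gate
a_next = ot * np.tanh(c_next)             # equation (6): new hidden state a<t>, shape (n_a, m)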
2.1 - LSTM cell
Exercise: Implement the LSTM cell described in Figure (4).
Instructions:
- Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
- Compute all of formulas (1)-(6). You can use sigmoid() (provided) and np.tanh().
- Compute the prediction $y^{\langle t \rangle}$. You can use softmax() (provided).
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell as described in Figure (4)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    c_next -- next memory state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
    c stands for the memory value
    """

    # Retrieve parameters from "parameters"
    Wf = parameters["Wf"]
    bf = parameters["bf"]
    Wi = parameters["Wi"]
    bi = parameters["bi"]
    Wc = parameters["Wc"]
    bc = parameters["bc"]
    Wo = parameters["Wo"]
    bo = parameters["bo"]
    Wy = parameters["Wy"]
    by = parameters["by"]

    # Retrieve dimensions from shapes of xt and Wy
    n_x, m = xt.shape
    n_y, n_a = Wy.shape

    # Concatenate a_prev and xt (≈3 lines)
    concat = np.zeros([n_a + n_x, m])
    concat[:n_a, :] = a_prev   # a(t-1)
    concat[n_a:, :] = xt       # x(t)

    # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given in figure (4) (≈6 lines)
    ft = sigmoid(np.dot(Wf, concat) + bf)
    it = sigmoid(np.dot(Wi, concat) + bi)
    cct = np.tanh(np.dot(Wc, concat) + bc)
    c_next = ft * c_prev + it * cct
    ot = sigmoid(np.dot(Wo, concat) + bo)
    a_next = ot * np.tanh(c_next)

    # Compute prediction of the LSTM cell (≈1 line)
    yt_pred = softmax(np.dot(Wy, a_next) + by)

    # store values needed for backward propagation in cache
    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
2.2 - Forward pass for LSTM
Now that you have implemented one step of an LSTM, you can iterate it over a sequence of $T_x$ inputs using a for-loop.
Figure 5: LSTM over multiple time-steps.
Exercise: Implement lstm_forward() to run an LSTM over $T_x$ time-steps.
Note: $c^{\langle 0 \rangle}$ is initialized with zeros.
def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    c -- Memory states for every time-step, numpy array of shape (n_a, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
    """

    # Initialize "caches", which will track the list of all the caches
    caches = []

    # Retrieve dimensions from shapes of x and Wy (≈2 lines)
    n_x, m, T_x = x.shape
    n_y, n_a = parameters['Wy'].shape

    # initialize "a", "c" and "y" with zeros (≈3 lines)
    a = np.zeros([n_a, m, T_x])
    c = np.zeros([n_a, m, T_x])
    y = np.zeros([n_y, m, T_x])

    # Initialize a_next and c_next (≈2 lines)
    a_next = a0
    c_next = np.zeros([n_a, m])

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y[:, :, t] = yt
        # Save the value of the next cell state (≈1 line)
        c[:, :, t] = c_next
        # Append the cache into caches (≈1 line)
        caches.append(cache)

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))