7. Deep Learning Exercise: Regularization

This post is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization, with thanks to the course authors.

Course link: https://www.deeplearning.ai/deep-learning-specialization/

Contents

1 - Package

2 - Non-regularized model

3 - L2 Regularization (must master)

4 - Dropout (must master)

4.1 - Forward propagation with dropout

4.2 - Backward propagation with dropout


1 - Package

import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

train_X, train_Y, test_X, test_Y = load_2D_dataset()

Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.

Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.

  • If the dot is blue, it means the French player managed to hit the ball with his/her head
  • If the dot is red, it means the other team's player hit the ball with their head

Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.


2 - Non-regularized model

You will use the following neural network (already implemented for you below). This model can be used:

  • in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of "lambda" because "lambda" is a reserved keyword in Python.
  • in dropout mode -- by setting the keep_prob to a value less than one

You will first try the model without any regularization. Then, you will implement:

  • L2 regularization -- functions: "compute_cost_with_regularization()" and "backward_propagation_with_regularization()"
  • Dropout -- functions: "forward_propagation_with_dropout()" and "backward_propagation_with_dropout()"

In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.

def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
    learning_rate -- learning rate of the optimization
    num_iterations -- number of iterations of the optimization loop
    print_cost -- If True, print the cost every 10000 iterations
    lambd -- regularization hyperparameter, scalar
    keep_prob - probability of keeping a neuron active during drop-out, scalar.

    Returns:
    parameters -- parameters learned by the model. They can then be used to predict.
    """

    grads = {}
    costs = []                            # to keep track of the cost
    m = X.shape[1]                        # number of examples
    layers_dims = [X.shape[0], 20, 3, 1]

    # Initialize parameters dictionary.
    parameters = initialize_parameters(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        if keep_prob == 1:
            a3, cache = forward_propagation(X, parameters)
        elif keep_prob < 1:
            a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)

        # Cost function
        if lambd == 0:
            cost = compute_cost(a3, Y)
        else:
            cost = compute_cost_with_regularization(a3, Y, parameters, lambd)

        # Backward propagation.
        assert(lambd == 0 or keep_prob == 1)   # it is possible to use both L2 regularization and dropout,
                                               # but this assignment will only explore one at a time
        if lambd == 0 and keep_prob == 1:
            grads = backward_propagation(X, Y, cache)
        elif lambd != 0:
            grads = backward_propagation_with_regularization(X, Y, cache, lambd)
        elif keep_prob < 1:
            grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the loss every 10000 iterations
        if print_cost and i % 10000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
        if print_cost and i % 1000 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (x1,000)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
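For reference, a minimal sketch of how this model is invoked in each part of the assignment; the lambd and keep_prob values shown here are illustrative choices, not values prescribed by this post:

# Baseline, no regularization
parameters = model(train_X, train_Y)

# With L2 regularization (the lambd value is an illustrative choice)
parameters = model(train_X, train_Y, lambd = 0.7)

# With dropout (the keep_prob value is an illustrative choice)
parameters = model(train_X, train_Y, keep_prob = 0.86)

# Evaluate with the helper imported from reg_utils
predictions_train = predict(train_X, train_Y, parameters)
predictions_test = predict(test_X, test_Y, parameters)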


3 - L2 Regularization(掌握)

The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:

                                      J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)}    (1)

To:

                        J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost}    (2)

Let's modify your cost and observe the consequences.

Exercise: Implement compute_cost_with_regularization(), which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$, use np.sum(np.square(Wl)).

Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $\frac{1}{m} \frac{\lambda}{2}$.

def compute_cost_with_regularization(A3, Y, parameters, lambd):
    """
    Implement the cost function with L2 regularization. See formula (2) above.

    Arguments:
    A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    parameters -- python dictionary containing parameters of the model

    Returns:
    cost - value of the regularized loss function (formula (2))
    """
    m = Y.shape[1]
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    W3 = parameters["W3"]

    cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost

    L2_regularization_cost = 1./m * lambd/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))

    cost = cross_entropy_cost + L2_regularization_cost

    return cost
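As a quick sanity check, here is a minimal sketch that calls the function on made-up inputs; the shapes, values and lambd are arbitrary illustrations, not the assignment's official test case:

np.random.seed(0)
# Only the weight matrices are read by compute_cost_with_regularization
toy_parameters = {"W1": np.random.randn(20, 2),
                  "W2": np.random.randn(3, 20),
                  "W3": np.random.randn(1, 3)}
toy_A3 = np.array([[0.9, 0.1, 0.8]])   # hypothetical network outputs
toy_Y  = np.array([[1, 0, 1]])         # hypothetical labels
print(compute_cost_with_regularization(toy_A3, toy_Y, toy_parameters, lambd = 0.1))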

Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.

Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient $\frac{d}{dW} \left( \frac{1}{2}\frac{\lambda}{m} W^2 \right) = \frac{\lambda}{m} W$.

# GRADED FUNCTION: backward_propagation_with_regularization

def backward_propagation_with_regularization(X, Y, cache, lambd):
    """
    Implements the backward propagation of our baseline model to which we added an L2 regularization.

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation()
    lambd -- regularization hyperparameter, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T) + lambd / m * W3
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T) + lambd / m * W2
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T) + lambd / m * W1
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients

**What is L2-regularization actually doing?**

L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.

**What you should remember** -- the implications of L2-regularization on:

  • The cost computation: a regularization term is added to the cost.
  • The backpropagation function: there are extra terms in the gradients with respect to the weight matrices.
  • Weights end up smaller ("weight decay"): weights are pushed to smaller values.
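To see where the name "weight decay" comes from, write out the gradient descent update for a weight matrix with the extra L2 term included (a standard derivation, not code from this assignment):

                        W^{[l]} := W^{[l]} - \alpha \left( dW_{\text{cross-entropy}}^{[l]} + \frac{\lambda}{m} W^{[l]} \right) = \left( 1 - \frac{\alpha \lambda}{m} \right) W^{[l]} - \alpha \, dW_{\text{cross-entropy}}^{[l]}

Since $\left( 1 - \frac{\alpha \lambda}{m} \right) < 1$, every update multiplicatively shrinks ("decays") the weights a little before taking the usual cross-entropy gradient step.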


4 - Dropout (must master)

Finally, dropout is a widely used regularization technique that is specific to deep learning. It randomly shuts down some neurons in each iteration.

4.1 - Forward propagation with dropout

Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.

Instructions: You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:

  1. In lecture, we discussed creating a variable $d^{[l]}$ with the same shape as $a^{[l]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.
  2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: X = (X < 0.5). Note that 0 and 1 are respectively equivalent to False and True.
  3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons.) You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
  4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
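Before the graded function below, here is a minimal sketch of these four steps applied to a toy activation matrix (the array values and keep_prob are made up purely for illustration):

np.random.seed(1)
A1_toy = np.array([[0.5, 1.2, 0.0],
                   [2.0, 0.3, 1.5]])                       # hypothetical post-ReLU activations
keep_prob = 0.8

D1_toy = np.random.rand(A1_toy.shape[0], A1_toy.shape[1])  # Step 1: uniform random numbers in [0, 1)
D1_toy = (D1_toy < keep_prob)                              # Step 2: 1 with probability keep_prob, else 0
A1_toy = A1_toy * D1_toy                                   # Step 3: shut down the masked neurons
A1_toy = A1_toy / keep_prob                                # Step 4: rescale so the expected value is unchanged
print(A1_toy)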
# GRADED FUNCTION: forward_propagation_with_dropout

def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    """
    Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (20, 2)
                    b1 -- bias vector of shape (20, 1)
                    W2 -- weight matrix of shape (3, 20)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)
    keep_prob - probability of keeping a neuron active during drop-out, scalar

    Returns:
    A3 -- last activation value, output of the forward propagation, of shape (1,1)
    cache -- tuple, information stored for computing the backward propagation
    """
    np.random.seed(1)

    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)          # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(A1.shape[0], A1.shape[1])      # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = (D1 < keep_prob)                              # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = A1 * D1                                       # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob                                # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(A2.shape[0], A2.shape[1])      # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = (D2 < keep_prob)                              # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = A2 * D2                                       # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob                                # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache

4.2 - Backward propagation with dropout

Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.

Instruction: Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:

  1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to $A^{[1]}$. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to $dA^{[1]}$.
  2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).
# GRADED FUNCTION: backward_propagation_with_dropout

def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """
    Implements the backward propagation of our baseline model to which we added dropout.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation_with_dropout()
    keep_prob - probability of keeping a neuron active during drop-out, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
    dA2 = np.dot(W3.T, dZ3)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA2 = dA2 * D2               # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
    dA2 = dA2 / keep_prob        # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)

    dA1 = np.dot(W2.T, dZ2)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA1 = dA1 * D1               # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
    dA1 = dA1 / keep_prob        # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients

Note:

  • A common mistake when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
  • Deep learning frameworks like tensorflow, PaddlePaddle, keras or caffe come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks; a small illustrative sketch follows this list.
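For illustration only, here is a minimal tf.keras sketch of a network with dropout layers. The layer sizes mirror the 20-3-1 network above, but this model and its drop rate are assumptions for the example, not part of the graded assignment:

import tensorflow as tf

# Note: Keras' Dropout takes the *drop* rate, i.e. 1 - keep_prob.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dropout(0.2),          # drops 20% of units, i.e. keep_prob = 0.8
    tf.keras.layers.Dense(3, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Dropout is active only while training (model.fit); it is automatically disabled
# during evaluation and prediction (model.evaluate / model.predict).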

**What you should remember about dropout:**

  • Dropout is a regularization technique.
  • You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
  • Apply dropout both during forward and backward propagation.
  • During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
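The expected-value claim in the last point is easy to check numerically. A small sketch, where the activation values and keep_prob are arbitrary choices for illustration:

np.random.seed(2)
A = np.random.rand(50, 10000)                      # hypothetical activations
keep_prob = 0.5

D = np.random.rand(A.shape[0], A.shape[1]) < keep_prob
A_dropout = (A * D) / keep_prob                    # inverted dropout

print(A.mean())          # mean of the original activations
print(A_dropout.mean())  # very close to the same value, as expected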
