Andrew Ng's Deep Learning Course Assignment: TensorFlow Tutorial (TF2-compatible)

The original TensorFlow 1 code is rewritten against TensorFlow 2.6.0, using the TF2 `tf.compat.v1` compatibility layer for the TF1 APIs.
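Before running the snippets below, it can help to confirm which TensorFlow version is installed. A minimal check (the exact version string will differ per machine):

import tensorflow as tf

print(tf.__version__)  # expected to start with "2.", e.g. 2.6.0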

1 - Exploring the TensorFlow Library

1.1 Importing the Libraries
import math
import h5py
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import convert_to_one_hot, load_dataset, predict, random_mini_batches

%matplotlib inline
np.random.seed(1)
1.2 Computing the Loss
# Disable eager execution to use the TensorFlow 1.x API style
tf.compat.v1.disable_eager_execution()

# Define the variables
y_hat = tf.constant(36, name="y_hat")
y = tf.constant(39, name="y")

# Define the loss
loss = tf.Variable((y - y_hat) ** 2, name="loss")

# Create an op that initializes the variables
init = tf.compat.v1.global_variables_initializer()

# Create a session and run it
with tf.compat.v1.Session() as session:
    session.run(init)               # run the initialization
    loss_value = session.run(loss)  # evaluate the loss
    print(loss_value)               # print the loss value

# Output
# 9
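For contrast, a sketch of the same loss computed in native TF2 eager mode. It assumes a fresh Python process where eager execution has not been disabled; `.numpy()` raises an error after `disable_eager_execution()` has run.

# Native TF2 eager equivalent (run before disabling eager execution)
y_hat = tf.constant(36)
y = tf.constant(39)
loss = (y - y_hat) ** 2
print(loss.numpy())  # 9 -- no session needed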
1.3 Initializing Variables
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a, b)
print(c)

# Output
# Tensor("Mul:0", shape=(), dtype=int32)
1.4 Running a Session
# Use tf.compat.v1.Session() to run the static graph
with tf.compat.v1.Session() as sess:
    # The multiply op above was added to the default graph, so we can run it directly
    result = sess.run(c)
    print(result)

# Output
# 20
1.5 Placeholders and Feeding Values
# Change the value of x in the feed_dict
x = tf.compat.v1.placeholder(tf.int64, name="x")

with tf.compat.v1.Session() as sess:
    # Run the graph, feeding a value for x, and fetch the result
    print(sess.run(2 * x, feed_dict={x: 3}))

# Output
# 6
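In native TF2, the placeholder-plus-feed_dict pattern maps onto a tf.function whose arguments play the placeholder's role. A small sketch, again assuming eager execution is still enabled:

@tf.function
def double(x):
    # the argument x stands in for the placeholder; tracing builds the graph
    return 2 * x

print(double(tf.constant(3, dtype=tf.int64)))  # tf.Tensor(6, shape=(), dtype=int64)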
1.6 Linear Function
# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function:
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns:
    result -- runs the session for Y = WX + b
    """
    np.random.seed(1)

    ### START CODE HERE ### (4 lines of code)
    X = tf.constant(np.random.randn(3, 1), name="X")
    W = tf.constant(np.random.randn(4, 3), name="W")
    b = tf.constant(np.random.randn(4, 1), name="b")
    Y = tf.add(tf.matmul(W, X), b)
    ### END CODE HERE ###

    # Create the session using tf.compat.v1.Session() and run it with sess.run(...) on the variable you want to calculate
    ### START CODE HERE ###
    sess = tf.compat.v1.Session()
    result = sess.run(Y)
    ### END CODE HERE ###

    # Close the session
    sess.close()

    return result

print("result = " + str(linear_function()))

# Output
'''
result = [[-2.15657382]
 [ 2.95891446]
 [-1.08926781]
 [-0.84538042]]
'''
1.7 Computing the Sigmoid
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    result -- the sigmoid of z
    """
    x = tf.compat.v1.placeholder(tf.float32, name="x")
    sigmoid = tf.sigmoid(x)

    with tf.compat.v1.Session() as sess:
        result = sess.run(sigmoid, feed_dict={x: z})

    return result

print("sigmoid(0) = " + str(sigmoid(0)))
print("sigmoid(12) = " + str(sigmoid(12)))

# Output
'''
sigmoid(0) = 0.5
sigmoid(12) = 0.9999939
'''
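A quick numpy cross-check of the two values above, using the closed form sigmoid(z) = 1 / (1 + exp(-z)):

for z in (0.0, 12.0):
    print(1.0 / (1.0 + np.exp(-z)))  # 0.5, then ~0.99999386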
1.8 Computing the Cost
# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
    in the TensorFlow documentation. So logits will feed into z, and labels into y.

    Returns:
    cost -- runs the session of the cost (formula (2))
    """
    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.compat.v1.placeholder(tf.float32, name="z")
    y = tf.compat.v1.placeholder(tf.float32, name="y")

    # Use the loss function (approx. 1 line)
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

    # Create a session (approx. 1 line). See method 1 above.
    with tf.compat.v1.Session() as sess:
        # Initialize variables
        sess.run(tf.compat.v1.global_variables_initializer())

        # Run the session (approx. 1 line)
        cost_value = sess.run(cost, feed_dict={z: logits, y: labels})

    return cost_value

# Test values from the original assignment (they reproduce the output below)
logits = sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
labels = np.array([0, 0, 1, 1])
cost = cost(logits, labels)
print("cost = " + str(cost))

# Output
# cost = [1.0053872  1.0366409  0.41385433 0.39956614]
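Each entry above can be reproduced by hand from the sigmoid cross-entropy formula, cost = -(y * log(sigmoid(z)) + (1 - y) * log(1 - sigmoid(z))). For the first entry, z = sigmoid(0.2) and y = 0:

s = lambda t: 1.0 / (1.0 + np.exp(-t))
print(-np.log(1.0 - s(s(0.2))))  # ~1.005387, the first entry above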
1.9 One-Hot Encoding
# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j had a label i, then entry (i, j)
    will be 1.

    Arguments:
    labels -- vector containing the labels
    C -- number of classes, the depth of the one hot dimension

    Returns:
    one_hot -- one hot matrix
    """
    ### START CODE HERE ###
    # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
    C = tf.constant(value=C, name="C")

    # Use tf.one_hot, be careful with the axis (approx. 1 line)
    one_hot_matrix = tf.one_hot(labels, C, axis=0)

    with tf.compat.v1.Session() as sess:
        one_hot = sess.run(one_hot_matrix)
    ### END CODE HERE ###

    return one_hot

labels = np.array([1, 2, 3, 0, 2, 1])
one_hot = one_hot_matrix(labels, C=4)
print("one_hot = " + str(one_hot))

# Output
'''
one_hot = [[0. 0. 0. 1. 0. 0.]
 [1. 0. 0. 0. 0. 1.]
 [0. 1. 0. 0. 1. 0.]
 [0. 0. 1. 0. 0. 0.]]
'''
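The same matrix can be built in plain numpy, which makes the axis=0 convention explicit (classes along the rows, examples along the columns):

labels = np.array([1, 2, 3, 0, 2, 1])
one_hot_np = np.eye(4)[labels].T  # shape (4, 6): row i is class i, column j is example j
print(one_hot_np)                 # matches the tf.one_hot(..., axis=0) output above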
1.10 Initializing a Vector of Ones
# GRADED FUNCTION: ones

def ones(shape):
    """
    Creates an array of ones of dimension shape

    Arguments:
    shape -- shape of the array you want to create

    Returns:
    ones -- array containing only ones
    """
    ### START CODE HERE ###
    # Create "ones" tensor using tf.ones(...). (approx. 1 line)
    ones = tf.ones(shape)

    with tf.compat.v1.Session() as sess:
        # Run the session (approx. 1 line)
        ones = sess.run(ones)
    ### END CODE HERE ###

    return ones

print("ones = " + str(ones([3])))

# Output
# ones = [1. 1. 1.]

2 - Building your first neural network in TensorFlow

2.1 Loading the Data
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Example of a picture
index = 11
plt.imshow(X_train_orig[index])
print("y = " + str(np.squeeze(Y_train_orig[:, index])))# Output 输出
# y = 1
2.2 Data Preprocessing
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten / 255.0
X_test = X_test_flatten / 255.0
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)

print("number of training examples = " + str(X_train.shape[1]))
print("number of test examples = " + str(X_test.shape[1]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))# Output 输出
'''
number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)
'''
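A small sanity check of the flattened dimension: each 64 x 64 RGB image unrolls into 64 * 64 * 3 = 12288 features, stored as one column per example. This assumes the SIGNS dataset shipped with the course, whose images are 64 x 64 x 3:

assert X_train.shape == (64 * 64 * 3, X_train_orig.shape[0])
assert Y_train.shape == (6, X_train_orig.shape[0])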
2.3 Creating Placeholders
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - Using None lets us stay flexible about the number of examples fed to the placeholders.
      In fact, the number of examples during test/train is different.
    """
    ### START CODE HERE ### (approx. 2 lines)
    X = tf.compat.v1.placeholder(tf.float32, shape=[n_x, None], name="X")
    Y = tf.compat.v1.placeholder(tf.float32, shape=[n_y, None], name="Y")
    ### END CODE HERE ###

    return X, Y

X, Y = create_placeholders(12288, 6)
print("X = " + str(X))
print("Y = " + str(Y))

# Output
'''
X = Tensor("X_5:0", shape=(12288, None), dtype=float32)
Y = Tensor("Y_2:0", shape=(6, None), dtype=float32)
'''
2.4 Initializing the Parameters
# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes parameters to build a neural network with tensorflow. The shapes are:
                        W1 : [25, 12288]
                        b1 : [25, 1]
                        W2 : [12, 25]
                        b2 : [12, 1]
                        W3 : [6, 12]
                        b3 : [6, 1]

    Returns:
    parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
    """
    tf.random.set_seed(1)  # so that your "random" numbers match ours

    ### START CODE HERE ### (approx. 6 lines of code)
    W1 = tf.compat.v1.get_variable("W1", [25, 12288], initializer=tf.initializers.GlorotUniform(seed=1))
    b1 = tf.compat.v1.get_variable("b1", [25, 1], initializer=tf.zeros_initializer())
    W2 = tf.compat.v1.get_variable("W2", [12, 25], initializer=tf.initializers.GlorotUniform(seed=1))
    b2 = tf.compat.v1.get_variable("b2", [12, 1], initializer=tf.zeros_initializer())
    W3 = tf.compat.v1.get_variable("W3", [6, 12], initializer=tf.initializers.GlorotUniform(seed=1))
    b3 = tf.compat.v1.get_variable("b3", [6, 1], initializer=tf.zeros_initializer())
    ### END CODE HERE ###

    parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3}

    return parameters

ops.reset_default_graph()
with tf.compat.v1.Session() as sess:
    parameters = initialize_parameters()
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))

# Output
'''
W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32>
'''
2.5 Forward Propagation
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """
    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    ### START CODE HERE ### (approx. 5 lines)    # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)   # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                 # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)  # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                 # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)  # Z3 = np.dot(W3, A2) + b3
    ### END CODE HERE ###

    return Z3

ops.reset_default_graph()
with tf.compat.v1.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))

# Output
# Z3 = Tensor("Add_2:0", shape=(6, None), dtype=float32)
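The (6, None) shape follows directly from the weight shapes. A numpy sketch of the same bookkeeping with two dummy examples:

m = 2
A0 = np.zeros((12288, m))
Z1 = np.zeros((25, 12288)) @ A0              # (25, m)
Z2 = np.zeros((12, 25)) @ np.maximum(0, Z1)  # (12, m)
Z3 = np.zeros((6, 12)) @ np.maximum(0, Z2)   # (6, m)
print(Z3.shape)  # (6, 2), matching the (6, None) tensor above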
2.6 Computing the Cost
# GRADED FUNCTION: compute_cost

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost -- Tensor of the cost function
    """
    # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...)
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)

    ### START CODE HERE ### (1 line of code)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    ### END CODE HERE ###

    return cost

ops.reset_default_graph()
with tf.compat.v1.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    print("cost = " + str(cost))

# Output
# cost = Tensor("Mean:0", shape=(), dtype=float32)
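The transposes are needed because tf.nn.softmax_cross_entropy_with_logits expects logits and labels of shape (number of examples, number of classes), while this tutorial stores activations as (classes, examples). A toy illustration of the layout change:

Z3_demo = np.zeros((6, 4))  # (classes, examples), the convention used here
print(Z3_demo.T.shape)      # (4, 6) -- (examples, classes), what the TF op expects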
2.7 Building the Model
def model(X_train, Y_train, X_test, Y_test, learning_rate=0.0001,
          num_epochs=1500, minibatch_size=32, print_cost=True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    ops.reset_default_graph()  # to be able to rerun the model without overwriting tf variables
    tf.random.set_seed(1)      # to keep consistent results
    seed = 3                   # to keep consistent results
    (n_x, m) = X_train.shape   # (n_x: input size, m: number of examples in the train set)
    n_y = Y_train.shape[0]     # n_y: output size
    costs = []                 # to keep track of the cost

    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###

    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###

    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###

    # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
    ### START CODE HERE ### (1 line)
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    ### END CODE HERE ###

    # Initialize all the variables
    init = tf.compat.v1.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.compat.v1.Session() as sess:
        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):
            epoch_cost = 0.0  # defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size)  # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:
                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X, Y).
                ### START CODE HERE ### (1 line)
                _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
                ### END CODE HERE ###

                epoch_cost += minibatch_cost / num_minibatches

            # Print the cost every 100 epochs; record it every 5 epochs
            if print_cost and epoch % 100 == 0:
                print("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost and epoch % 5 == 0:
                costs.append(epoch_cost)

        # Plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel("cost")
        plt.xlabel("epochs (per 5)")
        plt.title("Learning rate = " + str(learning_rate))
        plt.show()

        # Save the parameters in a variable
        parameters = sess.run(parameters)
        print("Parameters have been trained!")

        # Calculate the correct predictions
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

        # Calculate accuracy on the test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

        return parameters

parameters = model(X_train, Y_train, X_test, Y_test, minibatch_size=64)

# Output
'''
Cost after epoch 0: 1.969848
Cost after epoch 100: 0.945937
Cost after epoch 200: 0.701537
Cost after epoch 300: 0.575044
Cost after epoch 400: 0.486772
Cost after epoch 500: 0.422395
Cost after epoch 600: 0.355608
Cost after epoch 700: 0.304337
Cost after epoch 800: 0.253242
Cost after epoch 900: 0.204498
Cost after epoch 1000: 0.166818
Cost after epoch 1100: 0.132297
Cost after epoch 1200: 0.098340
Cost after epoch 1300: 0.076905
Cost after epoch 1400: 0.057851
Parameters have been trained!
Train Accuracy: 0.99722224
Test Accuracy: 0.80833334
'''
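For intuition, inference with the trained parameters is just the same forward pass followed by an argmax over classes. A minimal numpy sketch; tf_utils.predict does essentially this inside a TF session, and predict_np here is a hypothetical stand-in, not part of tf_utils:

def predict_np(x, p):
    # forward pass with the trained (numpy) parameters, then argmax over classes
    relu = lambda t: np.maximum(0, t)
    z1 = p["W1"] @ x + p["b1"]
    z2 = p["W2"] @ relu(z1) + p["b2"]
    z3 = p["W3"] @ relu(z2) + p["b3"]
    return np.argmax(z3, axis=0)

# e.g. predict_np(X_test, parameters) returns one class index per column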
2.8 Testing with Your Own Image
from PIL import Image
import imageio.v2 as im

## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##

# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(im.imread(fname))
# scipy.misc.imresize was removed from scipy; PIL's resize replaces it:
# my_image = scipy.misc.imresize(image, size=(64, 64)).reshape((1, 64 * 64 * 3)).T
my_image = np.array(Image.fromarray(image).resize((64, 64))).reshape((1, 64 * 64 * 3)).T
my_image_prediction = predict(my_image, parameters)

plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))

# Output
# Your algorithm predicts: y = 3
