Classic Convolutional Networks

I'm home for the holidays and feel like I can barely keep going. I still haven't found a course on doing classification with unsupervised learning — plain unlabeled data certainly won't hand you class labels by itself. Suggestions welcome.

LeNet

LeNet reduces the number of network parameters by sharing convolution kernels.
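To make the saving concrete, here is a small illustrative calculation (the 32×32 single-channel input is a hypothetical example, not from the original notes) comparing a shared-kernel convolution layer with a dense layer producing the same output:

# A conv layer's parameter count is (kh * kw * in_channels + 1) * filters,
# independent of the input's spatial size, because the kernels are shared.
conv_params = (5 * 5 * 1 + 1) * 6            # 156 parameters
# A dense layer mapping a 32x32 input to the same 28x28x6 output needs a
# weight for every input-output pair:
dense_params = (32 * 32) * (28 * 28 * 6)     # 4,816,896 weights (plus biases)
print(conv_params, dense_params)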

When counting a network's depth, by convention only the convolution layers and fully connected layers are counted; the remaining operations (pooling, activation, and so on) are treated as attachments to the convolution layers.

By that count LeNet-5 has five layers: two convolution layers (C1 and C3) and three fully connected layers. LeNet predates batch normalization and Dropout, so it uses neither.

After Flatten straightens the feature maps, three fully connected layers follow in a row; the first two use the sigmoid activation function and the last uses softmax.

from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

class LeNet5(Model):
    def __init__(self):
        super(LeNet5, self).__init__()
        # 6 convolution kernels of size 5x5, sigmoid activation
        self.c1 = Conv2D(filters=6, kernel_size=(5, 5), activation='sigmoid')
        # max pooling with a 2x2 window and stride 2
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2)

        self.c2 = Conv2D(filters=16, kernel_size=(5, 5), activation='sigmoid')
        self.p2 = MaxPool2D(pool_size=(2, 2), strides=2)

        # flatten
        self.flatten = Flatten()

        # fully connected layers
        self.f1 = Dense(120, activation='sigmoid')
        self.f2 = Dense(84, activation='sigmoid')
        # 10 neurons; softmax makes the output a probability distribution
        self.f3 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.p1(x)
        x = self.c2(x)
        x = self.p2(x)
        x = self.flatten(x)
        x = self.f1(x)
        x = self.f2(x)
        y = self.f3(x)
        return y
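The class above only defines the layers; here is a minimal, hypothetical training sketch (assuming 28×28×1 MNIST-style input, which is not part of the original notes):

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Conv2D expects a channel dimension, so add one.
x_train = x_train.reshape(-1, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)

model = LeNet5()
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test))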

AlexNet

AlexNet uses the ReLU activation function to speed up training and Dropout to reduce overfitting.

The network has 8 layers. The first layer uses 96 3×3 convolution kernels with stride 1 and no padding, applies batch normalization (BN) to standardize the features, uses the ReLU activation function, and then max-pools with a 3×3 pooling window and stride 2.
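As a quick sanity check on the shapes (assuming the 32×32×3 CIFAR-10 input used in the listing below), the feature-map sizes can be traced with out = (in - kernel) // stride + 1 for 'valid' convolution and pooling:

conv1 = (32 - 3) // 1 + 1    # 30 -> 30x30x96
pool1 = (30 - 3) // 2 + 1    # 14 -> 14x14x96
conv2 = (14 - 3) // 1 + 1    # 12 -> 12x12x256
pool2 = (12 - 3) // 2 + 1    # 5  -> 5x5x256
# conv3-conv5 use 'same' padding, so the spatial size stays 5x5
pool3 = (5 - 3) // 2 + 1     # 2  -> 2x2x256 = 1024 features after Flatten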

 

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class AlexNet8(Model):
    def __init__(self):
        super(AlexNet8, self).__init__()
        self.c1 = Conv2D(filters=96, kernel_size=(3, 3))
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.p1 = MaxPool2D(pool_size=(3, 3), strides=2)

        self.c2 = Conv2D(filters=256, kernel_size=(3, 3))
        self.b2 = BatchNormalization()
        self.a2 = Activation('relu')
        self.p2 = MaxPool2D(pool_size=(3, 3), strides=2)

        self.c3 = Conv2D(filters=384, kernel_size=(3, 3), padding='same', activation='relu')
        self.c4 = Conv2D(filters=384, kernel_size=(3, 3), padding='same', activation='relu')
        self.c5 = Conv2D(filters=256, kernel_size=(3, 3), padding='same', activation='relu')
        self.p3 = MaxPool2D(pool_size=(3, 3), strides=2)

        self.flatten = Flatten()
        self.f1 = Dense(2048, activation='relu')
        self.d1 = Dropout(0.5)
        self.f2 = Dense(2048, activation='relu')
        self.d2 = Dropout(0.5)
        self.f3 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.p1(x)

        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p2(x)

        x = self.c3(x)
        x = self.c4(x)
        x = self.c5(x)
        x = self.p3(x)

        x = self.flatten(x)
        x = self.f1(x)
        x = self.d1(x)
        x = self.f2(x)
        x = self.d2(x)
        y = self.f3(x)
        return y


model = AlexNet8()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/AlexNet8.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the training and validation accuracy and loss curves
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

VGGNet

VGGNet uses small convolution kernels in a very regular, repeating structure, which makes it very well suited to hardware acceleration.

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class VGG16(Model):
    def __init__(self):
        super(VGG16, self).__init__()
        self.c1 = Conv2D(filters=64, kernel_size=(3, 3), padding='same')  # conv layer 1
        self.b1 = BatchNormalization()  # BN layer 1
        self.a1 = Activation('relu')  # activation layer 1
        self.c2 = Conv2D(filters=64, kernel_size=(3, 3), padding='same')
        self.b2 = BatchNormalization()
        self.a2 = Activation('relu')
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d1 = Dropout(0.2)  # dropout layer

        self.c3 = Conv2D(filters=128, kernel_size=(3, 3), padding='same')
        self.b3 = BatchNormalization()
        self.a3 = Activation('relu')
        self.c4 = Conv2D(filters=128, kernel_size=(3, 3), padding='same')
        self.b4 = BatchNormalization()
        self.a4 = Activation('relu')
        self.p2 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d2 = Dropout(0.2)

        self.c5 = Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.b5 = BatchNormalization()
        self.a5 = Activation('relu')
        self.c6 = Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.b6 = BatchNormalization()
        self.a6 = Activation('relu')
        self.c7 = Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.b7 = BatchNormalization()
        self.a7 = Activation('relu')
        self.p3 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d3 = Dropout(0.2)

        self.c8 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b8 = BatchNormalization()
        self.a8 = Activation('relu')
        self.c9 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b9 = BatchNormalization()
        self.a9 = Activation('relu')
        self.c10 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b10 = BatchNormalization()
        self.a10 = Activation('relu')
        self.p4 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d4 = Dropout(0.2)

        self.c11 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b11 = BatchNormalization()
        self.a11 = Activation('relu')
        self.c12 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b12 = BatchNormalization()
        self.a12 = Activation('relu')
        self.c13 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b13 = BatchNormalization()
        self.a13 = Activation('relu')
        self.p5 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d5 = Dropout(0.2)

        self.flatten = Flatten()
        self.f1 = Dense(512, activation='relu')
        self.d6 = Dropout(0.2)
        self.f2 = Dense(512, activation='relu')
        self.d7 = Dropout(0.2)
        self.f3 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p1(x)
        x = self.d1(x)

        x = self.c3(x)
        x = self.b3(x)
        x = self.a3(x)
        x = self.c4(x)
        x = self.b4(x)
        x = self.a4(x)
        x = self.p2(x)
        x = self.d2(x)

        x = self.c5(x)
        x = self.b5(x)
        x = self.a5(x)
        x = self.c6(x)
        x = self.b6(x)
        x = self.a6(x)
        x = self.c7(x)
        x = self.b7(x)
        x = self.a7(x)
        x = self.p3(x)
        x = self.d3(x)

        x = self.c8(x)
        x = self.b8(x)
        x = self.a8(x)
        x = self.c9(x)
        x = self.b9(x)
        x = self.a9(x)
        x = self.c10(x)
        x = self.b10(x)
        x = self.a10(x)
        x = self.p4(x)
        x = self.d4(x)

        x = self.c11(x)
        x = self.b11(x)
        x = self.a11(x)
        x = self.c12(x)
        x = self.b12(x)
        x = self.a12(x)
        x = self.c13(x)
        x = self.b13(x)
        x = self.a13(x)
        x = self.p5(x)
        x = self.d5(x)

        x = self.flatten(x)
        x = self.f1(x)
        x = self.d6(x)
        x = self.f2(x)
        x = self.d7(x)
        y = self.f3(x)
        return y


model = VGG16()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/VGG16.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the training and validation accuracy and loss curves
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

InceptionNet

InceptionNet introduces the Inception block: within a single network layer it applies convolution kernels of different sizes, which improves the model's perceptive power, and it uses batch normalization to reduce vanishing gradients. An Inception block has four branches feeding a "convolutional connector": one goes through a 1×1 convolution, one through a 1×1 followed by a 3×3 convolution, one through a 1×1 followed by a 5×5 convolution, and one through a 3×3 max pool followed by a 1×1 convolution. The connector concatenates the feature maps of the four branches along the depth dimension.

Branch 1: 16 1×1 kernels with all-zero padding.

Branch 2: first 16 1×1 kernels to reduce the depth (with BN), then 16 3×3 kernels.

Branch 3: dimensionality reduction first, then 16 5×5 kernels.

Branch 4: max pooling first, then 1×1 dimensionality reduction.

The Conv–BN–Activation operation (implemented below as ConvBNRelu)

A 1×1 convolution acts on every pixel position; by setting the number of 1×1 kernels smaller than the depth of the input feature map, the depth of the output feature map is reduced. This dimensionality reduction cuts both the parameter count and the amount of computation.
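A small illustrative calculation (the 256-channel input and 64 output kernels are hypothetical numbers, not from the original notes):

# Applying 64 5x5 kernels directly to a 256-channel feature map:
direct_params = (5 * 5 * 256 + 1) * 64      # 409,664 parameters
# First shrinking the depth to 16 with 1x1 kernels, then applying 5x5:
reduce_params = (1 * 1 * 256 + 1) * 16      # 4,112 parameters
conv_params = (5 * 5 * 16 + 1) * 64         # 25,664 parameters
print(direct_params, reduce_params + conv_params)  # 409664 vs 29776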

The branch outputs are stacked together with concat; axis=3 selects the stacking direction (the channel axis of an NHWC tensor).
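A tiny runnable check of the stacking behavior (the 4×4×16 branch shapes are made up for illustration):

import tensorflow as tf

# Four hypothetical branch outputs, batch size 1, NHWC layout:
branches = [tf.zeros((1, 4, 4, 16)) for _ in range(4)]
out = tf.concat(branches, axis=3)
print(out.shape)  # (1, 4, 4, 64): depths add up, spatial size is unchanged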

The streamlined Inception network built here (Inception10) has 10 layers in total; the first layer uses 16 3×3 convolution kernels.

It is trained on the CIFAR-10 dataset, a 10-class classification problem.

Raising batch_size to 1024 makes much fuller use of the GPU, pushing utilization to roughly 70%–80% (the listing below keeps batch_size=32).

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense, \
    GlobalAveragePooling2D
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class ConvBNRelu(Model):
    def __init__(self, ch, kernelsz=3, strides=1, padding='same'):
        super(ConvBNRelu, self).__init__()
        self.model = tf.keras.models.Sequential([
            Conv2D(ch, kernelsz, strides=strides, padding=padding),
            BatchNormalization(),
            Activation('relu')
        ])

    def call(self, x):
        # With training=False, BN normalizes with the statistics accumulated
        # over the whole training set; with training=True it uses the current
        # batch's mean and variance. training=False works better at inference.
        x = self.model(x, training=False)
        return x


class InceptionBlk(Model):
    def __init__(self, ch, strides=1):
        super(InceptionBlk, self).__init__()
        self.ch = ch
        self.strides = strides
        self.c1 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        self.c2_1 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        self.c2_2 = ConvBNRelu(ch, kernelsz=3, strides=1)
        self.c3_1 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        self.c3_2 = ConvBNRelu(ch, kernelsz=5, strides=1)
        self.p4_1 = MaxPool2D(3, strides=1, padding='same')
        self.c4_2 = ConvBNRelu(ch, kernelsz=1, strides=strides)

    def call(self, x):
        x1 = self.c1(x)
        x2_1 = self.c2_1(x)
        x2_2 = self.c2_2(x2_1)
        x3_1 = self.c3_1(x)
        x3_2 = self.c3_2(x3_1)
        x4_1 = self.p4_1(x)
        x4_2 = self.c4_2(x4_1)
        # concat along axis=channel
        x = tf.concat([x1, x2_2, x3_2, x4_2], axis=3)
        return x


class Inception10(Model):
    def __init__(self, num_blocks, num_classes, init_ch=16, **kwargs):
        super(Inception10, self).__init__(**kwargs)
        self.in_channels = init_ch
        self.out_channels = init_ch
        self.num_blocks = num_blocks
        self.init_ch = init_ch
        self.c1 = ConvBNRelu(init_ch)
        self.blocks = tf.keras.models.Sequential()
        for block_id in range(num_blocks):
            for layer_id in range(2):
                if layer_id == 0:
                    block = InceptionBlk(self.out_channels, strides=2)
                else:
                    block = InceptionBlk(self.out_channels, strides=1)
                self.blocks.add(block)
            # double out_channels for the next block
            self.out_channels *= 2
        self.p1 = GlobalAveragePooling2D()
        self.f1 = Dense(num_classes, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y


model = Inception10(num_blocks=2, num_classes=10)

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/Inception10.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the training and validation accuracy and loss curves
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

ResNet

ResNet introduces residual skip connections between layers; by carrying information forward from earlier layers it reduces vanishing gradients and makes it possible to increase the network depth.

Simply stacking more layers can cause degradation: later features lose the information carried by earlier features.

The skip connection alleviates this degradation.

The "+" in Inception is stacking along the depth dimension; the "+" in a ResNet block is element-wise addition of feature maps (matrix addition).
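The difference is easy to see in a short sketch (shapes made up for illustration):

import tensorflow as tf

a = tf.ones((1, 4, 4, 16))
b = tf.ones((1, 4, 4, 16))
print(tf.concat([a, b], axis=3).shape)  # (1, 4, 4, 32): Inception-style depth concat
print((a + b).shape)                    # (1, 4, 4, 16): ResNet-style element-wise add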

A ResNet block comes in two variants. In one (drawn with solid lines in the course's figure) the dimensions of F(x) and x already match and the two are added directly; in the other (dashed lines) a 1×1 convolution first adjusts the dimensions of x so that the addition is possible.

To speed up convergence, the batch_size can be raised to 128 (the listing below keeps batch_size=32).

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class ResnetBlock(Model):
    def __init__(self, filters, strides=1, residual_path=False):
        super(ResnetBlock, self).__init__()
        self.filters = filters
        self.strides = strides
        self.residual_path = residual_path

        self.c1 = Conv2D(filters, (3, 3), strides=strides, padding='same', use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')

        self.c2 = Conv2D(filters, (3, 3), strides=1, padding='same', use_bias=False)
        self.b2 = BatchNormalization()

        # When residual_path is True, downsample the input with a 1x1
        # convolution so that x has the same dimensions as F(x) and the two
        # can be added.
        if residual_path:
            self.down_c1 = Conv2D(filters, (1, 1), strides=strides, padding='same', use_bias=False)
            self.down_b1 = BatchNormalization()

        self.a2 = Activation('relu')

    def call(self, inputs):
        residual = inputs  # residual is the raw input itself, i.e. residual = x
        # pass the input through convolution, BN and activation to compute F(x)
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        x = self.c2(x)
        y = self.b2(x)

        if self.residual_path:
            residual = self.down_c1(inputs)
            residual = self.down_b1(residual)

        # the output is the sum of the two paths, F(x)+x or F(x)+Wx,
        # passed through the final activation
        out = self.a2(y + residual)
        return out


class ResNet18(Model):
    def __init__(self, block_list, initial_filters=64):  # block_list gives the number of conv layers per block
        super(ResNet18, self).__init__()
        self.num_blocks = len(block_list)  # number of blocks
        self.block_list = block_list
        self.out_filters = initial_filters
        self.c1 = Conv2D(self.out_filters, (3, 3), strides=1, padding='same', use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.blocks = tf.keras.models.Sequential()
        # build the ResNet structure
        for block_id in range(len(block_list)):  # which resnet block
            for layer_id in range(block_list[block_id]):  # which conv layer
                if block_id != 0 and layer_id == 0:  # downsample the input of every block except the first
                    block = ResnetBlock(self.out_filters, strides=2, residual_path=True)
                else:
                    block = ResnetBlock(self.out_filters, residual_path=False)
                self.blocks.add(block)  # add the finished block to the resnet
            self.out_filters *= 2  # the next block uses twice as many kernels
        self.p1 = tf.keras.layers.GlobalAveragePooling2D()
        self.f1 = tf.keras.layers.Dense(10, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2())

    def call(self, inputs):
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y


model = ResNet18([2, 2, 2, 2])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/ResNet18.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the training and validation accuracy and loss curves
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

Summary

LeNet: the pioneering convolutional network; shares convolution kernels to reduce the number of parameters.

AlexNet: uses the ReLU activation function to speed up training and Dropout to reduce overfitting.

VGGNet: small convolution kernels reduce parameters; the regular network structure is well suited to parallel acceleration.

InceptionNet: uses convolution kernels of different sizes within a single layer to improve perceptive power, and batch normalization to reduce vanishing gradients.

ResNet: residual skip connections carry earlier information forward, reducing vanishing gradients and making deeper networks possible.
