Classic Convolutional Networks

Back home for the holidays, and I feel like I can barely keep going. I still haven't found a course on doing classification with unsupervised learning — plain unlabeled data obviously won't hand you the classes. Suggestions welcome.

LeNet

By sharing convolution kernels across spatial positions, the number of network parameters is greatly reduced.
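As a quick sanity check (a minimal sketch of my own, not from the original notes), compare the parameter count of 6 shared 5*5 kernels against a fully connected layer over the same flattened input:

import tensorflow as tf

# 6 shared 5*5 kernels on a 28*28*1 input:
# params = (5*5*1 + 1 bias) * 6 = 156
conv = tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5))
conv.build((None, 28, 28, 1))
print(conv.count_params())  # 156

# a Dense layer mapping the flattened 784 pixels to 6 units:
# params = (784 + 1 bias) * 6 = 4710
dense = tf.keras.layers.Dense(6)
dense.build((None, 28 * 28))
print(dense.count_params())  # 4710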

By convention, only the convolutional layers and fully connected layers are counted toward a network's depth; the other operations (pooling, activation, etc.) are treated as attachments to the convolutional layers.

LeNet has five counted layers: convolution C1, convolution C3, and three fully connected layers. LeNet has no BN operation and no Dropout.

Flatten straightens the feature map, and three consecutive fully connected layers follow: the first two use the sigmoid activation function, and the last uses softmax.

from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

class LeNet5(Model):
    def __init__(self):
        super(LeNet5, self).__init__()
        # 6 convolution kernels of size 5*5, sigmoid activation
        self.c1 = Conv2D(filters=6, kernel_size=(5, 5), activation='sigmoid')
        # max pooling with a 2*2 window and stride 2
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2)

        self.c2 = Conv2D(filters=16, kernel_size=(5, 5), activation='sigmoid')
        self.p2 = MaxPool2D(pool_size=(2, 2), strides=2)
        # straighten the feature map
        self.flatten = Flatten()
        # fully connected layers
        self.f1 = Dense(120, activation='sigmoid')
        self.f2 = Dense(84, activation='sigmoid')
        # 10 neurons with softmax so the output forms a probability distribution
        self.f3 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.p1(x)
        x = self.c2(x)
        x = self.p2(x)
        x = self.flatten(x)
        x = self.f1(x)
        x = self.f2(x)
        y = self.f3(x)
        return y
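A minimal usage sketch (my addition, assuming 32*32 grayscale inputs as in the original LeNet paper):

model = LeNet5()
model.build(input_shape=(None, 32, 32, 1))
model.summary()  # about 61,706 trainable parameters at this input size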

AlexNet

Uses the ReLU activation function to speed up training, and Dropout to mitigate overfitting.

Eight layers. The first layer uses 96 convolution kernels of size 3*3 with stride 1 and no padding, applies a BN operation to standardize the features, uses the ReLU activation function, then does max pooling with a 3*3 pooling window and stride 2.
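To see what that first layer does to one image, the shape arithmetic below is a quick check I added (the 32*32*3 input size comes from the CIFAR-10 data used later):

import tensorflow as tf

x = tf.random.normal((1, 32, 32, 3))                # one CIFAR-10-sized image
c1 = tf.keras.layers.Conv2D(96, (3, 3))             # valid padding: 32-3+1 = 30
p1 = tf.keras.layers.MaxPool2D((3, 3), strides=2)   # floor((30-3)/2)+1 = 14
print(c1(x).shape)      # (1, 30, 30, 96)
print(p1(c1(x)).shape)  # (1, 14, 14, 96)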

 

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class AlexNet8(Model):
    def __init__(self):
        super(AlexNet8, self).__init__()
        self.c1 = Conv2D(filters=96, kernel_size=(3, 3))
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.p1 = MaxPool2D(pool_size=(3, 3), strides=2)

        self.c2 = Conv2D(filters=256, kernel_size=(3, 3))
        self.b2 = BatchNormalization()
        self.a2 = Activation('relu')
        self.p2 = MaxPool2D(pool_size=(3, 3), strides=2)

        self.c3 = Conv2D(filters=384, kernel_size=(3, 3), padding='same', activation='relu')
        self.c4 = Conv2D(filters=384, kernel_size=(3, 3), padding='same', activation='relu')
        self.c5 = Conv2D(filters=256, kernel_size=(3, 3), padding='same', activation='relu')
        self.p3 = MaxPool2D(pool_size=(3, 3), strides=2)

        self.flatten = Flatten()
        self.f1 = Dense(2048, activation='relu')
        self.d1 = Dropout(0.5)
        self.f2 = Dense(2048, activation='relu')
        self.d2 = Dropout(0.5)
        self.f3 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.p1(x)

        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p2(x)

        x = self.c3(x)
        x = self.c4(x)
        x = self.c5(x)
        x = self.p3(x)

        x = self.flatten(x)
        x = self.f1(x)
        x = self.d1(x)
        x = self.f2(x)
        x = self.d2(x)
        y = self.f3(x)
        return y


model = AlexNet8()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/AlexNet8.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the acc and loss curves for the training and validation sets
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

VGGNet

Uses small 3*3 convolution kernels to cut parameters; its regular, repeating structure is very well suited to hardware acceleration.
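A quick illustration of the small-kernel point (my own sketch, not from the original notes): two stacked 3*3 convolutions cover the same 5*5 receptive field as a single 5*5 convolution, but with fewer parameters.

# for C input channels and C output channels (ignoring biases):
C = 64
params_5x5 = 5 * 5 * C * C            # one 5*5 conv layer
params_3x3_stack = 2 * 3 * 3 * C * C  # two stacked 3*3 conv layers
print(params_5x5, params_3x3_stack)   # 102400 vs 73728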

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class VGG16(Model):
    def __init__(self):
        super(VGG16, self).__init__()
        self.c1 = Conv2D(filters=64, kernel_size=(3, 3), padding='same')  # conv layer 1
        self.b1 = BatchNormalization()  # BN layer
        self.a1 = Activation('relu')  # activation layer
        self.c2 = Conv2D(filters=64, kernel_size=(3, 3), padding='same')
        self.b2 = BatchNormalization()
        self.a2 = Activation('relu')
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d1 = Dropout(0.2)  # dropout layer

        self.c3 = Conv2D(filters=128, kernel_size=(3, 3), padding='same')
        self.b3 = BatchNormalization()
        self.a3 = Activation('relu')
        self.c4 = Conv2D(filters=128, kernel_size=(3, 3), padding='same')
        self.b4 = BatchNormalization()
        self.a4 = Activation('relu')
        self.p2 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d2 = Dropout(0.2)

        self.c5 = Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.b5 = BatchNormalization()
        self.a5 = Activation('relu')
        self.c6 = Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.b6 = BatchNormalization()
        self.a6 = Activation('relu')
        self.c7 = Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.b7 = BatchNormalization()
        self.a7 = Activation('relu')
        self.p3 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d3 = Dropout(0.2)

        self.c8 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b8 = BatchNormalization()
        self.a8 = Activation('relu')
        self.c9 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b9 = BatchNormalization()
        self.a9 = Activation('relu')
        self.c10 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b10 = BatchNormalization()
        self.a10 = Activation('relu')
        self.p4 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d4 = Dropout(0.2)

        self.c11 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b11 = BatchNormalization()
        self.a11 = Activation('relu')
        self.c12 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b12 = BatchNormalization()
        self.a12 = Activation('relu')
        self.c13 = Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.b13 = BatchNormalization()
        self.a13 = Activation('relu')
        self.p5 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d5 = Dropout(0.2)

        self.flatten = Flatten()
        self.f1 = Dense(512, activation='relu')
        self.d6 = Dropout(0.2)
        self.f2 = Dense(512, activation='relu')
        self.d7 = Dropout(0.2)
        self.f3 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p1(x)
        x = self.d1(x)

        x = self.c3(x)
        x = self.b3(x)
        x = self.a3(x)
        x = self.c4(x)
        x = self.b4(x)
        x = self.a4(x)
        x = self.p2(x)
        x = self.d2(x)

        x = self.c5(x)
        x = self.b5(x)
        x = self.a5(x)
        x = self.c6(x)
        x = self.b6(x)
        x = self.a6(x)
        x = self.c7(x)
        x = self.b7(x)
        x = self.a7(x)
        x = self.p3(x)
        x = self.d3(x)

        x = self.c8(x)
        x = self.b8(x)
        x = self.a8(x)
        x = self.c9(x)
        x = self.b9(x)
        x = self.a9(x)
        x = self.c10(x)
        x = self.b10(x)
        x = self.a10(x)
        x = self.p4(x)
        x = self.d4(x)

        x = self.c11(x)
        x = self.b11(x)
        x = self.a11(x)
        x = self.c12(x)
        x = self.b12(x)
        x = self.a12(x)
        x = self.c13(x)
        x = self.b13(x)
        x = self.a13(x)
        x = self.p5(x)
        x = self.d5(x)

        x = self.flatten(x)
        x = self.f1(x)
        x = self.d6(x)
        x = self.f2(x)
        x = self.d7(x)
        y = self.f3(x)
        return y


model = VGG16()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/VGG16.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the acc and loss curves for the training and validation sets
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

InceptionNet

Introduces the Inception block: within a single layer it applies convolution kernels of different sizes to improve the model's perceptive power, and uses batch normalization to mitigate vanishing gradients. The block has four branches: a 1*1 convolution feeding the filter concatenator; a 1*1 convolution followed by a 3*3 convolution; a 1*1 convolution followed by a 5*5 convolution; and a 3*3 max pooling followed by a 1*1 convolution. The filter concatenator stitches the feature maps of all branches together along the depth dimension.

Branch 1: 16 kernels of size 1*1, with all-zero padding.

Branch 2: first reduce dimensionality with 16 kernels of size 1*1 (with BN), then operate with 16 kernels of size 3*3.

Branch 3: first reduce dimensionality, then use 16 kernels of size 5*5.

Branch 4: first pool, then reduce dimensionality with a 1*1 convolution.

The ConvBNRelu operation (convolution, then BN, then ReLU activation)

A 1*1 convolution kernel acts on every pixel position; by setting the number of 1*1 kernels smaller than the depth of the input feature map, it reduces the depth of the output feature map. This achieves dimensionality reduction and cuts the parameter count and computation.
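A small sketch of this (the shapes and the 128-channel input are my own example, not from the notes):

import tensorflow as tf

x = tf.random.normal((1, 8, 8, 128))                 # input feature map, depth 128
reduce = tf.keras.layers.Conv2D(16, kernel_size=1)   # 16 1*1 kernels < input depth
print(reduce(x).shape)  # (1, 8, 8, 16) -- spatial size kept, depth reduced
# parameters: (1*1*128 + 1) * 16 = 2064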

The branch outputs are stacked together with concat; axis=3 specifies the stacking direction (the channel axis).
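For example (a toy sketch with invented shapes), stacking two branch outputs along axis=3:

import tensorflow as tf

x1 = tf.random.normal((1, 8, 8, 16))   # branch 1 output
x2 = tf.random.normal((1, 8, 8, 16))   # branch 2 output
out = tf.concat([x1, x2], axis=3)      # stack along the channel (depth) axis
print(out.shape)  # (1, 8, 8, 32)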

A slimmed-down Inception architecture with 10 layers in total; the first layer uses 16 kernels of size 3*3.

Built on the CIFAR-10 dataset, a 10-class classification task.

Raising batch_size to 1024 makes fuller use of the GPU, pushing utilization to 70%-80%.

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense, \
    GlobalAveragePooling2D
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class ConvBNRelu(Model):
    def __init__(self, ch, kernelsz=3, strides=1, padding='same'):
        super(ConvBNRelu, self).__init__()
        self.model = tf.keras.models.Sequential([
            Conv2D(ch, kernelsz, strides=strides, padding=padding),
            BatchNormalization(),
            Activation('relu')
        ])

    def call(self, x):
        # with training=False, BN normalizes with the moving mean/variance
        # accumulated over the whole training set; with training=True it uses
        # the current batch's statistics. training=False works better at inference.
        x = self.model(x, training=False)
        return x


class InceptionBlk(Model):
    def __init__(self, ch, strides=1):
        super(InceptionBlk, self).__init__()
        self.ch = ch
        self.strides = strides
        self.c1 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        self.c2_1 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        self.c2_2 = ConvBNRelu(ch, kernelsz=3, strides=1)
        self.c3_1 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        self.c3_2 = ConvBNRelu(ch, kernelsz=5, strides=1)
        self.p4_1 = MaxPool2D(3, strides=1, padding='same')
        self.c4_2 = ConvBNRelu(ch, kernelsz=1, strides=strides)

    def call(self, x):
        x1 = self.c1(x)
        x2_1 = self.c2_1(x)
        x2_2 = self.c2_2(x2_1)
        x3_1 = self.c3_1(x)
        x3_2 = self.c3_2(x3_1)
        x4_1 = self.p4_1(x)
        x4_2 = self.c4_2(x4_1)
        # concat along axis=channel
        x = tf.concat([x1, x2_2, x3_2, x4_2], axis=3)
        return x


class Inception10(Model):
    def __init__(self, num_blocks, num_classes, init_ch=16, **kwargs):
        super(Inception10, self).__init__(**kwargs)
        self.in_channels = init_ch
        self.out_channels = init_ch
        self.num_blocks = num_blocks
        self.init_ch = init_ch
        self.c1 = ConvBNRelu(init_ch)
        self.blocks = tf.keras.models.Sequential()
        for block_id in range(num_blocks):
            for layer_id in range(2):
                if layer_id == 0:
                    block = InceptionBlk(self.out_channels, strides=2)
                else:
                    block = InceptionBlk(self.out_channels, strides=1)
                self.blocks.add(block)
            # enlarge out_channels per block
            self.out_channels *= 2
        self.p1 = GlobalAveragePooling2D()
        self.f1 = Dense(num_classes, activation='softmax')

    def call(self, x):
        x = self.c1(x)
        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y


model = Inception10(num_blocks=2, num_classes=10)

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/Inception10.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the acc and loss curves for the training and validation sets
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

ResNet

Proposed inter-layer residual skip connections, which feed information from earlier layers forward to reduce vanishing gradients and make it possible to increase the number of network layers.

Simply stacking more layers can cause degradation, where later features lose the information carried by earlier features.

The skip connection alleviates this degradation.

The '+' in Inception stacks feature maps along the depth dimension, whereas the '+' in a ResNet block adds corresponding elements of the feature maps (matrix addition).
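A toy contrast of the two '+' operations (the shapes here are invented for illustration):

import tensorflow as tf

a = tf.ones((1, 4, 4, 16))
b = tf.ones((1, 4, 4, 16))

# Inception-style '+': stack along depth -> channel count grows
print(tf.concat([a, b], axis=3).shape)  # (1, 4, 4, 32)

# ResNet-style '+': element-wise addition -> shape unchanged
print((a + b).shape)                    # (1, 4, 4, 16)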

There are two cases of ResNet block. One case, drawn with a solid line in the figure, adds the input to F(x) directly because the dimensions already match; in the other, the shortcut passes through a 1*1 convolution (residual_path=True in the code below) so that x matches the dimensions of F(x) and the two can be added.

To speed up model convergence, the batch_size is adjusted to 128.

import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class ResnetBlock(Model):
    def __init__(self, filters, strides=1, residual_path=False):
        super(ResnetBlock, self).__init__()
        self.filters = filters
        self.strides = strides
        self.residual_path = residual_path

        self.c1 = Conv2D(filters, (3, 3), strides=strides, padding='same', use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')

        self.c2 = Conv2D(filters, (3, 3), strides=1, padding='same', use_bias=False)
        self.b2 = BatchNormalization()

        # when residual_path is True, downsample the input with a 1x1 convolution
        # so that x matches the dimensions of F(x) and the two can be added
        if residual_path:
            self.down_c1 = Conv2D(filters, (1, 1), strides=strides, padding='same', use_bias=False)
            self.down_b1 = BatchNormalization()

        self.a2 = Activation('relu')

    def call(self, inputs):
        residual = inputs  # residual is the input itself, i.e. residual = x
        # pass the input through convolution, BN and activation to compute F(x)
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)

        x = self.c2(x)
        y = self.b2(x)

        if self.residual_path:
            residual = self.down_c1(inputs)
            residual = self.down_b1(residual)

        # the output is the sum of the two paths, F(x)+x or F(x)+Wx,
        # passed through the activation function
        out = self.a2(y + residual)
        return out


class ResNet18(Model):
    def __init__(self, block_list, initial_filters=64):  # block_list gives the number of conv layers per block
        super(ResNet18, self).__init__()
        self.num_blocks = len(block_list)  # total number of blocks
        self.block_list = block_list
        self.out_filters = initial_filters
        self.c1 = Conv2D(self.out_filters, (3, 3), strides=1, padding='same', use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.blocks = tf.keras.models.Sequential()
        # build the ResNet structure
        for block_id in range(len(block_list)):  # which resnet block
            for layer_id in range(block_list[block_id]):  # which conv layer
                if block_id != 0 and layer_id == 0:  # downsample the input of every block except the first
                    block = ResnetBlock(self.out_filters, strides=2, residual_path=True)
                else:
                    block = ResnetBlock(self.out_filters, residual_path=False)
                self.blocks.add(block)  # add the built block to the resnet
            self.out_filters *= 2  # the next block uses twice as many kernels
        self.p1 = tf.keras.layers.GlobalAveragePooling2D()
        self.f1 = tf.keras.layers.Dense(10, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2())

    def call(self, inputs):
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y


model = ResNet18([2, 2, 2, 2])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

checkpoint_save_path = "./checkpoint/ResNet18.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])
model.summary()

# print(model.trainable_variables)
file = open('./weights.txt', 'w')
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

###############################################    show   ###############################################

# plot the acc and loss curves for the training and validation sets
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

Summary

LeNet: the pioneering work of convolutional networks; shares convolution kernels to reduce network parameters.

AlexNet: uses the ReLU activation function to speed up training, and Dropout to mitigate overfitting.

VGGNet: small convolution kernels reduce parameters; the regular network structure suits parallel acceleration.

InceptionNet: uses convolution kernels of different sizes within one layer to improve perceptive power, and batch normalization to mitigate vanishing gradients.

ResNet: inter-layer residual skip connections feed earlier information forward, mitigating vanishing gradients and degradation and making deeper networks possible.
