MATLAB: using a 1D-CNN (one-dimensional convolutional neural network) to analyze hyperspectral curves or time-series data

Introduction to the 1D-CNN:

A 1D-CNN (one-dimensional convolutional neural network) is a convolutional network designed to process one-dimensional sequence data. The architecture typically alternates convolutional and pooling layers, and finally maps the extracted features to the output through fully connected layers.

The main components and characteristics of a 1D-CNN are:

  1. Input layer: receives the one-dimensional sequence as the model input.
  2. Convolutional layers: slide a set of trainable kernels over the input to extract features. Convolution efficiently extracts local information and thus captures local patterns in the input sequence.
  3. Activation functions: apply a nonlinear transformation to the convolutional output, increasing the model's expressive power.
  4. Pooling layers: reduce the dimensionality of the convolutional output, cutting computation while improving robustness and generalization.
  5. Fully connected layers: map the pooled features to the model output, typically for classification or regression tasks.

When using a 1D-CNN you usually need to set several hyperparameters, such as the kernel size, the number of convolutional layers, the pooling method, and the choice of activation function.
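As a minimal sketch of how these five components fit together in MATLAB (assuming the same 1 x 1293 "image"-shaped input used later in this post; the kernel size and filter count here are illustrative, not the tuned values):

```matlab
% Minimal 1D-CNN layer stack: each line maps to one of the five
% components listed above. A 1 x N sequence is treated as a 1 x N image.
layers = [
    imageInputLayer([1 1293 1])                     % 1. input layer
    convolution2dLayer([1 4],16,"Padding","same")   % 2. convolutional layer
    reluLayer                                       % 3. activation function
    maxPooling2dLayer([1 2],"Stride",[1 2])         % 4. pooling layer
    fullyConnectedLayer(1)                          % 5. fully connected output
    regressionLayer];                               % regression loss
```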

Comparison with traditional machine learning:

First, the 1D-CNN is a deep-learning model that uses convolutional layers to automatically extract features from one-dimensional sequence data (audio, text, and so on). This differs from traditional machine learning, where features usually have to be designed and selected by hand. By learning features automatically, a 1D-CNN reduces the manual feature-engineering workload and can potentially discover richer feature representations.

Second, a 1D-CNN captures local relationships in sequence data more effectively. Convolution slides a fixed-size window over the input to extract local features, which is why 1D-CNNs perform strongly in speech recognition, natural language processing, time-series forecasting, and similar tasks. Traditional models, such as support vector machines (SVMs) or decision trees, generally lack this ability to model local structure.

Note, however, that on small-scale data (for example, inputs with only around 100 variables) a 1D-CNN has no real advantage over traditional machine-learning models; its performance is usually indistinguishable from theirs. Because convolution both extracts and compresses the target features, the longer the input (the more variables), the greater the 1D-CNN's advantage. This is why it performs well in time-series regression, hyperspectral analysis, stock prediction, and audio analysis. In addition, 1D-CNN regression and classification demand a fairly large sample size: the convolutional structure is sensitive to noise, and with small datasets severe overfitting is very likely. A sample size of 800+ is recommended for good results.

Three custom 1D-CNNs with different architectures

A VGG-style 1D-CNN (VNet)

VNet, designed on a VGG backbone, follows the optimized structure of Chen Qing et al.: a kernel size of 4 and a convolutional depth of 6, with a dropout layer (rate 0.3) after each average-pooling layer to curb overfitting. Parameter count: 342K.

MATLAB construction code:

function layers=creatCNN2D_VGG(inputsize)
filter=16;
% Note: this listing is a reduced variant of the VNet described above
% (4 convolutional layers, max pooling, no dropout layers).
layers = [
    imageInputLayer(inputsize,"Name","imageinput")
    convolution2dLayer([1 4],filter,"Name","conv","Padding","same")
    convolution2dLayer([1 4],filter,"Name","conv_1","Padding","same")
    maxPooling2dLayer([1 2],"Name","maxpool","Padding","same","Stride",[1 2])
    convolution2dLayer([1 4],filter*2,"Name","conv_2","Padding","same")
    convolution2dLayer([1 4],filter*2,"Name","conv_3","Padding","same")
    maxPooling2dLayer([1 2],"Name","maxpool_1","Padding","same","Stride",[1 2])
    fullyConnectedLayer(filter*8,"Name","fc")
    fullyConnectedLayer(1,"Name","fc_1")
    regressionLayer("Name","regressionoutput")];
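As a quick usage sketch (assuming a 1 x 1293 input, the length used elsewhere in this post):

```matlab
% Build the VNet layer stack for a length-1293 spectrum and inspect it.
layers = creatCNN2D_VGG([1 1293 1]);
analyzeNetwork(layers)  % opens the Network Analyzer to check shapes and parameter counts
```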

An EfficientNet-style 1D-CNN (ENet)

ENet uses the Swish activation function and adds skip connections and SE (squeeze-and-excitation) attention. It not only supports a deeper convolutional stack but also perceives feature structure along the channel dimension, which gives it a certain advantage on larger-scale data. Parameter count: 170.4K.

Construction code:

function lgraph=creatCNN2D_EffiPlus2(inputsize)
filter=8;
lgraph = layerGraph();

tempLayers = [
    imageInputLayer([1 1293 1],"Name","imageinput") % input size is hard-coded; the inputsize argument is unused here
    convolution2dLayer([1 3],filter,"Name","conv_11","Padding","same","Stride",[1 2]) %'DilationFactor',[1,2]
    batchNormalizationLayer("Name","batchnorm_8")
    swishLayer("Name","swish_1_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter,"Name","conv_1_1","Padding","same","Stride",[1 1])
    % batchNormalizationLayer("Name","batchnorm_1_1")
    swishLayer("Name","swish_1_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_1_1")
    convolution2dLayer([1 1],2,"Name","conv_2_1_1","Padding","same")
    swishLayer("Name","swish_2_1_1")
    convolution2dLayer([1 1],filter,"Name","conv_3_1_1","Padding","same")
    sigmoidLayer("Name","sigmoid_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_3")
    convolution2dLayer([1 3],filter*2,"Name","conv","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm")
    swishLayer("Name","swish_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_1","Padding","same","Stride",[1 1])
    % batchNormalizationLayer("Name","batchnorm_1")
    swishLayer("Name","swish_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_1")
    convolution2dLayer([1 1],4,"Name","conv_2_1","Padding","same")
    swishLayer("Name","swish_2_1")
    convolution2dLayer([1 1],filter*2,"Name","conv_3_1","Padding","same")
    sigmoidLayer("Name","sigmoid_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication")
    convolution2dLayer([1 3],filter*4,"Name","conv_9","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_6")
    swishLayer("Name","swish_1_4")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_10","Padding","same","Stride",[1 1])
    % batchNormalizationLayer("Name","batchnorm_7")
    swishLayer("Name","swish_1_3")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_2")
    convolution2dLayer([1 1],8,"Name","conv_2_2","Padding","same")
    swishLayer("Name","swish_2_2")
    convolution2dLayer([1 1],filter*4,"Name","conv_3_2","Padding","same")
    sigmoidLayer("Name","sigmoid_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_2")
    convolution2dLayer([1 3],filter*8,"Name","conv_5","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 1],filter*8,"Name","conv_6","Padding","same")
    batchNormalizationLayer("Name","batchnorm_3")
    swishLayer("Name","swish")
    convolution2dLayer([1 3],filter*8,"Name","conv_7","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")
    swishLayer("Name","swish_1_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool")
    convolution2dLayer([1 1],12,"Name","conv_2","Padding","same")
    swishLayer("Name","swish_2")
    convolution2dLayer([1 1],filter*8,"Name","conv_3","Padding","same")
    sigmoidLayer("Name","sigmoid")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_1")
    convolution2dLayer([1 3],filter*8,"Name","conv_8","Padding","same")
    batchNormalizationLayer("Name","batchnorm_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition")
    convolution2dLayer([1 3],1,"Name","conv_4","Padding","same")
    swishLayer("Name","swish_3")
    averagePooling2dLayer([1 3],"Name","avgpool2d","Padding","same")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
lgraph = addLayers(lgraph,tempLayers);

lgraph = connectLayers(lgraph,"swish_1_1_1","conv_1_1");
lgraph = connectLayers(lgraph,"swish_1_1_1","gapool_1_1");
lgraph = connectLayers(lgraph,"swish_1_5","multiplication_3/in1");
lgraph = connectLayers(lgraph,"sigmoid_1_1","multiplication_3/in2");
lgraph = connectLayers(lgraph,"swish_1_1","conv_1");
lgraph = connectLayers(lgraph,"swish_1_1","gapool_1");
lgraph = connectLayers(lgraph,"swish_1","multiplication/in1");
lgraph = connectLayers(lgraph,"sigmoid_1","multiplication/in2");
lgraph = connectLayers(lgraph,"swish_1_4","conv_10");
lgraph = connectLayers(lgraph,"swish_1_4","gapool_2");
lgraph = connectLayers(lgraph,"swish_1_3","multiplication_2/in1");
lgraph = connectLayers(lgraph,"sigmoid_2","multiplication_2/in2");
lgraph = connectLayers(lgraph,"batchnorm_2","conv_6");
lgraph = connectLayers(lgraph,"batchnorm_2","addition/in2");
lgraph = connectLayers(lgraph,"swish_1_2","gapool");
lgraph = connectLayers(lgraph,"swish_1_2","multiplication_1/in1");
lgraph = connectLayers(lgraph,"sigmoid","multiplication_1/in2");
lgraph = connectLayers(lgraph,"batchnorm_5","addition/in1");

A ResNet-style 1D-CNN (RNet)

RNet consists of 3 residual blocks. Its structure is leaner than ENet's, with a smaller model capacity; in my experience its performance is the best all-rounder. Parameter count: 33.7K.

function lgraph=creatCNN2D_ResNet(inputsize)
lgraph = layerGraph();
filter=16;

tempLayers = [
    imageInputLayer(inputsize,"Name","imageinput")
    convolution2dLayer([1 3],filter,"Name","conv","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm")
    reluLayer("Name","relu")
    maxPooling2dLayer([1 3],"Name","maxpool","Padding","same","Stride",[1 2])];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter,"Name","conv_1","Padding","same")
    batchNormalizationLayer("Name","batchnorm_1")
    reluLayer("Name","relu_1")
    convolution2dLayer([1 3],filter,"Name","conv_2","Padding","same")
    batchNormalizationLayer("Name","batchnorm_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition")
    reluLayer("Name","relu_3")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_3","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_3")
    reluLayer("Name","relu_2")
    convolution2dLayer([1 3],filter*2,"Name","conv_4","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_8","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_8")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition_1")
    reluLayer("Name","relu_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_5","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_5")
    reluLayer("Name","relu_4")
    convolution2dLayer([1 3],filter*4,"Name","conv_6","Padding","same")
    batchNormalizationLayer("Name","batchnorm_6")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_7","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_7")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition_2")
    reluLayer("Name","res3a_relu")
    globalMaxPooling2dLayer("Name","gmpool")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
lgraph = addLayers(lgraph,tempLayers);

lgraph = connectLayers(lgraph,"maxpool","conv_1");
lgraph = connectLayers(lgraph,"maxpool","addition/in2");
lgraph = connectLayers(lgraph,"batchnorm_2","addition/in1");
lgraph = connectLayers(lgraph,"relu_3","conv_3");
lgraph = connectLayers(lgraph,"relu_3","conv_8");
lgraph = connectLayers(lgraph,"batchnorm_4","addition_1/in1");
lgraph = connectLayers(lgraph,"batchnorm_8","addition_1/in2");
lgraph = connectLayers(lgraph,"relu_5","conv_5");
lgraph = connectLayers(lgraph,"relu_5","conv_7");
lgraph = connectLayers(lgraph,"batchnorm_6","addition_2/in1");
lgraph = connectLayers(lgraph,"batchnorm_7","addition_2/in2");

Schematic diagrams of the ENet and RNet architectures
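If the schematic images are unavailable, MATLAB can draw the two layer graphs directly (a sketch, assuming the two constructor functions above are on the path):

```matlab
% Visualize the layer graphs built by the constructor functions above.
lgraphE = creatCNN2D_EffiPlus2([1 1293 1]);
lgraphR = creatCNN2D_ResNet([1 1293 1]);
figure; plot(lgraphE); title('ENet')
figure; plot(lgraphR); title('RNet')
```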

Training code and a worked example:

Training code

Using RNet, we train a regression model on samples of length 1293. The code is as follows:

clear all
load("TestData2.mat");
% data split (optional Kennard-Stone):
%[AT,AP]=ks(Alltrain,588);
num_div=1;
% load the pre-split data directly
[numsample,sampleSize]=size(AT);
for i=1:numsample
    XTrain(:,:,1,i)=AT(i,1:end-num_div);
    YTrain(i,1)=AT(i,end);
end
[numtest,~]=size(AP);
for i=1:numtest
    XTest(:,:,1,i)=AP(i,1:end-num_div);
    YTest(i,1)=AP(i,end);
end
%Ytrain=inputData(:,end);

figure
histogram(YTrain)
axis tight
ylabel('Counts')
xlabel('TDS')

options = trainingOptions('adam', ...
    'MaxEpochs',150, ...
    'MiniBatchSize',64, ...
    'InitialLearnRate',0.008, ...
    'GradientThreshold',1, ...
    'Verbose',false, ...
    'Plots','training-progress', ...
    'ValidationData',{XTest,YTest});

layerN = creatCNN2D_ResNet([1,1293,1]); % build the network; swap in another constructor function as needed
[Net, traininfo] = trainNetwork(XTrain,YTrain,layerN,options);

YPredicted = predict(Net,XTest);
predictionError = YTest - YPredicted;
squares = predictionError.^2;
rmse = sqrt(mean(squares))
[R,P] = corrcoef(YTest,YPredicted)
R2 = R(1,2)^2;

scatter(YPredicted,YTest,'+')
xlabel("Predicted Value")
ylabel("True Value")
hold on
plot([0 2000],[0 2000],'r--')

The training-data input looks as follows; the last column is the target value:

The training process looks like this:

Training data sharing

    Source data: TestData2.mat

Link: https://pan.baidu.com/s/1B1o2xB4aUFOFLzZbwT-7aw?pwd=1xe5
Extraction code: 1xe5
 

Training tips

    In my experience, VNet has the simplest structure but the weakest overall performance. For data of length 800-3000, the smaller-capacity RNet tends to outperform ENet; for one-dimensional data longer than 3000, ENet does better.

    On hyperparameter design: first, a mini-batch size below 64 tends to work slightly better and gives more reliable final results, and 1D-CNNs train quickly anyway. Second, unlike images, one-dimensional data often carries high numerical precision (images are typically uint8 or uint16), so the learning rate should not be too high or performance degrades. The rate that has worked best for me is 0.008; overall anything between 0.0005 and 0.015 is fine, while above 0.05 results start to deteriorate.

Other helper functions

Kennard-Stone (KS) data splitting

The Kennard-Stone (KS) method is a widely used dataset-splitting technique, particularly in fields such as chemometrics. Its core idea is to select training samples that are uniformly distributed across the sample space.

function [XSelected,XRest,vSelectedRowIndex]=ks(X,Num) % Num is typically about two-thirds of the sample count
%  ks selects the samples XSelected which are uniformly distributed in the experimental data X's space
%  Input
%         X:   the matrix of the sample spectra
%         Num: the number of sample spectra you want to select
%  Output
%         XSelected:         the sample spectra selected from X
%         XRest:             the sample spectra remaining in X after selection
%         vSelectedRowIndex: the row indices of the selected samples in the X matrix
%  Programmer: zhimin zhang @ central south university on oct 28, 2007

% kennard-stone step one: pick the two most distant samples
[nRow,nCol]=size(X);            % obtain the size of the X matrix
mDistance=zeros(nRow,nRow);     % matrix for distance storage
vAllofSample=1:nRow;
for i=1:nRow-1
    vRowX=X(i,:);               % obtain a row of X
    for j=i+1:nRow
        vRowX1=X(j,:);          % obtain another row of X
        mDistance(i,j)=norm(vRowX-vRowX1); % Euclidean distance
    end
end
[vMax,vIndexOfmDistance]=max(mDistance);
[nMax,nIndexofvMax]=max(vMax);
vSelectedSample(1)=nIndexofvMax;
vSelectedSample(2)=vIndexOfmDistance(nIndexofvMax);

% kennard-stone step two: repeatedly add the unselected sample whose
% minimum distance to the selected set is largest
for i=3:Num
    vNotSelectedSample=setdiff(vAllofSample,vSelectedSample);
    vMinDistance=zeros(1,nRow-i+1);
    for j=1:(nRow-i+1)
        nIndexofNotSelected=vNotSelectedSample(j);
        vDistanceNew=zeros(1,i-1);
        for k=1:(i-1)
            nIndexofSelected=vSelectedSample(k);
            if(nIndexofSelected<=nIndexofNotSelected)
                vDistanceNew(k)=mDistance(nIndexofSelected,nIndexofNotSelected);
            else
                vDistanceNew(k)=mDistance(nIndexofNotSelected,nIndexofSelected);
            end
        end
        vMinDistance(j)=min(vDistanceNew);
    end
    [nUseless,nIndexofvMinDistance]=max(vMinDistance);
    vSelectedSample(i)=vNotSelectedSample(nIndexofvMinDistance);
end

% export the result
vSelectedRowIndex=vSelectedSample;
for i=1:length(vSelectedSample)
    XSelected(i,:)=X(vSelectedSample(i),:);
end
vNotSelectedSample=setdiff(vAllofSample,vSelectedSample);
for i=1:length(vNotSelectedSample)
    XRest(i,:)=X(vNotSelectedSample(i),:);
end

References

A convolutional neural network method for UV-visible spectral water-quality classification (wanfangdata.com.cn)

Accurate inversion of cotton-field soil salinity using spectroscopy combined with moisture correction and sample augmentation (tcsae.org)
