MATLAB: Using a 1D-CNN (one-dimensional convolutional neural network) to analyze hyperspectral curves or time-series data

Introduction to the 1D-CNN:

A 1D-CNN (one-dimensional convolutional neural network) is a type of convolutional neural network designed to process one-dimensional sequence data. The architecture typically alternates convolutional and pooling layers, with fully connected layers at the end mapping the extracted features to the output.

The main components and characteristics of a 1D-CNN are:

  1. Input layer: receives the one-dimensional sequence as the model's input.
  2. Convolutional layers: slide a set of trainable kernels over the input to extract features. Convolution captures local information effectively, picking up local patterns in the input sequence.
  3. Activation functions: apply a nonlinear transformation to the convolutional output, increasing the model's expressive power.
  4. Pooling layers: downsample the convolutional output to reduce computation while improving the model's robustness and generalization.
  5. Fully connected layers: map the pooled features to the model's output, typically for classification or regression tasks.

When using a 1D-CNN, several hyperparameters must be set, such as the kernel size, the number of convolutional layers, the pooling scheme, and the choice of activation function.
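As a minimal, framework-free illustration of the arithmetic the convolution and pooling layers above perform, here is a NumPy sketch; the input values and the difference kernel are made up purely for demonstration:

```python
import numpy as np

def conv1d(x, kernel):
    """'Valid' 1-D convolution: slide the kernel over x and take dot products."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def max_pool1d(x, size=2):
    """Non-overlapping max pooling: shrinks the sequence by `size` (truncating the tail)."""
    n = len(x) - len(x) % size
    return x[:n].reshape(-1, size).max(axis=1)

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 1.0, 0.5, 2.5])
k = np.array([1.0, -1.0])        # a difference kernel: responds to local changes
features = conv1d(x, k)          # length 8 - 2 + 1 = 7
pooled = max_pool1d(features)    # length 3
```

A trained network learns the kernel values instead of fixing them; stacking several such conv/pool stages is what gives the 1D-CNN its growing receptive field over long sequences.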

Comparison with traditional machine learning:

First, a 1D-CNN is a deep learning model that uses convolutional layers to extract features automatically from one-dimensional sequence data (audio, text, and so on). This differs from traditional machine learning, where features usually have to be designed and selected by hand. Automatic feature extraction reduces manual feature-engineering effort and can discover richer feature representations. Second, a 1D-CNN captures local relationships in sequence data better: the convolution slides a fixed-size window over the input to extract local features, which is why 1D-CNNs excel at speech recognition, natural language processing, and time-series forecasting. Traditional models such as support vector machines (SVMs) or decision trees generally lack this ability to model local structure.

Note that on small data scales, e.g. only 100-odd input variables, a 1D-CNN has no advantage over traditional machine learning models; its performance is generally indistinguishable from theirs. Because convolution both extracts and compresses the target features, the longer the data (the more variables), the greater the 1D-CNN's advantage, which is why it performs notably well in time-series regression, hyperspectral analysis, stock prediction, and audio analysis. Also, 1D-CNN regression and classification demand a fairly large sample size: the convolutional structure is sensitive to noise, and with a small dataset severe overfitting is very likely. Around 800+ samples is recommended for good results.

Three custom 1D-CNNs with different architectures

VGG-based 1D-CNN (VNet)

VNet, designed on a VGG backbone, follows the optimized structure of Chen Qing et al. It uses a kernel size of 4 and six convolutional layers, and inserts a dropout layer with rate 0.3 after each average-pooling layer to prevent overfitting. Parameter count: 342K.

MATLAB construction code:

function layers=creatCNN2D_VGG(inputsize)
filter=16;
layers = [
    imageInputLayer([inputsize],"Name","imageinput")
    convolution2dLayer([1 4],filter,"Name","conv","Padding","same")
    convolution2dLayer([1 4],filter,"Name","conv_1","Padding","same")
    maxPooling2dLayer([1 2],"Name","maxpool","Padding","same","Stride",[1 2])
    convolution2dLayer([1 4],filter*2,"Name","conv_2","Padding","same")
    convolution2dLayer([1 4],filter*2,"Name","conv_3","Padding","same")
    maxPooling2dLayer([1 2],"Name","maxpool_1","Padding","same","Stride",[1 2])
    fullyConnectedLayer(filter*8,"Name","fc")
    fullyConnectedLayer(1,"Name","fc_1")
    regressionLayer("Name","regressionoutput")];

EfficientNet-based 1D-CNN (ENet)

ENet uses the Swish activation function and introduces skip connections and an SE (squeeze-and-excitation) attention mechanism. It not only reaches greater convolutional depth effectively but also perceives data features along the channel dimension, giving it an edge when the data scale is large. Parameter count: 170.4K.
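The SE attention described here (global average pool, channel bottleneck, sigmoid gate, channel-wise multiply) can be sketched in a few lines of NumPy. The weights `w1`/`w2` below are random stand-ins for the two 1×1 convolutions, and the reduction to 2 channels is illustrative, not the network's actual ratio:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def swish(z):
    return z * sigmoid(z)

def se_block(features, w1, w2):
    """Squeeze-and-excitation over a (length, channels) feature map.
    w1 (channels x reduced) and w2 (reduced x channels) stand in for the two 1x1 convs."""
    squeeze = features.mean(axis=0)             # global average pool -> one value per channel
    excite = sigmoid(swish(squeeze @ w1) @ w2)  # bottleneck + gate, each weight in (0, 1)
    return features * excite                    # rescale every channel by its gate

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))   # 16 positions, 8 channels
w1 = rng.normal(size=(8, 2))       # squeeze 8 channels down to 2
w2 = rng.normal(size=(2, 8))       # expand back to 8 gates
out = se_block(feats, w1, w2)      # same shape as feats, channels reweighted
```

Because every gate lies in (0, 1), the block can only attenuate channels, which is how it learns to emphasize informative bands relative to the rest.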

MATLAB construction code:

function lgraph=creatCNN2D_EffiPlus2(inputsize)
filter=8;
lgraph = layerGraph();

tempLayers = [
    imageInputLayer([inputsize],"Name","imageinput")
    convolution2dLayer([1 3],filter,"Name","conv_11","Padding","same","Stride",[1 2]) %'DilationFactor',[1,2]
    batchNormalizationLayer("Name","batchnorm_8")
    swishLayer("Name","swish_1_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter,"Name","conv_1_1","Padding","same","Stride",[1 1])
    batchNormalizationLayer("Name","batchnorm_1_1")
    swishLayer("Name","swish_1_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_1_1")
    convolution2dLayer([1 1],2,"Name","conv_2_1_1","Padding","same")
    swishLayer("Name","swish_2_1_1")
    convolution2dLayer([1 1],filter,"Name","conv_3_1_1","Padding","same")
    sigmoidLayer("Name","sigmoid_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_3")
    convolution2dLayer([1 3],filter*2,"Name","conv","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm")
    swishLayer("Name","swish_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_1","Padding","same","Stride",[1 1])
    batchNormalizationLayer("Name","batchnorm_1")
    swishLayer("Name","swish_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_1")
    convolution2dLayer([1 1],4,"Name","conv_2_1","Padding","same")
    swishLayer("Name","swish_2_1")
    convolution2dLayer([1 1],filter*2,"Name","conv_3_1","Padding","same")
    sigmoidLayer("Name","sigmoid_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication")
    convolution2dLayer([1 3],filter*4,"Name","conv_9","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_6")
    swishLayer("Name","swish_1_4")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_10","Padding","same","Stride",[1 1])
    batchNormalizationLayer("Name","batchnorm_7")
    swishLayer("Name","swish_1_3")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_2")
    convolution2dLayer([1 1],8,"Name","conv_2_2","Padding","same")
    swishLayer("Name","swish_2_2")
    convolution2dLayer([1 1],filter*4,"Name","conv_3_2","Padding","same")
    sigmoidLayer("Name","sigmoid_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_2")
    convolution2dLayer([1 3],filter*8,"Name","conv_5","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 1],filter*8,"Name","conv_6","Padding","same")
    batchNormalizationLayer("Name","batchnorm_3")
    swishLayer("Name","swish")
    convolution2dLayer([1 3],filter*8,"Name","conv_7","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")
    swishLayer("Name","swish_1_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool")
    convolution2dLayer([1 1],12,"Name","conv_2","Padding","same")
    swishLayer("Name","swish_2")
    convolution2dLayer([1 1],filter*8,"Name","conv_3","Padding","same")
    sigmoidLayer("Name","sigmoid")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_1")
    convolution2dLayer([1 3],filter*8,"Name","conv_8","Padding","same")
    batchNormalizationLayer("Name","batchnorm_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition")
    convolution2dLayer([1 3],1,"Name","conv_4","Padding","same")
    swishLayer("Name","swish_3")
    averagePooling2dLayer([1 3],"Name","avgpool2d","Padding","same")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
lgraph = addLayers(lgraph,tempLayers);

lgraph = connectLayers(lgraph,"swish_1_1_1","conv_1_1");
lgraph = connectLayers(lgraph,"swish_1_1_1","gapool_1_1");
lgraph = connectLayers(lgraph,"swish_1_5","multiplication_3/in1");
lgraph = connectLayers(lgraph,"sigmoid_1_1","multiplication_3/in2");
lgraph = connectLayers(lgraph,"swish_1_1","conv_1");
lgraph = connectLayers(lgraph,"swish_1_1","gapool_1");
lgraph = connectLayers(lgraph,"swish_1","multiplication/in1");
lgraph = connectLayers(lgraph,"sigmoid_1","multiplication/in2");
lgraph = connectLayers(lgraph,"swish_1_4","conv_10");
lgraph = connectLayers(lgraph,"swish_1_4","gapool_2");
lgraph = connectLayers(lgraph,"swish_1_3","multiplication_2/in1");
lgraph = connectLayers(lgraph,"sigmoid_2","multiplication_2/in2");
lgraph = connectLayers(lgraph,"batchnorm_2","conv_6");
lgraph = connectLayers(lgraph,"batchnorm_2","addition/in2");
lgraph = connectLayers(lgraph,"swish_1_2","gapool");
lgraph = connectLayers(lgraph,"swish_1_2","multiplication_1/in1");
lgraph = connectLayers(lgraph,"sigmoid","multiplication_1/in2");
lgraph = connectLayers(lgraph,"batchnorm_5","addition/in1");

ResNet-based 1D-CNN (RNet)

RNet is built from three residual modules. Its structure is leaner than ENet's, with smaller model capacity; in my experience its performance is the most well-rounded. Parameter count: 33.7K.
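The residual idea underlying RNet reduces to one line: add the block's input back onto its transformed output before the activation, so each module only has to learn a correction. A toy NumPy sketch, where the `transform` lambda is a stand-in for the conv/batch-norm stack:

```python
import numpy as np

def residual_block(x, transform):
    """The core ResNet operation: ReLU(F(x) + x)."""
    return np.maximum(transform(x) + x, 0)  # skip connection, then ReLU

x = np.array([1.0, -2.0, 3.0])
out = residual_block(x, lambda v: 0.5 * v)  # toy stand-in for conv + batchnorm
```

When the main path changes the channel count or stride (as `conv_8`/`conv_7` do in the code below), the shortcut gets its own strided 1×something conv so the two tensors still add element-wise.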

function lgraph=creatCNN2D_ResNet(inputsize)
lgraph = layerGraph();
filter=16;

tempLayers = [
    imageInputLayer([inputsize],"Name","imageinput")
    convolution2dLayer([1 3],filter,"Name","conv","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm")
    reluLayer("Name","relu")
    maxPooling2dLayer([1 3],"Name","maxpool","Padding","same","Stride",[1 2])];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter,"Name","conv_1","Padding","same")
    batchNormalizationLayer("Name","batchnorm_1")
    reluLayer("Name","relu_1")
    convolution2dLayer([1 3],filter,"Name","conv_2","Padding","same")
    batchNormalizationLayer("Name","batchnorm_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition")
    reluLayer("Name","relu_3")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_3","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_3")
    reluLayer("Name","relu_2")
    convolution2dLayer([1 3],filter*2,"Name","conv_4","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_8","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_8")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition_1")
    reluLayer("Name","relu_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_5","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_5")
    reluLayer("Name","relu_4")
    convolution2dLayer([1 3],filter*4,"Name","conv_6","Padding","same")
    batchNormalizationLayer("Name","batchnorm_6")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_7","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_7")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition_2")
    reluLayer("Name","res3a_relu")
    globalMaxPooling2dLayer("Name","gmpool")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
lgraph = addLayers(lgraph,tempLayers);

lgraph = connectLayers(lgraph,"maxpool","conv_1");
lgraph = connectLayers(lgraph,"maxpool","addition/in2");
lgraph = connectLayers(lgraph,"batchnorm_2","addition/in1");
lgraph = connectLayers(lgraph,"relu_3","conv_3");
lgraph = connectLayers(lgraph,"relu_3","conv_8");
lgraph = connectLayers(lgraph,"batchnorm_4","addition_1/in1");
lgraph = connectLayers(lgraph,"batchnorm_8","addition_1/in2");
lgraph = connectLayers(lgraph,"relu_5","conv_5");
lgraph = connectLayers(lgraph,"relu_5","conv_7");
lgraph = connectLayers(lgraph,"batchnorm_6","addition_2/in1");
lgraph = connectLayers(lgraph,"batchnorm_7","addition_2/in2");

Schematic diagrams of the ENet and RNet architectures

Training code and a worked example:

Training code

Using RNet, we train on samples of length 1293 for a regression task. The code is as follows:

clear all
load("TestData2.mat");
% data split
%[AT,AP]=ks(Alltrain,588);
num_div=1;
% load the data directly
[numsample,sampleSize]=size(AT);
for i=1:numsample
    XTrain(:,:,1,i)=AT(i,1:end-num_div);
    YTrain(i,1)=AT(i,end);
end
[numtest,~]=size(AP);
for i=1:numtest
    XTest(:,:,1,i)=AP(i,1:end-num_div);
    YTest(i,1)=AP(i,end);
end

figure
histogram(YTrain)
axis tight
ylabel('Counts')
xlabel('TDS')

options = trainingOptions('adam', ...
    'MaxEpochs',150, ...
    'MiniBatchSize',64, ...
    'InitialLearnRate',0.008, ...
    'GradientThreshold',1, ...
    'Verbose',false, ...
    'Plots','training-progress', ...
    'ValidationData',{XTest,YTest});
layerN=creatCNN2D_ResNet([1,1293,1]); % create the network; change the function name to suit your needs
[Net, traininfo] = trainNetwork(XTrain,YTrain,layerN,options);
YPredicted = predict(Net,XTest);

predictionError = YTest - YPredicted;
squares = predictionError.^2;
rmse = sqrt(mean(squares))
[R,P] = corrcoef(YTest,YPredicted)
scatter(YPredicted,YTest,'+')
xlabel("Predicted Value")
ylabel("True Value")
R2=R(1,2)^2;
hold on
plot([0 2000], [0 2000],'r--')
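The evaluation at the end of the script boils down to two metrics: RMSE and the squared Pearson correlation (`R(1,2)^2` from `corrcoef`). A quick NumPy check of those same formulas on made-up toy values:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root-mean-square error, matching sqrt(mean(squares)) in the MATLAB script
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    # squared Pearson correlation, matching R(1,2)^2 from corrcoef
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return r ** 2

y_true = np.array([100.0, 200.0, 300.0, 400.0])  # toy targets
y_pred = np.array([110.0, 190.0, 310.0, 390.0])  # toy predictions, each off by 10
print(rmse(y_true, y_pred))        # 10.0
print(r_squared(y_true, y_pred))   # close to 1: predictions track the targets
```

Note this R² is a correlation measure, not the coefficient of determination; for biased predictors the two can differ, so it is worth also inspecting the predicted-vs-true scatter plot as the script does.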

The training data input is shown below; the last column is the target value:

The training process is shown below:

Training data download

    Source data: TestData2.mat

Link: https://pan.baidu.com/s/1B1o2xB4aUFOFLzZbwT-7aw?pwd=1xe5
Access code: 1xe5
 

Training tips

    In my experience, VNet has the simplest structure but the weakest overall performance. For data of length 800-3000, the smaller-capacity RNet outperforms ENet; for one-dimensional data longer than 3000, ENet performs better.

    On hyperparameter design: first, a mini-batch size below 64 tends to work a bit better and makes the final result more reliable; in any case, a one-dimensional CNN trains quickly. Second, unlike images, one-dimensional data often has high numerical precision (images are usually uint8 or uint16), so the learning rate should not be too high or performance suffers. The best learning rate in my trials was 0.008; broadly, anything from 0.015 down to 0.0005 is fine, while above 0.05 results start to deteriorate.

Other referenced functions

KS data partitioning

The Kennard-Stone (KS) method is commonly used for dataset partitioning, especially in fields such as chemometrics. Its core principle is to ensure that the training-set samples are uniformly distributed in terms of spatial distance.
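The max-min rule behind KS — start from the two farthest points, then repeatedly add the candidate whose nearest already-selected neighbor is farthest away — can be summarized in a short Python sketch on toy 1-D points (a simplified illustration, not the production routine):

```python
import numpy as np

def kennard_stone(X, num):
    """Select `num` row indices of X spread uniformly over the data space (max-min rule)."""
    # pairwise Euclidean distances between all rows
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]            # seed with the two farthest points
    while len(selected) < num:
        rest = [k for k in range(len(X)) if k not in selected]
        # add the point whose nearest selected neighbor is farthest away
        nxt = max(rest, key=lambda k: min(d[k, s] for s in selected))
        selected.append(nxt)
    return selected

X = np.array([[0.0], [1.0], [5.0], [10.0]])  # toy 1-D "spectra"
picked = kennard_stone(X, 3)                 # picks 0 and 10 first, then 5
```

The full MATLAB implementation follows the same two steps, with the selected rows returned as the training set and the remainder as the test set.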

function [XSelected,XRest,vSelectedRowIndex]=ks(X,Num) % Num = number to select, e.g. two-thirds of the samples
%  ks selects the samples XSelected which are uniformly distributed in the experimental data X's space
%  Input
%         X:   the matrix of the sample spectra
%         Num: the number of sample spectra you want to select
%  Output
%         XSelected: the sample spectra selected from X
%         XRest:     the sample spectra remaining in X after selection
%         vSelectedRowIndex: the row indices of the selected samples in the X matrix
%  Programmer: zhimin zhang @ central south university, oct 28, 2007
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% start of the kennard-stone step one
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[nRow,nCol]=size(X); % obtain the size of the X matrix
mDistance=zeros(nRow,nRow); % matrix for distance storage
vAllofSample=1:nRow;
for i=1:nRow-1
    vRowX=X(i,:); % obtain a row of X
    for j=i+1:nRow
        vRowX1=X(j,:); % obtain another row of X
        mDistance(i,j)=norm(vRowX-vRowX1); % calc the Euclidean distance
    end
end
[vMax,vIndexOfmDistance]=max(mDistance);
[nMax,nIndexofvMax]=max(vMax);
vSelectedSample(1)=nIndexofvMax;
vSelectedSample(2)=vIndexOfmDistance(nIndexofvMax);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% end of the kennard-stone step one
% start of the kennard-stone step two
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for i=3:Num
    vNotSelectedSample=setdiff(vAllofSample,vSelectedSample);
    vMinDistance=zeros(1,nRow-i+1);
    for j=1:(nRow-i+1)
        nIndexofNotSelected=vNotSelectedSample(j);
        vDistanceNew=zeros(1,i-1);
        for k=1:(i-1)
            nIndexofSelected=vSelectedSample(k);
            if (nIndexofSelected<=nIndexofNotSelected)
                vDistanceNew(k)=mDistance(nIndexofSelected,nIndexofNotSelected);
            else
                vDistanceNew(k)=mDistance(nIndexofNotSelected,nIndexofSelected);
            end
        end
        vMinDistance(j)=min(vDistanceNew);
    end
    [nUseless,nIndexofvMinDistance]=max(vMinDistance);
    vSelectedSample(i)=vNotSelectedSample(nIndexofvMinDistance);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% end of the kennard-stone step two
% start of exporting the result
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
vSelectedRowIndex=vSelectedSample;
for i=1:length(vSelectedSample)
    XSelected(i,:)=X(vSelectedSample(i),:);
end
vNotSelectedSample=setdiff(vAllofSample,vSelectedSample);
for i=1:length(vNotSelectedSample)
    XRest(i,:)=X(vNotSelectedSample(i),:);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% end of exporting the result
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

References

卷积神经网络的紫外-可见光谱水质分类方法 (wanfangdata.com.cn)

光谱技术结合水分校正与样本增广的棉田土壤盐分精准反演 (tcsae.org)
