Learning Linux Systematically - ELK Log Collection System

ELK log collection cluster lab

Lab environment

Role/Hostname    IP               Interface
httpd            192.168.31.50    ens33
node1            192.168.31.51    ens33
node2            192.168.31.53    ens33

Environment configuration

Set each host's IP address to the static IP in the topology above and change the hostnames.

# httpd
[root@localhost ~]# hostnamectl set-hostname httpd
[root@localhost ~]# bash
[root@httpd ~]#

# node1
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
[root@node1 ~]# vim /etc/hosts
192.168.31.51   node1
192.168.31.53   node2

# node2
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash
[root@node2 ~]# vim /etc/hosts
192.168.31.51   node1
192.168.31.53   node2
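
After editing /etc/hosts on both nodes, a quick sanity check (assuming the entries above are in place) is to make sure the names resolve:

# run on node1 and node2; both names should answer from the addresses in the table
ping -c 2 node1
ping -c 2 node2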

Install Elasticsearch

#node1
[root@node1 ~]# ls
elk软件包  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# mv elk软件包 elk
[root@node1 ~]# ls
elk  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# cd elk
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  node-v8.2.1.tar.gz
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 elk]# rpm -ivh elasticsearch-5.5.0.rpm 
warning: elasticsearch-5.5.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:5.5.0-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service

# node2: install elasticsearch-5.5.0.rpm in the same way

Configuration

node1

vim /etc/elasticsearch/elasticsearch.yml

17 cluster.name: my-elk-cluster                           // cluster name
23 node.name: node1                                       // node name
33 path.data: /var/lib/elasticsearch                      // data directory
37 path.logs: /var/log/elasticsearch/                     // log directory
43 bootstrap.memory_lock: false                           // do not lock memory at startup
55 network.host: 0.0.0.0                                  // bind address; 0.0.0.0 means all addresses
59 http.port: 9200                                        // listening port, 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]   // cluster discovery via unicast

node2

17 cluster.name: my-elk-cluster                           // cluster name
23 node.name: node2                                       // node name
33 path.data: /var/lib/elasticsearch                      // data directory
37 path.logs: /var/log/elasticsearch/                     // log directory
43 bootstrap.memory_lock: false                           // do not lock memory at startup
55 network.host: 0.0.0.0                                  // bind address; 0.0.0.0 means all addresses
59 http.port: 9200                                        // listening port, 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]   // cluster discovery via unicast
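
After both files are edited, restart Elasticsearch on each node (systemctl restart elasticsearch.service) and check the cluster from either node. This is only a sanity-check sketch, assuming the services came up with the settings above; a healthy two-node cluster named my-elk-cluster should be reported:

curl http://192.168.31.51:9200/_cluster/health?pretty
curl http://192.168.31.53:9200/_cat/nodes?v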

Install the elasticsearch-head plugin on node1

Change into the elk directory

# installing the plugin; the compilation takes a long time
[root@node1 ~]# cd elk/
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  phantomjs-2.1.1-linux-x86_64.tar.bz2
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       node-v8.2.1.tar.gz
[root@node1 elk]# tar xf node-v8.2.1.tar.gz
[root@node1 elk]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure && make && make install
[root@node1 node-v8.2.1]# cd ~/elk
[root@node1 elk]# tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2 
[root@node1 elk]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# ls
phantomjs
[root@node1 bin]# cp phantomjs /usr/local/bin/
[root@node1 bin]# cd ~/elk/
[root@node1 elk]# tar xf elasticsearch-head.tar.gz 
[root@node1 elk]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install
npm WARN deprecated fsevents@1.2.13: The v1 package contains DANGEROUS / INSECURE binaries. Upgrade to safe fsevents v2
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/karma/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
up to date in 3.536s

Modify the Elasticsearch configuration file

[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
84 # ---------------------------------- Various -----------------------------------
85 #
86 # Require explicit names when deleting indices:
87 #
88 #action.destructive_requires_name: true
89 http.cors.enabled: true               // enable cross-origin access support, default is false
90 http.cors.allow-origin: "*"           // domains allowed for cross-origin access
[root@node1 ~]# systemctl restart elasticsearch.service

# start elasticsearch-head
cd /root/elk/elasticsearch-head
npm run start &
# check the listener
netstat -anput | grep :9100
# access in a browser
http://192.168.31.51:9100
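
If the head page does not load, a quick command-line check (assuming the addresses above) is to confirm that both the plugin and Elasticsearch are answering:

curl -I http://192.168.31.51:9100
curl http://192.168.31.51:9200/_cat/indices?v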

Install Logstash on the node1 server

[root@node1 elk]# rpm -ivh logstash-5.5.1.rpm 
warning: logstash-5.5.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
package logstash-1:5.5.1-1.noarch is already installed
# start the service and create a symlink to the logstash binary
[root@node1 elk]# systemctl start logstash.service
[root@node1 elk]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
# Test 1: standard input to standard output
[root@node1 elk]# logstash -e 'input{ stdin{} }output { stdout{} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
16:03:50.250 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
16:03:50.256 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
16:03:50.330 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"9ba08544-a7a7-4706-a3cd-2e2ca163548d", :path=>"/usr/share/logstash/data/uuid"}
16:03:50.584 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:03:50.739 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
16:03:50.893 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:04:32.838 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:04:32.855 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}
# Test 2: standard output with the rubydebug codec
[root@node1 elk]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug }}'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
16:46:23.975 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
The stdin plugin is now waiting for input:
16:46:24.014 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
16:46:24.081 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:29.970 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:46:29.975 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}
# Test 3: write standard input to Elasticsearch
[root@node1 elk]# logstash -e 'input { stdin{} } output { elasticsearch{ hosts=>["192.168.31.51:9200"]} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
16:46:55.951 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.31.51:9200/]}}
16:46:55.955 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.31.51:9200/, :path=>"/"}
16:46:56.049 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x3a106333>}
16:46:56.068 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
16:46:56.204 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
16:46:56.233 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Installing elasticsearch template to _template/logstash
16:46:56.429 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x19aeba5c>]}
16:46:56.432 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:46:56.461 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
16:46:56.561 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:57.638 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:46:57.658 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}

Logstash log collection configuration file format (stored by default in /etc/logstash/conf.d)

A Logstash configuration file consists of three basic sections: input, output, and filter (used as needed). The standard configuration file format is:

input {...}    # input section

filter {...}   # filter section (optional)

output {...}   # output section

Within each section you can also specify more than one source or plugin. For example, to read from two log source files:

input {
    file { path => "/var/log/messages"           type => "syslog" }
    file { path => "/var/log/apache/access.log"  type => "apache" }
}
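
The filter section follows the same pattern. As an illustrative sketch that is not part of this lab, a grok filter (a standard Logstash filter plugin) could parse the apache source into structured fields before it reaches the output:

filter {
    if [type] == "apache" {
        grok {
            # COMBINEDAPACHELOG is a pattern shipped with the grok plugin
            match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
    }
}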

Example: collecting system log messages with Logstash

[root@node1 conf.d]# chmod o+r /var/log/messages
[root@node1 conf.d]# vim /etc/logstash/conf.d/system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@node1 conf.d]# systemctl restart logstash.service 
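
After Logstash restarts with this pipeline, a simple check (assuming Elasticsearch is reachable at 192.168.31.51:9200) is to list the indices on node1; an index named system-<date> should appear, and it is also visible in elasticsearch-head on port 9100:

curl 'http://192.168.31.51:9200/_cat/indices?v' | grep system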

Install Kibana on node1

cd ~/elk

[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm 
warning: kibana-5.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-5.5.1-1                   ################################# [100%]
[root@node1 elk]# vim /etc/kibana/kibana.yml
 2 server.port: 5601                                   // port Kibana listens on
 7 server.host: "0.0.0.0"                              // address Kibana listens on
21 elasticsearch.url: "http://192.168.31.51:9200"      // connection to Elasticsearch
30 kibana.index: ".kibana"                             // index Kibana creates in Elasticsearch
[root@node1 elk]# systemctl start kibana.service
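
Before opening the browser, it is worth confirming that Kibana is listening on the configured port; this mirrors the netstat check used for elasticsearch-head above:

netstat -anput | grep :5601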

Access Kibana

Open Kibana on port 5601 of node1 (http://192.168.31.51:5601). On first access you need to create an index pattern; add the index created earlier: system-*

Enterprise case

Collect httpd access log information

Install Logstash on the httpd server, following the same installation steps as above.

On the httpd server, Logstash acts as an agent (collector); it does not need to run as a permanent service, since the pipeline is started directly with -f.

Write the httpd log collection configuration file

[root@httpd ~]# yum install -y httpd
[root@httpd ~]# systemctl start httpd
[root@httpd ~]# systemctl start logstash
[root@httpd ~]# vim /etc/logstash/conf.d/httpd.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        type => "access"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "httpd-%{+YYYY.MM.dd}"
    }
}
[root@httpd ~]# logstash -f /etc/logstash/conf.d/httpd.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
21:29:34.272 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.31.51:9200/]}}
21:29:34.275 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.31.51:9200/, :path=>"/"}
21:29:34.400 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x1c254b0a>}
21:29:34.423 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
21:29:34.579 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
21:29:34.585 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x3b483278>]}
21:29:34.588 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
21:29:34.845 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
21:29:34.921 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
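With the pipeline running in the foreground, new entries written to /var/log/httpd/access_log are shipped as they appear. A simple way to generate and verify a few events (the client request is just an example):

# request the default page so httpd writes an access-log entry
curl http://192.168.31.50/
# confirm on node1 that the httpd-* index was created
curl 'http://192.168.31.51:9200/_cat/indices?v' | grep httpd

The new index can then be added in Kibana as an index pattern httpd-*, just like system-* earlier.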
