Flink Job Optimization Notes

1. Background

After a certain release went live, the daily production job started taking more than three hours to finish, well beyond the estimated time. Looking at the daily scheduled tasks in DolphinScheduler, the data segmentation task in the dwd layer had a serious performance problem: it took close to 40 minutes every day, and the runtime kept growing as data volume increased, so this compute node was the one that needed focused optimization.

2. Improvement Approach and Implementation

The batch computation runs on Flink, so the starting point for the optimization was to inspect the job's execution plan in the Flink History Server, find the nodes that consume the most time, and check whether any node is executed repeatedly because of the SQL logic, driving the runtime up.

[Figure: execution plan of the segmentation job in the Flink History Server]

As the figure shows, the computation fans out into three branches. Judging from the final output of the SQL, there are only two INSERT operations, so at least one of the branches is unnecessary. The next step was to locate the fork point and understand why the job splits into three branches. That requires working through the execution plan step by step: in the UI you can expand each node, read its optimized plan, and work out which step of the SQL it corresponds to. The node that produces the fork is the one that needs the closest look.
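Besides reading the plan in the History Server, the optimized plan can also be printed as text before the job is submitted, using Flink SQL's EXPLAIN statement, which makes it easier to compare plans before and after a SQL change. A minimal sketch, where the sink table dwd_dialogue_segment and the projected columns are hypothetical stand-ins for the real job's final INSERT:

EXPLAIN PLAN FOR
INSERT INTO dwd_dialogue_segment   -- hypothetical sink table, not the job's real one
SELECT date_id, tenant_id, room_id, dialogue_id, msg_id
FROM dialogue_view;                -- stand-in for the job's final view

The printed "Optimized Execution Plan" section should roughly match the node chain (Sort -> Calc -> OverAggregate -> ...) that the History Server displays.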

Sort(orderBy=[tenant_id ASC, room_id ASC, msg_start_time ASC]
) -> 
Calc(select=[__etl_time__, date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type, msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_orig_id, msg_from_nk, msg_from_role, msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, dialogue_id, room_id, operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties, lastMsgEndTime, nextMsgStartTime, is_cut_by_msg_time, is_fit_specific_event, pre_is_fit_specific_event, fit_specific_row,CAST(FROM_UNIXTIME(w0$o0)) AS start_time,CAST(FROM_UNIXTIME(w0$o1)) AS end_time,CAST(w1$o0) AS fit_specific_rows,GenIsFitConsecutiveRowsAndTime(channel_session_type, tenant_id, CAST(w1$o0), CAST(CAST(FROM_UNIXTIME(w0$o0))), CAST(CAST(FROM_UNIXTIME(w0$o1))), is_fit_specific_event) AS is_fit_specific_flag,(is_cut_by_msg_time = _UTF-16LE'1':VARCHAR(2147483647) CHARACTER SET "UTF-16LE") AS $39, // (GenIsFitConsecutiveRowsAndTime(channel_session_type, tenant_id, CAST(w1$o0), CAST(CAST(FROM_UNIXTIME(w0$o0))), CAST(CAST(FROM_UNIXTIME(w0$o1))), is_fit_specific_event) = 1) AS $40]
) -> 
OverAggregate(partitionBy=[tenant_id, room_id],orderBy=[msg_start_time ASC],window#0=[LAG(is_fit_specific_flag) AS w0$o0RANG BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW],select=[__etl_time__, date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type, msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_orig_id, msg_from_nk, msg_from_role, msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, dialogue_id, room_id, operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties, lastMsgEndTime, nextMsgStartTime, is_cut_by_msg_time, is_fit_specific_event, pre_is_fit_specific_event, fit_specific_row,start_time, end_time, fit_specific_rows, is_fit_specific_flag, $39,    // 是否按时间切 $40,    // 当前条是否为 1w0$o0 -> pre_is_fit_specific_flag // 前一条是否满足特殊规则]
) -> (Calc(select=[date_id, tenant_id, channel_session_type, msg_id, msg_start_time, room_id, tags,IF(($39 OR (w0$o0 IS NULL AND $40) OR ((w0$o0 <> is_fit_specific_flag) IS TRUE AND w0$o0 IS NOT NULL)), 1, 0) AS is_cut_flag,CAST(tenant_id) AS $8,CAST(msg_start_time) AS $9,GenCutPointTypeByFeature(channel_session_type, tenant_id, tags) AS $10]) -> OverAggregate(partitionBy=[tenant_id, room_id],orderBy=[msg_start_time ASC],window#0=[COUNT(is_cut_flag) AS w0$o0,$SUM0(is_cut_flag) AS w0$o1RANG BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW],select=[date_id, tenant_id, channel_session_type, msg_id, msg_start_time, room_id, tags, is_cut_flag, $8, -> tenant_id$9, -> msg_start_time$10, -> 特征切分点: START/END/NOw0$o0, -> countw0$o1 -> sum]) -> Calc(select=[CONCAT_WS(_UTF-16LE'-', $8, room_id, date_id, CAST(CASE((w0$o0 > 0:BIGINT), w0$o1, null:INTEGER))) AS dialogue_id1,channel_session_type, tenant_id, msg_id, $9 AS $f4, tags, $10 AS cutPointType]), #####################CREATE TEMPORARY VIEW keep_cutpoint_view ASSELECT dialogue_id1, smoothRes.smoothResultVoMapFROM (SELECT dialogue_id1,smoothCutPoint(channel_session_type, tenant_id, dialogue_id1, msg_id, msg_start_time, tags, cutPointType) AS smoothResFROM gen_cut_type_by_feature_viewGROUP BY dialogue_id1);#####################Calc(select=[__etl_time__, date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type, msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_orig_id, msg_from_nk, msg_from_role, msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, dialogue_id, room_id, operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties, lastMsgEndTime, nextMsgStartTime, is_cut_by_msg_time, is_fit_specific_event, pre_is_fit_specific_event, fit_specific_row, start_time, end_time, fit_specific_rows, is_fit_specific_flag, w0$o0 AS pre_is_fit_specific_flag, CASE(w0$o0 IS NULL, is_fit_specific_flag, (w0$o0 <> is_fit_specific_flag), 1, 0) AS is_cut_by_specific, IF(($39 OR (w0$o0 IS NULL AND $40) OR ((w0$o0 <> is_fit_specific_flag) IS TRUE AND w0$o0 IS NOT NULL)), 1, 0) AS is_cut_flag, IF((IF(($39 OR (w0$o0 IS NULL AND $40) OR ((w0$o0 <> is_fit_specific_flag) IS TRUE AND w0$o0 IS NOT NULL)), 1, 0) = 1), _UTF-16LE'start', null:VARCHAR(2147483647) CHARACTER SET "UTF-16LE") AS $42,CAST(tenant_id) AS $43, GenCutPointTypeByFeature(channel_session_type, tenant_id, tags) AS $44]) -> OverAggregate(partitionBy=[tenant_id, room_id],orderBy=[msg_start_time ASC],window#0=[COUNT(is_cut_flag) AS w0$o0,$SUM0(is_cut_flag) AS w0$o1RANG BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW], select=[__etl_time__, date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type, msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_orig_id, msg_from_nk, msg_from_role, msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, dialogue_id, room_id, operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties, lastMsgEndTime, nextMsgStartTime, is_cut_by_msg_time, is_fit_specific_event, pre_is_fit_specific_event, fit_specific_row, start_time, end_time, fit_specific_rows, is_fit_specific_flag, pre_is_fit_specific_flag, is_cut_by_specific, is_cut_flag, $42, -> cut_point_type$43, $44, w0$o0, w0$o1]) -> Calc(select=[date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type, msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_orig_id, msg_from_nk, msg_from_role, msg_to_ids, 
msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, room_id, operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties, CONCAT_WS(_UTF-16LE'-', $43, room_id, date_id, CAST(CASE((w0$o0 > 0:BIGINT), w0$o1, null:INTEGER))) AS dialogue_id1, $44 AS cutPointType])
)######################
A first round of dialogue_id1 is generated here from message time + the special rules; the feature cut-point matching has been pushed down to run in this same stage.

Bugs & optimization points:
1. The smoothing step uses a GROUP BY and then joins the result back to the main table, which forks the computation flow and causes part of the logic to be executed repeatedly. This part can be done with an OVER aggregation instead. View chain: gen_fit_sprcific_flag_view -> gen_cut_flag_view -> gen_dialogue_id_by_cut_flag_view -> gen_cut_type_by_feature_view -> keep_cutpoint_view
2. The expression
   CASE WHEN pre_is_fit_specific_flag IS NULL THEN is_fit_specific_flag
        WHEN pre_is_fit_specific_flag <> is_fit_specific_flag THEN 1
        WHEN pre_is_fit_specific_flag = is_fit_specific_flag THEN 0
        ELSE 0
   END AS is_cut_by_specific
   has incorrect logic, which causes GenIsFitConsecutiveRowsAndTime to be evaluated repeatedly (see the sketch after this list).
3. IF(is_cut_flag = 1, 'start', CAST(NULL AS STRING)) AS cut_point_type: check whether this column is still needed at all.
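For point 2, one way to stop GenIsFitConsecutiveRowsAndTime from being evaluated several times is to materialize the flag once as a column and derive both the previous value and the cut decision from that column. A minimal sketch of the idea: the view names fit_flag_view and cut_by_specific_view and the source fit_flag_source are hypothetical stand-ins, and the CASE branches simply mirror the original expression (the branch logic itself is the part flagged above as needing a fix).

-- evaluate the UDF exactly once per row and keep the result as a column
CREATE TEMPORARY VIEW fit_flag_view AS            -- hypothetical name
SELECT *,
       GenIsFitConsecutiveRowsAndTime(channel_session_type, tenant_id, fit_specific_rows,
                                      start_time, end_time, is_fit_specific_event) AS is_fit_specific_flag
FROM fit_flag_source;                             -- stand-in for the upstream view exposing these columns

-- downstream expressions reference the materialized column instead of repeating the UDF call
CREATE TEMPORARY VIEW cut_by_specific_view AS     -- hypothetical name
SELECT *,
       CASE WHEN pre_is_fit_specific_flag IS NULL THEN is_fit_specific_flag
            WHEN pre_is_fit_specific_flag <> is_fit_specific_flag THEN 1
            ELSE 0
       END AS is_cut_by_specific
FROM (
    SELECT *,
           LAG(is_fit_specific_flag) OVER (PARTITION BY tenant_id, room_id
                                           ORDER BY msg_start_time) AS pre_is_fit_specific_flag
    FROM fit_flag_view
);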
======================
Sort(orderBy=[dialogue_id1 ASC]
) -> 
SortAggregate(isMerge=[false], groupBy=[dialogue_id1], select=[dialogue_id1, smoothCutPoint(channel_session_type, tenant_id, dialogue_id1, msg_id, $f4, tags, cutPointType) AS smoothRes]
) -> 
Calc(select=[dialogue_id1, smoothRes.smoothResultVoMap AS smoothResultVoMap]
) -> (Correlate(invocation=[GetCutPointBySplit($cor7.smoothResultVoMap)],correlate=[table(GetCutPointBySplit($cor7.smoothResultVoMap))],select=[dialogue_id1,smoothResultVoMap,msgId,cutPointMap],rowType=[RecordType(VARCHAR(2147483647) dialogue_id1, (VARCHAR(2147483647), (VARCHAR(2147483647), VARCHAR(2147483647)) MAP) MAP smoothResultVoMap,VARCHAR(2147483647) msgId,(VARCHAR(2147483647), VARCHAR(2147483647)) MAP cutPointMap)], joinType=[INNER]) -> Calc(select=[dialogue_id1, msgId, ITEM(cutPointMap, _UTF-16LE'isKeep') AS isKeep]),Correlate(invocation=[GetCutPointBySplit($cor9.smoothResultVoMap)],correlate=[table(GetCutPointBySplit($cor9.smoothResultVoMap))],select=[dialogue_id1,smoothResultVoMap,msgId,cutPointMap], rowType=[RecordType(VARCHAR(2147483647) dialogue_id1, (VARCHAR(2147483647), (VARCHAR(2147483647), VARCHAR(2147483647)) MAP) MAP smoothResultVoMap, VARCHAR(2147483647) msgId, (VARCHAR(2147483647), VARCHAR(2147483647)) MAP cutPointMap)], joinType=[INNER]) -> Calc(select=[dialogue_id1, msgId, ITEM(cutPointMap, _UTF-16LE'isKeep') AS isKeep]), Correlate(invocation=[GetCutPointBySplit($cor8.smoothResultVoMap)], correlate=[table(GetCutPointBySplit($cor8.smoothResultVoMap))], select=[dialogue_id1,smoothResultVoMap,msgId,cutPointMap], rowType=[RecordType(VARCHAR(2147483647) dialogue_id1, (VARCHAR(2147483647), (VARCHAR(2147483647), VARCHAR(2147483647)) MAP) MAP smoothResultVoMap, VARCHAR(2147483647) msgId, (VARCHAR(2147483647), VARCHAR(2147483647)) MAP cutPointMap)], joinType=[INNER]) ->Calc(select=[dialogue_id1, msgId, ITEM(cutPointMap, _UTF-16LE'isKeep') AS isKeep])
)######################
Optimization points for the smoothing computation:
1. smoothCutPoint is a major performance problem; rewrite it as a UDAF used in an OVER aggregation and eliminate the GROUP BY + LATERAL TABLE + JOIN (see the registration sketch after this list).
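Both smoothCutPoint and the later dialogue_relevant_udaf are user-defined aggregate functions, so they have to be registered before they can be used in either a GROUP BY or an OVER aggregation. A hedged sketch of the registration; only the function names come from the job, the implementation class names are hypothetical:

CREATE TEMPORARY FUNCTION smoothCutPoint         AS 'com.example.udaf.SmoothCutPointUdaf';    -- hypothetical class
CREATE TEMPORARY FUNCTION dialogue_relevant_udaf AS 'com.example.udaf.DialogueRelevantUdaf';  -- hypothetical class

Switching the call site from GROUP BY to OVER does not change the registration; what changes is the result shape the UDAF exposes (in the optimized SQL below, smooth_result is looked up per msg_id).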

The corresponding SQL is as follows:

-- Mark feature cut points on the data based on the feature configuration
CREATE TEMPORARY VIEW gen_cut_type_by_feature_view AS
SELECT *, GenCutPointTypeByFeature(channel_session_type, tenant_id, tags) AS cutPointType
FROM gen_dialogue_id_by_cut_flag_view;

CREATE TEMPORARY VIEW keep_cutpoint_view AS
SELECT dialogue_id1, smoothRes.smoothResultVoMap
FROM (
    SELECT dialogue_id1,
           smoothCutPoint(channel_session_type, tenant_id, dialogue_id1, msg_id, msg_start_time, tags, cutPointType) AS smoothRes
    FROM gen_cut_type_by_feature_view
    GROUP BY dialogue_id1
);

CREATE TEMPORARY VIEW keep_cutpoint_breakup AS
SELECT dialogue_id1, smoothResultVoMap, msgId, cutPointMap, cutPointMap['isKeep'] AS isKeep
FROM keep_cutpoint_view, LATERAL TABLE(GetCutPointBySplit(smoothResultVoMap)) AS T(msgId, cutPointMap);

CREATE TEMPORARY VIEW keep_cutpoint_join AS
SELECT t1.*, t2.isKeep, IF(t2.isKeep = '0', 'no', t1.cutPointType) AS curCutPointType, msg_start_time
FROM gen_cut_type_by_feature_view t1
LEFT JOIN keep_cutpoint_breakup t2
       ON t1.dialogue_id1 = t2.dialogue_id1 AND t1.msg_id = t2.msgId;

CREATE TEMPORARY VIEW gen_dialogue_id_by_feature_view0 AS
SELECT
    date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type,
    msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_nk, msg_from_orig_id, msg_from_role,
    msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, room_id,
    operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties,
    dialogue_id1, cutPointType, curCutPointType, preCutPointType, isKeep,
    CONCAT_WS('-',
        CAST(tenant_id AS STRING), room_id, date_id,
        CAST(SUM(IF(preCutPointType IS NULL OR preCutPointType = 'end' OR curCutPointType = 'start', 1, 0))
                 OVER (PARTITION BY tenant_id, room_id, date_id ORDER BY msg_start_time) AS STRING)
    ) AS dialogue_id
FROM (
    SELECT
        date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type,
        msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_nk, msg_from_orig_id, msg_from_role,
        msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, room_id,
        operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties,
        dialogue_id1, cutPointType, curCutPointType, isKeep,
        LAG(curCutPointType) OVER (PARTITION BY dialogue_id1 ORDER BY msg_start_time) AS preCutPointType
    FROM keep_cutpoint_join
);

The previous SQL implemented this segment-smoothing logic by first grouping the data by dialogue_id1, using a UDAF to produce the aggregated result, and then joining that result back to the original detail rows via msg_id. This pattern is exactly what creates the fork in the plan: not only is it slow, it also causes upstream compute nodes to be executed repeatedly, pushing the runtime up. The later relevance aggregation followed much the same pattern. Once this was understood, the fix was clear: replace the "join the aggregated result back to the main table" pattern with something more efficient, namely a UDAF evaluated in an OVER aggregation, which eliminates the GROUP BY + LATERAL TABLE + JOIN.
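The core of the rewrite is that a per-group aggregate joined back by key becomes an OVER aggregation, which keeps the detail-row granularity and attaches the aggregated value to every row directly. A minimal sketch of the pattern with hypothetical names (detail_view, group_key, my_agg_udaf), not the job's real objects:

-- before: aggregate per group, then join the result back to the detail rows (detail_view is read twice, plan forks)
SELECT d.*, g.agg_value
FROM detail_view d
LEFT JOIN (
    SELECT group_key, my_agg_udaf(x) AS agg_value
    FROM detail_view
    GROUP BY group_key
) g ON d.group_key = g.group_key;

-- after: the same UDAF evaluated as an OVER aggregation (detail_view is read once, no join)
SELECT d.*,
       my_agg_udaf(x) OVER (PARTITION BY group_key) AS agg_value
FROM detail_view d;

Because the second form scans detail_view only once, the planner no longer splits the pipeline into a detail branch and an aggregate branch.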

The optimized SQL:

-- Mark feature cut points on the data based on the feature configuration
CREATE TEMPORARY VIEW gen_cut_type_by_feature_view AS
SELECT *, GenCutPointTypeByFeature(channel_session_type, tenant_id, tags) AS cutPointType
FROM gen_dialogue_id_by_cut_flag_view;

CREATE TEMPORARY VIEW keep_cutpoint_view AS
SELECT *,
       smooth_result[msg_id]['is_keep'] AS isKeep,
       CASE WHEN smooth_result[msg_id]['is_keep'] = '0' THEN 'no' ELSE cutPointType END AS curCutPointType
FROM (
    SELECT *,
           smoothCutPoint(channel_session_type, tenant_id, dialogue_id1, msg_id, msg_start_time, tags, cutPointType)
               OVER (PARTITION BY dialogue_id1) AS smooth_result
    FROM gen_cut_type_by_feature_view
);

CREATE TEMPORARY VIEW gen_dialogue_id_by_feature_view0 AS
SELECT
    date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type,
    msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_nk, msg_from_orig_id, msg_from_role,
    msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, room_id,
    operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties,
    CONCAT_WS('-',
        CAST(tenant_id AS STRING), room_id, date_id,
        CAST(SUM(CASE WHEN dialogue_id1 <> preDialogueId
                        OR preCutPointType IS NULL
                        OR preCutPointType = 'end'
                        OR curCutPointType = 'start' THEN 1 ELSE 0 END)
                 OVER (PARTITION BY tenant_id, room_id, date_id ORDER BY msg_start_time) AS STRING)
    ) AS dialogue_id
FROM (
    SELECT
        date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type,
        msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_nk, msg_from_orig_id, msg_from_role,
        msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, room_id,
        operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties, dialogue_properties,
        dialogue_id1, cutPointType, curCutPointType,
        LAG(dialogue_id1)    OVER (PARTITION BY tenant_id, room_id, date_id ORDER BY msg_start_time) AS preDialogueId,
        LAG(curCutPointType) OVER (PARTITION BY tenant_id, room_id, date_id ORDER BY msg_start_time) AS preCutPointType
    FROM keep_cutpoint_view
);

The relevance calculation was optimized with the same idea: rewrite it as a UDAF evaluated in an OVER aggregation, so the aggregated result no longer has to be joined back to the original table.

Relevance SQL, before and after:

-- Before: aggregate per dialogue with GROUP BY, then join the result back to the detail rows
CREATE TEMPORARY VIEW dialogue_relevant_view AS
SELECT
    `tenant_id`, `brand_id`, `channel`, `channel_app_id`, `channel_session_type`, `date_id`,
    dialogue_id AS dialogue_id,
    res.relevant_config_version AS relevant_config_version,
    res.relevant_config AS relevant_config,
    res.metrics AS metrics,
    res.dialogue_relevant AS dialogue_relevant
FROM (
    SELECT
        dialogue_relevant_udaf(channel_session_type, tenant_id, msg_id, msg_start_time, msg_end_time, msg_from_role, tags) AS res,
        `tenant_id`, `brand_id`, `channel`, `channel_app_id`, `channel_session_type`, `date_id`, `dialogue_id`
    FROM gen_dialogue_id_by_feature_view
    GROUP BY `tenant_id`, `brand_id`, `channel`, `channel_app_id`, `channel_session_type`, `date_id`, `dialogue_id`
);

CREATE TEMPORARY VIEW dialogue_view_all AS
SELECT
    NOW() AS `__etl_time__`,
    a.date_id, a.tenant_id, a.brand_id, a.channel, a.channel_app_id, a.channel_session_type,
    a.msg_id, a.msg_start_time, a.msg_end_time, a.msg_from_id, a.msg_from_nk, a.msg_from_orig_id, a.msg_from_role,
    a.msg_to_ids, a.msg_to_users, a.msg_type, a.msg_content, a.msg_detail, a.group_chat_info, a.room_id,
    a.operation_flags, a.recording_properties, a.asr_properties, a.metric_properties, a.tags, a.tag_properties,
    map_put(map_put(a.dialogue_properties, 'dialogue_relevant', b.dialogue_relevant), 'relevant_config', b.relevant_config) AS dialogue_properties,
    a.dialogue_id1, a.cutPointType, a.curCutPointType, a.preCutPointType, a.isKeep, a.dialogue_id
FROM gen_dialogue_id_by_feature_view a
LEFT JOIN dialogue_relevant_view b
       ON  a.tenant_id = b.tenant_id
       AND a.brand_id = b.brand_id
       AND a.channel = b.channel
       AND a.channel_app_id = b.channel_app_id
       AND a.channel_session_type = b.channel_session_type
       AND a.dialogue_id = b.dialogue_id;

#####################################

-- After: the same UDAF evaluated as an OVER aggregation, no join back to the detail rows
CREATE TEMPORARY VIEW dialogue_view AS
SELECT
    date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type,
    msg_id, msg_start_time, msg_end_time, msg_from_id, msg_from_nk, msg_from_orig_id, msg_from_role,
    msg_to_ids, msg_to_users, msg_type, msg_content, msg_detail, group_chat_info, room_id,
    operation_flags, recording_properties, asr_properties, metric_properties, tags, tag_properties,
    dialogue_properties, dialogue_id,
    dialogue_relevant_udaf(channel_session_type, tenant_id, msg_id, msg_start_time, msg_end_time, msg_from_role, tags)
        OVER (PARTITION BY `tenant_id`, `brand_id`, `channel`, `channel_app_id`, `channel_session_type`, `date_id`, `dialogue_id`) AS res
FROM gen_dialogue_id_by_feature_view;
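Because res is now a column on every detail row, the old LEFT JOIN in dialogue_view_all is no longer needed; the struct fields can be read directly from res. A hedged sketch of what the downstream projection might look like, reusing the view name dialogue_view_all and the job's own map_put function from the "before" SQL (column list abbreviated, not the exact production view):

CREATE TEMPORARY VIEW dialogue_view_all AS
SELECT
    NOW() AS `__etl_time__`,
    date_id, tenant_id, brand_id, channel, channel_app_id, channel_session_type,
    msg_id, msg_start_time, msg_end_time, room_id, tags,
    -- remaining detail columns omitted for brevity
    -- attach the relevance result to dialogue_properties exactly as the old join did
    map_put(map_put(dialogue_properties, 'dialogue_relevant', res.dialogue_relevant),
            'relevant_config', res.relevant_config) AS dialogue_properties,
    dialogue_id
FROM dialogue_view;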

3. Optimization Results

The optimized execution plan is much cleaner and the job runs noticeably faster: the runtime dropped from nearly 40 minutes to about 7 minutes, a huge improvement.

[Figure: optimized execution plan]
