Stability of offline jobs

Low-level data synchronization scripts

Log tracing, error-keyword extraction, and a restart strategy for failed jobs
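
Before walking through the production scripts, here is a minimal, self-contained sketch of the pattern they all share: run the job, tee its output to a log file, scan the log for known error keywords, and re-run up to a retry limit. All names in it (run_once, MAX_RETRIES, the demo log path, the trimmed keyword list) are illustrative placeholders, not values taken from the scripts below.

#!/bin/bash
# Illustrative sketch only: replace run_once with the real spark2-submit / hive command.
MAX_RETRIES=3
LOG_FILE=/tmp/retry_demo.log

run_once() {
    # The real scripts run spark2-submit here; this stand-in just produces some output.
    echo "pretend job output" 2>&1 | tee "$LOG_FILE"
}

attempt=0
while true
do
    attempt=$((attempt + 1))
    run_once
    failed=0
    # Scan the captured log for any known failure keyword.
    for kw in "SparkContext was shut down" "Failed to send RPC" "org.apache.spark.SparkException"
    do
        if grep -q "$kw" "$LOG_FILE"
        then
            failed=1
            break
        fi
    done
    if [ $failed -eq 0 ]
    then
        echo "job succeeded on attempt $attempt"
        break
    fi
    if [ $attempt -ge $MAX_RETRIES ]
    then
        echo "job still failing after $MAX_RETRIES attempts, giving up"
        exit 1
    fi
    echo "error keyword found in log, retrying"
    sleep 5
done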

Mysql_to_hive.sh

#!/bin/bash

echo "mysql host is" $1
echo "mysql db is" $2
echo "mysql table is" $3
echo "mysql username is" $4
echo "mysql passwd is" $5
echo "hive db is" $6
echo "hive table prefix is" $7
echo "format is" $8
echo "create_time is" $9
echo "update_time is" ${10}
echo "query_begin_date is" ${11}
echo "query_end_date is" ${12}
echo "hive_tables is" ${13}
echo "condition is" ${14}
echo "dropCols is" ${15}host=$1
db=$2
table=$3
username=$4
passwd=$5
hive_db=$6
hive_table_prefix=$7
format=$8
create_time=$9
update_time=${10}
dt=${11}
dt1=${12}
hive_tables=${13}
condition=${14}
dropCols=${15}

s=0
limit_cnts=10

f(){
s=$(($s+1))
echo "function f call count: $s"
if [ $s -gt $limit_cnts ]
then
echo "the cycle times is gt $limit_cnts, exit"
exit 1
fi

query_begin_date=${1}
query_end_date=${2}

WORK_DIR=$(cd "$(dirname "$0")";pwd)
echo "WORK_DIR:"${WORK_DIR}
LOG_PATH="${WORK_DIR}/log/${hive_db}/${query_end_date}"
echo "LOG_PATH:${LOG_PATH}"
mkdir -p ${LOG_PATH}
FILE_LOG="${LOG_PATH}/${table}_to_hive.log"
echo "FILE_LOG:${FILE_LOG}"/opt/cloudera/parcels/SPARK2/bin/spark2-submit \
--class com.mingzhi.common.universal.common_mysql_to_hive \
--master yarn \
--deploy-mode client \
--driver-memory 2g \
--num-executors 3 \
--executor-memory 3G \
--executor-cores 3 \
--conf spark.default.parallelism=200 \
--conf spark.port.maxRetries=1000 \
--conf spark.rpc.numRetries=1000000 \
--conf spark.sql.shuffle.partitions=10 \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.rpc.askTimeout=3600 \
--conf spark.rpc.lookupTimeout=3600 \
--conf spark.network.timeout=3600 \
--conf spark.rpc.io.connectionTimeout=3600 \
/mnt/db_file/jars/common-1.0-SNAPSHOT-jar-with-dependencies.jar \
"$host" "$db" "$table" "$username" "$passwd" "$hive_db" "$hive_table_prefix" "$format" "$create_time" "$update_time" "$query_begin_date" "$query_end_date" "$hive_tables" "$condition" "$dropCols" 2>&1 | tee ${FILE_LOG}while read -r line
do#echo "$line"error1="SparkContext has been shutdown"error2="Failed to send RPC"error3="java.nio.channels.ClosedChannelException"error4="Marking as slave lost"error5="org.apache.spark.SparkException"error6="Exception in thread"error7="SparkContext was shut down"error8="org.apache.spark.sql.AnalysisException"error9="java.util.concurrent.RejectedExecutionException"if [[ ${line} == *${error9}* || ${line} == *${error1}* || ${line} == *${error2}* || ${line} == *${error3}* || ${line} == *${error4}* || ${line} == *${error5}* || ${line} == *${error6}* || ${line} == *${error7}* || ${line} == *${error8}* ]]thenecho "SPARK SQL EXECUTION FAILED ......AND JOB FAILED DATE IS 【${query_begin_date},${query_end_date}】......"#exit 1sleep 1mf "$query_begin_date" "$query_end_date"fi 
done < ${FILE_LOG}}interval_day=5
start_date=${dt}
end_date=${dt1}
while [[ $start_date < $end_date || $start_date = $end_date ]]
do
query_begin_date=`date -d "$start_date" "+%Y-%m-%d"`
query_end_date=`date -d "+${interval_day} day ${start_date}" +%Y-%m-%d`
start_date=`date -d "+${interval_day} day +1 day ${start_date}" +%Y-%m-%d`
if [[ $query_end_date > $end_date ]]
then
query_end_date=$end_date
fi
echo "[this run starts at $query_begin_date, ends at $query_end_date, next run starts at $start_date]"
# run the Spark job
sleep 1s
f "$query_begin_date" "$query_end_date"
done
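
For reference, an invocation might look like the following; every value (host, credentials, database and table names, dates) is a placeholder, and the 15 positional arguments follow the order echoed at the top of the script (host, db, table, username, passwd, hive_db, hive_table_prefix, format, create_time, update_time, dt, dt1, hive_tables, condition, dropCols).

/root/bin/mysql_to_hive.sh 192.168.0.10 source_db some_table some_user some_pwd \
  paascloud '' 'parquet' 'create_time' 'update_time' 2023-01-01 2023-01-31 '' '' ''

With interval_day=5, such a range is processed in 5-day windows (2023-01-01 to 01-06, 01-07 to 01-12, and so on), and any window whose log contains an error keyword is retried up to limit_cnts times.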

Hive_to_mysql.sh

#!/bin/bash

mysql_host=$1
from_db=$2
from_tables=$3
to_db=$4
to_tables=$5
username=$6
passwd=$7
dt=$8
dt1=$9
savemode=${10}
dropCols=${11}

echo "mysql host is $1"
echo "from_db is ${2}"
echo "from tables is $3"
echo "to_db is $4"
echo "to_tables is $5"
echo "username is $6"
echo "passwd is $7"
echo "dt is $8"
echo "dt1 is $9"
echo "savemode is ${10}"
echo "dropCols is ${11}"if [ $8 ];
then
dt=$8
else
dt=`date -d "-1 day" +%F`
fi

if [ $9 ];
then
dt1=$9
else
dt1=`date -d "-1 day" +%F`
fi

if [ ${10} ];
then
savemode=${10}
else
savemode='OverWriteByDt'
fi

echo '==============================================================================='
echo "final dt is $dt"
echo "final dt1 is $dt1"
echo "final savemode is $savemode"
echo "final dropCols is $dropCols"#/mnt/db_file/jars/common-1.0-SNAPSHOT-jar-with-dependencies_202201.jar "${mysql_host}" "${from_db}" "${from_tables}" "${to_db}" "${to_tables}" "$username" "${passwd}" "${dt}" "${dt1}" "${savemode}"s=0
limit_cnts=10f(){s=$(($s+1))echo "函数f被调用次数:$s"if [ $s -gt $limit_cnts ]thenecho "the cycle times is gt $limit_cnts,exit"exit 1fiquery_begin_date=${1}
query_end_date=${2}WORK_DIR=$(cd "$(dirname "$0")";pwd)
echo "WORK_DIR:"${WORK_DIR}
LOG_PATH="${WORK_DIR}/log/${from_db}/${query_end_date}"
echo "LOG_PATH:${LOG_PATH}"
mkdir -p ${LOG_PATH}
FILE_LOG="${LOG_PATH}/${from_tables}_to_mysql.log"
echo "FILE_LOG:${FILE_LOG}"/opt/cloudera/parcels/SPARK2/bin/spark2-submit \
--class com.mingzhi.common.universal.hive_to_mysql \
--master yarn \
--deploy-mode client \
--executor-memory 2G \
--num-executors 2 \
--executor-cores 4 \
--conf spark.dynamicAllocation.maxExecutors=3 \
--conf spark.driver.cores=4 \
--conf spark.driver.memory=2g \
/mnt/db_file/jars/common-1.0-SNAPSHOT-jar-with-dependencies.jar \
"${mysql_host}" "${from_db}" "${from_tables}" "${to_db}" "${to_tables}" "$username" "${passwd}" "${query_begin_date}" "${query_end_date}" "${savemode}" "${dropCols}" 2>&1 | tee ${FILE_LOG}while read -r line
do#echo "$line"error1="SparkContext has been shutdown"error2="Failed to send RPC"error3="java.nio.channels.ClosedChannelException"error4="Deadlock found"error5="org.apache.spark.SparkException"error6="Exception in thread"error7="SparkContext was shut down"error8="org.apache.spark.sql.AnalysisException"if [[ ${line} == *${error1}* || ${line} == *${error2}* || ${line} == *${error3}* || ${line} == *${error4}* || ${line} == *${error5}* || ${line} == *${error6}* || ${line} == *${error7}* || ${line} == *${error8}* ]]thenecho "SPARK SQL EXECUTION FAILED ......AND JOB FAILED DATE IS 【${query_begin_date},${query_end_date}】......"#exit 1sleep 1mf "$query_begin_date" "$query_end_date"fi 
done < ${FILE_LOG}}interval_day=4
start_date=${dt}
end_date=${dt1}

if [[ $start_date == "9999"* ]]
then
interval_day=0
fi

while [[ $start_date < $end_date || $start_date = $end_date ]]
do
query_begin_date=`date -d "$start_date" "+%Y-%m-%d"`
query_end_date=`date -d "+${interval_day} day ${start_date}" +%Y-%m-%d`
start_date=`date -d "+${interval_day} day +1 day ${start_date}" +%Y-%m-%d`
if [[ $query_end_date > $end_date ]]
then
query_end_date=$end_date
fi
echo "[this run starts at $query_begin_date, ends at $query_end_date, next run starts at $start_date]"
# run the Spark job
if [[ $query_begin_date == "10000"* ]]
then
exit
fi
s=0
f "$query_begin_date" "$query_end_date"
done
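
A placeholder invocation (all hosts, credentials, and table names below are made up for illustration); the 11 positional arguments follow the order echoed at the top: mysql_host, from_db, from_tables, to_db, to_tables, username, passwd, dt, dt1, savemode, dropCols.

/root/bin/hive_to_mysql.sh 192.168.0.20 paascloud some_ads_table mz_olap some_ads_table \
  some_user some_pwd 2023-01-01 2023-01-31 'OverWriteByDt' ''

Here interval_day=4, so the range is exported in 4-day windows; a start date beginning with "9999" switches to day-by-day mode, and a computed begin date starting with "10000" stops the loop.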

Es_to_hive.sh


[root@mz-hadoop-01 import]# cat /root/bin/es_to_hive.sh
#!/bin/bash

echo "es_host is" $1
echo "es_indexes is" $2
echo "hive_db is" $3
echo "hive_tables is" $4
echo "create_time is" $5
echo "update_time is" $6
echo "dt is" $7
echo "dt1 is" $8
echo "format is $9"
echo partitions is ${10}

es_host=$1
es_indexes=$2
hive_db=$3
hive_tables=$4
create_time=$5
update_time=$6
dt=$7
dt1=$8
format=$9
partitions=${10}

cnts=0
cnts_limit=5
memory=1
memory_limit=4

f(){
cnts=$(($cnts+1))
memory=$(($memory+1))
if [ $memory -gt $memory_limit ]
then
memory=$memory_limit
fi
echo "function f call count cnts: $cnts and memory is $memory G"
if [ $cnts -gt $cnts_limit ]
then
echo "the cycle times is gt $cnts_limit, exit"
exit 1
fi

query_begin_date=$1
query_end_date=$2

WORK_DIR=$(cd "$(dirname "$0")";pwd)
echo "WORK_DIR:"${WORK_DIR}
LOG_PATH="${WORK_DIR}/log/${hive_db}/${query_end_date}"
echo "LOG_PATH:${LOG_PATH}"
mkdir -p ${LOG_PATH}
FILE_LOG="${LOG_PATH}/${hive_tables}_to_hive.log"
echo "FILE_LOG:${FILE_LOG}"/opt/cloudera/parcels/SPARK2/bin/spark2-submit \
--class com.mingzhi.common.universal.common_es_to_hive \
--master yarn \
--deploy-mode client \
--num-executors 2 \
--executor-memory ${memory}G \
--executor-cores 2 \
--conf spark.default.parallelism=200 \
--conf spark.port.maxRetries=300 \
/mnt/db_file/jars/common-1.0-SNAPSHOT-jar-with-dependencies.jar \
"$es_host" "$es_indexes" "$hive_db" "$hive_tables" "$create_time" "$update_time" "$query_begin_date" "$query_end_date" "$format" "$partitions" 2>&1 | tee ${FILE_LOG}while read -r line
do#echo "$line"error1="SparkContext has been shutdown"error2="Failed to send RPC"error3="java.nio.channels.ClosedChannelException"error4="Marking as slave lost"error5="org.apache.spark.SparkException"error6="Exception in thread"error7="SparkContext was shut down"error8="org.apache.spark.sql.AnalysisException"error9="java.util.concurrent.RejectedExecutionException"error0="java.io.IOException"if [[ ${line} == *${error9}* || ${line} == *${error1}* || ${line} == *${error2}* || ${line} == *${error3}* || ${line} == *${error4}* || ${line} == *${error5}* || ${line} == *${error6}* || ${line} == *${error7}* || ${line} == *${error8}* || ${line} == *${error0}* ]]thenecho "SPARK SQL EXECUTION FAILED ......AND JOB FAILED DATE IS 【${query_begin_date},${query_end_date}】......"#exit 1sleep 5sf "$query_begin_date" "$query_end_date"fi 
done < ${FILE_LOG}}interval_day=0
start_date=${dt}
end_date=${dt1}while [[ $start_date < $end_date || $start_date = $end_date ]]
doquery_begin_date=`date -d "$start_date" "+%Y-%m-%d"`query_end_date=`date -d "+${interval_day} day ${start_date}" +%Y-%m-%d`start_date=`date -d "+${interval_day} day +1 day ${start_date}" +%Y-%m-%d`if [[ $query_end_date > $end_date ]]thenquery_end_date=$end_datefiecho "【本次任务开始时间:$query_begin_date,本次任务结束时间:$query_end_date,下一次任务的开始时间:$start_date】"#开始执行spark任务cnts=0f "${query_begin_date}" "${query_end_date}"
done
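
A placeholder invocation (host, index, and table names are illustrative only); the 10 positional arguments are es_host, es_indexes, hive_db, hive_tables, create_time, update_time, dt, dt1, format, partitions.

/root/bin/es_to_hive.sh 192.168.0.30:9200 some_index_2023 paascloud some_hive_table \
  "orderCreateTime" "orderUpdateTime" 2023-01-01 2023-01-03 'parquet' 1

With interval_day=0 each day is loaded separately, and each retry bumps executor memory by 1 G up to the 4 G cap, which helps when the failure was memory-related.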

Business-level data synchronization scripts

Import

#!/bin/bash
source /root/bin/common_config/db_config.properties

if [ $1 ];
then
dt=$1
else
dt=`date -d "-1 day" +%F`
fi

if [ $2 ];
then
dt1=$2
else
dt1=`date -d "-1 day" +%F`
fi

/root/bin/mysql_to_hive.sh "$wfs_host" $wfs_db tbwork_order "$wfs_user" "$wfs_pwd" paascloud '' '' 'create_time' 'update_time' $dt $dt1

[root@mz-hadoop-01 import]# cat wfs_order_list_index.sh 
#!/bin/bash
source /root/bin/common_config/es_config.properties

if [ $1 ];
then
dt=$1
else
dt=`date -d "-1 day" +%F`
fi

if [ $2 ];
then
dt1=$2
else
dt1=`date -d "-1 day" +%F`
fi

year=${dt:0:4}
cur_day=`date -d "-1 day" +%F`
cur_year=${cur_day:0:4}

echo "year is ${year} and cur_year is ${cur_year}"

#if [ ${year} == '2023' ];
if [ ${year} == ${cur_year} ];
then
port=9200
else
port=9500
fiecho "port is ${port}"/root/bin/es_to_hive.sh "$wfs_es_host:${port}" wfs_order_list_index_${year} paascloud wfs_order_list_index "orderCreateTime" "orderUpdateTime" $dt $dt1 'parquet' 1

Export

Export of tables whose rows get updated

[root@mz-hadoop-01 tcm]# cat /mnt/db_file/tcm/hive_to_olap_4_tcm_parse.sh
#!/bin/bash
source /root/bin/common_config/db_config.properties

hive_table=$1
target_table=$2

if [ $3 ];
then
dt=$3
else
dt=`date -d "-1 day" +%F`
fi

if [ $4 ];
then
dt1=$4
else
dt1=`date -d "-1 day" +%F`
fiecho "起始日期为$dt"
echo "结束日期为$dt1"f(){do_date=$1
echo "===函数执行日期为 $do_date==="/root/bin/hive_to_mysql.sh "$olap_host" tcm "$hive_table" "$olap_db" "$target_table" "$olap_user" "$olap_pwd" $1 $1}if [[ $dt == $dt1 ]]thenecho "dt = dt1......"for i in `/mnt/db_file/tcm/get_changed_dt.sh $dt`
doecho "同步变化的日期======================>$i"f $i
doneelseecho "batch process..."start_day=$dt
end_day=$dt1
dt=$start_day
while [[ $dt < `date -d "+1 day $end_day" +%Y-%m-%d` ]]
doecho "批处理===>"$dtf $dtdt=`date -d "+1 day $dt" +%Y-%m-%d`
donefi 
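
A placeholder invocation (the table names are illustrative): the script takes the Hive source table, the OLAP target table, and an optional date range.

/mnt/db_file/tcm/hive_to_olap_4_tcm_parse.sh some_dws_table some_olap_table 2023-01-01 2023-01-01

When dt equals dt1 it exports only the dates reported by get_changed_dt.sh for that day; otherwise it walks the range one day at a time.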

Export of tables without updates
Direct export


[root@mz-hadoop-01 export]# cat wfs_ads_order_material_stats_to_olap.sh
source /root/bin/common_config/db_config.properties
echo "olap_host is :${olap_host}"

if [ $1 ];
then
dt=$1
else
dt=`date -d "-1 day" +%F`
fi
echo "dt:$dt"

if [ $2 ];
then
dt1=$2
else
dt1=`date -d "-1 day" +%F`
fi

/root/bin/hive_to_mysql.sh "${olap_host}" paascloud wfs_ads_order_material_stats mz_olap wfs_ads_order_material_stats $olap_user $olap_pwd ${dt} ${dt1}

Generic Spark compute job

#!/bin/bash

if [ $1 ];
then
className=$1
elseecho "need className"
exitfiif [ $2 ];
then
jarPath=$2
elseecho "need jarPath"
exitfiif [ $3 ];
then
dt=$3
else
dt=`date -d "-1 day" +%F`
fi

if [ $4 ];
then
dt1=$4
else
dt1=`date -d "-1 day" +%F`
fi echo "起始日期:$dt,结束日期:$dt1"f(){query_begin_date=${1}
query_end_date=${2}WORK_DIR=$(cd "$(dirname "$0")";pwd)
echo "WORK_DIR:"${WORK_DIR}
LOG_PATH="${WORK_DIR}/log/${query_end_date}"
echo "LOG_PATH:${LOG_PATH}"
mkdir -p ${LOG_PATH}
FILE_LOG="${LOG_PATH}/$className.log"
echo "FILE_LOG:${FILE_LOG}"/opt/cloudera/parcels/SPARK2/bin/spark2-submit \
--class $className \
--master yarn \
--deploy-mode client \
--driver-memory 4g \
--num-executors 2 \
--executor-memory 4G \
--executor-cores 4 \
--conf spark.driver.cores=5 \
--conf spark.port.maxRetries=1000 \
--conf spark.rpc.numRetries=1000000 \
--conf spark.sql.shuffle.partitions=3 \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.rpc.askTimeout=3600 \
--conf spark.rpc.lookupTimeout=3600 \
--conf spark.network.timeout=3600 \
--conf spark.rpc.io.connectionTimeout=3600 \
--conf spark.default.parallelism=50 \
$jarPath \
$query_begin_date $query_begin_date 2>&1 | tee ${FILE_LOG}

while read -r line
do
#echo "$line"
error1="SparkContext has been shutdown"
error2="Failed to send RPC"
error3="java.nio.channels.ClosedChannelException"
error4="Marking as slave lost"
error5="org.apache.spark.SparkException"
error6="Exception in thread"
error7="SparkContext was shut down"
error8="org.apache.spark.sql.AnalysisException"
if [[ ${line} == *${error1}* || ${line} == *${error2}* || ${line} == *${error3}* || ${line} == *${error4}* || ${line} == *${error5}* || ${line} == *${error6}* || ${line} == *${error7}* || ${line} == *${error8}* ]]
then
echo "SPARK SQL EXECUTION FAILED ......AND JOB FAILED DATE IS 【${query_begin_date},${query_end_date}】......"
#exit 1
sleep 10s
f "$query_begin_date" "$query_end_date"
fi
done < ${FILE_LOG}
}

interval_day=0
start_date=${dt}
end_date=${dt1}
while [[ $start_date < $end_date || $start_date = $end_date ]]
do
query_begin_date=`date -d "$start_date" "+%Y-%m-%d"`
query_end_date=`date -d "+${interval_day} day ${start_date}" +%Y-%m-%d`
start_date=`date -d "+${interval_day} day +1 day ${start_date}" +%Y-%m-%d`
if [[ $query_end_date > $end_date ]]
then
query_end_date=$end_date
fi
echo "[this run starts at $query_begin_date, ends at $query_end_date, next run starts at $start_date]"
# run the Spark job
f "$query_begin_date" "$query_end_date"
done
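
A sample invocation, using the class name and jar path that appear in the wrapper scripts later in this post (the dates are placeholders):

sh /root/bin/spark_job.sh com.mingzhi.wfs.dwd.dwd_order_info_abi \
  /mnt/db_file/wfs/jar/wfs-1.0-SNAPSHOT-jar-with-dependencies.jar 2023-01-01 2023-01-03

Because interval_day=0, the driver loop submits one spark2-submit per day, logs to log/<date>/<className>.log, and re-submits any day whose log contains one of the error keywords.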

Business-level Spark compute jobs

spark_job_4_wfs.sh


#!/bin/bash

if [ $1 ];
then
className=$1
elseecho "need className"
exitfiif [ $2 ];
then
dt=$2
else
dt=`date -d "-1 day" +%F`
fi

if [ $3 ];
then
dt1=$3
else
dt1=`date -d "-1 day" +%F`
fi

sh /root/bin/spark_job.sh $className /mnt/db_file/wfs/jar/wfs-1.0-SNAPSHOT-jar-with-dependencies.jar $dt $dt1

dwd_order_info_abi.sh


[root@mz-hadoop-01 dwd]# cat dwd_order_info_abi.sh 
#!/bin/bash

if [ $1 ];
then
dt=$1
else
dt=`date -d "-1 day" +%F`
fi

if [ $2 ];
then
dt1=$2
else
dt1=`date -d "-1 day" +%F`
fiecho "起始日期:$dt,结束日期:$dt1"/mnt/db_file/wfs/spark_job_4_wfs.sh com.mingzhi.wfs.dwd.dwd_order_info_abi $dt $dt1

Hive compute job

#!/bin/bash
db=paascloud
hive=/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/bin/hive

if [ $1 ];
then
dt=$1
else
dt=`date -d "-1 day" +%F`
fi

if [ $2 ];
then
dt1=$2
else
dt1=`date -d "-1 day" +%F`
fiecho "起始日期:$dt,结束日期:$dt1"f(){do_date=$1
echo "===函数日期为 $do_date==="
sql="
use $db;
set hive.exec.dynamic.partition.mode=nonstrict;
add jar /mnt/db_file/wfs/jar/udf-1.4.3-SNAPSHOT-jar-with-dependencies.jar;
create temporary function str_distinct as 'com.mingzhi.StringDistinct';

insert overwrite table ads_order_overall_cube partition(dt)
select
corcode_f3,max(sort_f3),
corcode_f2,max(sort_f2),
corcode_f1,max(sort_f1),
orderlargertype,
--ordersecondtype,
--orderthirdlytype,
orderSource,
orderStatus,
count(1) as cnts,
str_distinct(concat_ws(',',collect_set(deal_user_ids))) as all_persons,
if(str_distinct(concat_ws(',',collect_set(deal_user_ids)))='' ,0, size(split(str_distinct(concat_ws(',',collect_set(deal_user_ids))),','))) as all_person_cnts,
--regexp_replace(regexp_replace(regexp_replace(lpad(bin(cast(grouping_id() as bigint)),5,'0'),"0","x"),"1","0"),"x","1") as dim
reverse(lpad(bin(cast(GROUPING__ID as bigint)),6,'0')) as dim
,'$do_date'
from dwd_order_info_abi where dt='$do_date'
group by 
corcode_f3,
corcode_f2,
corcode_f1,
orderlargertype,
--ordersecondtype,
--orderthirdlytype,
orderSource,
orderStatus
grouping sets(corcode_f2,corcode_f1,(corcode_f2,orderlargertype),(corcode_f2,orderlargertype,orderSource,orderStatus),(corcode_f1,orderlargertype),(corcode_f1,orderlargertype,orderSource,orderStatus)  )
;
"# 获取当前目录
WORK_DIR=$(cd "$(dirname "$0")";pwd)LOG_PATH="$WORK_DIR/log/$do_date"
mkdir -p $LOG_PATH
FILE_NAME="ads_order_overall_cube"
#*****************************************************************************$hive -e "$sql" 2>&1 | tee $LOG_PATH/${FILE_NAME}.logwhile read -r line
doecho "$line"error="FAILED"if [[ $line == *$error* ]]thenecho "HIVE JOB EXECUTION FAILED AND DATE IS 【${do_date}】......"#exit 1f ${do_date}fi 
done < ${LOG_PATH}/${FILE_NAME}.log}start_day=$dt
end_day=$dt1
dt=$start_day
while [[ $dt < `date -d "+1 day $end_day" +%Y-%m-%d` ]]
do
#for i in `/mnt/db_file/wfs/get_changed_dt.sh $dt`
for i in `cat /mnt/db_file/wfs/log/$dt/get_changed_dt_result`
do
echo "=================== running changed date: $i ==========================="
f $i
done
dt=`date -d "+1 day $dt" +%Y-%m-%d`
done
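
Assuming the script is saved as ads_order_overall_cube.sh (a name inferred from FILE_NAME; the post does not state it), a run over a short range might look like:

sh ads_order_overall_cube.sh 2023-01-01 2023-01-03

For every day in the range it reads the changed business dates from /mnt/db_file/wfs/log/$dt/get_changed_dt_result, rebuilds the ads_order_overall_cube partition for each of those dates, and re-runs any date whose Hive log contains the keyword FAILED.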
