RK3568 Notes 20: PP-YOLOE Deployment Test

If reposting this original article, please credit the original source.

Note: conversion and testing were done on an Autodl server with CUDA 11.1 and Python 3.8.

一、PP-YOLOE Environment Setup

Create the environment:

# Create an environment named PaddleYOLO with conda, pinning the Python version
conda create -n PaddleYOLO python=3.8

Activate it:

conda activate PaddleYOLO

Install PaddlePaddle (following the official docs):

# Install Paddle; the PaddleYOLO repo recommends paddlepaddle 2.4.2 or later
# This tutorial uses the GPU build of paddlepaddle 2.5, installed in the conda environment
python -m pip install paddlepaddle-gpu==2.5.2.post112 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
二、Basic Usage of the PP-YOLOE+ Model

1. Get the PaddleYOLO source

# Clone PaddleYOLO
git clone https://github.com/PaddlePaddle/PaddleYOLO.git
# Enter the PaddleYOLO directory and install the dependencies
cd PaddleYOLO
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

2. Model inference

Download the pretrained weights:

# PP-YOLOE+_s
wget https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_s_80e_coco.pdparams
# PP-YOLOE+_m
wget https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_m_80e_coco.pdparams

Run inference with tools/infer.py:

# Pillow 9.5.0 may need to be installed
pip install Pillow==9.5.0

Test inference:

python tools/infer.py -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml -o weights=ppyoloe_plus_crn_s_80e_coco.pdparams --infer_img=demo/000000014439_640x640.jpg --draw_threshold=0.5
# -c selects a config file under configs/ (here ppyoloe_plus_crn_s_80e_coco.yml); a config you added yourself also works
# -o (or --opt) overrides config options; here weights points to the checkpoint downloaded above, or you can set
#   weights=https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_s_80e_coco.pdparams directly
# --infer_img selects a single image (--infer_dir a folder of images); --draw_threshold is the box-drawing threshold, default 0.5
# The input image size is 640x640

Inference works as expected.
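The model takes a fixed 640×640 input, so images of other sizes are letterboxed: scaled with the aspect ratio preserved, then padded. A minimal sketch of that geometry (the helper name and the 1920×1080 example are illustrative, not from the repo):

```python
def letterbox_geometry(src_h, src_w, dst=640):
    """Compute the letterbox scale and per-side padding for a dst x dst model input."""
    r = min(dst / src_h, dst / src_w)                   # scale that fits the longer side
    new_w, new_h = round(src_w * r), round(src_h * r)   # resized (unpadded) size
    dw, dh = (dst - new_w) / 2, (dst - new_h) / 2       # padding split over both sides
    return r, (new_w, new_h), (dw, dh)

# e.g. a 1920x1080 frame: scale 1/3 -> 640x360, padded 140 px top and bottom
r, (new_w, new_h), (dw, dh) = letterbox_geometry(1080, 1920)
print(r, (new_w, new_h), (dw, dh))
```

The same math reappears in the `letter_box` helper of the RKNN conversion script later on, where the padding is also undone on the output boxes.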

三、Training

1. Download the dataset

# The dataset is large (about 18 GB)
http://images.cocodataset.org/zips/train2017.zip

2. Train

config=configs/${model_name}/${job_name}.yml
python tools/train.py -c ${config} --eval --amp

Following the training method given in the README, run:

python tools/train.py -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml --eval --amp

Training was not repeated here; the official pretrained model is used directly.

四、Deploying the Model on the Board

1. RKNN-specific optimizations

To get better inference performance on the RKNPU, the model's output structure is adjusted. These changes affect the post-processing logic and consist of:

• The DFL structure is moved into post-processing
• An extra output is added: the sum of all class scores, used to speed up candidate-box filtering during post-processing

See rknn_model_zoo for details.
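Since DFL moves into post-processing, each box side reaches the host as raw logits over distance bins, and the host decodes the expected distance as a softmax-weighted sum of bin indices. A minimal NumPy sketch of that decode (PP-YOLOE+ uses 17 bins per side; the shapes and test values here are illustrative):

```python
import numpy as np

def dfl_decode(logits, bins=17):
    """(n, 4*bins, h, w) DFL logits -> (n, 4, h, w) expected distances per box side."""
    n, c, h, w = logits.shape
    x = logits.reshape(n, 4, bins, h, w)
    e = np.exp(x - x.max(axis=2, keepdims=True))   # numerically stable softmax over bins
    p = e / e.sum(axis=2, keepdims=True)
    idx = np.arange(bins, dtype=np.float32).reshape(1, 1, bins, 1, 1)
    return (p * idx).sum(axis=2)                   # expectation over bin indices

# A strong logit on bin 5 for every side decodes to a distance of (approximately) 5
logits = np.zeros((1, 4 * 17, 1, 1), dtype=np.float32)
for side in range(4):
    logits[0, side * 17 + 5, 0, 0] = 50.0
dist = dfl_decode(logits)
print(dist.reshape(-1))
```

The `dfl()` function in the conversion script below does the same computation with torch.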

We make a few small changes on top of the PaddleDetection or PaddleYOLO source pulled earlier. The version used is release/2.6:

https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.6

Download and extract it, then:

cd PaddleYOLO-release-2.6

Listing 1: ppdet/modeling/architectures/yolo.py

    if self.training:
        yolo_losses = self.yolo_head(neck_feats, self.inputs)
        return yolo_losses
    else:
        yolo_head_outs = self.yolo_head(neck_feats)
+       return yolo_head_outs

Listing 2: ppdet/modeling/heads/ppyoloe_head.py

+   rk_out_list = []
    for i, feat in enumerate(feats):
        _, _, h, w = feat.shape
        l = h * w
        avg_feat = F.adaptive_avg_pool2d(feat, (1, 1))
        cls_logit = self.pred_cls[i](self.stem_cls[i](feat, avg_feat))
        reg_dist = self.pred_reg[i](self.stem_reg[i](feat, avg_feat))
+       rk_out_list.append(reg_dist)
+       rk_out_list.append(F.sigmoid(cls_logit))
+       rk_out_list.append(paddle.clip(rk_out_list[-1].sum(1, keepdim=True), 0, 1))
        reg_dist = reg_dist.reshape([-1, 4, self.reg_channels, l]).transpose([0, 2, 3, 1])
        if self.use_shared_conv:
            reg_dist = self.proj_conv(F.softmax(reg_dist, axis=1)).squeeze(1)
        else:
            reg_dist = F.softmax(reg_dist, axis=1)
        # cls and reg
        cls_score = F.sigmoid(cls_logit)
        cls_score_list.append(cls_score.reshape([-1, self.num_classes, 1]))
        reg_dist_list.append(reg_dist)
+   return rk_out_list

The changes above are only for model export; comment them out when training a model. (The version shown here is 2.6 — make sure you use the right one.) Alternatively, apply the source changes as a patch (written against release/2.5); see rknn_model_zoo/models/CV/object_detection/yolo/patch_for_model_export/ppyoloe at v1.5.0 · airockchip/rknn_model_zoo (github.com).

2. Exporting the ONNX model

# From the PaddleDetection or PaddleYOLO source directory, export the static-graph Paddle model with tools/export_model.py

cd PaddleDetection
python tools/export_model.py -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml -o weights=ppyoloe_plus_crn_s_80e_coco.pdparams exclude_nms=True exclude_post_process=True --output_dir inference_model

# -c selects the config file under configs/
# --output_dir sets the save directory (default output_inference)
# -o (or --opt) overrides config options: weights points to the checkpoint downloaded earlier, and
#   exclude_nms=True / exclude_post_process=True strip NMS and post-processing from the exported graph

The model is saved under inference_model/ppyoloe_plus_crn_s_80e_coco:

ppyoloe_plus_crn_s_80e_coco
├── infer_cfg.yml          # model configuration info
├── model.pdiparams        # static-graph model parameters
├── model.pdiparams.info   # extra parameter info, usually ignorable
└── model.pdmodel          # static-graph model file

Then convert the Paddle model to ONNX:

pip install paddle2onnx
# Convert the model
paddle2onnx --model_dir inference_model/ppyoloe_plus_crn_s_80e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 11 --save_file ./inference_model/ppyoloe_plus_crn_s_80e_coco/ppyoloe_plus_crn_s_80e_coco.onnx
# Fix the model input shape
python -m paddle2onnx.optimize --input_model inference_model/ppyoloe_plus_crn_s_80e_coco/ppyoloe_plus_crn_s_80e_coco.onnx --output_model inference_model/ppyoloe_plus_crn_s_80e_coco/ppyoloe_plus_crn_s_80e_coco.onnx --input_shape_dict "{'image':[1,3,640,640]}"

Use Netron to inspect the inputs and outputs of the exported ppyoloe_plus_crn_s_80e_coco.onnx model.
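With the Listing 2 change applied, the exported graph should have nine outputs: three per detection branch (DFL box logits, per-class sigmoid scores, clipped score sum) for strides 8/16/32. How the conversion script later pairs them up can be sketched with dummy arrays (shapes assume a 640×640 input and 80 classes):

```python
import numpy as np

# Dummy outputs in export order; for each stride s in (8, 16, 32):
#   reg       (1, 4*17, 640//s, 640//s) - DFL logits, 17 bins per box side
#   cls       (1, 80,   640//s, 640//s) - per-class sigmoid scores
#   score_sum (1, 1,    640//s, 640//s) - clipped sum of class scores
outputs = []
for s in (8, 16, 32):
    g = 640 // s
    outputs += [np.zeros((1, 68, g, g)), np.zeros((1, 80, g, g)), np.zeros((1, 1, g, g))]

branches = 3
pair_per_branch = len(outputs) // branches    # 3 tensors per branch
for i in range(branches):
    reg = outputs[pair_per_branch * i]        # goes through the DFL/box decode
    cls = outputs[pair_per_branch * i + 1]    # class confidences
    # outputs[pair_per_branch * i + 2] (score_sum) is ignored by the Python demo
    print(i, reg.shape, cls.shape)
```

If Netron shows a different output order or count, the pairing in post-processing must be adjusted to match.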

3. Exporting the RKNN model

Use the rknn-toolkit2 tool to convert the ONNX model to RKNN and run a test inference (see the companion examples). The code below is adapted from the rknn_model_zoo examples.

import os
import cv2
import sys

import numpy as np
from copy import copy
from rknn.api import RKNN

# model / image paths
ONNX_MODEL = './ppyoloe_plus_crn_s_80e_coco.onnx'
RKNN_MODEL = 'ppyoloe_plus_crn_s_80e_coco.rknn'
IMG_PATH = './test.jpg'
# IMG_PATH = './000000087038.jpg'
DATASET = './dataset.txt'

QUANTIZE_ON = True

OBJ_THRESH = 0.5
NMS_THRESH = 0.45
# OBJ_THRESH = 0.001
# NMS_THRESH = 0.65

IMG_SIZE = (640, 640)  # (width, height)

CLASSES = ("person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic light",
           "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
           "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
           "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
           "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
           "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa",
           "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard",
           "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
           "scissors", "teddy bear", "hair drier", "toothbrush")


def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with object threshold."""
    box_confidences = box_confidences.reshape(-1)

    class_max_score = np.max(box_class_probs, axis=-1)
    classes = np.argmax(box_class_probs, axis=-1)

    _class_pos = np.where(class_max_score * box_confidences >= OBJ_THRESH)
    scores = (class_max_score * box_confidences)[_class_pos]

    boxes = boxes[_class_pos]
    classes = classes[_class_pos]

    return boxes, classes, scores


def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.
    # Returns
        keep: ndarray, index of effective boxes.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def dfl(position):
    # Distribution Focal Loss (DFL)
    import torch
    x = torch.tensor(position)
    n, c, h, w = x.shape
    p_num = 4
    mc = c // p_num
    y = x.reshape(n, p_num, mc, h, w)
    y = y.softmax(2)
    acc_metrix = torch.tensor(range(mc)).float().reshape(1, 1, mc, 1, 1)
    y = (y * acc_metrix).sum(2)
    return y.numpy()


def box_process(position):
    grid_h, grid_w = position.shape[2:4]
    col, row = np.meshgrid(np.arange(0, grid_w), np.arange(0, grid_h))
    col = col.reshape(1, 1, grid_h, grid_w)
    row = row.reshape(1, 1, grid_h, grid_w)
    grid = np.concatenate((col, row), axis=1)
    stride = np.array([IMG_SIZE[1] // grid_h, IMG_SIZE[0] // grid_w]).reshape(1, 2, 1, 1)

    position = dfl(position)
    box_xy = grid + 0.5 - position[:, 0:2, :, :]
    box_xy2 = grid + 0.5 + position[:, 2:4, :, :]
    xyxy = np.concatenate((box_xy * stride, box_xy2 * stride), axis=1)

    return xyxy


def post_process(input_data):
    boxes, scores, classes_conf = [], [], []
    defualt_branch = 3
    pair_per_branch = len(input_data) // defualt_branch
    # the score_sum output is ignored in the Python demo
    for i in range(defualt_branch):
        boxes.append(box_process(input_data[pair_per_branch * i]))
        classes_conf.append(input_data[pair_per_branch * i + 1])
        scores.append(np.ones_like(input_data[pair_per_branch * i + 1][:, :1, :, :], dtype=np.float32))

    def sp_flatten(_in):
        ch = _in.shape[1]
        _in = _in.transpose(0, 2, 3, 1)
        return _in.reshape(-1, ch)

    boxes = [sp_flatten(_v) for _v in boxes]
    classes_conf = [sp_flatten(_v) for _v in classes_conf]
    scores = [sp_flatten(_v) for _v in scores]

    boxes = np.concatenate(boxes)
    classes_conf = np.concatenate(classes_conf)
    scores = np.concatenate(scores)

    # filter according to threshold
    boxes, classes, scores = filter_boxes(boxes, scores, classes_conf)

    # nms
    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]
        keep = nms_boxes(b, s)

        if len(keep) != 0:
            nboxes.append(b[keep])
            nclasses.append(c[keep])
            nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)

    return boxes, classes, scores


def draw(image, boxes, scores, classes):
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = [int(_b) for _b in box]
        print('class: {}, score: {}'.format(CLASSES[cl], score))
        print('box coordinate left,top,right,down: [{}, {}, {}, {}]'.format(top, left, right, bottom))

        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{:.2f}'.format(score),
                    (top, left - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)


def letter_box(im, new_shape, color=(0, 0, 0)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # Compute padding
    ratio = r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)


def get_real_box(src_shape, box, dw, dh, ratio):
    bbox = copy(box)
    # undo the letterbox transform
    bbox[:, 0] -= dw
    bbox[:, 0] /= ratio
    bbox[:, 0] = np.clip(bbox[:, 0], 0, src_shape[1])

    bbox[:, 1] -= dh
    bbox[:, 1] /= ratio
    bbox[:, 1] = np.clip(bbox[:, 1], 0, src_shape[0])

    bbox[:, 2] -= dw
    bbox[:, 2] /= ratio
    bbox[:, 2] = np.clip(bbox[:, 2], 0, src_shape[1])

    bbox[:, 3] -= dh
    bbox[:, 3] /= ratio
    bbox[:, 3] = np.clip(bbox[:, 3], 0, src_shape[0])
    return bbox


if __name__ == '__main__':
    # Create RKNN object
    # rknn = RKNN(verbose=True)
    rknn = RKNN()

    # pre-process config, target_platform='rk3588'
    print('--> Config model')
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
    print('done')

    # Load ONNX model
    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL)
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export rknn model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    # ret = rknn.init_runtime('rk3566')
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    # Set inputs
    img_src = cv2.imread(IMG_PATH)
    src_shape = img_src.shape[:2]
    img, ratio, (dw, dh) = letter_box(img_src, IMG_SIZE)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # img = cv2.resize(img_src, IMG_SIZE)

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    print('done')

    # post process
    boxes, classes, scores = post_process(outputs)

    img_p = img_src.copy()
    if boxes is not None:
        draw(img_p, get_real_box(src_shape, boxes, dw, dh, ratio), scores, classes)
    cv2.imwrite("result.jpg", img_p)
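For quantization (do_quantization=True), DATASET points to a plain-text file listing calibration images, one path per line. A minimal example, using the images referenced in the script above:

```
./test.jpg
./000000087038.jpg
```

More (and more representative) calibration images generally give better quantization accuracy.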
五、Deployment Test

The rknn_model_zoo test procedure was covered earlier and is not repeated here; try it yourself.

六、Reference Links

GitHub - PaddlePaddle/PaddleYOLO: 🚀🚀🚀 YOLO series of PaddlePaddle implementation, PP-YOLOE+, RT-DETR, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, YOLOv5u, YOLOv7u, YOLOv6Lite, RTMDet and so on. 🚀🚀🚀

https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo/

Note: rknn_model_zoo targets PaddleDetection 2.5, while this article uses PaddleYOLO 2.6.

If there is any infringement, or if you need the complete code, please contact the author promptly.
