[Radxa Orion O6 ("星睿 O6") Review] — Deploying a Face Parser Model on the NPU


Preface

The Radxa Orion O6 ("瑞莎星睿 O6") offers up to 28.8 TOPS of NPU (Neural Processing Unit) compute and supports INT4 / INT8 / INT16 / FP16 / BF16 and TF32 acceleration. This article walks through deploying FaceParsingBiSeNet with the official toolchain.

1. FaceParsingBiSeNet ONNX Inference

  1. First, download the open-source model face_parsing_512x512.onnx from Baidu Netdisk (extraction code: 8gin).

  2. Write the ONNX inference script, as follows:

import os
import cv2
import argparse
import numpy as np
from PIL import Image
import onnxruntime
import time


def letterbox(image, new_shape=(640, 640), color=(114, 114, 114), auto=False, scaleFill=False, scaleup=True):
    """
    Letterbox the image: resize while keeping the aspect ratio, then pad to the target size.
    :param image: input image as a numpy array (height, width, channels)
    :param new_shape: target size as (height, width)
    :param color: padding color, default (114, 114, 114)
    :param auto: use the minimal rectangle (pad to a multiple of 64) instead of the full target size
    :param scaleFill: stretch to the target size without keeping the aspect ratio
    :param scaleup: allow upscaling; if False, only downscale
    :return: processed image, scale ratio, padding size
    """
    shape = image.shape[:2]  # current height and width
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, never up (gives better results)
        r = min(r, 1.0)
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # padding size
    if auto:  # minimal rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # force multiples of 64
    dw /= 2  # pad both sides
    dh /= 2
    if shape[::-1] != new_unpad:  # resize
        image = cv2.resize(image, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    image = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add padding
    scale_ratio = r
    pad_size = (dw, dh)
    return image, scale_ratio, pad_size


def preprocess_image(image, shape, bgr2rgb=True):
    """Image preprocessing."""
    img, scale_ratio, pad_size = letterbox(image, new_shape=shape)
    if bgr2rgb:
        img = img[:, :, ::-1]
    img = img.transpose(2, 0, 1)  # HWC to CHW
    img = np.ascontiguousarray(img, dtype=np.float32)
    return img, scale_ratio, pad_size


def generate_mask(img, seg, outpath, scale=0.4):
    """Visualize the segmentation result."""
    color = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 0, 85], [255, 0, 170],
             [0, 255, 0], [85, 255, 0], [170, 255, 0], [0, 255, 85], [0, 255, 170],
             [0, 0, 255], [85, 0, 255], [170, 0, 255], [0, 85, 255], [0, 170, 255],
             [255, 255, 0], [255, 255, 85], [255, 255, 170], [255, 0, 255], [255, 85, 255]]
    img = img.transpose(1, 2, 0)  # CHW to HWC
    minidx = int(seg.min())
    maxidx = int(seg.max())
    color_img = np.zeros_like(img)
    for i in range(minidx, maxidx):
        if i <= 0:
            continue
        color_img[seg == i] = color[i]
    showimg = scale * img + (1 - scale) * color_img
    Image.fromarray(showimg.astype(np.uint8)).save(outpath)


if __name__ == '__main__':
    # define cmd arguments
    parser = argparse.ArgumentParser()
    parser.add_argument('--image-path', type=str, help='path of the input image (a file)')
    parser.add_argument('--output-path', type=str, help='path for saving the predicted mask (a file)')
    parser.add_argument('--model-path', type=str, help='path of the ONNX model')
    args = parser.parse_args()

    # check input arguments
    if not os.path.exists(args.image_path):
        print('Cannot find the input image: {0}'.format(args.image_path))
        exit()
    if not os.path.exists(args.model_path):
        print('Cannot find the ONNX model: {0}'.format(args.model_path))
        exit()

    ref_size = [512, 512]

    # read image and normalize
    im = cv2.imread(args.image_path)
    img, scale_ratio, pad_size = preprocess_image(im, ref_size)
    showimg = img.copy()[::-1, ...]
    mean = np.asarray([0.485, 0.456, 0.406])
    scale = np.asarray([0.229, 0.224, 0.225])
    mean = mean.reshape((3, 1, 1))
    scale = scale.reshape((3, 1, 1))
    img = (img / 255 - mean) * scale
    im = img[None].astype(np.float32)
    # save the preprocessed input as calibration data for NPU PTQ
    np.save("models/ComputeVision/Semantic_Segmentation/onnx_faceparse/datasets/calibration_data.npy", im)

    # Initialize session and get prediction
    session = onnxruntime.InferenceSession(args.model_path, None)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name
    output = session.run([output_name], {input_name: im})

    # time 5 runs (the printed value is the total time of the 5 runs)
    start_time = time.perf_counter()
    for _ in range(5):
        output = session.run([output_name], {input_name: im})
    end_time = time.perf_counter()
    use_time = (end_time - start_time) * 1000
    fps = 1000 / use_time
    print(f"Inference time: {use_time:.2f} ms, fps: {fps:.2f}")

    # argmax over the class dimension and render the mask
    seg = np.argmax(output[0], axis=1).squeeze()
    generate_mask(showimg, seg, args.output_path)
  3. Run the inference script:
python models/ComputeVision/Semantic_Segmentation/onnx_faceparse/inference_onnx.py --image-path models/ComputeVision/Semantic_Segmentation/onnx_faceparse/test_data/test_lite_face_parsing.png --output-path output/face_parsering.jpg --model-path asserts/models/bisenet/face_parsing_512x512.onnx

Console output (note that the timing loop runs 5 times and the script prints the total time of those runs, so the per-run latency is roughly 309 ms):

Inference time: 1544.36 ms, fps: 0.65

Visualization result:
(image: segmentation visualization)

  4. Code notes
  • np.save("models/ComputeVision/Semantic_Segmentation/onnx_faceparse/datasets/calibration_data.npy", im)

This saves the normalized input tensor to disk so it can be reused as calibration data for the NPU post-training quantization (PTQ) step; a sketch of extending it to multiple calibration samples follows.
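Only a single preprocessed image is saved above, so the PTQ statistics come from one sample. A natural extension is to stack several preprocessed images along the batch dimension before saving. A minimal sketch is shown below — the image file names are hypothetical, a plain resize replaces the letterbox purely for brevity, and whether the optimizer's NumpyDataset accepts a multi-sample array in exactly this layout should be checked against the toolchain documentation:

import cv2
import numpy as np

# Hypothetical list of calibration images -- replace with real file paths.
image_paths = ["face_0.png", "face_1.png", "face_2.png"]

mean = np.asarray([0.485, 0.456, 0.406]).reshape((3, 1, 1))
scale = np.asarray([0.229, 0.224, 0.225]).reshape((3, 1, 1))

samples = []
for p in image_paths:
    img = cv2.imread(p)
    img = cv2.resize(img, (512, 512))[:, :, ::-1]      # plain resize + BGR -> RGB (letterbox omitted for brevity)
    img = img.transpose(2, 0, 1).astype(np.float32)    # HWC -> CHW
    samples.append((img / 255 - mean) * scale)         # same normalization as the inference script

# (N, 3, 512, 512) float32 array: several calibration samples instead of one
calib = np.stack(samples, axis=0).astype(np.float32)
np.save("datasets/calibration_data.npy", calib)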

2. FaceParsingBiSeNet NPU Inference

  1. Create the cfg configuration file, as follows:
[Common]
mode = build

[Parser]
model_type = onnx
model_name = face_parsing_512x512
detection_postprocess =
model_domain = image_segmentation
input_model = /home/5_radxa/ai_model_hub/asserts/models/bisenet/face_parsing_512x512.onnx
input = input
input_shape = [1, 3, 512, 512]
output = out
output_dir = ./

[Optimizer]
output_dir = ./
calibration_data = ./datasets/calibration_data.npy
calibration_batch_size = 1
metric_batch_size = 1
dataset = NumpyDataset
quantize_method_for_weight = per_channel_symmetric_restricted_range
quantize_method_for_activation = per_tensor_asymmetric
save_statistic_info = True

[GBuilder]
outputs = bisenet.cix
target = X2_1204MP3
profile = True
tiling = fps
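
For reference, the [Optimizer] section above selects per-channel symmetric quantization for weights and per-tensor asymmetric quantization for activations. The sketch below illustrates the generic math behind per-tensor asymmetric INT8 quantization (one scale and one zero-point for the whole tensor); it is only an illustration, not the toolchain's actual implementation:

import numpy as np

def quantize_per_tensor_asymmetric(x, num_bits=8):
    """Map a float tensor to uint8 with a single scale/zero-point covering [min, max]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min = min(float(x.min()), 0.0)   # keep 0.0 exactly representable
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

# Quantize a fake activation tensor shaped like the model output and check the error
act = np.random.randn(1, 19, 512, 512).astype(np.float32)
q, scale, zp = quantize_per_tensor_asymmetric(act)
dequant = (q.astype(np.float32) - zp) * scale
print("max abs quantization error:", np.abs(dequant - act).max())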

Note: the input and output entries in the [Parser] section are the names of the model's input and output tensors; you can look them up by opening the onnx model in netron (or programmatically, as sketched after the error log below). If the names do not match, the parser fails with an error like this:

[I] Build with version 6.1.3119
[I] Parsing model....
[I] [Parser]: Begin to parse onnx model face_parsing_512x512...
2025-04-18 11:13:53.104146: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /root/miniconda3/envs/radxa/lib/python3.8/site-packages/cv2/../../lib64:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib/:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib
2025-04-18 11:13:53.104217: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2025-04-18 11:13:54.266791: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /root/miniconda3/envs/radxa/lib/python3.8/site-packages/cv2/../../lib64:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib/:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib
2025-04-18 11:13:54.266893: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2025-04-18 11:13:54.266959: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (chenjun): /proc/driver/nvidia/version does not exist
[E] [Parser]: Graph does not contain such a node/tensor name:output
[E] [Parser]: Parser Failed!
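
Instead of opening the model in netron, the tensor names can also be listed with a few lines of onnxruntime; a quick check, pointing at the same model file used in the cfg:

import onnxruntime

# Print the tensor names that the [Parser] section expects in "input" / "output"
sess = onnxruntime.InferenceSession("asserts/models/bisenet/face_parsing_512x512.onnx", None)
for t in sess.get_inputs():
    print("input :", t.name, t.shape, t.type)
for t in sess.get_outputs():
    print("output:", t.name, t.shape, t.type)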
  2. Compile the model
    Compile the model on an x86 host:
cd models/ComputeVision/Semantic_Segmentation/onnx_faceparse
cixbuild ./cfg/onnx_bisenet.cfg

Error: ImportError: libaipu_simulator_x2.so: cannot open shared object file: No such file or directory

Solution

  • Locate the library with find / -name libaipu_simulator_x2.so
  • Add its directory to the search path (LD_LIBRARY_PATH takes directories, not the .so file itself): export LD_LIBRARY_PATH=/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib/:$LD_LIBRARY_PATH

Output from a successful build:

[I] Build with version 6.1.3119
[I] Parsing model....
[I] [Parser]: Begin to parse onnx model face_parsing_512x512...
2025-04-18 11:20:16.726111: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /root/miniconda3/envs/radxa/lib/python3.8/site-packages/cv2/../../lib64:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib/:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib
2025-04-18 11:20:16.726199: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2025-04-18 11:20:17.908202: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /root/miniconda3/envs/radxa/lib/python3.8/site-packages/cv2/../../lib64:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib/:/root/miniconda3/envs/radxa/lib/python3.8/site-packages/AIPUBuilder/simulator-lib
2025-04-18 11:20:17.908267: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2025-04-18 11:20:17.908283: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (chenjun): /proc/driver/nvidia/version does not exist
[W] [Parser]: The output name out is not a node but a tensor. However, we will use the node Resize_161 as output node.
2025-04-18 11:20:21.229063: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
[I] [Parser]: The input tensor(s) is/are: input_0
[I] [Parser]: Input input from cfg is shown as tensor input_0 in IR!
[I] [Parser]: Output out from cfg is shown as tensor Resize_161_post_transpose_0 in IR!
[I] [Parser]: 0 error(s), 1 warning(s) generated.
[I] [Parser]: Parser done!
[I] Parse model complete
[I] Simplifying float model.
[I] [IRChecker] Start to check IR: /home/5_radxa/ai_model_hub/models/ComputeVision/Semantic_Segmentation/onnx_faceparse/internal/face_parsing_512x512.txt
[I] [IRChecker] model_name: face_parsing_512x512
[I] [IRChecker] IRChecker: All IR pass (Checker Plugin disabled)
[I] [graph.cpp :1600] loading graph weight: /home/5_radxa/ai_model_hub/models/ComputeVision/Semantic_Segmentation/onnx_faceparse/./internal/face_parsing_512x512.bin size: 0x322454c
[I] Start to simplify the graph...
[I] Using fixed-point full optimization, it may take long long time ....
[I] GSim simplified result:
------------------------------------------------------------------------
OpType.Eltwise:   -3
OpType.Mul:   +3
OpType.Tile:   -3
------------------------------------------------------------------------
(intermediate build log omitted)
[I] [builder.cpp:1939]     Read and Write:80.21MB
[I] [builder.cpp:1080] Reduce constants memory size: 3.477MB
[I] [builder.cpp:2411] memory statistics for this graph (face_parsing_512x512)
[I] [builder.cpp: 585] Total memory     :       0x00d52b98 Bytes ( 13.323MB)
[I] [builder.cpp: 585] Text      section:       0x00042200 Bytes (  0.258MB)
[I] [builder.cpp: 585] RO        section:       0x00006d00 Bytes (  0.027MB)
[I] [builder.cpp: 585] Desc      section:       0x0002ea00 Bytes (  0.182MB)
[I] [builder.cpp: 585] Data      section:       0x00c8b360 Bytes ( 12.544MB)
[I] [builder.cpp: 585] BSS       section:       0x0000fb38 Bytes (  0.061MB)
[I] [builder.cpp: 585] Stack            :       0x00040400 Bytes (  0.251MB)
[I] [builder.cpp: 585] Workspace(BSS)   :       0x004c0000 Bytes (  4.750MB)
[I] [builder.cpp:2427]
[I] [tools.cpp :1181]  -  compile time: 20.726 s
[I] [tools.cpp :1087] With GM optimization, DDR Footprint stastic(estimation):
[I] [tools.cpp :1094]     Read and Write:92.67MB
[I] [tools.cpp :1137]  -  draw graph time: 0.03 s
[I] [tools.cpp :1954] remove global cwd: /tmp/af3c1da8ea81cc1cf85dba1587ff72126ee96222bb098b52633050918b4c7
build success.......
Total errors: 0,  warnings: 15
  3. NPU inference and visualization
    Write the NPU inference script to visualize the result and measure the inference time:
import os
import sys
import time
import argparse
import cv2
import numpy as np
from PIL import Image

# Absolute path of the ai_model_hub repo so that the utils package can be imported
_abs_path = "/home/radxa/1_AI_models/ai_model_hub"
# Append the utils package path to the system path, making it accessible for imports
sys.path.append(_abs_path)
from utils.tools import get_file_list
from utils.NOE_Engine import EngineInfer


def letterbox(image, new_shape=(640, 640), color=(114, 114, 114), auto=False, scaleFill=False, scaleup=True):
    """Letterbox the image: resize while keeping the aspect ratio, then pad to the target size."""
    shape = image.shape[:2]  # current height and width
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, never up
        r = min(r, 1.0)
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # padding size
    if auto:  # minimal rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # force multiples of 64
    dw /= 2  # pad both sides
    dh /= 2
    if shape[::-1] != new_unpad:  # resize
        image = cv2.resize(image, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    image = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add padding
    scale_ratio = r
    pad_size = (dw, dh)
    return image, scale_ratio, pad_size


def preprocess_image(image, shape, bgr2rgb=True):
    """Image preprocessing."""
    img, scale_ratio, pad_size = letterbox(image, new_shape=shape)
    if bgr2rgb:
        img = img[:, :, ::-1]
    img = img.transpose(2, 0, 1)  # HWC to CHW
    img = np.ascontiguousarray(img, dtype=np.float32)
    return img, scale_ratio, pad_size


def generate_mask(img, seg, outpath, scale=0.4):
    """Visualize the segmentation result."""
    color = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 0, 85], [255, 0, 170],
             [0, 255, 0], [85, 255, 0], [170, 255, 0], [0, 255, 85], [0, 255, 170],
             [0, 0, 255], [85, 0, 255], [170, 0, 255], [0, 85, 255], [0, 170, 255],
             [255, 255, 0], [255, 255, 85], [255, 255, 170], [255, 0, 255], [255, 85, 255]]
    img = img.transpose(1, 2, 0)  # CHW to HWC
    minidx = int(seg.min())
    maxidx = int(seg.max())
    color_img = np.zeros_like(img)
    for i in range(minidx, maxidx):
        if i <= 0:
            continue
        color_img[seg == i] = color[i]
    showimg = scale * img + (1 - scale) * color_img
    Image.fromarray(showimg.astype(np.uint8)).save(outpath)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--image-path', type=str, help='path of the input image (a file)')
    parser.add_argument('--output-path', type=str, help='path for saving the predicted mask (a file)')
    parser.add_argument('--model-path', type=str, help='path of the compiled .cix model')
    args = parser.parse_args()

    model = EngineInfer(args.model_path)

    ref_size = [512, 512]

    # read image and normalize
    im = cv2.imread(args.image_path)
    img, scale_ratio, pad_size = preprocess_image(im, ref_size)
    showimg = img.copy()[::-1, ...]
    mean = np.asarray([0.485, 0.456, 0.406])
    scale = np.asarray([0.229, 0.224, 0.225])
    mean = mean.reshape((3, 1, 1))
    scale = scale.reshape((3, 1, 1))
    img = (img / 255 - mean) * scale
    im = img[None].astype(np.float32)

    ## infer
    input_data = [im]
    # output = model.forward(input_data)[0]
    N = 5
    start_time = time.perf_counter()
    for _ in range(N):
        output = model.forward(input_data)[0]
    end_time = time.perf_counter()
    use_time = (end_time - start_time) * 1000 / N
    fps = N / (end_time - start_time)
    print(f"Including input quantization and output dequantization, inference time: {use_time:.2f} ms, fps: {fps:.2f}")
    fps = model.get_ave_fps()
    use_time2 = 1000 / fps
    print(f"NPU compute time: {use_time2:.2f} ms, fps: {fps:.2f}")

    # argmax over the class dimension and render the mask
    output = np.reshape(output, (1, 19, 512, 512))
    seg = np.argmax(output, axis=1).squeeze()
    generate_mask(showimg, seg, args.output_path)

    # release model
    model.clean()

Run inference:

source /home/radxa/1_AI_models/ai_model_hub/.venv/bin/activate
python models/ComputeVision/Semantic_Segmentation/onnx_faceparse/inference_npu.py --image-path models/ComputeVision/Semantic_Segmentation/onnx_faceparse/test_data/test_lite_face_parsing.png --output-path output/face_parsering.jpg --model-path models/ComputeVision/Semantic_Segmentation/onnx_faceparse/bisenet.cix

Timing output:

npu: noe_init_context success
npu: noe_load_graph success
Input tensor count is 1.
Output tensor count is 1.
npu: noe_create_job success
Including input quantization and output dequantization, inference time: 379.63 ms, fps: 2.63
NPU compute time: 10.70 ms, fps: 93.43
npu: noe_clean_job success
npu: noe_unload_graph success
npu: noe_deinit_context success

As the numbers show, the host-side input quantization and output dequantization still account for most of the end-to-end latency.
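
To get a feel for why that host-side step is expensive: each frame pushes a 1x3x512x512 float32 input through quantization and a 1x19x512x512 output through dequantization, i.e. almost six million values going through element-wise scale / round / cast on the CPU. The rough numpy sketch below (generic quantization math with made-up scale and zero-point, not the NOE runtime's actual code) times that round trip; the real gap between 379 ms and 10.7 ms also includes data copies and Python-side overhead:

import time
import numpy as np

inp = np.random.rand(1, 3, 512, 512).astype(np.float32)               # model input (float)
out_q = np.random.randint(0, 256, (1, 19, 512, 512), dtype=np.uint8)  # quantized model output
scale, zp = 0.02, 128  # made-up quantization parameters, for timing only

runs = 10
start = time.perf_counter()
for _ in range(runs):
    q_in = np.clip(np.round(inp / scale) + zp, 0, 255).astype(np.uint8)  # input quantization
    deq_out = (out_q.astype(np.float32) - zp) * scale                    # output dequantization
elapsed = (time.perf_counter() - start) * 1000 / runs
print(f"numpy quantize + dequantize per frame: {elapsed:.2f} ms")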

The visualization result looks correct:
(image: NPU segmentation visualization)

3. Benchmark

| No. | Hardware | Model | Input resolution | Quantization | Engine | Inference time (ms) | FPS |
| 1 | CPU | bisenet | 512x512 | fp32 | onnxruntime | 309.75 | 3.23 |
| 2 | NPU | bisenet | 512x512 | A8W8 | Zhouyi (周易) NPU | 10.70 | 93.43 |

For the pure compute portion (excluding host-side quantize/dequantize), the NPU is roughly 29x faster than CPU onnxruntime on this model.

4. References

  • https://github.com/xlite-dev/lite.ai.toolkit?tab=readme-ov-file
  • https://github.com/zllrunning/face-parsing.PyTorch
  • ["星睿 O6" Review] RVM portrait segmentation: torch ➡️ ncnn (CPU/GPU) and O6 NPU deployment walkthrough
