How do you convert a YOLOv9 model to blob format for OAK cameras?

Editor: OAK China
First published at: oakchina.cn
If you find this useful, please 👍⭐️✍
This content may be updated from time to time; the website always carries the latest version, so please check the original link above.

Hello everyone, this is OAK China, and I'm Ashely.

Focused on tech, focused on sharing.

I've been really busy lately and haven't blogged in a while. This month a friend asked how to deploy YOLOv9 on an OAK camera, so here is a tutorial for everyone.

1. For conversion and usage tutorials for other YOLO versions, please refer to our earlier posts.
2. For detection-type YOLO models we recommend the online converter (link); if the online conversion fails, fall back to the local conversion described in this tutorial.

▌Converting .pt to .onnx

Use the script below (place it in the YOLOv9 root directory) to convert the PyTorch model to an ONNX model. If openvino_dev is installed, the script can also go on to produce an OpenVINO model:

Example usage:

python export_onnx.py -w <path_to_model>.pt -imgsz 640 
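
Besides -w and -imgsz, the script defines a few optional flags (see parse_args in the script below): -b also compiles a blob after the ONNX export, -t selects the conversion backend (docker / blobconverter / local), and -sh sets the shave count. An illustrative invocation that exports ONNX and converts to a blob online in one go:

python export_onnx.py -w yolov9-c.pt -imgsz 640 -b -t blobconverter -sh 6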

export_onnx.py:

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import argparse
import json
import logging
import math
import os
import platform
import sys
import time
import warnings
from io import BytesIO
from pathlib import Path

import torch
from torch import nn

warnings.filterwarnings("ignore")

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLO root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
if platform.system() != "Windows":
    ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.experimental import attempt_load
from models.yolo import DDetect, Detect, DualDDetect, DualDetect, TripleDDetect, TripleDetect
from utils.torch_utils import select_device

try:
    from rich import print
    from rich.logging import RichHandler

    logging.basicConfig(
        level="INFO",
        format="%(message)s",
        datefmt="[%X]",
        handlers=[RichHandler(rich_tracebacks=False, show_path=False)],
    )
except ImportError:
    logging.basicConfig(
        level="INFO",
        format="%(asctime)s\t%(levelname)s\t%(message)s",
        datefmt="[%X]",
    )


class DetectV9(nn.Module):
    """YOLOv9 Detect head for detection models"""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, old_detect):
        super().__init__()
        self.nc = old_detect.nc  # number of classes
        self.nl = old_detect.nl  # number of detection layers
        self.reg_max = old_detect.reg_max  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = old_detect.no  # number of outputs per anchor
        self.stride = old_detect.stride  # strides computed during build
        self.cv2 = old_detect.cv2
        self.cv3 = old_detect.cv3
        self.dfl = old_detect.dfl
        self.f = old_detect.f
        self.i = old_detect.i

    def forward(self, x):
        shape = x[0].shape  # BCHW
        d1 = [torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1) for i in range(self.nl)]
        box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in d1], 2).split(
            (self.reg_max * 4, self.nc), 1
        )
        box = self.dfl(box)
        cls_output = cls.sigmoid()
        # Get the max
        conf, _ = cls_output.max(1, keepdim=True)
        # Concat
        y = torch.cat([box, conf, cls_output], dim=1)
        # Split to 3 channels
        outputs = []
        start, end = 0, 0
        for xi in x:
            end += xi.shape[-2] * xi.shape[-1]
            outputs.append(y[:, :, start:end].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start += xi.shape[-2] * xi.shape[-1]
        return outputs

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module
        for a, b, s in zip(m.cv2, m.cv3, m.stride):
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


class DualDetectV9(DetectV9):
    def __init__(self, old_detect):
        super().__init__(old_detect)
        self.cv4 = old_detect.cv4
        self.cv5 = old_detect.cv5
        self.dfl2 = old_detect.dfl2

    def forward(self, x):
        shape = x[0].shape  # BCHW
        d2 = [torch.cat((self.cv4[i](x[self.nl + i]), self.cv5[i](x[self.nl + i])), 1) for i in range(self.nl)]
        box2, cls2 = torch.cat([di.view(shape[0], self.no, -1) for di in d2], 2).split(
            (self.reg_max * 4, self.nc), 1
        )
        box2 = self.dfl2(box2)
        cls_output2 = cls2.sigmoid()
        # Get the max
        conf2, _ = cls_output2.max(1, keepdim=True)
        # Concat
        y2 = torch.cat([box2, conf2, cls_output2], dim=1)
        # Split to 3 channels
        outputs2 = []
        start2, end2 = 0, 0
        for _i, xi in enumerate(x[3:]):
            end2 += xi.shape[-2] * xi.shape[-1]
            outputs2.append(y2[:, :, start2:end2].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start2 += xi.shape[-2] * xi.shape[-1]
        return outputs2

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module
        for a, b, s in zip(m.cv2, m.cv3, m.stride):
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv4, m.cv5, m.stride):
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (5 objects and 80 classes per 640 image)


class TripleDetectV9(DualDetectV9):
    def __init__(self, old_detect):
        super().__init__(old_detect)
        self.cv6 = old_detect.cv6
        self.cv7 = old_detect.cv7
        self.dfl3 = old_detect.dfl3

    def forward(self, x):
        shape = x[0].shape  # BCHW
        d3 = [
            torch.cat(
                (self.cv6[i](x[self.nl * 2 + i]), self.cv7[i](x[self.nl * 2 + i])),
                1,
            )
            for i in range(self.nl)
        ]
        box3, cls3 = torch.cat([di.view(shape[0], self.no, -1) for di in d3], 2).split(
            (self.reg_max * 4, self.nc), 1
        )
        box3 = self.dfl3(box3)
        cls_output3 = cls3.sigmoid()
        # Get the max
        conf3, _ = cls_output3.max(1, keepdim=True)
        # Concat
        y3 = torch.cat([box3, conf3, cls_output3], dim=1)
        # Split to 3 channels
        outputs3 = []
        start3, end3 = 0, 0
        for _i, xi in enumerate(x[6:]):
            end3 += xi.shape[-2] * xi.shape[-1]
            outputs3.append(y3[:, :, start3:end3].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start3 += xi.shape[-2] * xi.shape[-1]
        return outputs3

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module
        for a, b, s in zip(m.cv2, m.cv3, m.stride):
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv4, m.cv5, m.stride):
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv6, m.cv7, m.stride):
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (5 objects and 80 classes per 640 image)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Tool for converting Yolov9 models to the blob format used by OAK",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        "-m",
        "-i",
        "-w",
        "--input_model",
        type=Path,
        required=True,
        help="weights path",
    )
    parser.add_argument(
        "-imgsz",
        "--img-size",
        nargs="+",
        type=int,
        default=[640, 640],
        help="image size",
    )  # height, width
    parser.add_argument("-op", "--opset", type=int, default=12, help="opset version")
    parser.add_argument(
        "-n",
        "--name",
        type=str,
        help="The name of the model to be saved, none means using the same name as the input model",
    )
    parser.add_argument(
        "-o",
        "--output_dir",
        type=Path,
        help="Directory for saving files, none means using the same path as the input model",
    )
    parser.add_argument(
        "-b",
        "--blob",
        action="store_true",
        help="OAK Blob export",
    )
    parser.add_argument(
        "-s",
        "--spatial_detection",
        action="store_true",
        help="Inference with depth information",
    )
    parser.add_argument(
        "-sh",
        "--shaves",
        type=int,
        help="Number of shaves used when compiling the blob",
    )
    parser.add_argument(
        "-t",
        "--convert_tool",
        type=str,
        help="Which tool is used to convert, docker: should already have docker (https://docs.docker.com/get-docker/) and docker-py (pip install docker) installed; blobconverter: uses an online server to convert the model and should already have blobconverter (pip install blobconverter); local: use openvino-dev (pip install openvino-dev) and openvino 2022.1 (https://docs.oakchina.cn/en/latest/pages/Advanced/Neural_networks/local_convert_openvino.html#id2) to convert",
        default="blobconverter",
        choices=["docker", "blobconverter", "local"],
    )

    args = parser.parse_args()
    args.input_model = args.input_model.resolve().absolute()
    if args.name is None:
        args.name = args.input_model.stem
    if args.output_dir is None:
        args.output_dir = args.input_model.parent
    args.img_size *= 2 if len(args.img_size) == 1 else 1  # expand
    if args.shaves is None:
        args.shaves = 5 if args.spatial_detection else 6
    return args


def export(input_model, img_size, output_model, opset, **kwargs):
    t = time.time()

    # Load PyTorch model
    device = select_device("cpu")
    # load FP32 model
    model = attempt_load(input_model, device=device, inplace=True, fuse=True)
    labels = model.module.names if hasattr(model, "module") else model.names  # get class names
    labels = labels if isinstance(labels, list) else list(labels.values())

    # check num classes and labels
    assert model.nc == len(labels), f"Model class count {model.nc} != len(names) {len(labels)}"

    # Replace with the custom Detection Head
    if isinstance(model.model[-1], (Detect, DDetect)):
        logging.info("Replacing model.model[-1] with DetectV9")
        model.model[-1] = DetectV9(model.model[-1])
    elif isinstance(model.model[-1], (DualDetect, DualDDetect)):
        logging.info("Replacing model.model[-1] with DualDetectV9")
        model.model[-1] = DualDetectV9(model.model[-1])
    elif isinstance(model.model[-1], (TripleDetect, TripleDDetect)):
        logging.info("Replacing model.model[-1] with TripleDetectV9")
        model.model[-1] = TripleDetectV9(model.model[-1])

    num_branches = model.model[-1].nl

    # Input
    img = torch.zeros(1, 3, *img_size).to(device)  # image size(1,3,320,320) Detection
    model.eval()
    model(img)  # dry run

    # ONNX export
    try:
        import onnx

        print()
        logging.info(f"Starting ONNX export with onnx {onnx.__version__}...")
        output_list = ["output%s_yolov6r2" % (i + 1) for i in range(num_branches)]
        with BytesIO() as f:
            torch.onnx.export(
                model,
                img,
                f,
                verbose=False,
                opset_version=opset,
                input_names=["images"],
                output_names=output_list,
            )

            # Checks
            onnx_model = onnx.load_from_string(f.getvalue())  # load onnx model
            onnx.checker.check_model(onnx_model)  # check onnx model

            try:
                import onnxsim

                logging.info("Starting to simplify ONNX...")
                onnx_model, check = onnxsim.simplify(onnx_model)
                assert check, "assert check failed"
            except ImportError:
                logging.warning(
                    "onnxsim is not found, if you want to simplify the onnx, "
                    + "you should install it:\n\t"
                    + "pip install -U onnxsim onnxruntime\n"
                    + "then use:\n\t"
                    + f'python -m onnxsim "{output_model}" "{output_model}"'
                )
            except Exception:
                logging.exception("Simplifier failure")

        onnx.save(onnx_model, output_model)
        logging.info(f"ONNX export success, saved as:\n\t{output_model}")
    except Exception:
        logging.exception("ONNX export failure")

    # generate anchors and sides
    anchors = []

    # generate masks
    masks = {}

    logging.info(f"anchors:\n\t{anchors}")
    logging.info(f"anchor_masks:\n\t{masks}")

    export_json = output_model.with_suffix(".json")
    export_json.write_text(
        json.dumps(
            {
                "nn_config": {
                    "output_format": "detection",
                    "NN_family": "YOLO",
                    "input_size": f"{img_size[0]}x{img_size[1]}",
                    "NN_specific_metadata": {
                        "classes": model.nc,
                        "coordinates": 4,
                        "anchors": anchors,
                        "anchor_masks": masks,
                        "iou_threshold": 0.3,
                        "confidence_threshold": 0.5,
                    },
                },
                "mappings": {"labels": labels},
            },
            indent=4,
        )
    )
    logging.info(f"Anchors data export success, saved as:\n\t{export_json}")

    # Finish
    logging.info("Export complete (%.2fs).\n" % (time.time() - t))


def convert(convert_tool, output_model, shaves, output_dir, name, **kwargs):
    t = time.time()

    export_dir: Path = output_dir.joinpath(name + "_openvino")
    export_dir.mkdir(parents=True, exist_ok=True)

    export_xml = export_dir.joinpath(name + ".xml")
    export_blob = export_dir.joinpath(name + ".blob")

    if convert_tool == "blobconverter":
        import blobconverter

        blobconverter.from_onnx(
            model=str(output_model),
            data_type="FP16",
            shaves=shaves,
            use_cache=False,
            version="2021.4",
            output_dir=export_dir,
            optimizer_params=[
                "--scale=255",
                "--reverse_input_channel",
                # "--use_new_frontend",
            ],
            # download_ir=True,
        )
        """
        with ZipFile(blob_path, "r", ZIP_LZMA) as zip_obj:
            for name in zip_obj.namelist():
                zip_obj.extract(
                    name,
                    export_dir,
                )
        blob_path.unlink()
        """
    elif convert_tool == "docker":
        import docker

        export_dir = Path("/io").joinpath(export_dir.name)
        export_xml = export_dir.joinpath(name + ".xml")
        export_blob = export_dir.joinpath(name + ".blob")

        client = docker.from_env()
        image = client.images.pull("openvino/ubuntu20_dev", tag="2022.3.1")
        docker_output = client.containers.run(
            image=image.tags[0],
            command=f"bash -c \"mo -m {name}.onnx -n {name} -o {export_dir} --static_shape --reverse_input_channels --scale=255 --use_new_frontend && echo 'MYRIAD_ENABLE_MX_BOOT NO' | tee /tmp/myriad.conf >> /dev/null && /opt/intel/openvino/tools/compile_tool/compile_tool -m {export_xml} -o {export_blob} -ip U8 -VPU_NUMBER_OF_SHAVES {shaves} -VPU_NUMBER_OF_CMX_SLICES {shaves} -d MYRIAD -c /tmp/myriad.conf\"",
            remove=True,
            volumes=[
                f"{output_dir}:/io",
            ],
            working_dir="/io",
        )
        logging.info(docker_output.decode("utf8"))
    else:
        import subprocess as sp

        # OpenVINO export
        logging.info("Starting to export OpenVINO...")
        OpenVINO_cmd = f"mo --input_model {output_model} --output_dir {export_dir} --data_type FP16 --scale 255 --reverse_input_channel"
        try:
            sp.check_output(OpenVINO_cmd, shell=True)
            logging.info(f"OpenVINO export success, saved as {export_dir}")
        except sp.CalledProcessError:
            logging.exception("")
            logging.warning("OpenVINO export failure!")
            logging.warning(f"By the way, you can try to export OpenVINO use:\n\t{OpenVINO_cmd}")

        # OAK Blob export
        logging.info("Then you can try to export blob use:")
        blob_cmd = (
            "echo 'MYRIAD_ENABLE_MX_BOOT ON' | tee /tmp/myriad.conf\n"
            + f"compile_tool -m {export_xml} -o {export_blob} -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES {shaves} -VPU_NUMBER_OF_CMX_SLICES {shaves} -c /tmp/myriad.conf"
        )
        logging.info(f"{blob_cmd}")
        logging.info(
            "compile_tool maybe in the path: /opt/intel/openvino/tools/compile_tool/compile_tool, if you install openvino 2022.1 with apt"
        )

    logging.info("Convert complete (%.2fs).\n" % (time.time() - t))


if __name__ == "__main__":
    args = parse_args()
    logging.info(args)
    print()
    output_model = args.output_dir / (args.name + ".onnx")

    export(output_model=output_model, **vars(args))
    if args.blob:
        convert(output_model=output_model, **vars(args))
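
Before converting further, it's worth sanity-checking the exported ONNX. Below is a minimal sketch (not part of the original script) using onnxruntime; it assumes a 640×640 export, where each of the three output branches should have shape (1, 4 + 1 + num_classes, H, W):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov9-c.onnx")
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)

# Expect three outputs named output1_yolov6r2 ... output3_yolov6r2,
# with spatial sizes 80x80, 40x40 and 20x20 at 640x640 input.
for meta, out in zip(sess.get_outputs(), sess.run(None, {"images": dummy})):
    print(meta.name, out.shape)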

You can inspect the model structure with Netron.

▌Conversion

Local conversion with OpenVINO

onnx -> openvino

mo is a command-line tool that ships with openvino_dev 2022.1; install it with pip install openvino-dev.

mo --input_model yolov9-c.onnx --scale=255 --reverse_input_channel
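
A note on these two flags: --scale=255 folds the divide-by-255 normalization into the IR, and --reverse_input_channel lets the network accept the BGR frames the OAK color camera produces, even though YOLOv9 is trained on RGB. A rough, illustrative sketch of the host-side preprocessing these flags replace:

import numpy as np

frame_bgr = np.zeros((640, 640, 3), dtype=np.uint8)  # dummy BGR frame
frame_rgb = frame_bgr[..., ::-1]                      # --reverse_input_channel
model_input = frame_rgb.astype(np.float32) / 255.0    # --scale=255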

openvino -> blob

compile_tool is a tool that ships with the OpenVINO Runtime:

<path>/compile_tool -m yolov9-c.xml \
    -ip U8 -d MYRIAD \
    -VPU_NUMBER_OF_SHAVES 6 \
    -VPU_NUMBER_OF_CMX_SLICES 6
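
If compile_tool is not on your PATH, it normally sits inside the OpenVINO install tree (the export script above prints the same hint). Assuming a default OpenVINO 2022.1 install under /opt/intel/openvino — adjust the paths for your machine:

source /opt/intel/openvino/setupvars.sh
/opt/intel/openvino/tools/compile_tool/compile_tool -m yolov9-c.xml \
    -ip U8 -d MYRIAD \
    -VPU_NUMBER_OF_SHAVES 6 \
    -VPU_NUMBER_OF_CMX_SLICES 6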

Online conversion

blobconverter web page: http://blobconverter.luxonis.com/

  • Open the page and proceed as outlined below:

  • Set the parameters and convert the model:

  1. Select the onnx model
  2. Set optimizer_params to --data_type=FP16 --scale=255 --reverse_input_channel
  3. Set shaves to 6
  4. Convert

blobconverter Python code:

import blobconverter

blobconverter.from_onnx(
    "yolov9-c.onnx",
    optimizer_params=[
        "--scale=255",
        "--reverse_input_channel",
    ],
    shaves=6,
)

blobconverter CLI:

blobconverter --onnx yolov9-c.onnx -sh 6 -o . --optimizer-params "--scale=255 --reverse_input_channel"

▌DepthAI example

Correct decoding requires these configurable, network-specific parameters:

  • setNumClasses – number of YOLO detection classes
  • setIouThreshold – IoU threshold
  • setConfidenceThreshold – confidence threshold; detections below this threshold are filtered out
# coding=utf-8
import cv2
import depthai as dai
import numpy as np

numClasses = 80
model = dai.OpenVINO.Blob("yolov9-c.blob")
dim = next(iter(model.networkInputs.values())).dims
W, H = dim[:2]

output_name, output_tensor = next(iter(model.networkOutputs.items()))
if "yolov6" in output_name:
    numClasses = output_tensor.dims[2] - 5
else:
    numClasses = output_tensor.dims[2] // 3 - 5

labelMap = [
    # "class_1", "class_2", "..."
    "class_%s" % i
    for i in range(numClasses)
]

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutNN = pipeline.create(dai.node.XLinkOut)

xoutRgb.setStreamName("image")
xoutNN.setStreamName("nn")

# Properties
camRgb.setPreviewSize(W, H)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

# Network specific settings
detectionNetwork.setBlob(model)
detectionNetwork.setConfidenceThreshold(0.5)

# Yolo specific parameters
detectionNetwork.setNumClasses(numClasses)
detectionNetwork.setCoordinateSize(4)
detectionNetwork.setAnchors([])
detectionNetwork.setAnchorMasks({})
detectionNetwork.setIouThreshold(0.5)

# Linking
camRgb.preview.link(detectionNetwork.input)
camRgb.preview.link(xoutRgb.input)
detectionNetwork.out.link(xoutNN.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames and nn data from the outputs defined above
    imageQueue = device.getOutputQueue(name="image", maxSize=4, blocking=False)
    detectQueue = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    frame = None
    detections = []

    # nn data, being the bounding box locations, are in <0..1> range - they need to be normalized with frame width/height
    def frameNorm(frame, bbox):
        normVals = np.full(len(bbox), frame.shape[0])
        normVals[::2] = frame.shape[1]
        return (np.clip(np.array(bbox), 0, 1) * normVals).astype(int)

    def drawText(frame, text, org, color=(255, 255, 255), thickness=1):
        cv2.putText(frame, text, org, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), thickness + 3, cv2.LINE_AA)
        cv2.putText(frame, text, org, cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, thickness, cv2.LINE_AA)

    def drawRect(frame, topLeft, bottomRight, color=(255, 255, 255), thickness=1):
        cv2.rectangle(frame, topLeft, bottomRight, (0, 0, 0), thickness + 3)
        cv2.rectangle(frame, topLeft, bottomRight, color, thickness)

    def displayFrame(name, frame):
        color = (128, 128, 128)
        for detection in detections:
            bbox = frameNorm(frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax))
            drawText(
                frame=frame,
                text=labelMap[detection.label],
                org=(bbox[0] + 10, bbox[1] + 20),
            )
            drawText(
                frame=frame,
                text=f"{detection.confidence:.2%}",
                org=(bbox[0] + 10, bbox[1] + 35),
            )
            drawRect(
                frame=frame,
                topLeft=(bbox[0], bbox[1]),
                bottomRight=(bbox[2], bbox[3]),
                color=color,
            )
        # Show the frame
        cv2.imshow(name, frame)

    while True:
        imageQueueData = imageQueue.tryGet()
        detectQueueData = detectQueue.tryGet()

        if imageQueueData is not None:
            frame = imageQueueData.getCvFrame()
        if detectQueueData is not None:
            detections = detectQueueData.detections

        if frame is not None:
            displayFrame("rgb", frame)

        if cv2.waitKey(1) == ord("q"):
            break
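
The export script's -s/--spatial_detection flag reserves shaves with depth in mind; on the device side the matching change is to swap YoloDetectionNetwork for YoloSpatialDetectionNetwork and feed it a depth stream. A minimal, untested sketch of the extra pipeline pieces, assuming a stereo-equipped OAK (e.g. OAK-D):

# Spatial variant of the detection node
detectionNetwork = pipeline.create(dai.node.YoloSpatialDetectionNetwork)

monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)

monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to the color camera

monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)
stereo.depth.link(detectionNetwork.inputDepth)

# Same blob/threshold settings as above, plus depth-specific ones
detectionNetwork.setBoundingBoxScaleFactor(0.5)
detectionNetwork.setDepthLowerThreshold(100)   # mm
detectionNetwork.setDepthUpperThreshold(5000)  # mm

Each detection then also carries detection.spatialCoordinates.x/y/z in millimeters.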

▌References

https://docs.oakchina.cn/en/latest/
https://www.oakchina.cn/selection-guide/


OAK China
| Official distributor and technical service provider for OpenCV AI Kit in China
| Tracking the latest AI technology and products

Hit "+Follow" to get the latest updates ↗↗

