YOLOv8 Training and Testing (Ubuntu 18.04, TensorRT, ROS)

1 Dataset Preparation

1.1 Annotating the Data

Linux/Ubuntu/Mac
Requires at least Python 2.6 (Python 3 or later with PyQt5 is recommended)
Ubuntu Linux (Python 3 + Qt5)

git clone https://gitcode.com/gh_mirrors/la/labelImg.git
sudo apt-get install pyqt5-dev-tools
cd labelImg
sudo pip3 install -r requirements/requirements-linux-python3.txt
make qt5py3
python3 labelImg.py

Running python3 labelImg.py fails with:
File "/home/wyh/environment_setting/labelImg-master/libs/labelDialog.py", line 37, in __init__ layout.addWidget(bb, alignment=Qt.AlignmentFlag.AlignLeft) AttributeError: type object 'AlignmentFlag' has no attribute 'AlignLeft'
Cause: a PyQt/PySide version mismatch.
Fix: if you are sure PyQt5 is in use, change layout.addWidget(bb, alignment=Qt.AlignmentFlag.AlignLeft) to layout.addWidget(bb, alignment=Qt.AlignLeft).
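A version-tolerant variant of the same fix (a sketch that resolves the flag at runtime instead of assuming a particular PyQt version):

# sketch: pick whichever enum style this PyQt build exposes
try:
    align_left = Qt.AlignmentFlag.AlignLeft   # newer scoped-enum access
except AttributeError:
    align_left = Qt.AlignLeft                 # older flat-attribute access
layout.addWidget(bb, alignment=align_left)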

1.2 Creating the Data Folder Structure

The dataset folder contains: images/ (the image data), labels/ (the converted YOLO .txt files), xmls/ (the XML annotations produced by labelImg), and class.txt (the class-name list).
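For reference, the same layout can be created from the shell (a sketch; adjust the root directory to your own workspace):

mkdir -p datasets/origin_data/{images,labels,xmls}
touch datasets/origin_data/class.txt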

1.3 Converting the Annotated XML Files to TXT

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Convert .xml annotation files to YOLO .txt files
import os
import xml.etree.ElementTree as ET

# object classes to detect
classes = ["ore carrier", "passenger ship", "container ship",
           "bulk cargo carrier", "general cargo ship", "fishing boat"]

CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))


def convert(size, box):
    """Convert an (x_min, x_max, y_min, y_max) box to normalized YOLO (x, y, w, h)."""
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0    # (x_min + x_max) / 2.0
    y = (box[2] + box[3]) / 2.0    # (y_min + y_max) / 2.0
    w = box[1] - box[0]            # x_max - x_min
    h = box[3] - box[2]            # y_max - y_min
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return (x, y, w, h)


def convert_annotation(image_id):
    # directory of the .xml files ("地址1") -- replace with your own path
    in_file = open('地址1/%s.xml' % (image_id), encoding='UTF-8')
    # directory for the generated .txt files ("地址2") -- replace with your own path
    out_file = open('地址2/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        cls = obj.find('name').text
        if cls not in classes:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')


# directory of the .xml files ("地址1")
xml_path = os.path.join(CURRENT_DIR, '地址1/')
# list the xml files and convert each one
img_xmls = os.listdir(xml_path)
for img_xml in img_xmls:
    label_name = img_xml.split('.')[0]
    print(label_name)
    convert_annotation(label_name)

Replace the placeholder paths (地址1 and 地址2) in the code with your actual paths.
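After conversion, each line of a label .txt file describes one object: the class index followed by the normalized box center and size, for example (values illustrative):

2 0.512500 0.430556 0.237500 0.180556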

2 Splitting the YOLO Data into train, val, and test

import os
import random
import shutil


def split_dataset(images_dir, labels_dir, output_dir, split_ratio=(0.8, 0.1, 0.1)):
    """
    Split an image/label dataset into training, validation, and test sets.

    :param images_dir: path to the image folder
    :param labels_dir: path to the label folder
    :param output_dir: output directory
    :param split_ratio: (train, val, test) split ratio
    """
    # make sure the output directories exist
    os.makedirs(output_dir, exist_ok=True)
    for subdir in ['train', 'val', 'test']:
        os.makedirs(os.path.join(output_dir, subdir, 'images'), exist_ok=True)
        os.makedirs(os.path.join(output_dir, subdir, 'labels'), exist_ok=True)

    # collect all image file names
    images = [f for f in os.listdir(images_dir) if f.endswith('.jpg') or f.endswith('.png')]
    labels = [f.replace('.jpg', '.txt').replace('.png', '.txt') for f in images]

    # shuffle images and labels together
    combined = list(zip(images, labels))
    random.shuffle(combined)
    images[:], labels[:] = zip(*combined)

    # compute the split points
    num_train = int(len(images) * split_ratio[0])
    num_val = int(len(images) * split_ratio[1])

    # copy each image/label pair into its subset
    for i, image in enumerate(images):
        label = labels[i]
        if i < num_train:
            subset = 'train'
        elif i < num_train + num_val:
            subset = 'val'
        else:
            subset = 'test'
        shutil.copy(os.path.join(images_dir, image), os.path.join(output_dir, subset, 'images', image))
        shutil.copy(os.path.join(labels_dir, label), os.path.join(output_dir, subset, 'labels', label))


# example call
split_dataset('/home/wyh/artrc_catkin/src/artrc_yolov8/datasets/origin_data/images',
              '/home/wyh/artrc_catkin/src/artrc_yolov8/datasets/origin_data/labels',
              '/home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data')
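If the split needs to be reproducible across runs, the shuffle can be seeded before calling split_dataset (an optional addition, not in the original script):

random.seed(0)  # makes random.shuffle deterministic, so reruns produce the same split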

After running, split_data contains train/, val/, and test/ subfolders, each holding images/ and labels/ directories.

3 Creating a YAML File for the Dataset

import yaml
import os


def create_yaml(output_dir, train_dir, val_dir, test_dir, class_names, num_classes):
    """
    Create a YOLOv8 dataset configuration file.

    :param output_dir: output directory
    :param train_dir: training-set directory
    :param val_dir: validation-set directory
    :param test_dir: test-set directory
    :param class_names: list of class names
    :param num_classes: number of classes
    """
    data = {
        'train': train_dir,
        'val': val_dir,
        'test': test_dir,
        'nc': num_classes,
        'names': class_names
    }
    with open(os.path.join(output_dir, 'dataset.yaml'), 'w') as f:
        yaml.dump(data, f, default_flow_style=False)


# example call
create_yaml('/home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data',
            '/home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data/train/images',
            '/home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data/val/images',
            '/home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data/test/images',
            ['corrosion', 'craze', 'hide_craze', 'surface_attach', 'surface_corrosion',
             'surface_eye', 'surface_injure', 'surface_oil', 'thunderstrike'], 9)

Running it writes a dataset.yaml into the split_data directory.
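Its contents should look roughly like this (yaml.dump sorts the keys alphabetically):

names:
- corrosion
- craze
- hide_craze
- surface_attach
- surface_corrosion
- surface_eye
- surface_injure
- surface_oil
- thunderstrike
nc: 9
test: /home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data/test/images
train: /home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data/train/images
val: /home/wyh/artrc_catkin/src/artrc_yolov8/datasets/split_data/val/images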

4 Training on the Dataset

cd ultralytics
yolo task=detect mode=train model=yolov8n.pt data=ultralytics/cfg/datasets/dataset.yaml batch=8 epochs=200 imgsz=640 workers=32 device=0
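After training, the best weights are saved under runs/detect/train/weights by default; assuming that default location, they can be sanity-checked against the validation split with:

yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=ultralytics/cfg/datasets/dataset.yaml imgsz=640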

5 Using the Trained Model

5.1 Converting the Trained Weights Between Formats

5.1.1 Converting .pt to ONNX

Method 1: convert with the pt_to_onnx.py script below

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from ultralytics import YOLO

model = YOLO("best.pt")
success = model.export(format="onnx", half=False, dynamic=True, opset=17)
print("export finished:", success)
cd ultralytics
python pt_to_onnx.py
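To confirm the export succeeded, the model can be inspected with onnxruntime (a quick check; with dynamic=True the batch, height, and width axes show up as symbolic names):

import onnxruntime as rt

sess = rt.InferenceSession("best.onnx")
print(sess.get_inputs()[0].name, sess.get_inputs()[0].shape)    # e.g. images [batch, 3, height, width]
print(sess.get_outputs()[0].name, sess.get_outputs()[0].shape)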

Method 2: convert from the command line

# go to the folder containing the weight files
cd ultralytics 
setconda            # the author's shell alias (presumably sets up conda); skip if conda is already on PATH
conda activate yolov8
yolo mode=export model=yolov8n.pt format=onnx dynamic=True    # optionally add simplify=True
yolo mode=export model=yolov8s.pt format=onnx dynamic=True    # same command for other model sizes

5.1.2 Converting .onnx to .trt

cd /environment_setting/tensorrt-alpha/data/yolov8
# generate the .trt engine files
# 640 input size; ../../../TensorRT-8.4.1.5/bin/trtexec is the path to trtexec -- adjust it to your installation
../../../TensorRT-8.4.1.5/bin/trtexec   --onnx=best.onnx  --saveEngine=best.trt  --buildOnly --minShapes=images:1x3x640x640 --optShapes=images:4x3x640x640 --maxShapes=images:8x3x640x640
../../../TensorRT-8.4.1.5/bin/trtexec   --onnx=yolov8s.onnx  --saveEngine=yolov8s.trt  --buildOnly --minShapes=images:1x3x640x640 --optShapes=images:4x3x640x640 --maxShapes=images:8x3x640x640
../../../TensorRT-8.4.1.5/bin/trtexec   --onnx=yolov8m.onnx  --saveEngine=yolov8m.trt  --buildOnly --minShapes=images:1x3x640x640 --optShapes=images:4x3x640x640 --maxShapes=images:8x3x640x640
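Once built, an engine can be sanity-checked by loading it back and timing it at a fixed shape (same trtexec path as above):

../../../TensorRT-8.4.1.5/bin/trtexec --loadEngine=best.trt --shapes=images:1x3x640x640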

5.2 Detection with the .pt File

#!/home/wyh/.conda/envs/yolov8/bin/python3.8
# -*- coding: utf-8 -*-
import cv2
import torch
import rospy
import numpy as np
from ultralytics import YOLO
from time import time
from std_msgs.msg import Header
from sensor_msgs.msg import Image
from artrc_yolov8.msg import BoundingBox, BoundingBoxes


class Yolo_Dect:
    def __init__(self):
        # load parameters
        weight_path = rospy.get_param('~weight_path', '')
        image_topic = rospy.get_param('~image_topic', '/camera/color/image_raw')
        pub_topic = rospy.get_param('~pub_topic', '/yolov8/BoundingBoxes')
        self.camera_frame = rospy.get_param('~camera_frame', '')
        conf = rospy.get_param('~conf', '0.5')
        self.visualize = rospy.get_param('~visualize', 'True')

        # which device will be used
        if (rospy.get_param('/use_cpu', 'true')):
            self.device = 'cpu'
        else:
            self.device = 'cuda'

        self.model = YOLO(weight_path)
        self.model.fuse()
        self.model.conf = conf
        self.color_image = Image()
        self.getImageStatus = False

        # per-class drawing colors
        self.classes_colors = {}

        # image subscriber
        self.color_sub = rospy.Subscriber(image_topic, Image, self.image_callback,
                                          queue_size=1, buff_size=52428800)

        # output publishers
        self.position_pub = rospy.Publisher(pub_topic, BoundingBoxes, queue_size=1)
        self.image_pub = rospy.Publisher('/yolov8/detection_image', Image, queue_size=1)

        # load an image and run detection
        self.load_and_detect()

    def image_callback(self, image):
        # existing image callback logic
        pass

    def load_and_detect(self):
        # load an image from file or another source
        image_path = '/home/wyh/artrc_catkin/src/artrc_yolov8/image/60.jpg'  # replace with your image path
        self.color_image = cv2.imread(image_path)
        if self.color_image is None:
            rospy.logerr("Failed to load image from path: %s", image_path)
            return
        self.color_image = cv2.cvtColor(self.color_image, cv2.COLOR_BGR2RGB)
        results = self.model(self.color_image, show=False, conf=0.3)
        self.dectshow(results, self.color_image.shape[0], self.color_image.shape[1])
        cv2.waitKey(3)

    def dectshow(self, results, height, width):
        # draw the detections and publish the results
        self.frame = results[0].plot()
        print(str(results[0].speed['inference']))
        fps = 1000.0 / results[0].speed['inference']
        cv2.putText(self.frame, f'FPS: {int(fps)}', (20, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2, cv2.LINE_AA)

        self.boundingBoxes = BoundingBoxes()
        self.boundingBoxes.header = Header(stamp=rospy.Time.now())
        self.boundingBoxes.image_header = Header(stamp=rospy.Time.now())

        # count detections per class
        class_count = {}
        total_count = 0
        for result in results[0].boxes:
            boundingBox = BoundingBox()
            boundingBox.xmin = np.int64(result.xyxy[0][0].item())
            boundingBox.ymin = np.int64(result.xyxy[0][1].item())
            boundingBox.xmax = np.int64(result.xyxy[0][2].item())
            boundingBox.ymax = np.int64(result.xyxy[0][3].item())
            boundingBox.Class = results[0].names[result.cls.item()]
            boundingBox.probability = result.conf.item()
            self.boundingBoxes.bounding_boxes.append(boundingBox)
            if boundingBox.Class in class_count:
                class_count[boundingBox.Class] += 1
            else:
                class_count[boundingBox.Class] = 1
            total_count += 1
            print("cl:", boundingBox.Class)

        self.position_pub.publish(self.boundingBoxes)
        self.publish_image(self.frame, height, width)
        print("data", self.boundingBoxes)
        print("Class Count:", class_count)
        print("total count:", total_count)

        # if self.visualize:
        #     cv2.imshow('YOLOv8', self.frame)

    def publish_image(self, imgdata, height, width):
        image_temp = Image()
        header = Header(stamp=rospy.Time.now())
        header.frame_id = self.camera_frame
        image_temp.height = height
        image_temp.width = width
        image_temp.encoding = 'bgr8'
        image_temp.data = np.array(imgdata).tobytes()
        image_temp.header = header
        image_temp.step = width * 3
        self.image_pub.publish(image_temp)


def main():
    rospy.init_node('yolov8_ros', anonymous=True)
    yolo_dect = Yolo_Dect()
    rospy.spin()


if __name__ == "__main__":
    main()
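The node reads its settings from private ROS parameters, so it is usually started from a launch file. A minimal sketch follows; the package name artrc_yolov8 comes from the message import above, but the script file name and paths are assumptions to adapt:

<launch>
  <param name="use_cpu" value="false"/>
  <node pkg="artrc_yolov8" type="yolov8_pt_detect.py" name="yolov8_ros" output="screen">
    <param name="weight_path" value="/home/wyh/artrc_catkin/src/artrc_yolov8/weights/best.pt"/>
    <param name="image_topic" value="/camera/color/image_raw"/>
    <param name="pub_topic" value="/yolov8/BoundingBoxes"/>
    <param name="conf" value="0.5"/>
  </node>
</launch>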

5.3 Detection with the .onnx File

#!/home/wyh/.conda/envs/yolov8/bin/python3.8
# -*- coding: utf-8 -*-
import onnxruntime as rt
import numpy as np
import cv2
# class labels -- define these to match your model
CLASS_NAMES = ['corrosion', 'craze', 'hide_craze', 'surface_attach', 'surface_corrosion',
               'surface_eye', 'surface_injure', 'surface_oil', 'thunderstrike']

# drawing colors, looked up by class name (the original keyed these label_0..label_8,
# which never matched the class names, so every box fell back to white)
COLOR_MAP = {
    "corrosion": (255, 0, 0),            # red
    "craze": (0, 255, 0),                # green
    "hide_craze": (0, 0, 255),           # blue
    "surface_attach": (255, 255, 0),     # yellow
    "surface_corrosion": (255, 0, 255),  # magenta
    "surface_eye": (0, 255, 255),        # cyan
    "surface_injure": (128, 0, 128),     # purple
    "surface_oil": (255, 165, 0),        # orange
    "thunderstrike": (128, 128, 128),    # gray
}


def nms(pred, conf_thres, iou_thres):
    # keep predictions above the confidence threshold
    conf = pred[..., 4] > conf_thres
    box = pred[conf == True]
    cls_conf = box[..., 5:]
    cls = []
    for i in range(len(cls_conf)):
        cls.append(int(np.argmax(cls_conf[i])))
    total_cls = list(set(cls))
    output_box = []
    # greedy NMS, done per class
    for i in range(len(total_cls)):
        clss = total_cls[i]
        cls_box = []
        for j in range(len(cls)):
            if cls[j] == clss:
                box[j][5] = clss
                cls_box.append(box[j][:6])
        cls_box = np.array(cls_box)
        box_conf = cls_box[..., 4]
        # sort this class's boxes by confidence, highest first
        # (the original picked boxes in arbitrary order, which weakens NMS)
        cls_box = cls_box[np.argsort(box_conf)[::-1]]
        output_box.append(cls_box[0])  # the highest-confidence box is always kept
        cls_box = np.delete(cls_box, 0, 0)
        while len(cls_box) > 0:
            max_conf_box = output_box[len(output_box) - 1]
            del_index = []
            for j in range(len(cls_box)):
                current_box = cls_box[j]
                inter_area = getInter(max_conf_box, current_box)
                iou = getIou(max_conf_box, current_box, inter_area)
                if iou > iou_thres:
                    del_index.append(j)
            cls_box = np.delete(cls_box, del_index, 0)
            if len(cls_box) > 0:
                output_box.append(cls_box[0])
                cls_box = np.delete(cls_box, 0, 0)
    return output_box


def getIou(box1, box2, inter_area):
    box1_area = box1[2] * box1[3]
    box2_area = box2[2] * box2[3]
    union = box1_area + box2_area - inter_area
    iou = inter_area / union
    return iou


def getInter(box1, box2):
    # convert center-format boxes to corner format
    # (the original computed box2_y1 from box1's height -- a typo, fixed here)
    box1_x1, box1_y1, box1_x2, box1_y2 = box1[0] - box1[2] / 2, box1[1] - box1[3] / 2, \
                                         box1[0] + box1[2] / 2, box1[1] + box1[3] / 2
    box2_x1, box2_y1, box2_x2, box2_y2 = box2[0] - box2[2] / 2, box2[1] - box2[3] / 2, \
                                         box2[0] + box2[2] / 2, box2[1] + box2[3] / 2
    if box1_x1 > box2_x2 or box1_x2 < box2_x1:
        return 0
    if box1_y1 > box2_y2 or box1_y2 < box2_y1:
        return 0
    x_list = [box1_x1, box1_x2, box2_x1, box2_x2]
    x_list = np.sort(x_list)
    x_inter = x_list[2] - x_list[1]
    y_list = [box1_y1, box1_y2, box2_y1, box2_y2]
    y_list = np.sort(y_list)
    y_inter = y_list[2] - y_list[1]
    inter = x_inter * y_inter
    return inter


# draw boxes and add labels
def draw(img, xscale, yscale, pred):
    img_ = img.copy()
    if len(pred):
        for detect in pred:
            label = int(detect[5])            # class index
            label_name = CLASS_NAMES[label]   # class name from the index
            detect_coords = [int((detect[0] - detect[2] / 2) * xscale), int((detect[1] - detect[3] / 2) * yscale),
                             int((detect[0] + detect[2] / 2) * xscale), int((detect[1] + detect[3] / 2) * yscale)]
            # look up the color; fall back to white if the class has none
            color = COLOR_MAP.get(label_name, (255, 255, 255))
            # draw the rectangle
            img_ = cv2.rectangle(img_, (detect_coords[0], detect_coords[1]),
                                 (detect_coords[2], detect_coords[3]), color, 2)
            # draw the label
            img_ = cv2.putText(img_, label_name, (detect_coords[0], detect_coords[1] - 5),
                               cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return img_


if __name__ == '__main__':
    height, width = 640, 640
    img0 = cv2.imread('/home/wyh/artrc_catkin/src/artrc_yolov8/image/60.jpg')
    x_scale = img0.shape[1] / width
    y_scale = img0.shape[0] / height
    img = img0 / 255.
    img = cv2.resize(img, (width, height))
    img = np.transpose(img, (2, 0, 1))
    data = np.expand_dims(img, axis=0)

    sess = rt.InferenceSession('/home/wyh/artrc_catkin/src/artrc_yolov8/weights/best.onnx')
    input_name = sess.get_inputs()[0].name
    label_name = sess.get_outputs()[0].name

    # run inference; the output shape is (1, 4 + num_classes, 8400)
    pred = sess.run([label_name], {input_name: data.astype(np.float32)})[0]
    pred = np.squeeze(pred)
    pred = np.transpose(pred, (1, 0))
    # insert the per-box confidence (max class score) as column 4
    pred_class = pred[..., 4:]
    pred_conf = np.max(pred_class, axis=-1)
    pred = np.insert(pred, 4, pred_conf, axis=-1)

    result = nms(pred, 0.3, 0.45)
    ret_img = draw(img0, x_scale, y_scale, result)

    # display the image with OpenCV
    cv2.imshow('Detection Result', ret_img)
    cv2.waitKey(0)           # wait for a key press
    cv2.destroyAllWindows()  # close all OpenCV windows
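If this script runs on a machine without a display (for example over SSH), cv2.imshow will fail; saving the drawn frame is a simple alternative:

cv2.imwrite('result.jpg', ret_img)  # write the result to disk instead of displaying it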

5.4 Detection with the .trt File

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <std_msgs/Header.h>
#include <opencv2/opencv.hpp>
#include "../include/artrc_yolov8/yolo.h"
#include "../include/artrc_yolov8/yolov8.h"
#include <NvInfer.h>
#include <NvUtils.h>
#include <ctime>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <fstream>
#include "../include/artrc_yolov8/yolov8_trt.h"
#include <mutex>
cv::Mat image_;
namespace artrc_yolov8
{
YoloResultData::~YoloResultData()
{
    ;
}

void YoloResultData::init()
{
    ros::NodeHandle nh;
    img_receive_sub_ = nh.subscribe("/usb_camera/image_raw", 1, &YoloResultData::image_receive_callback, this);
    img_detect_pub_ = nh.advertise<sensor_msgs::Image>("/detect_img", 1);
    boundingbox_result_pub_ = nh.advertise<artrc_yolov8::boundingbox_result_msgs>("/boundingbox_result", 1);
}

void YoloResultData::processImage()
{
    if (!image_.empty()) {
        // process the image here
        cv::Mat processedImage = image_.clone();
        // display the image
        // cv::imshow("Processed Image", processedImage);
        // cv::waitKey(30); // wait 30 ms
    } else {
        ROS_WARN("No image received yet.");
    }
}

// set the detection parameters
void YoloResultData::setParameters(utils::InitParameter& initParameters)
{
    initParameters.class_names = utils::dataSets::coco80;
    initParameters.num_class = 80; // for coco
    initParameters.batch_size = 8;
    initParameters.dst_h = 640;
    initParameters.dst_w = 640;
    initParameters.input_output_names = { "images", "output0" };
    initParameters.conf_thresh = 0.25f;
    initParameters.iou_thresh = 0.45f;
    initParameters.save_path = "";
}

// run YOLO inference on a batch of images
void YoloResultData::task(YOLOV8& yolo, const utils::InitParameter& param, std::vector<cv::Mat>& imgsBatch,
                          const int& delayTime, const int& batchi, const bool& isShow, const bool& isSave)
{
    if (imgsBatch.empty()) {
        std::cerr << "Input image batch is empty." << std::endl;
        return;
    }
    std::clock_t start = std::clock();
    utils::DeviceTimer d_t0; yolo.copy(imgsBatch);        float t0 = d_t0.getUsedTime();
    utils::DeviceTimer d_t1; yolo.preprocess(imgsBatch);  float t1 = d_t1.getUsedTime();
    utils::DeviceTimer d_t2; yolo.infer();                float t2 = d_t2.getUsedTime();
    utils::DeviceTimer d_t3; yolo.postprocess(imgsBatch); float t3 = d_t3.getUsedTime();
    std::clock_t end = std::clock();
    // elapsed wall time
    double duration = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    // std::cout << "run time: " << duration << " s" << std::endl;

    if (isShow)
        utils::show(yolo.getObjectss(), param.class_names, delayTime, imgsBatch);
    // if (isSave)
    //     utils::save(yolo.getObjectss(), param.class_names, param.save_path, imgsBatch, param.batch_size, batchi);

    // print the detection results to the terminal and publish them
    YoloResultData::result_show(yolo, param, t1, t2, t3);

    for (size_t bi = 0; bi < imgsBatch.size(); bi++)
    {
        cv_bridge::CvImagePtr cv_ptr(new cv_bridge::CvImage);
        cv_ptr->image = imgsBatch[bi];
        cv_ptr->encoding = "bgr8";
        img_detect_pub_.publish(cv_ptr->toImageMsg());
    }
    yolo.reset();
}

// publish the detection results
void YoloResultData::result_show(const YOLOV8& yolo, const utils::InitParameter& param, float t1, float t2, float t3)
{
    const auto& objectss = yolo.getObjectss();
    for (size_t bi = 0; bi < objectss.size(); bi++)
    {
        for (const auto& box : objectss[bi])
        {
            // std::cout << "preprocess time:" << t1 / param.batch_size << ";   "
            //           << "infer time:" << t2 / param.batch_size << ";   "
            //           << "postprocess time:" << t3 / param.batch_size << std::endl;
            pub_msg_.label = param.class_names[box.label];
            pub_msg_.confidence = box.confidence;
            pub_msg_.xmin = box.left;
            pub_msg_.xmax = box.right;
            pub_msg_.ymin = box.top;
            // fill the bounding-box array
            pub_msg_.bounding_box.clear();  // clear the previous box's data
            pub_msg_.bounding_box.push_back(box.left);
            pub_msg_.bounding_box.push_back(box.top);
            pub_msg_.bounding_box.push_back(box.right);
            pub_msg_.bounding_box.push_back(box.bottom);
            boundingbox_result_pub_.publish(pub_msg_);
        }
    }
}

// subscribe to the image data
void YoloResultData::image_receive_callback(const sensor_msgs::Image& image_msg)
{
    cv_bridge::CvImagePtr cv_ptr;
    try {
        cv_ptr = cv_bridge::toCvCopy(image_msg, sensor_msgs::image_encodings::BGR8);
        image_ = cv_ptr->image;
        // cv::imshow("Image", cv_ptr->image);
        // cv::waitKey(30); // wait 30 ms
    } catch (cv_bridge::Exception& e) {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
}
}  // namespace artrc_yolov8

int main(int argc, char** argv)
{
    ros::init(argc, argv, "yolov8_ros_node");
    artrc_yolov8::YoloResultData YoloResultData_node;
    YoloResultData_node.init();

    utils::InitParameter param;
    YoloResultData_node.setParameters(param);

    // model to load
    std::string model_path = "/home/wyh/artrc_catkin/src/artrc_yolov8/weights/yolov8n.trt";
    std::string video_path = "/home/wyh/artrc_catkin/src/artrc_yolov8/image/行人视频.mp4";
    std::string image_path = "/home/wyh/artrc_catkin/src/artrc_yolov8/image/6406406.jpg";
    int camera_id = 0;

    // choose the input source
    utils::InputStream source;
    source = utils::InputStream::IMAGE;
    // source = utils::InputStream::VIDEO;
    // source = utils::InputStream::CAMERA;
    // source = utils::InputStream::TOPIC_IMAGE;

    // defaults, updated from the parameter server below
    int size = -1;  // w or h
    int batch_size = 8;
    bool is_show = false;
    bool is_save = false;
    int total_batches = 0;
    int delay_time = 50;

    // fetch parameters from the parameter server
    ros::param::get("~size", size);
    ros::param::get("~batch_size", batch_size);
    ros::param::get("~show", is_show);

    // assign parameters
    param.dst_h = param.dst_w = size;
    param.batch_size = batch_size;
    param.is_show = is_show;

    cv::VideoCapture capture(1);
    if (!setInputStream(source, image_path, video_path, camera_id,
                        capture, total_batches, delay_time, param))
    {
        sample::gLogError << "read the input data errors!" << std::endl;
        return -1;
    }

    // read the serialized engine
    std::vector<unsigned char> trt_file = utils::loadModel(model_path);
    if (trt_file.empty())
    {
        std::cout << "trt_file is empty!" << std::endl;
    }
    else
    {
        std::cout << "trt_file is loaded!" << std::endl;
    }

    // init the model
    YOLOV8 yolo(param);
    if (!yolo.init(trt_file))
    {
        std::cout << "initEngine() errors occurred!" << std::endl;
    }
    else
    {
        std::cout << "initEngine() succeeded!" << std::endl;
    }
    yolo.check();

    std::vector<cv::Mat> imgs_batch;
    imgs_batch.reserve(param.batch_size);
    int batchi = 0;
    cv::Mat frame;
    ros::Rate rate(50);
    while (ros::ok())
    {
        if (imgs_batch.size() < param.batch_size) // collect input frames
        {
            if (source == utils::InputStream::VIDEO)
            {
                capture.read(frame);
            }
            else if (source == utils::InputStream::CAMERA)
            {
                capture.read(frame);
            }
            else if (source == utils::InputStream::IMAGE)
            {
                // frame = cv::imread(image_path);
                // take the image received on the topic
                frame = YoloResultData_node.image_;
            }
            else
            {
                frame = YoloResultData_node.image_;
            }
            if (!frame.empty())
            {
                imgs_batch.emplace_back(frame.clone());
            }
            else
            {
                int delay_time = 5;
                sample::gLogWarning << "no more video or camera frame" << std::endl;
                YoloResultData_node.task(yolo, param, imgs_batch, delay_time, batchi, is_show, is_save);
                imgs_batch.clear();
                batchi++;
            }
        }
        else
        {
            int delay_time = 1;
            YoloResultData_node.task(yolo, param, imgs_batch, delay_time, batchi, is_show, is_save);
            imgs_batch.clear();
            batchi++;
        }
        ros::spinOnce();  // handle all callbacks
        rate.sleep();     // sleep before the next loop iteration
    }
    return 0;
}
// Change the parameters below in setParameters() to match your own classes:
initParameters.class_names = utils::dataSets::coco80;
initParameters.num_class = 80; 
// and replace the weight file with your own .trt engine.
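For the nine-class defect model trained above, the change would look roughly like this (a sketch; it assumes class_names accepts a brace-initialized list of strings, as the coco80 preset suggests):

// sketch: swap the coco80 preset for the custom class list
initParameters.class_names = { "corrosion", "craze", "hide_craze",
                               "surface_attach", "surface_corrosion", "surface_eye",
                               "surface_injure", "surface_oil", "thunderstrike" };
initParameters.num_class = 9;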
