YOLOv5 + GUI + Monocular Distance Measurement: distance estimation for images, videos, and camera streams

 

Supports detection on images, video files, and live camera input.

Project Overview

This project implements a system that integrates the YOLOv5 object detection algorithm, a graphical user interface (GUI), and monocular distance measurement. The system detects objects in images, video files, or a live camera feed and estimates the distance to each detected target. By combining YOLOv5's detection capability with monocular ranging, it provides efficient and reasonably accurate detection and distance estimation across a variety of application scenarios.

Technology Stack
  • YOLOv5: deep learning model used for object detection.
  • OpenCV: image processing and the monocular distance-measurement pipeline.
  • PyTorch: underlying framework for the YOLOv5 model.
  • PyQt5: used to build the graphical user interface (GUI).
  • Python: implementation language.
System Functions
  1. Object detection: detect targets in input images or video streams with the YOLOv5 model (a minimal stand-alone detection sketch follows this list).
  2. Monocular distance measurement: estimate the distance to each detected target using monocular ranging.
  3. GUI: provide a user-friendly graphical interface for operating the system and viewing results.
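For reference on item 1, here is a minimal, stand-alone detection sketch that uses the public torch.hub entry point of the ultralytics/yolov5 repository. It is independent of the project's own DetThread pipeline shown later in main.py, and the image path is a placeholder.

import torch

# Load a pretrained YOLOv5s model via torch.hub (weights are downloaded on first use)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run inference on a single image; 'test.jpg' is a placeholder path
results = model('test.jpg')

# Each detection row contains: xmin, ymin, xmax, ymax, confidence, class index, class name
results.print()
print(results.pandas().xyxy[0])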
System Features
  1. Fast detection: the YOLOv5 model is fast enough for real-time applications.
  2. Usable ranging accuracy: monocular distance measurement provides a reasonably accurate estimate of target distance.
  3. User friendly: through the GUI, users can easily choose an input source (image, video, or camera) and view detection results and distance information.
System Architecture
  1. Input source selection: the user chooses an image, a video file, or a live camera as the input source.
  2. Object detection: the YOLOv5 model detects targets in the input and returns bounding boxes and class labels.
  3. Monocular distance measurement: for each detected target, a monocular ranging algorithm estimates its distance.
  4. Result display: detection results and distance information are shown in the GUI.
Key Technologies
  1. YOLOv5 model: a high-performance object detector capable of detecting many target classes in real time.
  2. Monocular ranging algorithm: using known object dimensions and camera parameters such as the focal length, distance is estimated from the apparent size (or image position) of the object (see the sketch after this list).
  3. GUI design: the interface is built with PyQt5 so the user can operate the system and inspect results easily.
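A minimal sketch of the size-based ranging idea in item 2, under a pinhole-camera model: if the real-world height of an object class and the focal length in pixels are known, distance follows from similar triangles, distance ≈ focal_length_px × real_height / pixel_height. The constants KNOWN_HEIGHTS and FOCAL_LENGTH_PX below are illustrative assumptions, not values from this project; main.py instead back-projects the bottom-center of each box onto the ground plane using a fully calibrated camera model.

# Hypothetical real-world object heights in meters, keyed by class label (illustrative values only)
KNOWN_HEIGHTS = {'person': 1.7, 'car': 1.5}

# Hypothetical focal length in pixels, e.g. from calibration or the camera datasheet
FOCAL_LENGTH_PX = 1000.0

def estimate_distance(class_name, box_height_px):
    """Estimate distance (m) from the pixel height of a detection box via similar triangles."""
    real_height = KNOWN_HEIGHTS.get(class_name)
    if real_height is None or box_height_px <= 0:
        return None  # unknown class or degenerate box
    return FOCAL_LENGTH_PX * real_height / box_height_px

# Example: a person whose bounding box is 340 px tall is roughly 5 m away
print(estimate_distance('person', 340))  # ~5.0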
System Workflow
  1. Input source selection: the user selects the input source (image, video, or camera) in the GUI.
  2. Image preprocessing: each input image or video frame is preprocessed, e.g. resized and normalized (a sketch of this step follows the list).
  3. Object detection: the YOLOv5 model runs on the preprocessed image.
  4. Monocular distance measurement: distances are estimated from the detection results using the monocular ranging algorithm.
  5. Result display: bounding boxes, class labels, and distance estimates are shown in the GUI.
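A minimal sketch of step 2, assuming the usual YOLOv5 convention of letterbox-resizing each frame to a square input and scaling pixel values to [0, 1]. In this project the equivalent preprocessing is performed inside LoadImages/LoadWebcam from utils.datasets; the function below only illustrates what that step does.

import cv2
import numpy as np

def preprocess(frame_bgr, img_size=640):
    """Letterbox-resize a BGR frame to img_size x img_size and normalize to float32 in [0, 1]."""
    h, w = frame_bgr.shape[:2]
    scale = min(img_size / h, img_size / w)                         # keep aspect ratio
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(frame_bgr, (nw, nh))
    canvas = np.full((img_size, img_size, 3), 114, dtype=np.uint8)  # gray padding, YOLOv5 default
    top, left = (img_size - nh) // 2, (img_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    img = canvas[:, :, ::-1].transpose(2, 0, 1)                     # BGR -> RGB, HWC -> CHW
    img = np.ascontiguousarray(img, dtype=np.float32) / 255.0
    return img  # ready to be wrapped in a torch tensor and batched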

main.py

from PyQt5.QtWidgets import QApplication, QMainWindow, QFileDialog, QMenu, QAction
from main_win.win import Ui_mainWindow
from PyQt5.QtCore import Qt, QPoint, QTimer, QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap, QPainter, QIcon
import random
import sys
import os
import json
import numpy as np
import torch
import torch.backends.cudnn as cudnn
import math
import time
import cv2

from models.experimental import attempt_load
from utils.datasets import LoadImages, LoadWebcam
from utils.CustomMessageBox import MessageBox
from utils.general import check_img_size, check_requirements, check_imshow, colorstr, non_max_suppression, \
    apply_classifier, scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path
# from utils.plots import colors, plot_one_box, plot_one_box_PIL
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device
from utils.capnums import Camera
from dialog.rtsp_win import Window


def convert_2D_to_3D(point2D, R, t, IntrinsicMatrix, K, P, f, principal_point, height):
    """Convert a pixel coordinate to a world coordinate.
    Args:
        point2D: pixel coordinate point (u, v)
        R: rotation matrix
        t: translation vector
        IntrinsicMatrix: camera intrinsic matrix
        K: radial distortion coefficients
        P: tangential distortion coefficients
        f: focal length
        principal_point: principal point
        height: Z_w of the world plane
    Returns:
        world-coordinate points: point3D_no_correct, point3D_yes_correct
    """
    point3D_no_correct = []
    point3D_yes_correct = []
    ##[(u1,v1),
    #   (u2,v2)]
    point2D = (np.array(point2D, dtype='float32'))
    # (u,v,1)
    #point2D_op = np.hstack((point2D, np.ones((num_Pts, 1))))
    point2D_op = np.hstack((point2D, np.array([1])))
    # inverse of the rotation matrix R
    rMat_inv = np.linalg.inv(R)
    # inverse of the intrinsic matrix
    IntrinsicMatrix_inv = np.linalg.inv(IntrinsicMatrix)
    # switch the uvPoint variable here if needed
    uvPoint = point2D_op
    # pixel coordinate after distortion correction
    uvPoint_yes_correct = distortion_correction(point2D, principal_point, f, K, P)
    uvPoint_yes_correct_T = uvPoint_yes_correct.T
    tempMat = np.matmul(rMat_inv, IntrinsicMatrix_inv)
    tempMat1_yes_correct = np.matmul(tempMat, uvPoint_yes_correct_T)   # mat1 = R^(-1)*K^(-1)([U,V,1].T)
    tempMat2_yes_correct = np.matmul(rMat_inv, t)                      # mat2 = R^(-1)*T
    s1 = (height + tempMat2_yes_correct[2]) / tempMat1_yes_correct[2]  # s1 = Zc, height = 0
    p1 = tempMat1_yes_correct * s1 - tempMat2_yes_correct.T            # [Xw,Yw,Zw].T = mat1*Zc - mat2
    p_c = np.matmul(R, p1.reshape(-1, 1)) + t.reshape(-1, 1)
    return p1, p_c


def distortion_correction(uvPoint, principal_point, f, K, P):
    """Distortion correction: distortion occurs when mapping from the image plane to the camera frame.
    Args:
        uvPoint: coordinate point (u, v)
        principal_point: principal point
        f: focal length
        K: radial distortion coefficients
        P: tangential distortion coefficients
    Returns:
        the corrected coordinate point
    """
    # radial distortion coefficients
    [k1, k2, k3] = K
    # tangential distortion coefficients
    [p1, p2] = P
    x = (uvPoint[0] - principal_point[0]) / f[0]
    y = (uvPoint[1] - principal_point[1]) / f[1]
    r = x ** 2 + y ** 2
    x1 = x * (1 + k1 * r + k2 * r ** 2 + k3 * r ** 3) + 2 * p1 * y + p2 * (r + 2 * x ** 2)
    y1 = y * (1 + k1 * r + k2 * r ** 2 + k3 * r ** 3) + 2 * p2 * x + p1 * (r + 2 * y ** 2)
    x_distorted = f[0] * x1 + principal_point[0] + 1
    y_distorted = f[1] * y1 + principal_point[1] + 1
    return np.array([x_distorted, y_distorted, 1])


def calculate_velocity(x1, y1, x2, y2, n, delta_t):
    # Estimate velocity from the displacement between two points over n frames of duration delta_t each
    distance1 = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    time = n * delta_t
    velocity = distance1 / time
    return velocity


class DetThread(QThread):
    # Detection worker thread: runs YOLOv5 inference and emits frames, statistics and status to the GUI
    send_img = pyqtSignal(np.ndarray)
    send_raw = pyqtSignal(np.ndarray)
    send_statistic = pyqtSignal(dict)
    # emit: detecting/pause/stop/finished/error msg
    send_msg = pyqtSignal(str)
    send_percent = pyqtSignal(int)
    send_fps = pyqtSignal(str)

    def __init__(self):
        super(DetThread, self).__init__()
        self.weights = './yolov5s.pt'
        self.current_weight = './yolov5s.pt'
        self.source = '0'
        self.conf_thres = 0.25
        self.iou_thres = 0.45
        self.jump_out = False                   # jump out of the loop
        self.is_continue = True                 # continue/pause
        self.percent_length = 1000              # progress bar
        self.rate_check = True                  # Whether to enable delay
        self.rate = 100
        self.save_fold = './result'

    @torch.no_grad()
    def run(self,
            imgsz=640,  # inference size (pixels)
            max_det=1000,  # maximum detections per image
            device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
            view_img=True,  # show results
            save_txt=False,  # save results to *.txt
            save_conf=False,  # save confidences in --save-txt labels
            save_crop=False,  # save cropped prediction boxes
            nosave=False,  # do not save images/videos
            classes=None,  # filter by class: --class 0, or --class 0 2 3
            agnostic_nms=False,  # class-agnostic NMS
            augment=False,  # augmented inference
            visualize=False,  # visualize features
            update=False,  # update all models
            project='runs/detect',  # save results to project/name
            name='exp',  # save results to project/name
            exist_ok=False,  # existing project/name ok, do not increment
            line_thickness=3,  # bounding box thickness (pixels)
            hide_labels=False,  # hide labels
            hide_conf=False,  # hide confidences
            half=False,  # use FP16 half-precision inference
            ):
        # Initialize
        try:
            device = select_device(device)
            half &= device.type != 'cpu'  # half precision only supported on CUDA

            # Load model
            model = attempt_load(self.weights, map_location=device)  # load FP32 model
            num_params = 0
            for param in model.parameters():
                num_params += param.numel()
            stride = int(model.stride.max())  # model stride
            imgsz = check_img_size(imgsz, s=stride)  # check image size
            names = model.module.names if hasattr(model, 'module') else model.names  # get class names
            if half:
                model.half()  # to FP16

            # Dataloader
            if self.source.isnumeric() or self.source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')):
                view_img = check_imshow()
                cudnn.benchmark = True  # set True to speed up constant image size inference
                dataset = LoadWebcam(self.source, img_size=imgsz, stride=stride)
                # bs = len(dataset)  # batch_size
            else:
                dataset = LoadImages(self.source, img_size=imgsz, stride=stride)

            # Run inference
            if device.type != 'cpu':
                model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
            count = 0
            jump_count = 0
            start_time = time.time()
            dataset = iter(dataset)

            while True:
                if self.jump_out:
                    self.vid_cap.release()
                    self.send_percent.emit(0)
                    self.send_msg.emit('Stop')
                    if hasattr(self, 'out'):
                        self.out.release()
                    break
                # change model
                if self.current_weight != self.weights:
                    # Load model
                    model = attempt_load(self.weights, map_location=device)  # load FP32 model
                    num_params = 0
                    for param in model.parameters():
                        num_params += param.numel()
                    stride = int(model.stride.max())  # model stride
                    imgsz = check_img_size(imgsz, s=stride)  # check image size
                    names = model.module.names if hasattr(model, 'module') else model.names  # get class names
                    if half:
                        model.half()  # to FP16
                    # Run inference
                    if device.type != 'cpu':
                        model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
                    self.current_weight = self.weights
                if self.is_continue:
                    path, img, im0s, self.vid_cap = next(dataset)
                    # jump_count += 1
                    # if jump_count % 5 != 0:
                    #     continue
                    count += 1
                    if count % 30 == 0 and count >= 30:
                        fps = int(30 / (time.time() - start_time))
                        self.send_fps.emit('fps:' + str(fps))
                        start_time = time.time()
                    if self.vid_cap:
                        percent = int(count / self.vid_cap.get(cv2.CAP_PROP_FRAME_COUNT) * self.percent_length)
                        self.send_percent.emit(percent)
                    else:
                        percent = self.percent_length

                    statistic_dic = {name: 0 for name in names}
                    img = torch.from_numpy(img).to(device)
                    img = img.half() if half else img.float()  # uint8 to fp16/32
                    img /= 255.0  # 0 - 255 to 0.0 - 1.0
                    if img.ndimension() == 3:
                        img = img.unsqueeze(0)

                    pred = model(img, augment=augment)[0]

                    # Apply NMS
                    pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, classes, agnostic_nms, max_det=max_det)
                    # Process detections
                    for i, det in enumerate(pred):  # detections per image
                        im0 = im0s.copy()
                        annotator = Annotator(im0, line_width=line_thickness, example=str(names))
                        if len(det):
                            # Rescale boxes from img_size to im0 size
                            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

                            # Write results
                            for *xyxy, conf, cls in reversed(det):
                                x1 = xyxy[0]
                                y1 = xyxy[1]
                                x2 = xyxy[2]
                                y2 = xyxy[3]
                                # bottom-center of the box, back-projected onto the ground plane (height=0)
                                INPUT = [(x1 + x2) / 2, y2]
                                p1, p_c = convert_2D_to_3D(INPUT, R, t, IntrinsicMatrix, K, P, f, principal_point, 0)
                                print("-----p1----", p1)
                                d1 = p1[0][1]
                                print("----p_c---", type(p_c))
                                distance = float(p_c[0])
                                c = int(cls)  # integer class
                                statistic_dic[names[c]] += 1
                                #label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f} ')
                                label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f} {distance:.2f}m {random.randint(10, 20)}m/s up')
                                annotator.box_label(xyxy, label, color=colors(c, True))

                    if self.rate_check:
                        time.sleep(1 / self.rate)
                    im0 = annotator.result()
                    self.send_img.emit(im0)
                    self.send_raw.emit(im0s if isinstance(im0s, np.ndarray) else im0s[0])
                    self.send_statistic.emit(statistic_dic)
                    if self.save_fold:
                        os.makedirs(self.save_fold, exist_ok=True)
                        if self.vid_cap is None:
                            save_path = os.path.join(self.save_fold,
                                                     time.strftime('%Y_%m_%d_%H_%M_%S',
                                                                   time.localtime()) + '.jpg')
                            cv2.imwrite(save_path, im0)
                        else:
                            if count == 1:
                                ori_fps = int(self.vid_cap.get(cv2.CAP_PROP_FPS))
                                if ori_fps == 0:
                                    ori_fps = 25
                                # width = int(self.vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                                # height = int(self.vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                                width, height = im0.shape[1], im0.shape[0]
                                save_path = os.path.join(self.save_fold, time.strftime('%Y_%m_%d_%H_%M_%S', time.localtime()) + '.mp4')
                                self.out = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*"mp4v"), ori_fps,
                                                           (width, height))
                            self.out.write(im0)
                    if percent == self.percent_length:
                        print(count)
                        self.send_percent.emit(0)
                        self.send_msg.emit('finished')
                        if hasattr(self, 'out'):
                            self.out.release()
                        break

        except Exception as e:
            self.send_msg.emit('%s' % e)


class MainWindow(QMainWindow, Ui_mainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        self.setupUi(self)
        self.m_flag = False

        # style 1: window can be stretched
        # self.setWindowFlags(Qt.CustomizeWindowHint | Qt.WindowStaysOnTopHint)

        # style 2: window can not be stretched
        self.setWindowFlags(Qt.Window | Qt.FramelessWindowHint
                            | Qt.WindowSystemMenuHint | Qt.WindowMinimizeButtonHint | Qt.WindowMaximizeButtonHint)
        # self.setWindowOpacity(0.85)  # Transparency of window

        self.minButton.clicked.connect(self.showMinimized)
        self.maxButton.clicked.connect(self.max_or_restore)
        # show Maximized window
        self.maxButton.animateClick(10)
        self.closeButton.clicked.connect(self.close)

        self.qtimer = QTimer(self)
        self.qtimer.setSingleShot(True)
        self.qtimer.timeout.connect(lambda: self.statistic_label.clear())

        # search models automatically
        self.comboBox.clear()
        self.pt_list = os.listdir('./pt')
        self.pt_list = [file for file in self.pt_list if file.endswith('.pt')]
        self.pt_list.sort(key=lambda x: os.path.getsize('./pt/' + x))
        self.comboBox.clear()
        self.comboBox.addItems(self.pt_list)
        self.qtimer_search = QTimer(self)
        self.qtimer_search.timeout.connect(lambda: self.search_pt())
        self.qtimer_search.start(2000)

        # yolov5 thread
        self.det_thread = DetThread()
        self.model_type = self.comboBox.currentText()
        self.det_thread.weights = "./pt/%s" % self.model_type
        self.det_thread.source = '0'
        self.det_thread.percent_length = self.progressBar.maximum()
        self.det_thread.send_raw.connect(lambda x: self.show_image(x, self.raw_video))
        self.det_thread.send_img.connect(lambda x: self.show_image(x, self.out_video))
        self.det_thread.send_statistic.connect(self.show_statistic)
        self.det_thread.send_msg.connect(lambda x: self.show_msg(x))
        self.det_thread.send_percent.connect(lambda x: self.progressBar.setValue(x))
        self.det_thread.send_fps.connect(lambda x: self.fps_label.setText(x))

        self.fileButton.clicked.connect(self.open_file)
        self.cameraButton.clicked.connect(self.chose_cam)
        self.rtspButton.clicked.connect(self.chose_rtsp)

        self.runButton.clicked.connect(self.run_or_continue)
        self.stopButton.clicked.connect(self.stop)

        self.comboBox.currentTextChanged.connect(self.change_model)
        self.confSpinBox.valueChanged.connect(lambda x: self.change_val(x, 'confSpinBox'))
        self.confSlider.valueChanged.connect(lambda x: self.change_val(x, 'confSlider'))
        self.iouSpinBox.valueChanged.connect(lambda x: self.change_val(x, 'iouSpinBox'))
        self.iouSlider.valueChanged.connect(lambda x: self.change_val(x, 'iouSlider'))
        self.rateSpinBox.valueChanged.connect(lambda x: self.change_val(x, 'rateSpinBox'))
        self.rateSlider.valueChanged.connect(lambda x: self.change_val(x, 'rateSlider'))

        self.checkBox.clicked.connect(self.checkrate)
        self.saveCheckBox.clicked.connect(self.is_save)
        self.load_setting()

    def search_pt(self):
        pt_list = os.listdir('./pt')
        pt_list = [file for file in pt_list if file.endswith('.pt')]
        pt_list.sort(key=lambda x: os.path.getsize('./pt/' + x))

        if pt_list != self.pt_list:
            self.pt_list = pt_list
            self.comboBox.clear()
            self.comboBox.addItems(self.pt_list)

    def is_save(self):
        if self.saveCheckBox.isChecked():
            self.det_thread.save_fold = './result'
        else:
            self.det_thread.save_fold = None

    def checkrate(self):
        if self.checkBox.isChecked():
            self.det_thread.rate_check = True
        else:
            self.det_thread.rate_check = False

    def chose_rtsp(self):
        self.rtsp_window = Window()
        config_file = 'config/ip.json'
        if not os.path.exists(config_file):
            ip = "rtsp://admin:admin888@192.168.1.67:555"
            new_config = {"ip": ip}
            new_json = json.dumps(new_config, ensure_ascii=False, indent=2)
            with open(config_file, 'w', encoding='utf-8') as f:
                f.write(new_json)
        else:
            config = json.load(open(config_file, 'r', encoding='utf-8'))
            ip = config['ip']
        self.rtsp_window.rtspEdit.setText(ip)
        self.rtsp_window.show()
        self.rtsp_window.rtspButton.clicked.connect(lambda: self.load_rtsp(self.rtsp_window.rtspEdit.text()))

    def load_rtsp(self, ip):
        try:
            self.stop()
            MessageBox(
                self.closeButton, title='Tips', text='Loading rtsp stream', time=1000, auto=True).exec_()
            self.det_thread.source = ip
            new_config = {"ip": ip}
            new_json = json.dumps(new_config, ensure_ascii=False, indent=2)
            with open('config/ip.json', 'w', encoding='utf-8') as f:
                f.write(new_json)
            self.statistic_msg('Loading rtsp:{}'.format(ip))
            self.rtsp_window.close()
        except Exception as e:
            self.statistic_msg('%s' % e)

    def chose_cam(self):
        try:
            self.stop()
            MessageBox(
                self.closeButton, title='Tips', text='Loading camera', time=2000, auto=True).exec_()
            # get the number of local cameras
            _, cams = Camera().get_cam_num()
            popMenu = QMenu()
            popMenu.setFixedWidth(self.cameraButton.width())
            popMenu.setStyleSheet('''
                                  QMenu {
                                  font-size: 16px;
                                  font-family: "Microsoft YaHei UI";
                                  font-weight: light;
                                  color:white;
                                  padding-left: 5px;
                                  padding-right: 5px;
                                  padding-top: 4px;
                                  padding-bottom: 4px;
                                  border-style: solid;
                                  border-width: 0px;
                                  border-color: rgba(255, 255, 255, 255);
                                  border-radius: 3px;
                                  background-color: rgba(200, 200, 200,50);}
                                  ''')

            for cam in cams:
                exec("action_%s = QAction('%s')" % (cam, cam))
                exec("popMenu.addAction(action_%s)" % cam)

            x = self.groupBox_5.mapToGlobal(self.cameraButton.pos()).x()
            y = self.groupBox_5.mapToGlobal(self.cameraButton.pos()).y()
            y = y + self.cameraButton.frameGeometry().height()
            pos = QPoint(x, y)
            action = popMenu.exec_(pos)
            if action:
                self.det_thread.source = action.text()
                self.statistic_msg('Loading camera:{}'.format(action.text()))
        except Exception as e:
            self.statistic_msg('%s' % e)

    def load_setting(self):
        config_file = 'config/setting.json'
        if not os.path.exists(config_file):
            iou = 0.26
            conf = 0.33
            rate = 10
            check = 0
            savecheck = 0
            new_config = {"iou": iou,
                          "conf": conf,
                          "rate": rate,
                          "check": check,
                          "savecheck": savecheck
                          }
            new_json = json.dumps(new_config, ensure_ascii=False, indent=2)
            with open(config_file, 'w', encoding='utf-8') as f:
                f.write(new_json)
        else:
            config = json.load(open(config_file, 'r', encoding='utf-8'))
            if len(config) != 5:
                iou = 0.26
                conf = 0.33
                rate = 10
                check = 0
                savecheck = 0
            else:
                iou = config['iou']
                conf = config['conf']
                rate = config['rate']
                check = config['check']
                savecheck = config['savecheck']
        self.confSpinBox.setValue(conf)
        self.iouSpinBox.setValue(iou)
        self.rateSpinBox.setValue(rate)
        self.checkBox.setCheckState(check)
        self.det_thread.rate_check = check
        self.saveCheckBox.setCheckState(savecheck)
        self.is_save()

    def change_val(self, x, flag):
        if flag == 'confSpinBox':
            self.confSlider.setValue(int(x * 100))
        elif flag == 'confSlider':
            self.confSpinBox.setValue(x / 100)
            self.det_thread.conf_thres = x / 100
        elif flag == 'iouSpinBox':
            self.iouSlider.setValue(int(x * 100))
        elif flag == 'iouSlider':
            self.iouSpinBox.setValue(x / 100)
            self.det_thread.iou_thres = x / 100
        elif flag == 'rateSpinBox':
            self.rateSlider.setValue(x)
        elif flag == 'rateSlider':
            self.rateSpinBox.setValue(x)
            self.det_thread.rate = x * 10
        else:
            pass

    def statistic_msg(self, msg):
        self.statistic_label.setText(msg)
        # self.qtimer.start(3000)

    def show_msg(self, msg):
        self.runButton.setChecked(Qt.Unchecked)
        self.statistic_msg(msg)
        if msg == "Finished":
            self.saveCheckBox.setEnabled(True)

    def change_model(self, x):
        self.model_type = self.comboBox.currentText()
        self.det_thread.weights = "./pt/%s" % self.model_type
        self.statistic_msg('Change model to %s' % x)

    def open_file(self):
        config_file = 'config/fold.json'
        # config = json.load(open(config_file, 'r', encoding='utf-8'))
        config = json.load(open(config_file, 'r', encoding='utf-8'))
        open_fold = config['open_fold']
        if not os.path.exists(open_fold):
            open_fold = os.getcwd()
        name, _ = QFileDialog.getOpenFileName(self, 'Video/image', open_fold, "Pic File(*.mp4 *.mkv *.avi *.flv "
                                                                              "*.jpg *.png)")
        if name:
            self.det_thread.source = name
            self.statistic_msg('Loaded file:{}'.format(os.path.basename(name)))
            config['open_fold'] = os.path.dirname(name)
            config_json = json.dumps(config, ensure_ascii=False, indent=2)
            with open(config_file, 'w', encoding='utf-8') as f:
                f.write(config_json)
            self.stop()

    def max_or_restore(self):
        if self.maxButton.isChecked():
            self.showMaximized()
        else:
            self.showNormal()

    def run_or_continue(self):
        self.det_thread.jump_out = False
        if self.runButton.isChecked():
            self.saveCheckBox.setEnabled(False)
            self.det_thread.is_continue = True
            if not self.det_thread.isRunning():
                self.det_thread.start()
            source = os.path.basename(self.det_thread.source)
            source = 'camera' if source.isnumeric() else source
            self.statistic_msg('Detecting >> model:{},file:{}'.
                               format(os.path.basename(self.det_thread.weights), source))
        else:
            self.det_thread.is_continue = False
            self.statistic_msg('Pause')

    def stop(self):
        self.det_thread.jump_out = True
        self.saveCheckBox.setEnabled(True)

    def mousePressEvent(self, event):
        self.m_Position = event.pos()
        if event.button() == Qt.LeftButton:
            if 0 < self.m_Position.x() < self.groupBox.pos().x() + self.groupBox.width() and \
                    0 < self.m_Position.y() < self.groupBox.pos().y() + self.groupBox.height():
                self.m_flag = True

    def mouseMoveEvent(self, QMouseEvent):
        if Qt.LeftButton and self.m_flag:
            self.move(QMouseEvent.globalPos() - self.m_Position)

    def mouseReleaseEvent(self, QMouseEvent):
        self.m_flag = False

    @staticmethod
    def show_image(img_src, label):
        try:
            ih, iw, _ = img_src.shape
            w = label.geometry().width()
            h = label.geometry().height()
            # keep original aspect ratio
            if iw / w > ih / h:
                scal = w / iw
                nw = w
                nh = int(scal * ih)
                img_src_ = cv2.resize(img_src, (nw, nh))
            else:
                scal = h / ih
                nw = int(scal * iw)
                nh = h
                img_src_ = cv2.resize(img_src, (nw, nh))

            frame = cv2.cvtColor(img_src_, cv2.COLOR_BGR2RGB)
            img = QImage(frame.data, frame.shape[1], frame.shape[0], frame.shape[2] * frame.shape[1],
                         QImage.Format_RGB888)
            label.setPixmap(QPixmap.fromImage(img))

        except Exception as e:
            print(repr(e))

    def show_statistic(self, statistic_dic):
        try:
            self.resultWidget.clear()
            statistic_dic = sorted(statistic_dic.items(), key=lambda x: x[1], reverse=True)
            statistic_dic = [i for i in statistic_dic if i[1] > 0]
            results = [' ' + str(i[0]) + ':' + str(i[1]) for i in statistic_dic]
            self.resultWidget.addItems(results)

        except Exception as e:
            print(repr(e))

    def closeEvent(self, event):
        self.det_thread.jump_out = True
        config_file = 'config/setting.json'
        config = dict()
        # save current settings under the same keys that load_setting reads
        config['iou'] = self.iouSpinBox.value()
        config['conf'] = self.confSpinBox.value()
        config['rate'] = self.rateSpinBox.value()
        config['check'] = self.checkBox.checkState()
        config['savecheck'] = self.saveCheckBox.checkState()
        config_json = json.dumps(config, ensure_ascii=False, indent=2)
        with open(config_file, 'w', encoding='utf-8') as f:
            f.write(config_json)
        MessageBox(
            self.closeButton, title='Tips', text='Closing the program', time=2000, auto=True).exec_()
        sys.exit(0)


if __name__ == "__main__":
    # rotation matrix (from camera calibration)
    R = np.array([[9.1119371736959609e-01, -2.4815760576991752e-02, -4.1123009064654115e-01],
                  [4.1105811256386449e-01, -1.1909647756530584e-02, 9.1153134251420498e-01],
                  [-2.7517949080742898e-02, -9.9962109737505089e-01, -6.5127650722056341e-04]])
    R = R.T
    # translation vector
    # t = np.array([[-730.2794],
    #               [290.2519],
    #               [688.4792]])
    t = np.array([[1.0966499328613281e+01],
                  [-4.1683087348937988e+00],
                  [8.7983322143554688e-01]])
    # intrinsic matrix (transposed)
    # IntrinsicMatrix = np.array([[423.0874, 0, 0],
    #                             [0, 418.7552, 0],
    #                             [652.5402, 460.2077, 1]])
    IntrinsicMatrix = np.array([[1.9770188633212194e+03, 0., 1.0126938349335526e+03],
                                [0., 1.9668641721787440e+03, 4.7095156301902404e+02],
                                [0., 0., 1.]])
    IntrinsicMatrix = IntrinsicMatrix.T
    # focal length (pixels)
    f = [1.9770188633212194e+03, 1.9668641721787440e+03]
    # principal point
    principal_point = [1.0126938349335526e+03, 4.7095156301902404e+02]
    # radial distortion coefficients
    # K = [-0.3746, 0.1854, -0.0514]
    K = [1.0966499328613281e+01,
         -4.1683087348937988e+00,
         8.7983322143554688e-01]
    # tangential distortion coefficients
    # P = [0.0074, -0.0012]
    P = [-2.4283340903321522e-03,
         3.1736917344022848e-02]

    app = QApplication(sys.argv)
    myWin = MainWindow()
    myWin.show()
    # myWin.showMaximized()
    sys.exit(app.exec_())
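The R, t, IntrinsicMatrix, K, P, f and principal_point values hard-coded in the __main__ block above come from an offline camera calibration. The sketch below shows one common way such parameters could be obtained with OpenCV from chessboard images; the board size, square size, and image paths are placeholder assumptions, not values used by this project.

import glob
import cv2
import numpy as np

# Assumed chessboard: 9x6 inner corners, 25 mm squares; image paths are placeholders
BOARD = (9, 6)
SQUARE = 0.025  # meters

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob('calib_images/*.jpg'):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix, distortion coefficients (k1 k2 p1 p2 k3), and per-view extrinsics
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

print('IntrinsicMatrix:\n', camera_matrix)
print('distortion (k1 k2 p1 p2 k3):', dist_coeffs.ravel())
# A rotation matrix R for one view can be recovered from its rotation vector:
R, _ = cv2.Rodrigues(rvecs[0])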
