【小贪】Kinect V2 Camera Calibration, Image Capture, and Image Processing (Python)

Full project: GitHub - xfliu1998/kinectProject

Kinect V2 Camera Calibration

The Kinect V2 camera is calibrated with Zhang Zhengyou's calibration method (Zhang's method). For the theory and reference code, see the following links:

  • Monocular camera calibration with Zhang's method (单目相机标定实现–张正友标定法): https://blog.csdn.net/weixin_43763292/article/details/128546103?spm=1001.2014.3001.5506
  • Camera calibration with OpenCV in Python, complete version (python利用opencv进行相机标定(完全版)): https://blog.csdn.net/dgut_guangdian/article/details/107467070?spm=1001.2014.3001.5506
  • Calling Kinect 2.0 from Python with PyKinect2 (Kinect2.0-Python调用-PyKinect2): https://blog.csdn.net/zcz0101/article/details/115718427?spm=1001.2014.3001.5506
  1. Import the required packages:
import os
import time
import datetime
import glob as gb
import h5py
import keyboard
import cv2
import numpy as np
from utils.calibration import Calibrator
from utils.kinect import Kinect
  2. Capture calibration images from different angles. Shooting requirements: take 10-20 calibration images from different angles and positions, with the checkerboard occupying roughly 1/4 to 2/3 of the frame. The following function captures 20 images by default; press the space bar to take a photo and q to stop. The captured calibration images are saved to the specified path.
def get_chess_image(image_num=20):
    out_path = './data/chess/'
    if not os.path.exists(out_path):
        os.makedirs(out_path)
    camera = cv2.VideoCapture(0)
    i = 0
    while 1:
        (grabbed, img) = camera.read()
        cv2.imshow('img', img)
        if cv2.waitKey(1) & keyboard.is_pressed('space'):  # press space to save an image
            i += 1
            filename = str(f'{out_path}img{str(i)}.jpg')
            cv2.imwrite(filename, img)
            print('write: ', filename)
        if cv2.waitKey(1) & 0xFF == ord('q') or i == image_num:  # press q to finish
            break
    camera.release()
    cv2.destroyAllWindows()
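Note that the loop above mixes cv2.waitKey with the keyboard package, which needs elevated privileges on some platforms. A functionally equivalent sketch driven purely by OpenCV key events (a hypothetical variant, not part of the repo):

import os
import cv2

def get_chess_image_cv_only(image_num=20, out_path='./data/chess/'):
    # same capture loop, but driven only by cv2.waitKey key codes
    os.makedirs(out_path, exist_ok=True)
    camera = cv2.VideoCapture(0)
    i = 0
    while True:
        grabbed, img = camera.read()
        if not grabbed:
            break
        cv2.imshow('img', img)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(' '):                      # space: save a frame
            i += 1
            filename = f'{out_path}img{i}.jpg'
            cv2.imwrite(filename, img)
            print('write: ', filename)
        if key == ord('q') or i == image_num:    # q or enough images: stop
            break
    camera.release()
    cv2.destroyAllWindows()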
  3. Calibrate the camera. The function writes the camera intrinsics and distortion coefficients to the specified path; the extrinsic matrix of every calibration image can also be obtained. It relies on the Calibrator class in the utils package, listed after the function below.
def camera_calibrator(shape_inner_corner=(11, 8), size_grid=0.025):
    """
    :param shape_inner_corner: checkerboard size = 12*9
    :param size_grid: the length of the sides of the checkerboard = 25mm
    """
    chess_dir = "./data/chess"
    out_path = "./data/dedistortion/chess"
    if not os.path.exists(out_path):
        os.makedirs(out_path)
    # create calibrator
    calibrator = Calibrator(chess_dir, shape_inner_corner, size_grid)
    # calibrate the camera
    mat_intri, coff_dis = calibrator.calibrate_camera()
    np.save('./data/intrinsic_matrix.npy', mat_intri)
    np.save('./data/distortion_cofficients.npy', coff_dis)
    print("intrinsic matrix: \n {}".format(mat_intri))
    print("distortion coefficients: \n {}".format(coff_dis))  # (k_1, k_2, p_1, p_2, k_3)
    # dedistortion
    calibrator.dedistortion(chess_dir, out_path)
    return mat_intri, coff_dis
import os
import glob
import cv2
import numpy as np


class Calibrator(object):
    def __init__(self, img_dir, shape_inner_corner, size_grid, visualization=True):
        """
        --parameters--
        img_dir: the directory that saves images for calibration, str
        shape_inner_corner: the shape of inner corners, Array of int, (h, w)
        size_grid: the real size of a grid in the calibration board, float
        visualization: whether to visualize, bool
        """
        self.img_dir = img_dir
        self.shape_inner_corner = shape_inner_corner
        self.size_grid = size_grid
        self.visualization = visualization
        self.mat_intri = None   # intrinsic matrix
        self.coff_dis = None    # coefficients of distortion

        # create the corners in world space
        w, h = shape_inner_corner
        # cp_int: corner point coordinates in world space in 'int' form, like (0,0,0), ..., (10,7,0)
        cp_int = np.zeros((w * h, 3), np.float32)
        cp_int[:, :2] = np.mgrid[0:w, 0:h].T.reshape(-1, 2)
        # cp_world: corner points in world space, scaled by the real grid size
        self.cp_world = cp_int * size_grid

        # images
        self.img_paths = []
        for extension in ["jpg", "png", "jpeg"]:
            self.img_paths += glob.glob(os.path.join(img_dir, "*.{}".format(extension)))
        assert len(self.img_paths), "No images for calibration found!"

    def calibrate_camera(self):
        w, h = self.shape_inner_corner
        # criteria: only for subpix refinement, which is not used here
        # criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        points_world = []   # the points in world space
        points_pixel = []   # the points in pixel space (relevant to points_world)
        for img_path in self.img_paths:
            img = cv2.imread(img_path)
            gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # find the corners, cp_img: corner points in pixel space
            ret, cp_img = cv2.findChessboardCorners(gray_img, (w, h), None)
            # if ret is True, save
            if ret:
                # cv2.cornerSubPix(gray_img, cp_img, (11, 11), (-1, -1), criteria)
                points_world.append(self.cp_world)
                points_pixel.append(cp_img)
                # view the corners
                if self.visualization:
                    cv2.drawChessboardCorners(img, (w, h), cp_img, ret)
                    _, img_name = os.path.split(img_path)
                    corner_dir = os.path.join(self.img_dir, "dedistortion/coner_detect")
                    os.makedirs(corner_dir, exist_ok=True)
                    cv2.imwrite(os.path.join(corner_dir, img_name), img)
                    cv2.imshow('FoundCorners', img)
                    cv2.waitKey(500)

        # calibrate the camera
        ret, mat_intri, coff_dis, v_rot, v_trans = cv2.calibrateCamera(
            points_world, points_pixel, gray_img.shape[::-1], None, None)
        # print("ret: {}".format(ret))
        # print("intrinsic matrix: \n {}".format(mat_intri))
        # # in the form of (k_1, k_2, p_1, p_2, k_3)
        # print("distortion coefficients: \n {}".format(coff_dis))
        # print("rotation vectors: \n {}".format(v_rot))
        # print("translation vectors: \n {}".format(v_trans))

        # calculate the reprojection error
        total_error = 0
        for i in range(len(points_world)):
            points_pixel_repro, _ = cv2.projectPoints(
                points_world[i], v_rot[i], v_trans[i], mat_intri, coff_dis)
            error = cv2.norm(points_pixel[i], points_pixel_repro, cv2.NORM_L2) / len(points_pixel_repro)
            total_error += error
        # print("Average reprojection error: {}".format(total_error / len(points_world)))

        self.mat_intri = mat_intri
        self.coff_dis = coff_dis
        return mat_intri, coff_dis

    def dedistortion(self, img_dir, save_dir):
        if not os.path.exists(save_dir):
            os.makedirs(save_dir)
        # if not calibrated, calibrate first
        if self.mat_intri is None:
            assert self.coff_dis is None
            self.calibrate_camera()

        img_paths = []
        for extension in ["jpg", "png", "jpeg"]:
            img_paths += glob.glob(os.path.join(img_dir, "*.{}".format(extension)))
        for img_path in img_paths:
            _, img_name = os.path.split(img_path)
            img = cv2.imread(img_path)
            h, w = img.shape[0], img.shape[1]
            newcameramtx, roi = cv2.getOptimalNewCameraMatrix(self.mat_intri, self.coff_dis, (w, h), 1, (w, h))
            dst = cv2.undistort(img, self.mat_intri, self.coff_dis, None, newcameramtx)
            # crop the image to the valid region of interest
            # x, y, w, h = roi
            # dst = dst[y:y+h, x:x+w]
            cv2.imwrite(os.path.join(save_dir, img_name), dst)
        # print("Dedistorted images have been saved to: {}".format(save_dir))
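Once calibration has run once, the saved parameters can simply be reloaded instead of recalibrating. A minimal sketch, assuming the paths used above:

import numpy as np

# reload previously saved calibration results
mat_intri = np.load('./data/intrinsic_matrix.npy')        # 3x3 intrinsic matrix
coff_dis = np.load('./data/distortion_cofficients.npy')   # (1, 5): k1, k2, p1, p2, k3
print(mat_intri.shape, coff_dis.shape)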

Image Capture and Preprocessing with the Kinect V2

  1. Capture an RGB-D stream with the Kinect V2. Set the name of the capture target; after the function starts, recording begins after a 1 s delay and stops when ESC is pressed. The RGB-D data is first saved to an h5 file under the specified path, and the recording time is printed. Capturing Kinect images requires the Kinect class in the utils package, listed after the capture function below.
def capture_image(name):
    file_name = './data/h5/' + name + '.h5'
    if os.path.exists(file_name):
        os.remove(file_name)
    f = h5py.File(file_name, 'w')  # create a fresh h5 file for this recording
    i = 1
    kinect = Kinect()
    time.sleep(1)
    s = time.time()
    while 1:
        data = kinect.get_the_data_of_color_depth_infrared_image()
        # save data: color at odd keys, depth at even keys
        f.create_dataset(str(i), data=data[0])
        f.create_dataset(str(i + 1), data=data[1])
        i += 2
        cv2.imshow('kinect', data[0])
        cv2.waitKey(1)
        if keyboard.is_pressed('esc'):  # press ESC to exit
            break
    f.close()
    print('record time: %f s' % (time.time() - s))
    return file_name
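The h5 file stores frames as alternating color/depth datasets keyed '1', '2', '3', …, so it can be inspected offline. A quick sketch, assuming name='object' above:

import h5py

with h5py.File('./data/h5/object.h5', 'r') as f:
    print(len(f.keys()) // 2, 'frames recorded')
    color = f['1'][:]   # (1080, 1920, 3) color frame
    depth = f['2'][:]   # (424, 512) depth frame, values in millimeters
    print(color.shape, depth.shape)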
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime
import numpy as np
import matplotlib.pyplot as plt
import cv2 as cv
import ctypes
import math
import time
import copy
import keyboard

valid = 0  # global counter of depth points successfully mapped to color space


class Kinect(object):
    def __init__(self):
        self._kinect = PyKinectRuntime.PyKinectRuntime(
            PyKinectV2.FrameSourceTypes_Color | PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Infrared)
        self.color_frame = None
        self.depth_frame = None
        self.infrared_frame = None
        self.w_color = 1920
        self.h_color = 1080
        self.w_depth = 512
        self.h_depth = 424
        self.csp_type = _ColorSpacePoint * np.int(self.w_color * self.h_color)
        self.csp = ctypes.cast(self.csp_type(), ctypes.POINTER(_DepthSpacePoint))
        self.color = None
        self.depth = None
        self.infrared = None
        self.first_time = True

    # Copying this image directly in python is not as efficient as getting another frame directly from C++

    """Get the latest color data"""
    def get_the_last_color(self):
        if self._kinect.has_new_color_frame():
            # the obtained image data is 1d and needs to be converted to the desired format
            frame = self._kinect.get_last_color_frame()
            # 4 channels are returned; the alpha channel is not registered
            gbra = frame.reshape([self._kinect.color_frame_desc.Height, self._kinect.color_frame_desc.Width, 4])
            # drop the fourth channel; the image is mirrored by default and needs to be flipped
            color_frame = gbra[..., :3][:, ::-1, :]
            return color_frame

    """Get the latest depth data"""
    def get_the_last_depth(self):
        if self._kinect.has_new_depth_frame():
            frame = self._kinect.get_last_depth_frame()
            depth_frame_all = frame.reshape([self._kinect.depth_frame_desc.Height, self._kinect.depth_frame_desc.Width])
            self.depth_frame = depth_frame_all[:, ::-1]
            return self.depth_frame

    """Get the latest infrared data"""
    def get_the_last_infrared(self, Infrared_threshold=16000):
        if self._kinect.has_new_infrared_frame():
            frame = self._kinect.get_last_infrared_frame()
            image_infrared_all = frame.reshape([self._kinect.infrared_frame_desc.Height, self._kinect.infrared_frame_desc.Width])
            # image_infrared_all[image_infrared_all > Infrared_threshold] = 0
            # image_infrared_all = image_infrared_all / Infrared_threshold * 255
            self.infrared_frame = image_infrared_all[:, ::-1]
            return self.infrared_frame

    """Match the depth pixels into the color image"""
    def map_depth_points_to_color_points(self, depth_points):
        depth_points = [self.map_depth_point_to_color_point(x) for x in depth_points]
        return depth_points

    def map_depth_point_to_color_point(self, depth_point):
        global valid
        depth_point_to_color = copy.deepcopy(depth_point)
        n = 0
        while 1:
            self.get_the_last_depth()
            self.get_the_last_color()
            color_point = self._kinect._mapper.MapDepthPointToColorSpace(
                _DepthSpacePoint(511 - depth_point_to_color[1], depth_point_to_color[0]),
                self.depth_frame[depth_point_to_color[0], 511 - depth_point_to_color[1]])
            if math.isinf(float(color_point.y)):
                n += 1
                if n >= 10000:
                    color_point = [0, 0]
                    break
            else:
                color_point = [np.int0(color_point.y), 1920 - np.int0(color_point.x)]  # image coordinates
                valid += 1
                print('valid number:', valid)
                break
        return color_point

    """Map an array of color pixels into the depth image"""
    def map_color_points_to_depth_points(self, color_points):
        self.get_the_last_depth()
        self.get_the_last_color()
        self._kinect._mapper.MapColorFrameToDepthSpace(
            ctypes.c_uint(512 * 424), self._kinect._depth_frame_data, ctypes.c_uint(1920 * 1080), self.csp)
        depth_points = [self.map_color_point_to_depth_point(x, True) for x in color_points]
        return depth_points

    def map_color_point_to_depth_point(self, color_point, if_call_flg=False):
        n = 0
        color_point_to_depth = copy.deepcopy(color_point)
        color_point_to_depth[1] = 1920 - color_point_to_depth[1]
        while 1:
            self.get_the_last_depth()
            self.get_the_last_color()
            if not if_call_flg:
                self._kinect._mapper.MapColorFrameToDepthSpace(
                    ctypes.c_uint(512 * 424), self._kinect._depth_frame_data, ctypes.c_uint(1920 * 1080), self.csp)
            if math.isinf(float(self.csp[color_point_to_depth[0] * 1920 + color_point_to_depth[1] - 1].y)) \
                    or np.isnan(self.csp[color_point_to_depth[0] * 1920 + color_point_to_depth[1] - 1].y):
                n += 1
                if n >= 10000:
                    print('Color mapping depth, invalid points')
                    depth_point = [0, 0]
                    break
            else:
                try:
                    depth_point = [np.int0(self.csp[color_point_to_depth[0] * 1920 + color_point_to_depth[1] - 1].y),
                                   np.int0(self.csp[color_point_to_depth[0] * 1920 + color_point_to_depth[1] - 1].x)]
                except OverflowError as e:
                    print('Color mapping depth, invalid points')
                    depth_point = [0, 0]
                break
        depth_point[1] = 512 - depth_point[1]
        return depth_point

    """Get the latest color, depth and infrared images"""
    def get_the_data_of_color_depth_infrared_image(self):
        time_s = time.time()
        if self.first_time:
            while 1:
                n = 0
                self.color = self.get_the_last_color()
                n += 1
                self.depth = self.get_the_last_depth()
                n += 1
                if self._kinect.has_new_infrared_frame():
                    frame = self._kinect.get_last_infrared_frame()
                    image_infrared_all = frame.reshape([self._kinect.depth_frame_desc.Height, self._kinect.depth_frame_desc.Width])
                    self.infrared = image_infrared_all[:, ::-1]
                    n += 1
                t = time.time() - time_s
                if n == 3:
                    self.first_time = False
                    break
                elif t > 5:
                    print('No image data is obtained, please check that the Kinect2 connection is normal')
                    break
        else:
            if self._kinect.has_new_color_frame():
                frame = self._kinect.get_last_color_frame()
                gbra = frame.reshape([self._kinect.color_frame_desc.Height, self._kinect.color_frame_desc.Width, 4])
                gbr = gbra[:, :, :3][:, ::-1, :]
                self.color = gbr
            if self._kinect.has_new_depth_frame():
                frame = self._kinect.get_last_depth_frame()
                image_depth_all = frame.reshape([self._kinect.depth_frame_desc.Height, self._kinect.depth_frame_desc.Width])
                depth = image_depth_all[:, ::-1]
                self.depth = depth
            if self._kinect.has_new_infrared_frame():
                frame = self._kinect.get_last_infrared_frame()
                image_infrared_all = frame.reshape([self._kinect.depth_frame_desc.Height, self._kinect.depth_frame_desc.Width])
                self.infrared = image_infrared_all[:, ::-1]
        return self.color, self.depth, self.infrared


if __name__ == '__main__':
    i = 1
    kinect = Kinect()
    s = time.time()
    while 1:
        data = kinect.get_the_data_of_color_depth_infrared_image()
        img = data[0]
        mat_intri = np.load('./data/intrinsic_matrix.npy')
        coff_dis = np.load('./data/distortion_cofficients.npy')
        h, w = img.shape[0], img.shape[1]
        newcameramtx, roi = cv.getOptimalNewCameraMatrix(mat_intri, coff_dis, (w, h), 1, (w, h))
        dst = cv.undistort(img, mat_intri, coff_dis, None, newcameramtx)
        dst = cv.cvtColor(dst, cv.COLOR_BGR2RGB)
        plt.imshow(dst / 255)
        plt.show()

        """
        # store the mapping matrix in an npy file
        color_points = np.zeros((512 * 424, 2), dtype=np.int)  # valid number: 207662
        k = 0
        for i in range(424):
            for j in range(512):
                color_points[k] = [i, j]
                k += 1
        depth_map_color = kinect.map_depth_points_to_color_points(color_points)
        # zero out points that fall outside the mapping range
        depth_map_color[..., 0] = np.where(depth_map_color[..., 0] >= 1080, 0, depth_map_color[..., 0])
        depth_map_color[..., 0] = np.where(depth_map_color[..., 0] < 0, 0, depth_map_color[..., 0])
        depth_map_color[..., 1] = np.where(depth_map_color[..., 1] >= 1920, 0, depth_map_color[..., 1])
        depth_map_color[..., 1] = np.where(depth_map_color[..., 1] < 0, 0, depth_map_color[..., 1])
        # interpolate to fill zero values
        zeros = np.array(list(set(np.where(depth_map_color == 0)[0])))
        for zero in zeros:
            if zero < 40 * 512 or zero > 360 * 512:
                continue
            j = 1
            while depth_map_color[zero - j].any() == 0 or depth_map_color[zero + j].any() == 0:
                j += 1
            depth_map_color[zero][0] = (depth_map_color[zero - j][0] + depth_map_color[zero + j][0]) // 2
            depth_map_color[zero][1] = (depth_map_color[zero - j][1] + depth_map_color[zero + j][1]) // 2
        np.save('full_depth_map_color.npy', depth_map_color)
        """

        depth_map_color = np.load('./data/full_depth_map_color.npy')   # (424*512, 2)
        full_depth_map_color = depth_map_color
        map_color = dst[full_depth_map_color[..., 0], full_depth_map_color[..., 1]]  # (424*512, 3)
        map_color = map_color.reshape((424, 512, 3))
        plt.imshow(map_color / 255)
        plt.show()

        if keyboard.is_pressed('esc'):
            break
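With the class above, grabbing one synchronized triple of frames reduces to a few lines; a minimal sketch (requires a connected Kinect V2 and the Kinect for Windows runtime):

from utils.kinect import Kinect

kinect = Kinect()
color, depth, infrared = kinect.get_the_data_of_color_depth_infrared_image()
print(color.shape, depth.shape, infrared.shape)  # (1080, 1920, 3) (424, 512) (424, 512)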
  2. Undistort the color images. Use the camera parameters computed above to remove distortion from each color image and save the undistorted results. The following two functions are needed:
def dedistortion(mat_intri, coff_dis, img):
    h, w = img.shape[0], img.shape[1]
    newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mat_intri, coff_dis, (w, h), 1, (w, h))
    dst = cv2.undistort(img, mat_intri, coff_dis, None, newcameramtx)
    return dst


def save_image(data, name, type, dir='test'):
    global num
    idx = str(num).zfill(6)
    if dir == 'raw':
        color_path = './data/' + name + '/color'
        depth_path = './data/' + name + '/depth'
    else:
        color_path = "./test_data/" + name + '/color'
        depth_path = "./test_data/" + name + '/depth'
    if not os.path.exists(color_path):
        os.makedirs(color_path)
    if not os.path.exists(depth_path):
        os.makedirs(depth_path)
    if type == 'color':
        cv2.imwrite(color_path + '/color-' + idx + '.png', data)
    else:
        cv2.imwrite(depth_path + '/depth-' + idx + '.png', data)
    if dir == 'test':
        num += 1
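The fourth argument of cv2.getOptimalNewCameraMatrix (alpha, set to 1 above) trades off keeping every source pixel against cropping invalid borders. A sketch of the two extremes (a hypothetical helper, not part of the repo):

import cv2

def undistort_with_alpha(img, mat_intri, coff_dis, alpha):
    h, w = img.shape[:2]
    # alpha=0: zoom so only valid pixels remain; alpha=1: keep all pixels, black borders possible
    new_mtx, roi = cv2.getOptimalNewCameraMatrix(mat_intri, coff_dis, (w, h), alpha, (w, h))
    dst = cv2.undistort(img, mat_intri, coff_dis, None, new_mtx)
    x, y, rw, rh = roi
    return dst[y:y + rh, x:x + rw]  # crop to the valid region of interest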
  3. Align the depth and color images. Because the Kinect's color camera and depth sensor are physically offset, depth and color pixels do not correspond one-to-one and must be aligned. The following functions align color and depth, then center-crop the result to size (480, 360). Note that the aligned color image may still contain alignment errors (the alignment npy file is stored in the data package; see the main function of kinect.py above for how it is generated).
def center_crop(img, crop_size):
    tw, th = crop_size
    h, w = img.shape[0], img.shape[1]
    if len(img.shape) == 2:
        crop_img = img[(h - th) // 2:(h + th) // 2, (w - tw) // 2:(w + tw) // 2]
    else:
        crop_img = img[(h - th) // 2:(h + th) // 2, (w - tw) // 2:(w + tw) // 2, :]
    return crop_img


def match_color_depth(color, depth):
    # crop + resize is worse
    full_depth_map_color = np.load('data/full_depth_map_color.npy')
    map_color = color[full_depth_map_color[..., 0], full_depth_map_color[..., 1]]  # (424*512, 3)
    map_color = map_color.reshape((424, 512, 3))
    # 512 * 424
    color = center_crop(map_color, (480, 360))
    depth = center_crop(depth, (480, 360))
    # plt.subplot(1, 2, 1)
    # plt.imshow(color)
    # plt.subplot(1, 2, 2)
    # plt.imshow(depth)
    # plt.show()
    return color, depth
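Note that crop_size is interpreted as (width, height), so (480, 360) yields a 360x480 image. A quick sanity check of the crop geometry, with dummy arrays standing in for real frames:

import numpy as np

dummy_color = np.zeros((424, 512, 3), dtype=np.uint8)
dummy_depth = np.zeros((424, 512), dtype=np.uint16)
print(center_crop(dummy_color, (480, 360)).shape)  # (360, 480, 3)
print(center_crop(dummy_depth, (480, 360)).shape)  # (360, 480)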
  4. Export a color video. Write the corrected color image stream to a video at the specified path.
def trans_video(image_path, video_name, fps, res, type):
    img_path = gb.glob(image_path + "/*.png")
    videoWriter = cv2.VideoWriter(video_name, cv2.VideoWriter_fourcc(*'DIVX'), fps, res)
    for path in sorted(img_path):  # sort to keep frames in recording order
        img = cv2.imread(path)
        img = cv2.resize(img, res)
        videoWriter.write(img)
    videoWriter.release()
    print('transform ' + type + ' video done!')


def save_video(name):
    currentdate = datetime.datetime.now()
    file_name = str(currentdate.day) + "." + str(currentdate.month) + "." + \
        str(currentdate.hour) + "." + str(currentdate.minute)
    color_path = './data/' + name + '/color'
    # depth_path = './data/' + name + '/depth'
    video_path = './data/' + name + '/video'
    if not os.path.exists(video_path):
        os.makedirs(video_path)
    trans_video(color_path, video_path + '/color-' + file_name + '.avi', 30, (1920, 1080), 'color')
    # trans_video(depth_path, depth_path + '/depth-' + file_name + '.avi', 30, (512, 424), 'depth')
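A direct call, assuming frames were already written by save_image for the target 'object':

save_video('object')  # writes ./data/object/video/color-<day.month.hour.minute>.avi at 30 fps

If the DIVX codec is unavailable on the machine, MJPG is a widely supported fallback for .avi output: cv2.VideoWriter_fourcc(*'MJPG').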
  5. Full pipeline. The main program that ties the steps together:
if __name__ == '__main__':
    # 1. shoot calibration images
    get_chess_image()
    # 2. camera calibration
    mat_intri, coff_dis = camera_calibrator()
    # mat_intri = np.load('./data/intrinsic_matrix.npy')
    # coff_dis = np.load('./data/distortion_cofficients.npy')
    # 3. capture object images and save them to an h5 file
    name = 'object'
    file_name = capture_image(name)
    f = h5py.File(file_name, 'r')
    num = 0
    for i in range(1, len(f.keys()), 2):
        color = f[str(i)][:]
        depth = f[str(i + 1)][:]
        # 4. data processing: dedistortion; match color and depth images; save color/depth images
        dedistortion_color = dedistortion(mat_intri, coff_dis, color)
        save_image(dedistortion_color, name, 'color', 'raw')
        save_image(depth, name, 'depth', 'raw')
        new_color, new_depth = match_color_depth(dedistortion_color, depth)
        save_image(new_color, name, 'color', 'test')
        save_image(new_depth, name, 'depth', 'test')
    f.close()
    print('image save done!')
    # 5. convert to video
    save_video(name)
