Computing the World Coordinates of a Specified Point from Camera Poses, with a Python Implementation

Author

shaniadolphin

Purpose

This article demonstrates one application of pose estimation: measuring the environment with a single monocular camera. In short, the task is to use the two photographs below, in which the world coordinates of the four points P1, P2, P3 and P4 are known, to compute the world coordinates of other points.

As shown in the figure, a standard calibration board is used, with 30 mm squares. The four surrounding points P1, P2, P3 and P4 serve as the calibration reference; next to the board stands a tea tin, and the points to be determined are P5 and P6.

[Figure: the calibration board with reference points P1–P4 and the tea tin with target points P5, P6]

With this kind of setup, an object of known dimensions and two photographs taken from different viewpoints are enough to measure an unknown object — here, the known calibration board is used to measure the tea tin. In practice the same approach can be used to measure the size of a car, the height of a building, the volume of cargo, and so on.

Principle

As shown in the figure below, from the world coordinates of P1, P2, P3 and P4, OpenCV's PnP solver (solvePnP) gives the camera pose for each of the two shots, from which the camera centers $Oc_1$ and $Oc_2$ can be computed. Joining $Oc_1$ with $P$ and $Oc_2$ with $P$ gives two lines $A$ and $B$; solving the resulting system for their intersection yields the coordinates of the target point.

[Figure: the two camera centers $Oc_1$, $Oc_2$ and the rays through the target point P]

1. Compute $P_c$, the coordinates of point $P$ in the camera frame

For how $P$ is mapped from 2D back into 3D, refer to the figure above. The camera center $O_c$ has already been obtained by solving PnP, and the pixel coordinates of the target point $P$ in the image are $(u, v)$.

From these we compute the coordinates of $P$ in the camera frame, $P_c$ (the point labeled $P_c$ in the figure). The conversion is given below, where $f$ is the focal length of the lens in mm (this experiment used a Zhongyi Optics 35 mm manual lens), $(u, v)$ are the pixel coordinates, and the remaining quantities are camera intrinsics:

$$\begin{cases} x_c = (u - u_0) \cdot f / f_x \\ y_c = (v - v_0) \cdot f / f_y \\ z_c = f \end{cases}$$
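As a rough illustration (a sketch added here, not part of the original code — it plugs the intrinsics from the calibration table near the end of this article into the formula above), the target pixel of the first photo maps into the camera frame roughly like this:

fx, fy = 5546.180, 5572.883      # focal lengths in pixels (from the calibration table)
u0, v0 = 1821.049, 1347.461      # principal point in pixels
w, h = 3696, 2448                # image resolution
f_x_mm = fx * 23.6 / w           # focal length in mm, derived from the x direction
f_y_mm = fy * 15.6 / h           # focal length in mm, derived from the y direction

u, v = 691, 336                  # pixel coordinates of the target point P
xc = (u - u0) * f_x_mm / fx      # camera-frame x, in mm
yc = (v - v0) * f_y_mm / fy      # camera-frame y, in mm
zc = (f_x_mm + f_y_mm) / 2       # camera-frame z, i.e. f, in mm
print(xc, yc, zc)                # roughly -7.2, -6.4, 35.5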

The points from each photograph, including the pixel coordinates of the point to be solved for, are entered as input. For the first image:

P11 = np.array([0, 0, 0])
P12 = np.array([0, 300, 0])
P13 = np.array([210, 0, 0])
P14 = np.array([210, 300, 0])
p11 = np.array([1765, 725])
p12 = np.array([3068, 1254])
p13 = np.array([1249, 1430])
p14 = np.array([2648, 2072])

p4psolver1.Points3D[0] = np.array([P11, P12, P13, P14])
p4psolver1.Points2D[0] = np.array([p11, p12, p13, p14])
#p4psolver1.point2find = np.array([4149, 671])
#p4psolver1.point2find = np.array([675, 835])
p4psolver1.point2find = np.array([691, 336])

The camera intrinsics are read from the calibration file, as in the code below. The sensor size is preset in the code: the author's Nikon D7000 is a DX-format body whose sensor, according to the published specifications, measures 23.6 mm × 15.6 mm.

The lens used on this body was the Zhongyi 35 mm manual prime.
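As a quick consistency check against the calibration table given later in this article, the focal length recovered from the intrinsics is

$$f \approx f_x \cdot \frac{23.6}{3696} = 5546.18 \times 0.006385 \approx 35.4\ \text{mm},$$

which agrees with the nominal 35 mm focal length of the lens.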

def getudistmap(self, filename):
    with open(filename, 'r', newline='') as csvfile:
        spamreader = csv.reader(csvfile, delimiter=',', quotechar='"')
        rows = [row for row in spamreader]
        self.cameraMatrix = np.zeros((3, 3))
        #Dt = np.zeros((4, 1))
        size_w = 23.6
        size_h = 15.6
        imageWidth = int(rows[0][1])
        imageHeight = int(rows[0][2])
        self.cameraMatrix[0][0] = rows[1][1]
        self.cameraMatrix[1][1] = rows[1][2]
        self.cameraMatrix[0][2] = rows[1][3]
        self.cameraMatrix[1][2] = rows[1][4]
        self.cameraMatrix[2][2] = 1
        print(len(rows[2]))
        if len(rows[2]) == 5:
            print('fisheye')
            self.distCoefs = np.zeros((4, 1))
            self.distCoefs[0][0] = rows[2][1]
            self.distCoefs[1][0] = rows[2][2]
            self.distCoefs[2][0] = rows[2][3]
            self.distCoefs[3][0] = rows[2][4]
            scaled_K = self.cameraMatrix * 0.8  # The values of K is to scale with image dimension.
            scaled_K[2][2] = 1.0
            #newcameramtx = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(self.cameraMatrix, self.distCoefs, (imageWidth, imageHeight), np.eye(3), balance=0)
            #map1, map2 = cv2.fisheye.initUndistortRectifyMap(self.cameraMatrix, self.distCoefs, np.eye(3), newcameramtx, (imageWidth, imageHeight), cv2.CV_32FC1)
        else:
            print('normal')
            self.distCoefs = np.zeros((1, 5))
            self.distCoefs[0][0] = rows[2][1]
            self.distCoefs[0][1] = rows[2][2]
            self.distCoefs[0][2] = rows[2][3]
            self.distCoefs[0][3] = rows[2][4]
            self.distCoefs[0][4] = rows[2][5]
            #newcameramtx, roi = cv2.getOptimalNewCameraMatrix(self.cameraMatrix, self.distCoefs, (imageWidth, imageHeight), 1, (imageWidth, imageHeight))
            #map1, map2 = cv2.initUndistortRectifyMap(self.cameraMatrix, self.distCoefs, None, newcameramtx, (imageWidth, imageHeight), cv2.CV_32FC1)
        print('dim = %d*%d'%(imageWidth, imageHeight))
        print('Kt = \n', self.cameraMatrix)
        #print('newcameramtx = \n', newcameramtx)
        print('Dt = \n', self.distCoefs)
        self.f = [self.cameraMatrix[0][0]*(size_w/imageWidth), self.cameraMatrix[1][1]*(size_h/imageHeight)]
        #self.f = [350, 350]
        print('f = \n', self.f)
        #print(map1,'\n',map2.T)
        return

The pixel coordinates of a point can then be mapped into the camera frame, the first step toward recovering its world coordinates:

def WordFrame2ImageFrame(self, WorldPoints):
    # note: solver() stores the pose as self.rvec / self.tvec
    pro_points, jacobian = cv2.projectPoints(WorldPoints, self.rvec, self.tvec, self.cameraMatrix, self.distCoefs)
    return pro_points

def ImageFrame2CameraFrame(self, pixPoints):
    fx = self.cameraMatrix[0][0]
    u0 = self.cameraMatrix[0][2]
    fy = self.cameraMatrix[1][1]
    v0 = self.cameraMatrix[1][2]
    zc = (self.f[0] + self.f[1]) / 2
    xc = (pixPoints[0] - u0) * self.f[0] / fx  # f = fx * sensor size / resolution
    yc = (pixPoints[1] - v0) * self.f[1] / fy
    point = np.array([xc, yc, zc])
    return point

2. Compute the direction vector of P in the world frame

The computation above yields $P_c(x_c, y_c, z_c)$, but these coordinates are expressed in the camera frame; what we ultimately need are the corresponding world-frame coordinates $P_w(x_w, y_w, z_w)$.

To convert $P_c$ into $P_w$ (which also involves tracking the coordinates of the world origin $O_w$ in the camera frame), we need the three Euler angles $\theta_x, \theta_y, \theta_z$ obtained when solving PnP for the pose. If the camera frame $C$ is rotated by these angles about its z-axis, y-axis and x-axis in that order, it becomes exactly parallel to the world frame $W$. During these three rotations $P_c$ would rotate along with the frame, so its position in the world frame $W$ would change. To keep point $P$ at its original world-frame position while $C$ is rotated, $P_c$ must instead be rotated three times in the opposite direction; the rotated point, still expressed in the camera frame $C$, is denoted $P_c'$. The value of $P_c'$ equals the world-frame vector from $O_c$ to $P$. Finally, the world coordinates of $P$ are $P_w = P_c' + $ (world coordinates of $O_c$). The concrete steps are as follows.

First rotation: the original camera frame $C$ is rotated about its z-axis by $\theta_z$, becoming frame $C_1$. Let $P_0 = (x_0, y_0, z_0)$ be the starting coordinates (those of $O_w$ in $C$); rotating this point about the z-axis by $-\theta_z$ gives $P_1 = (x_1, y_1, z_1)$, the coordinates of $O_w$ in frame $C_1$.

def CodeRotateByZ(self, x, y, thetaz):  # rotate a point about the Z axis
    x1 = x  # copy the inputs so the result stays correct even if &x == &outx
    y1 = y
    rz = thetaz*3.141592653589793/180
    outx = math.cos(rz)*x1 - math.sin(rz)*y1
    outy = math.sin(rz)*x1 + math.cos(rz)*y1
    return outx, outy

Second rotation: frame $C_1$ is rotated about its y-axis by $\theta_y$, becoming frame $C_2$. Starting from $P_1 = (x_1, y_1, z_1)$, rotating $P_1$ about the y-axis by $-\theta_y$ gives $P_2 = (x_2, y_2, z_2)$, the coordinates of $O_w$ in frame $C_2$.

def CodeRotateByY(self, x, z, thetay):
    x1 = x
    z1 = z
    ry = thetay * 3.141592653589793 / 180
    outx = math.cos(ry) * x1 + math.sin(ry) * z1
    outz = math.cos(ry) * z1 - math.sin(ry) * x1
    return outx, outz

Third rotation: frame $C_2$ is rotated about its x-axis by $\theta_x$, becoming frame $C_3$. Starting from $P_2 = (x_2, y_2, z_2)$, rotating $P_2$ about the x-axis by $-\theta_x$ gives $P_3 = (x_3, y_3, z_3)$, the coordinates of $O_w$ in frame $C_3$.

def CodeRotateByX(self, y, z, thetax):
    y1 = y
    z1 = z
    rx = (thetax * 3.141592653589793) / 180
    outy = math.cos(rx) * y1 - math.sin(rx) * z1
    outz = math.cos(rx) * z1 + math.sin(rx) * y1
    return outy, outz
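A quick way to confirm the sign convention of these helpers (a sketch, assuming the PNPSolver class from the full listing below) is to rotate a point lying on an axis by 90 degrees:

# Sketch only: rotating the point (1, 0) by +90 degrees about Z should give (0, 1);
# the same input rotated about Y gives (0, -1) because of the axis orientation.
solver = PNPSolver()
print(solver.CodeRotateByZ(1.0, 0.0, 90.0))   # -> approximately (0.0, 1.0)
print(solver.CodeRotateByY(1.0, 0.0, 90.0))   # -> approximately (0.0, -1.0)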

At this point the camera's position in the world frame is $(-x_3, -y_3, -z_3)$. Combining the rotation functions above, the complete pose-solving code is as follows:

def solver(self):
    retval, self.rvec, self.tvec = cv2.solvePnP(self.Points3D, self.Points2D, self.cameraMatrix, self.distCoefs)
    #print('self.rvec:', self.rvec)
    #print('self.tvec:', self.tvec)
    thetax, thetay, thetaz = self.rotationVectorToEulerAngles(self.rvec, 0)
    x = self.tvec[0][0]
    y = self.tvec[1][0]
    z = self.tvec[2][0]
    self.Position_OwInCx = x
    self.Position_OwInCy = y
    self.Position_OwInCz = z
    self.Position_theta = [thetax, thetay, thetaz]
    #print('Position_theta:', self.Position_theta)
    x, y = self.CodeRotateByZ(x, y, -1 * thetaz)
    x, z = self.CodeRotateByY(x, z, -1 * thetay)
    y, z = self.CodeRotateByX(y, z, -1 * thetax)
    self.Theta_W2C = (-1*thetax, -1*thetay, -1*thetaz)
    self.Position_OcInWx = x*(-1)
    self.Position_OcInWy = y*(-1)
    self.Position_OcInWz = z*(-1)
    self.Position_OcInW = np.array([self.Position_OcInWx, self.Position_OcInWy, self.Position_OcInWz])
    print('Position_OcInW:', self.Position_OcInW)
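For readers who prefer matrix form: the three inverse rotations above are equivalent to multiplying by the transpose of the rotation matrix returned by cv2.Rodrigues. The following is a compact sketch of that alternative formulation (not the code used in this article); rvec and tvec are the outputs of cv2.solvePnP and Pc is the camera-frame point from ImageFrame2CameraFrame:

import cv2
import numpy as np

def camera_ray_in_world(rvec, tvec, Pc):
    # sketch: gives the same result as the Euler-angle route used above
    R, _ = cv2.Rodrigues(rvec)        # rotation taking world coordinates to camera coordinates
    Oc_w = (-R.T @ tvec).ravel()      # camera center expressed in world coordinates
    dir_w = R.T @ np.asarray(Pc)      # P_c re-expressed in world orientation (P_c')
    return Oc_w, Oc_w + dir_w         # two points defining the ray from Oc towards P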

The camera's world-frame position gives one point a1, and a second point a2 on the ray towards P gives the other; together they define the first line A:

def SetLineA(self, A1x, A1y, A1z, A2x, A2y, A2z):
    self.a1 = np.array([A1x, A1y, A1z])
    self.a2 = np.array([A2x, A2y, A2z])

3. Finally, compute the world coordinates of P from the two lines obtained from the two images

Carrying out the same operations on the other image gives the second line, so we end up with two lines A and B; the world coordinates of the target point follow from their intersection. In practice, because of measurement error, A and B will almost never truly intersect, so instead we compute the pair of nearest points between them.
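For reference, the parameters computed by GetDistance below are the standard closest-point solution for two skew lines. Writing the lines as $a(t_1) = a_1 + t_1 d_1$ and $b(t_2) = b_1 + t_2 d_2$, with $e = b_1 - a_1$:

$$t_1 = \frac{(e \times d_2) \cdot (d_1 \times d_2)}{\lVert d_1 \times d_2 \rVert^2}, \qquad t_2 = \frac{(e \times d_1) \cdot (d_1 \times d_2)}{\lVert d_1 \times d_2 \rVert^2}$$

The target point is then taken as the midpoint of the two nearest points, as done in the main program further below.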

[Figure: the nearly-intersecting lines A and B and their closest points]

class GetDistanceOf2linesIn3D():
    def __init__(self):
        print('GetDistanceOf2linesIn3D class')

    def dot(self, ax, ay, az, bx, by, bz):
        result = ax*bx + ay*by + az*bz
        return result

    def cross(self, ax, ay, az, bx, by, bz):
        x = ay*bz - az*by
        y = az*bx - ax*bz
        z = ax*by - ay*bx
        return x, y, z

    def crossarray(self, a, b):
        x = a[1]*b[2] - a[2]*b[1]
        y = a[2]*b[0] - a[0]*b[2]
        z = a[0]*b[1] - a[1]*b[0]
        return np.array([x, y, z])

    def norm(self, ax, ay, az):
        return math.sqrt(self.dot(ax, ay, az, ax, ay, az))

    def norm2(self, one):
        return math.sqrt(np.dot(one, one))

    def SetLineA(self, A1x, A1y, A1z, A2x, A2y, A2z):
        self.a1 = np.array([A1x, A1y, A1z])
        self.a2 = np.array([A2x, A2y, A2z])

    def SetLineB(self, B1x, B1y, B1z, B2x, B2y, B2z):
        self.b1 = np.array([B1x, B1y, B1z])
        self.b2 = np.array([B2x, B2y, B2z])

    def GetDistance(self):
        d1 = self.a2 - self.a1
        d2 = self.b2 - self.b1
        e = self.b1 - self.a1
        cross_e_d2 = self.crossarray(e, d2)
        cross_e_d1 = self.crossarray(e, d1)
        cross_d1_d2 = self.crossarray(d1, d2)
        dd = self.norm2(cross_d1_d2)
        t1 = np.dot(cross_e_d2, cross_d1_d2)
        t2 = np.dot(cross_e_d1, cross_d1_d2)
        t1 = t1/(dd*dd)
        t2 = t2/(dd*dd)
        self.PonA = self.a1 + (self.a2 - self.a1) * t1
        self.PonB = self.b1 + (self.b2 - self.b1) * t2
        self.distance = self.norm2(self.PonB - self.PonA)
        print('distance=', self.distance)
        return self.distance
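A minimal check of this class on a pair of lines with a known answer (a sketch, not part of the original post):

# Line A runs along the X axis through the origin; line B runs along the Y axis
# through (0, 0, 1). Their common perpendicular has length 1, with nearest
# points (0, 0, 0) and (0, 0, 1).
g = GetDistanceOf2linesIn3D()
g.SetLineA(0, 0, 0, 1, 0, 0)
g.SetLineB(0, 0, 1, 0, 1, 1)
print(g.GetDistance())    # -> 1.0
print(g.PonA, g.PonB)     # -> [0. 0. 0.] [0. 0. 1.]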

Summary and verification

The discussion above covers the overall principle and procedure. The complete solver code and its results are given below. The file "calibration.csv" opened by the code is produced by camera calibration and stores the intrinsics obtained for the author's D7000, as listed in the following table.

Table listing:

dim           3696 (width)             2448 (height)
cameraMatrix  5546.18009098042 (fx)    5572.883 (fy)     1821.049 (u0)    1347.461 (v0)
distCoefs     -0.12735 (k1)            0.200792 (k2)     -0.00209 (p1)    0.000943 (p2)    -0.79439 (k3)
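The raw calibration.csv is not shown in the original post; based on how getudistmap indexes its rows, it presumably looks something like this (an assumed layout, reconstructed from the table above):

dim,3696,2448
cameraMatrix,5546.18009098042,5572.883,1821.049,1347.461
distCoefs,-0.12735,0.200792,-0.00209,0.000943,-0.79439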

Code listing:

#!/usr/bin/env python3
# coding:utf-8
import cv2
import numpy as np
import time
from PIL import Image, ImageTk
import threading
import os
import re
import subprocess
import random
import math
import csv
import argparse

class PNPSolver():
    def __init__(self):
        self.COLOR_WHITE = (255, 255, 255)
        self.COLOR_BLUE = (255, 0, 0)
        self.COLOR_LBLUE = (255, 200, 100)
        self.COLOR_GREEN = (0, 240, 0)
        self.COLOR_RED = (0, 0, 255)
        self.COLOR_DRED = (0, 0, 139)
        self.COLOR_YELLOW = (29, 227, 245)
        self.COLOR_PURPLE = (224, 27, 217)
        self.COLOR_GRAY = (127, 127, 127)
        self.Points3D = np.zeros((1, 4, 3), np.float32)  # world coordinates of the 4 reference points
        self.Points2D = np.zeros((1, 4, 2), np.float32)  # pixel coordinates of the 4 reference points
        self.point2find = np.zeros((1, 2), np.float32)
        self.cameraMatrix = None
        self.distCoefs = None
        self.f = 0

    def rotationVectorToEulerAngles(self, rvecs, anglestype):
        R = np.zeros((3, 3), dtype=np.float64)
        cv2.Rodrigues(rvecs, R)
        sy = math.sqrt(R[2,1] * R[2,1] + R[2,2] * R[2,2])
        singular = sy < 1e-6
        if not singular:
            x = math.atan2(R[2,1], R[2,2])
            y = math.atan2(-R[2,0], sy)
            z = math.atan2(R[1,0], R[0,0])
        else:
            x = math.atan2(-R[1,2], R[1,1])
            y = math.atan2(-R[2,0], sy)
            z = 0
        if anglestype == 0:
            x = x*180.0/3.141592653589793
            y = y*180.0/3.141592653589793
            z = z*180.0/3.141592653589793
        elif anglestype == 1:
            x = x
            y = y
            z = z
        print(x)
        return x, y, z

    def CodeRotateByZ(self, x, y, thetaz):  # rotate a point about the Z axis
        x1 = x  # copy the inputs so the result stays correct even if &x == &outx
        y1 = y
        rz = thetaz*3.141592653589793/180
        outx = math.cos(rz)*x1 - math.sin(rz)*y1
        outy = math.sin(rz)*x1 + math.cos(rz)*y1
        return outx, outy

    def CodeRotateByY(self, x, z, thetay):
        x1 = x
        z1 = z
        ry = thetay * 3.141592653589793 / 180
        outx = math.cos(ry) * x1 + math.sin(ry) * z1
        outz = math.cos(ry) * z1 - math.sin(ry) * x1
        return outx, outz

    def CodeRotateByX(self, y, z, thetax):
        y1 = y
        z1 = z
        rx = (thetax * 3.141592653589793) / 180
        outy = math.cos(rx) * y1 - math.sin(rx) * z1
        outz = math.cos(rx) * z1 + math.sin(rx) * y1
        return outy, outz

    def solver(self):
        retval, self.rvec, self.tvec = cv2.solvePnP(self.Points3D, self.Points2D, self.cameraMatrix, self.distCoefs)
        thetax, thetay, thetaz = self.rotationVectorToEulerAngles(self.rvec, 0)
        x = self.tvec[0][0]
        y = self.tvec[1][0]
        z = self.tvec[2][0]
        self.Position_OwInCx = x
        self.Position_OwInCy = y
        self.Position_OwInCz = z
        self.Position_theta = [thetax, thetay, thetaz]
        #print('Position_theta:', self.Position_theta)
        x, y = self.CodeRotateByZ(x, y, -1 * thetaz)
        x, z = self.CodeRotateByY(x, z, -1 * thetay)
        y, z = self.CodeRotateByX(y, z, -1 * thetax)
        self.Theta_W2C = ([-1*thetax, -1*thetay, -1*thetaz])
        self.Position_OcInWx = x*(-1)
        self.Position_OcInWy = y*(-1)
        self.Position_OcInWz = z*(-1)
        self.Position_OcInW = np.array([self.Position_OcInWx, self.Position_OcInWy, self.Position_OcInWz])
        print('Position_OcInW:', self.Position_OcInW)

    def WordFrame2ImageFrame(self, WorldPoints):
        # note: solver() stores the pose as self.rvec / self.tvec
        pro_points, jacobian = cv2.projectPoints(WorldPoints, self.rvec, self.tvec, self.cameraMatrix, self.distCoefs)
        return pro_points

    def ImageFrame2CameraFrame(self, pixPoints):
        fx = self.cameraMatrix[0][0]
        u0 = self.cameraMatrix[0][2]
        fy = self.cameraMatrix[1][1]
        v0 = self.cameraMatrix[1][2]
        zc = (self.f[0] + self.f[1]) / 2
        xc = (pixPoints[0] - u0) * self.f[0] / fx  # f = fx * sensor size / resolution
        yc = (pixPoints[1] - v0) * self.f[1] / fy
        point = np.array([xc, yc, zc])
        return point

    def getudistmap(self, filename):
        with open(filename, 'r', newline='') as csvfile:
            spamreader = csv.reader(csvfile, delimiter=',', quotechar='"')
            rows = [row for row in spamreader]
            self.cameraMatrix = np.zeros((3, 3))
            #Dt = np.zeros((4, 1))
            size_w = 23.6
            size_h = 15.6
            imageWidth = int(rows[0][1])
            imageHeight = int(rows[0][2])
            self.cameraMatrix[0][0] = rows[1][1]
            self.cameraMatrix[1][1] = rows[1][2]
            self.cameraMatrix[0][2] = rows[1][3]
            self.cameraMatrix[1][2] = rows[1][4]
            self.cameraMatrix[2][2] = 1
            print(len(rows[2]))
            if len(rows[2]) == 5:
                print('fisheye')
                self.distCoefs = np.zeros((4, 1))
                self.distCoefs[0][0] = rows[2][1]
                self.distCoefs[1][0] = rows[2][2]
                self.distCoefs[2][0] = rows[2][3]
                self.distCoefs[3][0] = rows[2][4]
                scaled_K = self.cameraMatrix * 0.8  # the values of K scale with the image dimension
                scaled_K[2][2] = 1.0
            else:
                print('normal')
                self.distCoefs = np.zeros((1, 5))
                self.distCoefs[0][0] = rows[2][1]
                self.distCoefs[0][1] = rows[2][2]
                self.distCoefs[0][2] = rows[2][3]
                self.distCoefs[0][3] = rows[2][4]
                self.distCoefs[0][4] = rows[2][5]
            print('dim = %d*%d'%(imageWidth, imageHeight))
            print('Kt = \n', self.cameraMatrix)
            print('Dt = \n', self.distCoefs)
            self.f = [self.cameraMatrix[0][0]*(size_w/imageWidth), self.cameraMatrix[1][1]*(size_h/imageHeight)]
            print('f = \n', self.f)
            return

class GetDistanceOf2linesIn3D():
    def __init__(self):
        print('GetDistanceOf2linesIn3D class')

    def dot(self, ax, ay, az, bx, by, bz):
        result = ax*bx + ay*by + az*bz
        return result

    def cross(self, ax, ay, az, bx, by, bz):
        x = ay*bz - az*by
        y = az*bx - ax*bz
        z = ax*by - ay*bx
        return x, y, z

    def crossarray(self, a, b):
        x = a[1]*b[2] - a[2]*b[1]
        y = a[2]*b[0] - a[0]*b[2]
        z = a[0]*b[1] - a[1]*b[0]
        return np.array([x, y, z])

    def norm(self, ax, ay, az):
        return math.sqrt(self.dot(ax, ay, az, ax, ay, az))

    def norm2(self, one):
        return math.sqrt(np.dot(one, one))

    def SetLineA(self, A1x, A1y, A1z, A2x, A2y, A2z):
        self.a1 = np.array([A1x, A1y, A1z])
        self.a2 = np.array([A2x, A2y, A2z])

    def SetLineB(self, B1x, B1y, B1z, B2x, B2y, B2z):
        self.b1 = np.array([B1x, B1y, B1z])
        self.b2 = np.array([B2x, B2y, B2z])

    def GetDistance(self):
        d1 = self.a2 - self.a1
        d2 = self.b2 - self.b1
        e = self.b1 - self.a1
        cross_e_d2 = self.crossarray(e, d2)
        cross_e_d1 = self.crossarray(e, d1)
        cross_d1_d2 = self.crossarray(d1, d2)
        dd = self.norm2(cross_d1_d2)
        t1 = np.dot(cross_e_d2, cross_d1_d2)
        t2 = np.dot(cross_e_d1, cross_d1_d2)
        t1 = t1/(dd*dd)
        t2 = t2/(dd*dd)
        self.PonA = self.a1 + (self.a2 - self.a1) * t1
        self.PonB = self.b1 + (self.b2 - self.b1) * t2
        self.distance = self.norm2(self.PonB - self.PonA)
        print('distance=', self.distance)
        return self.distance

if __name__ == "__main__":
    print("***************************************")
    print("test example")
    print("***************************************")
    parser = argparse.ArgumentParser(description='test')
    parser.add_argument('-file', type=str, default='calibration.csv')
    args = parser.parse_args()
    calibrationfile = args.file

    p4psolver1 = PNPSolver()
    '''
    P11 = np.array([0, 0, 0])
    P12 = np.array([0, 200, 0])
    P13 = np.array([150, 0, 0])
    P14 = np.array([150, 200, 0])
    p11 = np.array([2985, 1688])
    p12 = np.array([5081, 1690])
    p13 = np.array([2997, 2797])
    p14 = np.array([5544, 2757])
    '''
    P11 = np.array([0, 0, 0])
    P12 = np.array([0, 300, 0])
    P13 = np.array([210, 0, 0])
    P14 = np.array([210, 300, 0])
    p11 = np.array([1765, 725])
    p12 = np.array([3068, 1254])
    p13 = np.array([1249, 1430])
    p14 = np.array([2648, 2072])
    p4psolver1.Points3D[0] = np.array([P11, P12, P13, P14])
    p4psolver1.Points2D[0] = np.array([p11, p12, p13, p14])
    #p4psolver1.point2find = np.array([4149, 671])
    #p4psolver1.point2find = np.array([675, 835])
    p4psolver1.point2find = np.array([691, 336])
    p4psolver1.getudistmap(calibrationfile)
    p4psolver1.solver()

    p4psolver2 = PNPSolver()
    '''
    P21 = np.array([0, 0, 0])
    P22 = np.array([0, 200, 0])
    P23 = np.array([150, 0, 0])
    P24 = np.array([150, 200, 0])
    p21 = np.array([3062, 3073])
    p22 = np.array([3809, 3089])
    p23 = np.array([3035, 3208])
    p24 = np.array([3838, 3217])
    '''
    P21 = np.array([0, 0, 0])
    P22 = np.array([0, 300, 0])
    P23 = np.array([210, 0, 0])
    P24 = np.array([210, 300, 0])
    p21 = np.array([1307, 790])
    p22 = np.array([2555, 797])
    p23 = np.array([1226, 1459])
    p24 = np.array([2620, 1470])
    p4psolver2.Points3D[0] = np.array([P21, P22, P23, P24])
    p4psolver2.Points2D[0] = np.array([p21, p22, p23, p24])
    #p4psolver2.point2find = np.array([3439, 2691])
    #p4psolver2.point2find = np.array([712, 1016])
    p4psolver2.point2find = np.array([453, 655])
    p4psolver2.getudistmap(calibrationfile)
    p4psolver2.solver()

    point2find1_CF = p4psolver1.ImageFrame2CameraFrame(p4psolver1.point2find)
    #Oc1P_x1 = point2find1_CF[0]
    #Oc1P_y1 = point2find1_CF[1]
    #Oc1P_z1 = point2find1_CF[2]
    Oc1P_1 = np.array(point2find1_CF)
    print(Oc1P_1)
    Oc1P_1[0], Oc1P_1[1] = p4psolver1.CodeRotateByZ(Oc1P_1[0], Oc1P_1[1], p4psolver1.Theta_W2C[2])
    Oc1P_1[0], Oc1P_1[2] = p4psolver1.CodeRotateByY(Oc1P_1[0], Oc1P_1[2], p4psolver1.Theta_W2C[1])
    Oc1P_1[1], Oc1P_1[2] = p4psolver1.CodeRotateByX(Oc1P_1[1], Oc1P_1[2], p4psolver1.Theta_W2C[0])
    a1 = np.array([p4psolver1.Position_OcInWx, p4psolver1.Position_OcInWy, p4psolver1.Position_OcInWz])
    a2 = a1 + Oc1P_1
    #a2 = (p4psolver1.Position_OcInWx + Oc1P_1[0], p4psolver1.Position_OcInWy + Oc1P_y1, p4psolver1.Position_OcInWz + Oc1P_z1)

    point2find2_CF = p4psolver2.ImageFrame2CameraFrame(p4psolver2.point2find)
    #Oc2P_x2 = point2find2_CF[0]
    #Oc2P_y2 = point2find2_CF[1]
    #Oc2P_z2 = point2find2_CF[2]
    Oc2P_2 = np.array(point2find2_CF)
    print(Oc2P_2)
    Oc2P_2[0], Oc2P_2[1] = p4psolver2.CodeRotateByZ(Oc2P_2[0], Oc2P_2[1], p4psolver2.Theta_W2C[2])
    Oc2P_2[0], Oc2P_2[2] = p4psolver2.CodeRotateByY(Oc2P_2[0], Oc2P_2[2], p4psolver2.Theta_W2C[1])
    Oc2P_2[1], Oc2P_2[2] = p4psolver2.CodeRotateByX(Oc2P_2[1], Oc2P_2[2], p4psolver2.Theta_W2C[0])
    b1 = ([p4psolver2.Position_OcInWx, p4psolver2.Position_OcInWy, p4psolver2.Position_OcInWz])
    b2 = b1 + Oc2P_2
    #b2 = (p4psolver2.Position_OcInWx + Oc2P_x2, p4psolver2.Position_OcInWy + Oc2P_y2, p4psolver2.Position_OcInWz + Oc2P_z2)

    g = GetDistanceOf2linesIn3D()
    g.SetLineA(a1[0], a1[1], a1[2], a2[0], a2[1], a2[2])
    g.SetLineB(b1[0], b1[1], b1[2], b2[0], b2[1], b2[2])
    distance = g.GetDistance()
    pt = (g.PonA + g.PonB) / 2
    print(pt)

    A = np.array([241.64926392, -78.7377477, 166.08307887])
    B = np.array([141.010851, -146.64449841, 167.28164652])
    print(math.sqrt(np.dot(A - B, A - B)))

The world coordinates of point A, as printed by the run above, are:

distance= 0.13766177937900279

[241.64926392 -78.7377477 166.08307887]

The world coordinates of point B are:

distance= 0.7392672771306183

[ 141.010851 -146.64449841 167.28164652]

Computing the distance between points A and B:

A = np.array([241.64926392, -78.7377477, 166.08307887])
B = np.array([141.010851, -146.64449841, 167.28164652])
print(math.sqrt(np.dot(A-B, A-B)))

The result is:

121.41191667813395

These numbers show that the tea tin is about 171 mm tall (the measured z of roughly 167 mm plus the 4 mm thickness of the glass calibration board) and that the diagonal between the two measured points is about 121 mm.

Alternatively, the board thickness can be written directly into the Z component when the world-coordinate data are generated, as shown below:

P11 = np.array([0, 0, 4])
P12 = np.array([0, 300, 4])
P13 = np.array([210, 0, 4])
P14 = np.array([210, 300, 4])

P21 = np.array([0, 0, 4])
P22 = np.array([0, 300, 4])
P23 = np.array([210, 0, 4])
P24 = np.array([210, 300, 4])

which directly yields the corresponding result:

[ 141.010851 -146.64449841 171.28164652]

121.41191667813395

