DLT (Direct Linear Transform)
For a homography $x_i' = Hx_i$, the corresponding feature points in the two images are known; the question is how to find the required $H$. The DLT algorithm solves this problem.
Principle
The fit uses the least-squares error: the goal is the parameter vector that minimizes the sum of squared residuals,
$$\hat{\boldsymbol{p}}=\arg\min_{\boldsymbol{p}}\sum_i\|\boldsymbol{f}(\boldsymbol{x}_i;\boldsymbol{p})-\boldsymbol{x}_i'\|^2$$
For a homography,
$$f(x;p)=H\begin{bmatrix}x\\y\\1\end{bmatrix}$$
that is,
$$\begin{aligned}\begin{bmatrix}x_i'\\y_i'\\1\end{bmatrix}&=\lambda\begin{bmatrix}h_{11}&h_{12}&h_{13}\\h_{21}&h_{22}&h_{23}\\h_{31}&h_{32}&h_{33}\end{bmatrix}\begin{bmatrix}x_i\\y_i\\1\end{bmatrix}\\x_i'&=\lambda(h_{11}x_i+h_{12}y_i+h_{13})\\y_i'&=\lambda(h_{21}x_i+h_{22}y_i+h_{23})\\1&=\lambda(h_{31}x_i+h_{32}y_i+h_{33})\end{aligned}$$
where $\lambda$ is a scale factor; eliminating it gives
$$x_i'(h_{31}x_i+h_{32}y_i+h_{33})=h_{11}x_i+h_{12}y_i+h_{13}$$
$$y_i'(h_{31}x_i+h_{32}y_i+h_{33})=h_{21}x_i+h_{22}y_i+h_{23}$$
that is,
$$\begin{bmatrix}x_i&y_i&1&0&0&0&-x_i'x_i&-x_i'y_i&-x_i'\\0&0&0&x_i&y_i&1&-y_i'x_i&-y_i'y_i&-y_i'\end{bmatrix}\begin{bmatrix}h_{11}\\h_{12}\\h_{13}\\h_{21}\\h_{22}\\h_{23}\\h_{31}\\h_{32}\\h_{33}\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$$
This covers a single correspondence; stacking $n$ correspondences gives
$$\begin{bmatrix}x_1&y_1&1&0&0&0&-x_1'x_1&-x_1'y_1&-x_1'\\0&0&0&x_1&y_1&1&-y_1'x_1&-y_1'y_1&-y_1'\\&&&&\vdots\\x_n&y_n&1&0&0&0&-x_n'x_n&-x_n'y_n&-x_n'\\0&0&0&x_n&y_n&1&-y_n'x_n&-y_n'y_n&-y_n'\end{bmatrix}\begin{bmatrix}h_{11}\\h_{12}\\h_{13}\\h_{21}\\h_{22}\\h_{23}\\h_{31}\\h_{32}\\h_{33}\end{bmatrix}=\mathbf{0}$$
where the right-hand side is the zero vector of length $2n$.
Denote the $2n\times9$ coefficient matrix above as $A$, and the column vector $(h_{11},h_{12},\dots,h_{33})^\mathrm{T}$ as $h$.
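As a concrete sketch of how $A$ is assembled (assuming NumPy; the helper name `build_dlt_matrix` is mine, not from the text):

```python
import numpy as np

def build_dlt_matrix(src, dst):
    """Stack the two DLT equations of each correspondence into A (2n x 9).

    src, dst: (n, 2) arrays of matched points (x_i, y_i) and (x_i', y_i').
    """
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    return np.asarray(rows, dtype=float)
```

For noise-free correspondences, $Ah=0$ holds exactly for the true $h$; for instance, with `dst` equal to `src`, the identity vector $h=(1,0,0,0,1,0,0,0,1)^\mathrm{T}$ lies in the null space of $A$.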
This defines the least-squares problem: minimize $\|Ah\|^2$.
- Since $h$ is determined only up to a scale factor, add the constraint $\|h\|=1$
- Solution: $\hat{h}$ is the eigenvector of $A^\mathrm{T}A$ corresponding to its smallest eigenvalue
- At least 4 correspondences are required ($n\geq4$, with no 3 points collinear)
To minimize $\|Ah\|^2$ subject to $\|h\|=1$, use a Lagrange multiplier. The Lagrangian is
$$L(h,\lambda)=h^\mathrm{T}A^\mathrm{T}Ah-\lambda(h^\mathrm{T}h-1)$$
Setting the partial derivative of $L(h,\lambda)$ with respect to $h$ to zero,
$$\frac{\partial L(h,\lambda)}{\partial h}=2A^\mathrm{T}Ah-2\lambda h=0$$
gives $A^\mathrm{T}Ah=\lambda h$.
So $h$ is an eigenvector of $A^\mathrm{T}A$, and the objective at such an $h$ is $h^\mathrm{T}A^\mathrm{T}Ah=\lambda$; the minimizer is therefore the eigenvector of the smallest eigenvalue, which is the last column of $V$ in the SVD $A=U\Sigma V^\mathrm{T}$.
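A minimal sketch of this solution step (assuming NumPy, whose `numpy.linalg.svd` returns $V^\mathrm{T}$ with singular values in descending order, so the minimizer is the last row of `vt`; `solve_min_norm` is a name I chose):

```python
import numpy as np

def solve_min_norm(A):
    """Minimize ||A h|| subject to ||h|| = 1.

    The minimizer is the right singular vector of A associated with the
    smallest singular value, i.e. the last row of vt in A = U S Vt.
    """
    _, _, vt = np.linalg.svd(A)
    return vt[-1]
```

Note the sign ambiguity: $h$ and $-h$ are equally valid minimizers, which is harmless since $H$ is defined only up to scale.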
Steps
Given the homogeneous coordinates of matched feature points in two images, $x_i=(x_i,y_i,1)$ and $x_i'=(x_i',y_i',1)$, compute $H$ such that $x_i'=Hx_i$:
- Build the matrix $A$
- Compute the SVD of $A$: $A=UDV^\mathrm{T}$
- Take the right singular vector $v$ of $A$ corresponding to the smallest singular value as $h$
- Reshape $h$ into the matrix $H=\begin{bmatrix}h_{11}&h_{12}&h_{13}\\h_{21}&h_{22}&h_{23}\\h_{31}&h_{32}&h_{33}\end{bmatrix}$
DLT is valid only for linear transformation models, i.e. models whose equations are linear in the unknown parameters.
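The steps above can be sketched end to end as follows (assuming NumPy; `dlt_homography` is a hypothetical name, and this is the plain, un-normalized variant):

```python
import numpy as np

def dlt_homography(src, dst):
    """Basic DLT: estimate H (up to scale) such that dst ~ H @ src.

    src, dst: (n, 2) arrays, n >= 4, with no 3 points collinear.
    """
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.asarray(rows, dtype=float)
    # h = right singular vector for the smallest singular value of A
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)
```

With exact correspondences generated by a known homography, the recovered $H$ matches the ground truth up to scale.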
Normalized DLT
The DLT algorithm as given is not invariant to similarity transformations.
Proof:
Let the result of the first homography estimation be $x_{dst}=Hx_{src}$.
Apply similarity transforms $T_{src}$ and $T_{dst}$ to the two images and re-run the estimation, obtaining $H'$:
$$T_{dst}x_{dst}=H'(T_{src}x_{src})\implies x_{dst}=T_{dst}^{-1}H'T_{src}x_{src}$$
In general $H\neq T_{dst}^{-1}H'T_{src}$: the DLT estimate changes under similarity transforms of the input, because the algebraic error it minimizes depends on the coordinate frame.
To address this, the normalized DLT algorithm was proposed. Its idea: for each image, compute a similarity transform $T$ consisting only of a translation and a scaling, such that after normalization the centroid of the points lies at $(0,0)$ and their average (root-mean-square) distance to the origin is $\sqrt2$:
$$\tilde{\boldsymbol{x}}=\begin{bmatrix}\tilde{x}_i\\\tilde{y}_i\\1\end{bmatrix}=\begin{bmatrix}s&0&t_x\\0&s&t_y\\0&0&1\end{bmatrix}\begin{bmatrix}x_i\\y_i\\1\end{bmatrix}$$
where the constraints give
$$\begin{cases}\dfrac{1}{N}\sum_{i=1}^N\tilde{x}_i=\dfrac{1}{N}\sum_{i=1}^N(sx_i+t_x)=0\\[4pt]\dfrac{1}{N}\sum_{i=1}^N\tilde{y}_i=\dfrac{1}{N}\sum_{i=1}^N(sy_i+t_y)=0\\[4pt]\dfrac{1}{N}\sum_{i=1}^N(\tilde{x}_i^2+\tilde{y}_i^2)=2\end{cases}\implies\begin{cases}t_x=-\dfrac{s}{N}\sum_{i=1}^Nx_i=-s\bar{x}\\[4pt]t_y=-\dfrac{s}{N}\sum_{i=1}^Ny_i=-s\bar{y}\\[4pt]s=\sqrt{\dfrac{2}{\frac{1}{N}\sum_{i=1}^N\left((x_i-\bar{x})^2+(y_i-\bar{y})^2\right)}}\end{cases}$$
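A sketch of computing this normalizing transform from the formulas above (assuming NumPy; `normalize_points` is a name I chose):

```python
import numpy as np

def normalize_points(pts):
    """Similarity transform T: centroid -> origin, mean squared distance -> 2.

    pts: (n, 2) array. Returns (T, pts_norm), where homogeneous
    x_tilde = T @ x and pts_norm are the normalized inhomogeneous points.
    """
    mean = pts.mean(axis=0)                            # (x_bar, y_bar)
    mean_sq = np.mean(((pts - mean) ** 2).sum(axis=1))
    s = np.sqrt(2.0 / mean_sq)                         # scale factor
    T = np.array([[s, 0.0, -s * mean[0]],              # t_x = -s * x_bar
                  [0.0, s, -s * mean[1]],              # t_y = -s * y_bar
                  [0.0, 0.0, 1.0]])
    return T, (pts - mean) * s
```

After this step, the normalized points have zero centroid and mean squared distance 2 to the origin, matching the constraints.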
Denormalization: with $\tilde{x}=Tx$ and $\tilde{x}_i'=T'x_i'$, run DLT on the normalized points to obtain $\widetilde{H}$ satisfying $\tilde{x}_i'=\widetilde{H}\tilde{x}_i$; then
$$H=T'^{-1}\widetilde{H}T,\qquad x_i'=Hx_i$$
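Putting it together, a minimal sketch of the normalized DLT pipeline (assuming NumPy; function names are mine):

```python
import numpy as np

def normalized_dlt(src, dst):
    """Normalized DLT: normalize both point sets, run DLT, denormalize.

    Implements H = T'^{-1} H_tilde T, with x_tilde = T x and
    x'_tilde = T' x'. src, dst: (n, 2) arrays, n >= 4.
    """
    def normalize(pts):
        mean = pts.mean(axis=0)
        s = np.sqrt(2.0 / np.mean(((pts - mean) ** 2).sum(axis=1)))
        T = np.array([[s, 0.0, -s * mean[0]],
                      [0.0, s, -s * mean[1]],
                      [0.0, 0.0, 1.0]])
        return T, (pts - mean) * s

    T_src, ns = normalize(src)
    T_dst, nd = normalize(dst)
    rows = []
    for (x, y), (xp, yp) in zip(ns, nd):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H_tilde = vt[-1].reshape(3, 3)
    H = np.linalg.inv(T_dst) @ H_tilde @ T_src   # denormalize
    return H / H[2, 2]                           # fix scale so H[2, 2] = 1
```

On exact data both the plain and the normalized variants recover the true $H$; the normalization mainly improves the conditioning of $A$ when the coordinates are large or noisy (e.g. pixel coordinates in the hundreds).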