Nonlinear Optimization (II): The Gauss–Newton Method and Levenberg–Marquardt Iteration

Both the Gauss–Newton method and the Levenberg–Marquardt iteration are used to solve nonlinear least squares problems.


From Wikipedia

The Gauss–Newton algorithm is a method used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.


Description

Given m functions r = (r1, …, rm) of n variables β = (β1, …, βn), with m ≥ n, the Gauss–Newton algorithm iteratively finds the minimum of the sum of squares

S(\boldsymbol \beta)= \sum_{i=1}^m r_i(\boldsymbol \beta)^2.

Starting with an initial guess \boldsymbol \beta^{(0)} for the minimum, the method proceeds by the iterations

\boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \right)^{-1} \mathbf{ J_r} ^\mathsf{T} \mathbf{r}(\boldsymbol \beta^{(s)})

where, if r and β are column vectors, the entries of the Jacobian matrix are

(\mathbf{J_r})_{ij} = \frac{\partial r_i (\boldsymbol \beta^{(s)})}{\partial \beta_j},

and the symbol ^\mathsf{T} denotes the matrix transpose.

If m = n, the iteration simplifies to

\boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left( \mathbf{J_r} \right)^{-1} \mathbf{r}(\boldsymbol \beta^{(s)})

which is a direct generalization of Newton's method in one dimension.

In data fitting, where the goal is to find the parameters β such that a given model function y = f(x, β) best fits some data points (x_i, y_i), the functions r_i are the residuals

r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).
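For a concrete illustration (an example added here, not part of the original text), if the model is the exponential f(x, \boldsymbol\beta) = \beta_1 e^{\beta_2 x}, the residuals and the entries of the Jacobian \mathbf{J_r} are

r_i(\boldsymbol \beta) = y_i - \beta_1 e^{\beta_2 x_i}, \qquad (\mathbf{J_r})_{i1} = -e^{\beta_2 x_i}, \qquad (\mathbf{J_r})_{i2} = -\beta_1 x_i e^{\beta_2 x_i}.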

Then, the Gauss-Newton method can be expressed in terms of the Jacobian Jf of the function f as

\boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f} \right)^{-1} \mathbf{ J_f} ^\mathsf{T}\mathbf{r}(\boldsymbol \beta^{(s)}).
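A minimal Python/NumPy sketch of this data-fitting iteration, using the illustrative exponential model above (the model, starting point, and stopping rule are assumptions made for the example, not part of the algorithm statement):

```python
import numpy as np

def gauss_newton_fit(x, y, beta0, n_iter=50, tol=1e-10):
    """Gauss-Newton for the illustrative model f(x, beta) = beta1 * exp(beta2 * x)."""
    beta = np.array(beta0, dtype=float)
    for _ in range(n_iter):
        e = np.exp(beta[1] * x)
        r = y - beta[0] * e                          # residuals r_i = y_i - f(x_i, beta)
        Jf = np.column_stack((e, beta[0] * x * e))   # Jacobian of f w.r.t. beta (note J_r = -J_f)
        # Gauss-Newton update: beta <- beta + (Jf^T Jf)^{-1} Jf^T r
        delta = np.linalg.solve(Jf.T @ Jf, Jf.T @ r)
        beta += delta
        if np.linalg.norm(delta) < tol:
            break
    return beta

# Toy usage on synthetic data (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)
print(gauss_newton_fit(x, y, beta0=[1.0, -1.0]))
```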

Notes

The assumption m ≥ n in the algorithm statement is necessary, as otherwise the matrix \mathbf{J_r}^\mathsf{T}\mathbf{J_r} is not invertible (since rank(\mathbf{J_r}^\mathsf{T}\mathbf{J_r}) = rank(\mathbf{J_r})) and the normal equations (for the increment Δ in the "Derivation from Newton's method" section) cannot be solved (at least uniquely).

The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions ri. Using Taylor's theorem, we can write at every iteration:

\mathbf{r}(\boldsymbol \beta)\approx \mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta

with \Delta=\boldsymbol \beta-\boldsymbol \beta^s. The task of finding Δ minimizing the sum of squares of the right-hand side, i.e.,

\min_{\Delta}\|\mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta\|_2^2,

is a linear least squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.

The normal equations are n simultaneous linear equations in the unknown increments Δ. They may be solved in one step using a Cholesky decomposition or, better, the QR factorization of \mathbf{J_r}. For large systems, an iterative method such as the conjugate gradient method may be more efficient. If there is a linear dependence between columns of \mathbf{J_r}, the iterations will fail, as \mathbf{J_r}^\mathsf{T}\mathbf{J_r} becomes singular.
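A brief sketch of the two solution routes for this linear subproblem (NumPy only; function names are illustrative):

```python
import numpy as np

def gn_step_least_squares(Jr, r):
    """Solve min_delta ||r + Jr @ delta||_2^2 directly with an orthogonal
    factorization (np.linalg.lstsq), avoiding the explicit normal matrix."""
    delta, *_ = np.linalg.lstsq(Jr, -r, rcond=None)
    return delta

def gn_step_normal_equations(Jr, r):
    """Equivalent step from the normal equations (Jr^T Jr) delta = -Jr^T r.
    Cheaper to form, but squares the condition number of Jr and fails
    when the columns of Jr are linearly dependent."""
    return np.linalg.solve(Jr.T @ Jr, -Jr.T @ r)
```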


Derivation from Newton's method

In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.

The recurrence relation for Newton's method for minimizing a function S of parameters, \boldsymbol\beta, is

\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g \,

where g denotes the gradient vector of S and H denotes the Hessian matrix of S. Since S = \sum_{i=1}^m r_i^2, the gradient is given by

g_j=2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial \beta_j}.

Elements of the Hessian are calculated by differentiating the gradient elements, g_j, with respect to \beta_k

H_{jk}=2\sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}+r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).

The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by

H_{jk}\approx 2\sum_{i=1}^m J_{ij}J_{ik}

where J_{ij}=\frac{\partial r_i}{\partial \beta_j} are entries of the Jacobian Jr. The gradient and the approximate Hessian can be written in matrix notation as

\mathbf g=2\mathbf{J}_\mathbf{r}^\mathsf{T}\mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J}_\mathbf{r}^\mathsf{T}\mathbf{J_r}.\,

These expressions are substituted into the recurrence relation above to obtain the operational equations

\boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)}+\Delta;\quad \Delta = -\left( \mathbf{J_r}^\mathsf{T}\mathbf{J_r} \right)^{-1} \mathbf{J_r}^\mathsf{T}\mathbf{r}.
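As a quick numerical sanity check (illustrative random data, not from the original text): the factor of 2 cancels, so solving the approximated Newton system \mathbf{H}\Delta = -\mathbf{g} reproduces the Gauss–Newton step.

```python
import numpy as np

rng = np.random.default_rng(1)
Jr = rng.standard_normal((6, 3))     # Jacobian of the residuals (m=6, n=3)
r = rng.standard_normal(6)           # residual vector

g = 2.0 * Jr.T @ r                   # gradient of S = sum(r_i^2)
H = 2.0 * Jr.T @ Jr                  # Gauss-Newton approximation of the Hessian

delta_newton = np.linalg.solve(H, -g)                 # step from H delta = -g
delta_gn = -np.linalg.solve(Jr.T @ Jr, Jr.T @ r)      # Gauss-Newton step
assert np.allclose(delta_newton, delta_gn)
```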

Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation

\left|r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}\right|

that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected.

  1. The function values r_i are small in magnitude, at least around the minimum.
  2. The functions are only "mildly" nonlinear, so that \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} is relatively small in magnitude.


Improved version

With the Gauss–Newton method the sum of squares S may not decrease at every iteration. However, since Δ is a descent direction, unless S(\boldsymbol \beta^s) is a stationary point, it holds that S(\boldsymbol \beta^s+\alpha\Delta) < S(\boldsymbol \beta^s) for all sufficiently small \alpha>0. Thus, if divergence occurs, one solution is to employ a fraction \alpha of the increment vector Δ in the updating formula

\boldsymbol \beta^{s+1} = \boldsymbol \beta^s+\alpha\  \Delta.

In other words, the increment vector is too long, but it points "downhill", so going just part of the way will decrease the objective function S. An optimal value for \alpha can be found by using a line search algorithm, that is, the magnitude of \alpha is determined by finding the value that minimizes S, usually using a direct search method in the interval 0<\alpha<1.
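A minimal sketch of this damped update with a simple backtracking search (the halving rule and cut-off value are illustrative assumptions):

```python
import numpy as np

def damped_gn_update(beta, delta, S, shrink=0.5, min_alpha=1e-8):
    """Take a fraction alpha of the Gauss-Newton increment delta, shrinking alpha
    until the sum of squares S(beta + alpha * delta) actually decreases."""
    S0 = S(beta)
    alpha = 1.0
    while alpha > min_alpha:
        trial = beta + alpha * delta
        if S(trial) < S0:       # any decrease is enough: delta is a descent direction
            return trial
        alpha *= shrink         # the full step overshot; try a shorter one
    return beta                 # no improvement found: beta is (numerically) near-stationary
```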

In cases where the direction of the shift vector is such that the optimal fraction, \alpha, is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, also known as the "trust region method".[1] The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent,

\left(\mathbf{J^TJ+\lambda D}\right)\Delta=-\mathbf{J}^T \mathbf{r},

where D is a positive diagonal matrix. Note that when D is the identity matrix and \lambda\to+\infty, then \lambda\Delta\to -\mathbf{J}^T \mathbf{r}; therefore the direction of Δ approaches the direction of the negative gradient -\mathbf{J}^T \mathbf{r}.
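To spell out that limit (a short derivation added here for clarity): with D = I the modified normal equations give

\lambda\Delta = -\left(\tfrac{1}{\lambda}\mathbf{J}^\mathsf{T}\mathbf{J} + \mathbf{I}\right)^{-1}\mathbf{J}^\mathsf{T}\mathbf{r} \;\longrightarrow\; -\mathbf{J}^\mathsf{T}\mathbf{r} \quad \text{as } \lambda\to+\infty,

so the step length shrinks like 1/\lambda while its direction tends to that of the negative gradient.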

The so-called Marquardt parameter, \lambda, may also be optimized by a line search, but this is inefficient, as the shift vector must be re-calculated every time \lambda is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in S; then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, at which point the Marquardt parameter can be set to zero and the minimization of S becomes a standard Gauss–Newton minimization.

The (non-negative) damping factor, \lambda, is adjusted at each iteration. If the reduction of S is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss–Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, \lambda can be increased, giving a step closer to the gradient-descent direction. Note that the gradient of S with respect to δ equals -2(\mathbf{J}^{T} [\mathbf{y} - \mathbf{f}(\boldsymbol \beta) ] )^T. Therefore, for large values of \lambda, the step will be taken approximately in the direction of the gradient. If either the length of the calculated step, δ, or the reduction of the sum of squares from the latest parameter vector, β + δ, falls below predefined limits, iteration stops and the last parameter vector, β, is considered to be the solution.

Levenberg's algorithm has the disadvantage that if the damping factor \lambda is large, inverting \mathbf{J}^T\mathbf{J} + \lambda\mathbf{I} amounts to little more than scaling the gradient, so the curvature information in \mathbf{J}^T\mathbf{J} is not used at all. Marquardt provided the insight that we can scale each component of the gradient according to the curvature, so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in the direction of small gradient. Therefore, Marquardt replaced the identity matrix I with the diagonal matrix consisting of the diagonal elements of \mathbf{J}^T\mathbf{J}, resulting in the Levenberg–Marquardt algorithm:

\left(\mathbf{J}^\mathsf{T} \mathbf{J} + \lambda\, \operatorname{diag}(\mathbf{J}^\mathsf{T} \mathbf{J})\right) \boldsymbol\delta = \mathbf{J}^\mathsf{T} \left[\mathbf{y} - \mathbf{f}(\boldsymbol \beta)\right].
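A compact sketch of one way to drive this update (the λ schedule with factors of 10, the stopping rule, and the function names are illustrative assumptions, not a canonical implementation):

```python
import numpy as np

def levenberg_marquardt(f, jac, y, beta0, lam=1e-3, n_iter=100, tol=1e-10):
    """Levenberg-Marquardt with Marquardt's diag(J^T J) scaling.
    f(beta) returns the model values f(x_i, beta); jac(beta) returns their Jacobian J."""
    beta = np.array(beta0, dtype=float)
    for _ in range(n_iter):
        r = y - f(beta)                        # residuals y - f(beta)
        J = jac(beta)                          # m x n Jacobian of the model
        A = J.T @ J
        # Solve (J^T J + lam * diag(J^T J)) delta = J^T (y - f(beta))
        delta = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
        if np.sum((y - f(beta + delta)) ** 2) < np.sum(r ** 2):
            beta = beta + delta                # accept the step, move toward Gauss-Newton
            lam *= 0.1
            if np.linalg.norm(delta) < tol:
                break
        else:
            lam *= 10.0                        # reject the step, move toward gradient descent
    return beta
```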

Related algorithms

In a quasi-Newton method, such as those due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (the BFGS method), an estimate of the full Hessian \frac{\partial^2 S}{\partial \beta_j \partial\beta_k} is built up numerically using only first derivatives \frac{\partial r_i}{\partial\beta_j}, so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems.

Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.

