
Another major feasible direction method, which generally achieves a faster convergence rate than the conditional gradient method, is the gradient projection method (originally proposed in [Gol64], [LeP65]), which has the form
$$x_{k+1}=P_X\left(x_k-\alpha_k \nabla f\left(x_k\right)\right), \tag{2.18}$$
where $\alpha_k>0$ is a stepsize and $P_X(\cdot)$ denotes projection on $X$ (the projection is well defined since $X$ is closed and convex; see Fig. 2.1.6).

To get a sense of the validity of the method, note that from the Projection Theorem (Prop. 1.1.9 in Appendix B), we have
$$\nabla f\left(x_k\right)^{\prime}\left(x_{k+1}-x_k\right) \leq 0,$$
and by the optimality condition for convex functions (cf. Prop. 1.1.8 in Appendix B), the inequality is strict unless $x_k$ is optimal. Thus $x_{k+1}-x_k$ defines a feasible descent direction at $x_k$, and based on this fact, we can show the descent property $f\left(x_{k+1}\right)<f\left(x_k\right)$ when $\alpha_k$ is sufficiently small.
The stepsize $\alpha_k$ is chosen similarly to the unconstrained gradient method, i.e., constant, diminishing, or through some kind of reduction rule to ensure cost function descent and guarantee convergence to the optimum; see the convergence analysis of Section 6.1, and [Ber99], Section 2.3, for a detailed discussion and references. Moreover, the convergence rate estimates given earlier for unconstrained steepest descent in the positive definite quadratic cost case [cf. Eq. (2.8)] and in the singular case [cf. Eqs. (2.9) and (2.10)] generalize to the gradient projection method under various stepsize rules (see Exercise 2.1 for the former case and [Dun81] for the latter case).
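The gradient projection iteration above can be sketched in a few lines. The following is a minimal illustration, not the book's implementation: the quadratic cost, the box constraint set, and the constant stepsize are all assumed toy choices.

```python
import numpy as np

# Minimize f(x) = (1/2) x'Qx - b'x over the box X = [0,1]^2 by gradient
# projection: x_{k+1} = P_X(x_k - alpha_k * grad f(x_k)).
# Q, b, and the stepsize are illustrative; any constant alpha < 2/L works,
# where L is the largest eigenvalue of Q (the gradient's Lipschitz constant).
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # positive definite
b = np.array([1.0, 1.0])

def grad_f(x):
    return Q @ x - b

def project_box(x, lo=0.0, hi=1.0):
    # Projection onto a box decomposes into per-component clipping.
    return np.clip(x, lo, hi)

alpha = 1.0 / np.linalg.norm(Q, 2)  # constant stepsize, below 2/L
x = np.zeros(2)
for _ in range(500):
    x = project_box(x - alpha * grad_f(x))
```

Here the unconstrained minimizer $Q^{-1}b$ happens to lie inside the box, so the iterates converge to it; with a binding constraint, the same loop converges to the constrained optimum instead.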

## Two-Metric Projection Methods

Despite its simplicity, the gradient projection method has some significant drawbacks:
(a) Its rate of convergence is similar to that of steepest descent, and is often slow. It is possible to overcome this potential drawback by a form of scaling. This can be accomplished with an iteration of the form
$$x_{k+1} \in \arg \min_{x \in X}\left\{\nabla f\left(x_k\right)^{\prime}\left(x-x_k\right)+\frac{1}{2 \alpha_k}\left(x-x_k\right)^{\prime} H_k\left(x-x_k\right)\right\}, \tag{2.19}$$
where $H_k$ is a positive definite symmetric matrix and $\alpha_k$ is a positive stepsize. When $H_k$ is the identity, it can be seen that this iteration gives the same iterate $x_{k+1}$ as the unscaled gradient projection iteration (2.18). When $H_k=\nabla^2 f\left(x_k\right)$ and $\alpha_k=1$, we obtain a constrained form of Newton's method (see nonlinear programming sources for analysis; e.g., [Ber99]).
(b) Depending on the nature of $X$, the projection operation may involve substantial overhead. The projection is simple when $H_k$ is the identity (or more generally, is diagonal), and $X$ consists of simple lower and/or upper bounds on the components of $x$ :
$$X=\left\{\left(x^1, \ldots, x^n\right) \mid \underline{b}^i \leq x^i \leq \bar{b}^i,\ i=1, \ldots, n\right\}. \tag{2.20}$$
This is an important special case where the use of gradient projection is convenient. Then the projection decomposes into $n$ scalar projections, one for each $i=1, \ldots, n$ : the $i$ th component of $x_{k+1}$ is obtained by projection of the $i$ th component of $x_k-\alpha_k \nabla f\left(x_k\right)$,
$$\left(x_k-\alpha_k \nabla f\left(x_k\right)\right)^i,$$
onto the interval of corresponding bounds $\left[\underline{b}^i, \bar{b}^i\right]$, and is very simple. However, for general nondiagonal scaling the overhead for solving the quadratic programming problem (2.19) is substantial even if $X$ has the simple bound structure of Eq. (2.20).
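This separability extends to diagonal $H_k$: each coordinate of the scaled subproblem is a one-dimensional quadratic over an interval, solved by a shift and a clip. The sketch below uses an assumed toy problem (a diagonal quadratic cost with illustrative data) to show the per-component update; it is not a general-purpose solver.

```python
import numpy as np

# Diagonally scaled gradient projection on a box. With H_k diagonal, the
# subproblem min_{x in X} g'(x - x_k) + (1/(2a)) (x - x_k)' H (x - x_k)
# separates: coordinate i is minimized at x_i - a * g_i / h_i, then
# clipped to its bounds [b_i, b̄_i].
def scaled_gp_step(x, grad, h_diag, alpha, lo, hi):
    return np.clip(x - alpha * grad / h_diag, lo, hi)

# Toy data (assumed): f(x) = (1/2) x'Qx - b'x with diagonal Q, box [0,1]^2.
Q = np.diag([4.0, 1.0])
b = np.array([4.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)

x = np.full(2, 0.5)
h = np.diag(Q)                      # scaling by the exact diagonal curvature
for _ in range(50):
    x = scaled_gp_step(x, Q @ x - b, h, alpha=1.0, lo=lo, hi=hi)
```

Because the scaling here matches the (diagonal) Hessian exactly and $\alpha_k=1$, the iteration reaches the constrained minimizer in a single step; with a nondiagonal $H_k$ no such closed-form clip exists, which is exactly the overhead noted above.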
