Posted on Categories: Convex optimization, Math homework help

## avatest™ Helps You Pass Your Exams

avatest™'s subject-matter experts have helped students pass thousands of exams. We guarantee fast, on-time completion of exams of every length and type, including in-class, take-home, online, and proctored exams. Our writers compile a wide range of resources, teach from your school's materials, create practice exams, and provide worked examples of every problem type, ensuring a pass rate of 85% or higher on the real exam. Whether you have an upcoming weekly quiz, quarterly, midterm, or final exam, we can help!

• Delivery in as little as 12 hours

• 200+ native English tutors

• Full refund for scores below 70

## Convex Optimization | Continuity of Gradient and Directional Derivative

The following exercise provides a basic continuity property of directional derivatives and gradients of convex functions. Let $f: \Re^n \mapsto \Re$ be a convex function, and let $\left\{f_k\right\}$ be a sequence of convex functions $f_k: \Re^n \mapsto \Re$ with the property that $\lim_{k \rightarrow \infty} f_k\left(x_k\right)=f(x)$ for every $x \in \Re^n$ and every sequence $\left\{x_k\right\}$ that converges to $x$. Show that for any $x \in \Re^n$ and $y \in \Re^n$, and any sequences $\left\{x_k\right\}$ and $\left\{y_k\right\}$ converging to $x$ and $y$, respectively, we have $$\limsup_{k \rightarrow \infty} f_k^{\prime}\left(x_k ; y_k\right) \leq f^{\prime}(x ; y).$$
Furthermore, if $f$ is differentiable over $\Re^n$, then it is continuously differentiable over $\Re^n$. Solution: From the definition of directional derivative, it follows that for any $\epsilon>0$, there exists an $\alpha>0$ such that
$$\frac{f(x+\alpha y)-f(x)}{\alpha}<f^{\prime}(x ; y)+\epsilon .$$
Hence, using also the equation
$$f^{\prime}(x ; y)=\inf_{\alpha>0} \frac{f(x+\alpha y)-f(x)}{\alpha},$$ we have for all sufficiently large $k$, $$f_k^{\prime}\left(x_k ; y_k\right) \leq \frac{f_k\left(x_k+\alpha y_k\right)-f_k\left(x_k\right)}{\alpha}.$$ Taking the limit superior and using the convergence $f_k\left(x_k+\alpha y_k\right) \rightarrow f(x+\alpha y)$ and $f_k\left(x_k\right) \rightarrow f(x)$, we obtain $$\limsup_{k \rightarrow \infty} f_k^{\prime}\left(x_k ; y_k\right) \leq \frac{f(x+\alpha y)-f(x)}{\alpha}<f^{\prime}(x ; y)+\epsilon.$$ Since $\epsilon>0$ is arbitrary, the result follows.
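The infimum characterization of the directional derivative used above can be checked numerically. The following is a minimal sketch (the choice $f(x)=x^2$ and the sample points are illustrative): for a convex $f$, the difference quotients are nondecreasing in $\alpha$, so they decrease monotonically to $f^{\prime}(x ; y)$ as $\alpha \downarrow 0$.

```python
# For a convex f, (f(x + a*y) - f(x)) / a is nondecreasing in a > 0,
# so f'(x; y) equals the infimum of the quotients over a > 0.
# Toy choice: f(x) = x^2, for which f'(1; 1) = 2.
def f(x):
    return x * x

def quotient(x, y, a):
    return (f(x + a * y) - f(x)) / a

x, y = 1.0, 1.0
alphas = [2.0 ** (-k) for k in range(12)]        # decreasing toward 0
quotients = [quotient(x, y, a) for a in alphas]  # here equal to 2 + a

# nonincreasing as a decreases, with limit f'(1; 1) = 2
assert all(q2 <= q1 + 1e-12 for q1, q2 in zip(quotients, quotients[1:]))
assert abs(quotients[-1] - 2.0) < 1e-2
```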

## Convex Optimization | Convergence of Subgradient Method with Diminishing Stepsize Under Weaker Conditions

This exercise shows an enhanced version of Prop. 3.2.6, whereby we assume that for some scalar $c$, we have
$$c^2\left(1+\min_{x^* \in X^*}\left\|x_k-x^*\right\|^2\right) \geq\left\|g_k\right\|^2, \quad \forall k,$$
in place of the stronger Assumption 3.2.1. Assume also that $X^*$ is nonempty and that $$\sum_{k=0}^{\infty} \alpha_k=\infty, \quad \sum_{k=0}^{\infty} \alpha_k^2<\infty .$$ Show that $\left\{x_k\right\}$ converges to some optimal solution. Abbreviated proof: Similar to the proof of Prop. 3.2.6 [cf. Eq. (3.18)], we apply Prop. 3.2.2(a) with $y$ equal to any $x^* \in X^*$, and then use the assumption (3.41) to obtain $$\left\|x_{k+1}-x^*\right\|^2 \leq\left(1+\alpha_k^2 c^2\right)\left\|x_k-x^*\right\|^2-2 \alpha_k\left(f\left(x_k\right)-f^*\right)+\alpha_k^2 c^2 .$$
In view of the assumption (3.42), the convergence result of Prop. A.4.4 of Appendix A applies, and shows that $\left\{x_k\right\}$ is bounded and that $\liminf _{k \rightarrow \infty} f\left(x_k\right)= f^*$. From this point the proof follows that of Prop. 3.2.6.
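The diminishing-stepsize condition above can be illustrated concretely. The sketch below (the test function $f(x)=|x-3|$ and the stepsize $\alpha_k = 1/(k+1)$ are illustrative choices satisfying $\sum \alpha_k = \infty$, $\sum \alpha_k^2 < \infty$) runs the subgradient iteration $x_{k+1} = x_k - \alpha_k g_k$:

```python
# Subgradient method with diminishing stepsize alpha_k = 1/(k+1),
# applied to f(x) = |x - 3|.  A subgradient is sign(x - 3)
# (any value in [-1, 1] is valid at the kink x = 3).
def subgrad(x):
    if x > 3:
        return 1.0
    if x < 3:
        return -1.0
    return 0.0  # a valid subgradient at the nondifferentiable point

x = 0.0
for k in range(5000):
    alpha = 1.0 / (k + 1)
    x = x - alpha * subgrad(x)

# The iterates oscillate around x* = 3 with shrinking amplitude ~ alpha_k.
assert abs(x - 3.0) < 1e-3
```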

## Convex Optimization | Continuity of Gradient and Directional Derivative

$$D(x)=\{\alpha(\bar{x}-x) \mid \bar{x} \in X,\ \alpha>0\} .$$

$$f^{\prime}(x ; d) \geq 0, \quad \forall d \in D(x) .$$

$$g^{\prime} d \geq 0, \quad \forall d \in D(x)$$

$$g^{\prime} d \geq 0, \quad \forall d \in \overline{D(x)}$$

$$\max_{g \in \partial f(x)}\ \min_{\|d\| \leq 1,\, d \in \overline{D(x)}} g^{\prime} d \geq 0 .$$

## 数学代写|凸优化代写Convex Optimization代考|Convergence of Subgradient Method with Diminishing Stepsize Under Weaker Conditions

$$f(x)= \begin{cases}h(x) & \text { if } x \in X, \\ \infty & \text { if } x \notin X,\end{cases}$$

(a) Use Sections 3.1.3 and 3.1.4 to show that the subdifferential of this function is nonempty for all $x \in X$ and has the form
$$\partial f(x)=\partial h(x)+N_X(x), \quad \forall x \in X,$$

(b) if $\alpha_k>0$ for all $k$, and
$$\alpha_k \rightarrow 0, \quad \sum_{k=0}^{\infty} \alpha_k=\infty$$


## Convex Optimization | Convergence Rate of Steepest Descent and Gradient Projection for a Quadratic Cost Function

Let $f$ be the quadratic cost function,
$$f(x)=\frac{1}{2} x^{\prime} Q x-b^{\prime} x$$
where $Q$ is a symmetric positive definite matrix, and let $m$ and $M$ be the minimum and maximum eigenvalues of $Q$, respectively. Consider the minimization of $f$ over a closed convex set $X$ and the gradient projection mapping
$$G(x)=P_X(x-\alpha \nabla f(x))$$
with constant stepsize $\alpha<2 / M$.
(a) Show that $G$ is a contraction mapping and we have
$$\|G(x)-G(y)\| \leq \max \left\{|1-\alpha m|,|1-\alpha M|\right\}\|x-y\|, \quad \forall x, y \in \Re^n,$$
and its unique fixed point is the unique minimum $x^*$ of $f$ over $X$. Solution: First note the nonexpansive property of the projection
$$\left\|P_X(x)-P_X(y)\right\| \leq\|x-y\|, \quad \forall x, y \in \Re^n$$
(use a Euclidean geometric argument, or see Section 3.2 for a proof). Use this property and the gradient formula $\nabla f(x)=Q x-b$ to write
\begin{aligned} \|G(x)-G(y)\| & =\left\|P_X(x-\alpha \nabla f(x))-P_X(y-\alpha \nabla f(y))\right\| \\ & \leq\|(x-\alpha \nabla f(x))-(y-\alpha \nabla f(y))\| \\ & =\|(I-\alpha Q)(x-y)\| \\ & \leq \max \left\{|1-\alpha m|,|1-\alpha M|\right\}\|x-y\|, \end{aligned}
where $m$ and $M$ are the minimum and maximum eigenvalues of $Q$. Clearly $x^*$ is a fixed point of $G$ if and only if $x^*=P_X\left(x^*-\alpha \nabla f\left(x^*\right)\right)$, which by the projection theorem, is true if and only if the necessary and sufficient condition for optimality $\nabla f\left(x^*\right)^{\prime}\left(x-x^*\right) \geq 0$ for all $x \in X$ is satisfied. Note: In a generalization of this convergence rate estimate to the case of a nonquadratic strongly convex differentiable function $f$, the maximum eigenvalue $M$ is replaced by the Lipschitz constant of $\nabla f$, and the minimum eigenvalue $m$ is replaced by the modulus of strong convexity of $f$; see Section 6.1.
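The contraction property and the fixed-point characterization can be verified on a small instance. The sketch below uses an illustrative diagonal $Q=\operatorname{diag}(1,4)$ (so $m=1$, $M=4$), $b=(2,6)$, the box $X=[0,1]^2$ (whose projection is coordinatewise clipping), and $\alpha=0.4<2/M$, giving contraction modulus $\max\{|1-0.4\cdot 1|,|1-0.4\cdot 4|\}=0.6$:

```python
# Gradient projection G(x) = P_X(x - alpha * grad f(x)) for
# f(x) = (1/2) x'Qx - b'x with diagonal Q, over the box X = [0,1]^2.
Q = (1.0, 4.0)       # eigenvalues m = 1, M = 4
b = (2.0, 6.0)
alpha = 0.4          # alpha < 2/M = 0.5; modulus = max(|1-0.4|, |1-1.6|) = 0.6

def grad(x):
    return tuple(Q[i] * x[i] - b[i] for i in range(2))

def project(x):      # projection onto [0,1]^2 is coordinatewise clipping
    return tuple(min(1.0, max(0.0, xi)) for xi in x)

def G(x):
    g = grad(x)
    return project(tuple(x[i] - alpha * g[i] for i in range(2)))

def dist(x, y):
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5

# Contraction check on a sample pair of points.
u, v = (0.0, 0.0), (1.0, 1.0)
assert dist(G(u), G(v)) <= 0.6 * dist(u, v) + 1e-12

# Iterating G converges to the unique fixed point: the unconstrained
# minimum Q^{-1} b = (2, 1.5) clipped to the box, i.e. x* = (1, 1).
x = (0.0, 0.0)
for _ in range(200):
    x = G(x)
assert dist(x, (1.0, 1.0)) < 1e-8
```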

## Convex Optimization | Descent Inequality

This exercise deals with an inequality that is fundamental for the convergence analysis of gradient methods. Let $X$ be a convex set, and let $f: \Re^n \mapsto \Re$ be a differentiable function such that for some constant $L>0$, we have
$$\|\nabla f(x)-\nabla f(y)\| \leq L\|x-y\|, \quad \forall x, y \in X .$$
Show that
$$f(y) \leq f(x)+\nabla f(x)^{\prime}(y-x)+\frac{L}{2}\|y-x\|^2, \quad \forall x, y \in X .$$
Proof: Let $t$ be a scalar parameter and let $g(t)=f(x+t(y-x))$. The chain rule yields $(d g / d t)(t)=\nabla f(x+t(y-x))^{\prime}(y-x)$. Thus, we have
\begin{aligned} f(y) & -f(x)=g(1)-g(0) \\ & =\int_0^1 \frac{d g}{d t}(t)\, d t \\ & =\int_0^1(y-x)^{\prime} \nabla f(x+t(y-x))\, d t \\ & \leq \int_0^1(y-x)^{\prime} \nabla f(x)\, d t+\left|\int_0^1(y-x)^{\prime}(\nabla f(x+t(y-x))-\nabla f(x))\, d t\right| \\ & \leq \int_0^1(y-x)^{\prime} \nabla f(x)\, d t+\int_0^1\|y-x\| \cdot\|\nabla f(x+t(y-x))-\nabla f(x)\|\, d t \\ & \leq(y-x)^{\prime} \nabla f(x)+\|y-x\| \int_0^1 L t\|y-x\|\, d t \\ & =(y-x)^{\prime} \nabla f(x)+\frac{L}{2}\|y-x\|^2 . \end{aligned}
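The descent inequality can be spot-checked numerically. A minimal sketch, using the illustrative choice $f(x)=x^4$ on $X=[-1,1]$, where $f^{\prime}(x)=4x^3$ is Lipschitz with constant $L=12$ (since $|f^{\prime\prime}(x)|=12x^2\leq 12$ on $X$):

```python
# Spot-check of f(y) <= f(x) + f'(x)(y - x) + (L/2)(y - x)^2 on X = [-1, 1]
# for f(x) = x^4, whose gradient 4x^3 is L-Lipschitz on X with L = 12.
L = 12.0
f = lambda x: x ** 4
df = lambda x: 4 * x ** 3

points = [i / 10.0 for i in range(-10, 11)]   # grid over X = [-1, 1]
ok = all(
    f(y) <= f(x) + df(x) * (y - x) + 0.5 * L * (y - x) ** 2 + 1e-12
    for x in points for y in points
)
assert ok
```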

(Convergence of Steepest Descent with Constant Stepsize)
Let $f: \Re^n \mapsto \Re$ be a differentiable function such that for some constant $L>0$, we have
$$\|\nabla f(x)-\nabla f(y)\| \leq L\|x-y\|, \quad \forall x, y \in \Re^n .$$
Consider the sequence $\left\{x_k\right\}$ generated by the steepest descent iteration
$$x_{k+1}=x_k-\alpha \nabla f\left(x_k\right)$$
where $0<\alpha<\frac{2}{L}$. Show that if $\left\{x_k\right\}$ has a limit point, then $\nabla f\left(x_k\right) \rightarrow 0$, and every limit point $\bar{x}$ of $\left\{x_k\right\}$ satisfies $\nabla f(\bar{x})=0$. Proof: We use the descent inequality (2.70) to show that the cost function is reduced at each iteration according to
\begin{aligned} f\left(x_{k+1}\right) & =f\left(x_k-\alpha \nabla f\left(x_k\right)\right) \\ & \leq f\left(x_k\right)+\nabla f\left(x_k\right)^{\prime}\left(-\alpha \nabla f\left(x_k\right)\right)+\frac{\alpha^2 L}{2}\left\|\nabla f\left(x_k\right)\right\|^2 \\ & =f\left(x_k\right)-\alpha\left(1-\frac{\alpha L}{2}\right)\left\|\nabla f\left(x_k\right)\right\|^2 . \end{aligned}
Thus if there exists a limit point $\bar{x}$ of $\left\{x_k\right\}$, the monotonically nonincreasing sequence $\left\{f\left(x_k\right)\right\}$ converges to $f(\bar{x})$, and by summing the preceding inequality over $k$ we obtain $\nabla f\left(x_k\right) \rightarrow 0$. This implies that $\nabla f(\bar{x})=0$, since $\nabla f(\cdot)$ is continuous by Eq. (2.71).
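The mechanism of the proof is easy to observe numerically. A minimal sketch with the illustrative choice $f(x)=x^2$ (so $\nabla f(x)=2x$ is Lipschitz with $L=2$) and a stepsize $\alpha \in (0, 2/L)$:

```python
# Steepest descent with constant stepsize alpha in (0, 2/L) on f(x) = x^2.
# The descent inequality gives
# f(x_{k+1}) <= f(x_k) - alpha*(1 - alpha*L/2)*|grad f(x_k)|^2,
# which forces grad f(x_k) -> 0.
L = 2.0
alpha = 0.8                      # 0 < alpha < 2/L = 1
grad = lambda x: 2 * x

x = 5.0
grads = []
for _ in range(100):
    g = grad(x)
    grads.append(abs(g))
    x = x - alpha * g            # here x_{k+1} = -0.6 * x_k

assert grads[-1] < 1e-8                                # gradient norm -> 0
assert all(b <= a for a, b in zip(grads, grads[1:]))   # monotone in this example
```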



## MATLAB Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: mathematical and computational algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface construction. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This lets you solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or Fortran. The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to the matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in matrix computation software. MATLAB has evolved over many years with input from many users. In university settings, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Important to most MATLAB users, toolboxes let you learn and apply specialized techniques. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas with available toolboxes include signal processing, control systems, neural networks, fuzzy logic, wavelets, and simulation.


## Convex Optimization | APPROXIMATION METHODS

Approximation methods for minimizing a convex function $f: \Re^n \mapsto \Re$ over a convex set $X$ are based on replacing $f$ and $X$ with approximations $F_k$ and $X_k$, respectively, at each iteration $k$, and finding
$$x_{k+1} \in \arg \min_{x \in X_k} F_k(x) .$$ At the next iteration, $F_{k+1}$ and $X_{k+1}$ are generated by refining the approximation, based on the new point $x_{k+1}$, and possibly on the earlier points $x_k, \ldots, x_0$. Of course such a method makes sense only if the approximating problems are simpler than the original. There is a great variety of approximation methods, with different aims, suitable for different circumstances. The present section provides a brief overview and orientation, while Chapters 4-6 provide a detailed analysis.
### 2.2.1 Polyhedral Approximation
In polyhedral approximation methods, $F_k$ is a polyhedral function that approximates $f$ and $X_k$ is a polyhedral set that approximates $X$. The idea is that the approximate problem is polyhedral, so it may be easier to solve than the original problem. The methods include mechanisms for progressively refining the approximation, thereby obtaining a solution of the original problem in the limit. In some cases, only one of $f$ and $X$ is polyhedrally approximated.

In Chapter 4, we will discuss the two main approaches for polyhedral approximation: outer linearization (also called the cutting plane approach) and inner linearization (also called the simplicial decomposition approach). As the name suggests, outer linearization approximates epi $(f)$ and $X$ from without, $F_k(x) \leq f(x)$ for all $x$, and $X_k \supset X$, using intersections of finite numbers of halfspaces. By contrast, inner linearization approximates epi $(f)$ and $X$ from within, $F_k(x) \geq f(x)$ for all $x$, and $X_k \subset X$, using convex hulls of finite numbers of halflines or points. Figure 2.2.1 illustrates outer and inner linearization of convex sets and functions.

We will show in Sections 4.3 and 4.4 that these two approaches are intimately connected by conjugacy and duality: the dual of an outer approximating problem is an inner approximating problem involving the conjugates of $F_k$ and the indicator function of $X_k$, and reversely. In fact, using this duality, outer and inner approximations may be combined in the same algorithm.
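The outer linearization (cutting plane) idea can be sketched in a few lines. The toy Python script below is illustrative only: it minimizes $f(x)=|x-2|$ over $X=[-5,5]$, builds the polyhedral model $F_k(x)=\max_i\{f(x_i)+g_i(x-x_i)\}$ from subgradient cuts, and minimizes $F_k$ by a grid scan (a stand-in for the linear program solved in a real implementation):

```python
# Cutting-plane (outer linearization) iteration for f(x) = |x - 2| on [-5, 5].
# Each cut f(x_i) + g_i (x - x_i) underestimates f, so F_k <= f everywhere.
f = lambda x: abs(x - 2.0)
subgrad = lambda x: 1.0 if x >= 2.0 else -1.0   # a subgradient of |x - 2|

grid = [-5 + i * 0.01 for i in range(1001)]     # discretized X (stand-in for an LP)
cuts = []                                       # list of (x_i, f(x_i), g_i)
x = -5.0
for _ in range(30):
    cuts.append((x, f(x), subgrad(x)))
    model = lambda z: max(fx + g * (z - xc) for xc, fx, g in cuts)  # F_k
    x = min(grid, key=model)                    # x_{k+1} in argmin_{x in X} F_k(x)

assert f(x) <= 0.02    # near the minimum x* = 2
```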

## Convex Optimization | 2.2.2 Penalty, Augmented Lagrangian, and Interior Point Methods

Generally in optimization problems, the presence of constraints complicates the algorithmic solution, and limits the range of available algorithms. For this reason it is natural to try to eliminate constraints by using approximation of the corresponding indicator functions. In particular, we may replace constraints by penalty functions that prescribe a high cost for their violation. We discussed in Section 1.5 such an approximation scheme, which uses exact nondifferentiable penalty functions. In this section we focus on differentiable penalty functions that are not necessarily exact.

To illustrate this approach, let us consider the equality constrained problem
$$\begin{array}{ll} \operatorname{minimize} & f(x) \\ \text { subject to } & x \in X, \quad a_i^{\prime} x=b_i, \quad i=1, \ldots, m . \end{array}$$
We replace this problem with a penalized version
\begin{aligned} & \operatorname{minimize}\ f(x)+c_k \sum_{i=1}^m P\left(a_i^{\prime} x-b_i\right) \\ & \text { subject to } x \in X, \end{aligned}
where $P(\cdot)$ is a scalar penalty function satisfying
$$P(u)=0 \quad \text { if } \quad u=0,$$
and
$$P(u)>0 \quad \text { if } u \neq 0 .$$
The scalar $c_k$ is a positive penalty parameter, so by increasing $c_k$ to $\infty$, the solution $x_k$ of the penalized problem tends to decrease the constraint violation, thereby providing an increasingly accurate approximation to the original problem. An important practical point here is that $c_k$ should be increased gradually, using the optimal solution of each approximating problem to start the algorithm that solves the next approximating problem. Otherwise serious numerical problems occur due to “ill-conditioning.”
A common choice for $P$ is the quadratic penalty function
$$P(u)=\frac{1}{2} u^2$$
in which case the penalized problem (2.51) takes the form
$$\begin{array}{ll} \operatorname{minimize} & f(x)+\frac{c_k}{2}\|A x-b\|^2 \\ \text { subject to } & x \in X, \end{array}$$
where $A x=b$ is a vector representation of the system of equations $a_i^{\prime} x=b_i$, $i=1, \ldots, m$.
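A toy illustration of the quadratic penalty scheme (the one-dimensional problem below is chosen so the penalized problem has a closed-form solution): minimize $f(x)=x^2$ subject to $x=1$. The penalized problem $\min_x\, x^2+\frac{c}{2}(x-1)^2$ has solution $x(c)=c/(2+c)$, which approaches the constrained optimum $x^*=1$ as $c \rightarrow \infty$, with the constraint violation shrinking monotonically:

```python
# Quadratic penalty sketch: minimize x^2 subject to x = 1.
# Setting the derivative 2x + c(x - 1) to zero gives x(c) = c / (2 + c).
def penalized_min(c):
    return c / (2.0 + c)

cs = [10.0 ** k for k in range(6)]     # gradually increased penalty parameters
xs = [penalized_min(c) for c in cs]

violations = [abs(x - 1.0) for x in xs]
assert all(v2 < v1 for v1, v2 in zip(violations, violations[1:]))  # violation shrinks
assert violations[-1] < 1e-4                                       # near-feasible
```

In practice, where no closed form exists, each $x(c_k)$ would be computed iteratively, warm-started from $x(c_{k-1})$, exactly as the text recommends to avoid ill-conditioning.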




Another variant of incremental gradient is the incremental aggregated gradient method, which has the form
$$x_{k+1}=P_X\left(x_k-\alpha_k \sum_{\ell=0}^{m-1} \nabla f_{i_{k-\ell}}\left(x_{k-\ell}\right)\right),$$
where $f_{i_k}$ is the new component function selected for iteration $k$. Here, the component indexes $i_k$ may either be selected in a cyclic order [$i_k=(k \bmod m)+1$], or according to some randomization scheme, consistently with Eq. (2.31). Also for $k<m$, the summation should go up to $\ell=k$, and $\alpha$ should be replaced by a correspondingly larger value, such as $\alpha_k=m \alpha /(k+1)$. This method, first proposed in [BHG08], computes the gradient incrementally, one component per iteration, but in place of the single component gradient, it uses an approximation to the total cost gradient $\nabla f\left(x_k\right)$: the sum of the component gradients computed in the past $m$ iterations.

There is analytical and experimental evidence that by aggregating the component gradients one may attain a faster asymptotic convergence rate, by ameliorating the error of approximating the full gradient with component gradients; see the original paper [BHG08], which provides an analysis for quadratic problems; the paper [SLB13], which provides a more general convergence and convergence rate analysis, together with extensive computational results; and the papers [Mai13], [Mai14], [DCD14], which describe related methods. The expectation of faster convergence should be tempered, however, because in order for the effect of aggregating the component gradients to fully manifest itself, at least one pass (and possibly quite a few more) through the components must be made, which may be too long if $m$ is very large.
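The aggregated gradient iteration can be sketched concretely. The example below is illustrative (the quadratic components $f_i(x)=\frac{1}{2}(x-a_i)^2$, the data $a_i$, and the constant stepsize are my choices): each iteration refreshes the stored gradient of one component in cyclic order and steps along the sum of all stored gradients, which approximates $\nabla f(x_k)$:

```python
# Incremental aggregated gradient (cf. (2.35)) for
# f(x) = sum_i (1/2)(x - a_i)^2, unconstrained (X = R), cyclic order.
a = [1.0, 2.0, 3.0]
m = len(a)
grad_i = lambda i, x: x - a[i]              # gradient of component f_i

x = 0.0
memory = [grad_i(i, x) for i in range(m)]   # stored component gradients
alpha = 0.1
for k in range(500):
    i = k % m                               # cyclic index selection
    memory[i] = grad_i(i, x)                # refresh one component gradient
    x = x - alpha * sum(memory)             # step along the aggregated gradient

assert abs(x - 2.0) < 1e-6                  # minimizer is the mean of the a_i
```

Note the memory requirement: one stored gradient per component, which is exactly the storage issue mentioned below for large $m$.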

## Convex Optimization | Incremental Gradient Method with Momentum

There is an incremental version of the gradient method with momentum, or heavy ball method, discussed in Section 2.1.1 [cf. Eq. (2.12)]. It is given by
$$x_{k+1}=x_k-\alpha_k \nabla f_{i_k}\left(x_k\right)+\beta_k\left(x_k-x_{k-1}\right),$$
where $f_{i_k}$ is the component function selected for iteration $k, \beta_k$ is a scalar in $[0,1)$, and we define $x_{-1}=x_0$; see e.g., [MaS94], [Tse98]. As noted earlier, special nonincremental methods with similarities to the one above have optimal iteration complexity properties under certain conditions; cf. Section 6.2. However, there have been no proposals of incremental versions of these optimal complexity methods.

The heavy ball method (2.36) is related to the aggregated gradient method (2.35) when $\beta_k \approx 1$. In particular, when $\alpha_k \equiv \alpha$ and $\beta_k \equiv \beta$, the sequence generated by Eq. (2.36) satisfies
$$x_{k+1}=x_k-\alpha \sum_{\ell=0}^k \beta^{\ell} \nabla f_{i_{k-\ell}}\left(x_{k-\ell}\right)$$

[both iterations (2.35) and (2.37) involve different types of diminishing dependence on past gradient components]. Thus, the heavy ball iteration (2.36) provides an approximate implementation of the incremental aggregated gradient method (2.35), while it does not have the memory storage issue of the latter.
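A minimal sketch of the incremental momentum iteration (2.36), on the same illustrative problem $f(x)=\sum_i \frac{1}{2}(x-a_i)^2$ used above, with cyclic component selection, a diminishing stepsize, and an illustrative $\beta$:

```python
# Incremental gradient with momentum (cf. (2.36)):
# x_{k+1} = x_k - alpha_k * grad f_{i_k}(x_k) + beta * (x_k - x_{k-1}),
# with x_{-1} = x_0, for f(x) = sum_i (1/2)(x - a_i)^2.
a = [1.0, 2.0, 3.0]
grad_i = lambda i, x: x - a[i]

beta = 0.5
x_prev, x = 0.0, 0.0                     # the convention x_{-1} = x_0
for k in range(5000):
    alpha = 1.0 / (k + 1)                # diminishing stepsize
    i = k % len(a)                       # cyclic component selection
    x_next = x - alpha * grad_i(i, x) + beta * (x - x_prev)
    x_prev, x = x, x_next

assert abs(x - 2.0) < 0.05               # approaches the minimizer, the mean of a_i
```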

A further way to intertwine the ideas of the aggregated gradient method (2.35) and the heavy ball method (2.36) for the unconstrained case $\left(X=\Re^n\right)$ is to form an infinite sequence of components
$$f_1, f_2, \ldots, f_m, f_1, f_2, \ldots, f_m, f_1, f_2, \ldots,$$
and group together blocks of successive components into batches. One way to implement this idea is to add $p$ preceding gradients (with $1<p<m$ ) to the current component gradient in iteration (2.36), thus iterating according to
$$x_{k+1}=x_k-\alpha_k \sum_{\ell=0}^p \nabla f_{i_{k-\ell}}\left(x_{k-\ell}\right)+\beta_k\left(x_k-x_{k-1}\right)$$




## Convex Optimization | Linear-Conic Problems

An important special case of conic programming, called linear-conic problem, arises when $\operatorname{dom}(f)$ is an affine set and $f$ is linear over $\operatorname{dom}(f)$, i.e.,
$$f(x)= \begin{cases}c^{\prime} x & \text { if } x \in b+S, \\ \infty & \text { if } x \notin b+S,\end{cases}$$
where $b$ and $c$ are given vectors, and $S$ is a subspace. Then the primal problem can be written as
\begin{aligned} & \text { minimize } c^{\prime} x \\ & \text { subject to } x-b \in S, \quad x \in C ; \end{aligned}
see Fig. 1.2.1.
To derive the dual problem, we note that
\begin{aligned} f^{\star}(\lambda) & =\sup_{x-b \in S}(\lambda-c)^{\prime} x \\ & =\sup_{y \in S}(\lambda-c)^{\prime}(y+b) \\ & = \begin{cases}(\lambda-c)^{\prime} b & \text { if } \lambda-c \in S^{\perp}, \\ \infty & \text { if } \lambda-c \notin S^{\perp} .\end{cases} \end{aligned}
It can be seen that the dual problem $\min _{\lambda \in \hat{C}} f^{\star}(\lambda)$ [cf. Eq. (1.18)], after discarding the superfluous term $c^{\prime} b$ from the cost, can be written as
\begin{aligned} & \operatorname{minimize}\ b^{\prime} \lambda \\ & \text { subject to } \lambda-c \in S^{\perp}, \quad \lambda \in \hat{C}, \end{aligned}
where $\hat{C}$ is the dual cone:
$$\hat{C}=\left\{\lambda \mid \lambda^{\prime} x \geq 0,\ \forall x \in C\right\} .$$
By specializing the conditions of the Conic Duality Theorem (Prop. 1.2.2) to the linear-conic duality context, we obtain the following.
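As a small numeric illustration of the dual cone definition (the two-dimensional example is mine): for the nonnegative orthant $C=\Re^2_+$, membership of $\lambda$ in $\hat{C}$ need only be checked on the generators of $C$, since $\lambda^{\prime} x \geq 0$ is preserved under nonnegative combinations; this recovers the familiar fact that $\Re^2_+$ is self-dual.

```python
# Dual cone C^ = { lam : lam'x >= 0 for all x in C } for C = R^2_+.
# It suffices to test lam'x >= 0 on the extreme rays (1,0), (0,1),
# because every x in C is a nonnegative combination of them.
def in_dual_of_orthant(lam):
    rays = [(1.0, 0.0), (0.0, 1.0)]   # generators of C = R^2_+
    return all(lam[0] * r[0] + lam[1] * r[1] >= 0 for r in rays)

assert in_dual_of_orthant((2.0, 3.0))       # in R^2_+, hence in the dual cone
assert not in_dual_of_orthant((1.0, -0.5))  # fails on the ray (0, 1)
```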

## Convex Optimization | Special Forms of Linear-Conic Problems

The primal and dual linear-conic problems (1.19) and (1.20) have been placed in an elegant symmetric form. There are also other useful formats that parallel and generalize similar formats in linear programming. For example, we have the following dual problem pairs:
\begin{aligned} & \min_{A x=b,\ x \in C} c^{\prime} x \quad \Longleftrightarrow \quad \max_{c-A^{\prime} \lambda \in \hat{C}} b^{\prime} \lambda, \\ & \min_{A x-b \in C} c^{\prime} x \quad \Longleftrightarrow \quad \max_{A^{\prime} \lambda=c,\ \lambda \in \hat{C}} b^{\prime} \lambda, \end{aligned}
where $A$ is an $m \times n$ matrix, and $x \in \Re^n, \lambda \in \Re^m, c \in \Re^n, b \in \Re^m$.
To verify the duality relation (1.21), let $\bar{x}$ be any vector such that $A \bar{x}=b$, and let us write the primal problem on the left in the primal conic form (1.19) as
\begin{aligned} & \text { minimize } c^{\prime} x \\ & \text { subject to } x-\bar{x} \in \mathrm{N}(A), \quad x \in C, \end{aligned}
where $\mathrm{N}(A)$ is the nullspace of $A$. The corresponding dual conic problem (1.20) is to solve for $\mu$ the problem
\begin{aligned} & \text { minimize } \bar{x}^{\prime} \mu \\ & \text { subject to } \mu-c \in \mathrm{N}(A)^{\perp}, \quad \mu \in \hat{C} . \end{aligned}
Since $\mathrm{N}(A)^{\perp}$ is equal to $\operatorname{Ra}\left(A^{\prime}\right)$, the range of $A^{\prime}$, the constraints of problem (1.23) can be equivalently written as $c-\mu \in -\operatorname{Ra}\left(A^{\prime}\right)=\operatorname{Ra}\left(A^{\prime}\right)$, $\mu \in \hat{C}$, or
$$c-\mu=A^{\prime} \lambda, \quad \mu \in \hat{C},$$
for some $\lambda \in \Re^m$. Making the change of variables $\mu=c-A^{\prime} \lambda$, the dual problem (1.23) can be written as
$$\begin{array}{ll} \operatorname{minimize} & \bar{x}^{\prime}\left(c-A^{\prime} \lambda\right) \\ \text { subject to } & c-A^{\prime} \lambda \in \hat{C} . \end{array}$$
By discarding the constant $\bar{x}^{\prime} c$ from the cost function, using the fact $A \bar{x}=b$, and changing from minimization to maximization, we see that this dual problem is equivalent to the one in the right-hand side of the duality pair (1.21). The duality relation (1.22) is proved similarly.
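The weak-duality direction of the pair (1.21) is easy to spot-check numerically for the self-dual cone $C=\Re^2_+$ (the data $A$, $b$, $c$ and the feasible points below are illustrative): for any primal-feasible $x$ and dual-feasible $\lambda$, $c^{\prime} x-b^{\prime} \lambda=\left(c-A^{\prime} \lambda\right)^{\prime} x \geq 0$, since both factors lie in $\Re^2_+$.

```python
# Weak duality for the pair (1.21) with C = R^2_+ (so C^ = C):
# primal: min c'x s.t. Ax = b, x in C;  dual: max b'lam s.t. c - A'lam in C^.
A = [[1.0, 1.0]]          # 1 x 2 matrix
b = [2.0]
c = [1.0, 3.0]

x = [2.0, 0.0]            # primal feasible: Ax = b, x >= 0
lam = [1.0]               # dual feasible: c - A'lam = (0, 2) >= 0

primal = sum(ci * xi for ci, xi in zip(c, x))   # c'x
dual = sum(bi * li for bi, li in zip(b, lam))   # b'lam
assert primal >= dual - 1e-12                   # c'x - b'lam = (c - A'lam)'x >= 0
```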

We next discuss two important special cases of conic programming: second order cone programming and semidefinite programming. These problems involve two different special cones, and an explicit definition of the affine set constraint. They arise in a variety of applications, and their computational difficulty in practice tends to lie between that of linear and quadratic programming on one hand, and general convex programming on the other hand.

