
# Convex Optimization | Continuity of Gradient and Directional Derivative


## Convex Optimization | Continuity of Gradient and Directional Derivative

The following exercise provides a basic continuity property of directional derivatives and gradients of convex functions. Let $f: \Re^n \mapsto \Re$ be a convex function, and let $\{f_k\}$ be a sequence of convex functions $f_k: \Re^n \mapsto \Re$ with the property that $\lim_{k \rightarrow \infty} f_k(x_k)=f(x)$ for every $x \in \Re^n$ and every sequence $\{x_k\}$ that converges to $x$. Show that for any $x \in \Re^n$ and $y \in \Re^n$, and any sequences $\{x_k\}$ and $\{y_k\}$ converging to $x$ and $y$, respectively, we have
$$\limsup_{k \rightarrow \infty} f_k^{\prime}(x_k ; y_k) \leq f^{\prime}(x ; y).$$
Furthermore, if $f$ is differentiable over $\Re^n$, then it is continuously differentiable over $\Re^n$. Solution: From the definition of directional derivative, it follows that for any $\epsilon>0$, there exists an $\alpha>0$ such that
$$\frac{f(x+\alpha y)-f(x)}{\alpha}<f^{\prime}(x ; y)+\epsilon .$$
Hence, using also the equation
$$f^{\prime}(x ; y)=\inf_{\alpha>0} \frac{f(x+\alpha y)-f(x)}{\alpha},$$
we have for all sufficiently large $k$,
$$f_k^{\prime}(x_k ; y_k) \leq \frac{f_k(x_k+\alpha y_k)-f_k(x_k)}{\alpha}.$$
Taking the limit superior as $k \rightarrow \infty$, and using the convergence $f_k(x_k+\alpha y_k) \rightarrow f(x+\alpha y)$ and $f_k(x_k) \rightarrow f(x)$, we obtain
$$\limsup_{k \rightarrow \infty} f_k^{\prime}(x_k ; y_k) \leq \frac{f(x+\alpha y)-f(x)}{\alpha}<f^{\prime}(x ; y)+\epsilon.$$
Since $\epsilon>0$ is arbitrary, the result follows.
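The key fact used in the proof, that the difference quotient $\bigl(f(x+\alpha y)-f(x)\bigr)/\alpha$ is nondecreasing in $\alpha$ for convex $f$, with infimum equal to $f^{\prime}(x;y)$, can be checked numerically. The sketch below is illustrative only; the test function $f(x)=\max(x_1,x_2)$ and the evaluation point are invented for this example, not taken from the text.

```python
import numpy as np

# Convex, nondifferentiable test function: f(x) = max(x1, x2).
def f(x):
    return max(x[0], x[1])

x = np.array([1.0, 1.0])   # a kink of f: both components are active
y = np.array([1.0, 2.0])   # direction of interest

# Difference quotients (f(x + a*y) - f(x)) / a for decreasing a > 0.
alphas = [1.0, 0.1, 0.01, 0.001]
quotients = [(f(x + a * y) - f(x)) / a for a in alphas]

# For convex f the quotient is nondecreasing in a, so its infimum over
# a > 0 is the directional derivative f'(x; y).  Here f is piecewise
# linear along y at the kink, so every quotient is (numerically) 2.0,
# matching f'(x; y) = max(y1, y2) = 2.
print(quotients)
```

Since $f$ is piecewise linear, the quotient is constant in $\alpha$ here; for a strictly convex $f$ one would see the quotients decrease monotonically toward $f^{\prime}(x;y)$ as $\alpha \downarrow 0$.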

## Convex Optimization | Convergence of Subgradient Method with Diminishing Stepsize Under Weaker Conditions

This exercise shows an enhanced version of Prop. 3.2.6, whereby we assume that for some scalar $c$, we have
$$c^2\left(1+\min_{x^* \in X^*}\left\|x_k-x^*\right\|^2\right) \geq\left\|g_k\right\|^2, \quad \forall k,$$
in place of the stronger Assumption 3.2.1. Assume also that $X^*$ is nonempty and that
$$\sum_{k=0}^{\infty} \alpha_k=\infty, \quad \sum_{k=0}^{\infty} \alpha_k^2<\infty .$$
Show that $\{x_k\}$ converges to some optimal solution. Abbreviated proof: Similar to the proof of Prop. 3.2.6 [cf. Eq. (3.18)], we apply Prop. 3.2.2(a) with $y$ equal to any $x^* \in X^*$, and then use the assumption (3.41) to obtain
$$\left\|x_{k+1}-x^*\right\|^2 \leq\left(1+\alpha_k^2 c^2\right)\left\|x_k-x^*\right\|^2-2 \alpha_k\left(f\left(x_k\right)-f^*\right)+\alpha_k^2 c^2 .$$
In view of the assumption (3.42), the convergence result of Prop. A.4.4 of Appendix A applies, and shows that $\{x_k\}$ is bounded and that $\liminf _{k \rightarrow \infty} f\left(x_k\right)= f^*$. From this point the proof follows the one of Prop. 3.2.6.
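The behavior established by this proposition can be observed on a small example. The sketch below is not the book's code: the objective $f(x)=|x_1|+|x_2-1|$, the starting point, and the iteration count are all invented for illustration. The stepsizes $\alpha_k = 1/(k+1)$ satisfy the diminishing-stepsize condition (3.42): $\sum_k \alpha_k = \infty$ and $\sum_k \alpha_k^2 < \infty$.

```python
import numpy as np

# Minimize the convex nondifferentiable f(x) = |x1| + |x2 - 1| with the
# subgradient method and diminishing stepsizes a_k = 1/(k+1).
def f(x):
    return abs(x[0]) + abs(x[1] - 1.0)

def subgradient(x):
    # np.sign(0) = 0, which is a valid subgradient of |.| at 0.
    return np.array([np.sign(x[0]), np.sign(x[1] - 1.0)])

x = np.array([5.0, -4.0])
best = f(x)
for k in range(20000):
    alpha = 1.0 / (k + 1)           # sum a_k = inf, sum a_k^2 < inf
    x = x - alpha * subgradient(x)
    best = min(best, f(x))

# f* = 0 at x* = (0, 1); the iterates approach x* and the best
# function value approaches f*, as the proposition predicts.
print(best, x)
```

Note the typical subgradient-method behavior: $f(x_k)$ is not monotonically decreasing (the iterates oscillate around the kinks of $f$), but the oscillation amplitude shrinks with $\alpha_k$, which is why tracking the best value so far is standard practice.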

## Convex Optimization | Continuity of Gradient and Directional Derivative

$$D(x)=\{\alpha(\bar{x}-x) \mid \bar{x} \in X, \alpha>0\} .$$

$$f^{\prime}(x ; d) \geq 0, \quad \forall d \in D(x) .$$

$$g^{\prime} d \geq 0, \quad \forall d \in D(x)$$

$$g^{\prime} d \geq 0, \quad \forall d \in \overline{D(x)}$$

$$\max_{g \in \partial f(x)} \; \min_{\|d\| \leq 1,\; d \in \overline{D(x)}} g^{\prime} d \geq 0 .$$
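These conditions (the cone of feasible directions $D(x)$ and the requirement $g^{\prime} d \geq 0$ over it) can be sampled numerically at a constrained minimum. The example below is hypothetical, not from the text: a quadratic minimized over the box $X=\{x : x \leq 1\}$, where the minimizer and gradient are known in closed form.

```python
import numpy as np

# f(x) = ||x - c||^2 minimized over the box X = {x : x <= 1 componentwise}.
# Hypothetical example: c = (2, 2), so the constrained minimizer is
# x* = (1, 1), with gradient g = grad f(x*) = 2(x* - c) = (-2, -2).
c = np.array([2.0, 2.0])
x_star = np.array([1.0, 1.0])
g = 2 * (x_star - c)

# Feasible directions at x*: d = alpha*(xbar - x*) with xbar in X and
# alpha > 0, i.e. any d with nonpositive components.  Sample a few and
# check the optimality condition g' d >= 0 on each.
rng = np.random.default_rng(0)
ds = -rng.random((100, 2))            # random feasible directions
ok = all(g @ d >= 0 for d in ds)
print(ok)  # True: g' d >= 0 holds on every sampled feasible direction
```

Since $f$ is differentiable here, $\partial f(x^*)=\{g\}$ and the min-max condition above reduces to $g^{\prime} d \geq 0$ for all $d \in \overline{D(x^*)}$, which is what the sample verifies.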

## Convex Optimization | Convergence of Subgradient Method with Diminishing Stepsize Under Weaker Conditions

$$f(x)= \begin{cases}h(x) & \text { if } x \in X, \\ \infty & \text { if } x \notin X,\end{cases}$$

(a) Use Sections 3.1.3 and 3.1.4 to show that the subdifferential of this function is nonempty for all $x \in X$ and has the form
$$\partial f(x)=\partial h(x)+N_X(x), \quad \forall x \in X,$$
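A hypothetical one-dimensional instance makes the formula concrete (the function and set below are invented for illustration, not from the text): take $h(x)=(x-1)^2$ and $X=[2,\infty)$. At $x=2$ we have $\partial h(2)=\{h'(2)\}=\{2\}$ and $N_X(2)=(-\infty,0]$, so $\partial f(2)=\{2\}+(-\infty,0]=(-\infty,2]$. Since $0 \in \partial f(2)$, the point $x=2$ minimizes $h$ over $X$.

```python
# Membership test for the subdifferential ∂f(2) = ∂h(2) + N_X(2)
# with h(x) = (x - 1)^2 and X = [2, inf) (hypothetical example).
# s is in ∂f(2) iff s - h'(2) lies in the normal cone N_X(2) = (-inf, 0].

def in_subdifferential(s, x=2.0):
    grad_h = 2.0 * (x - 1.0)        # h'(x) = 2(x - 1); h'(2) = 2
    return (s - grad_h) <= 0.0      # i.e. s - h'(2) in N_X(2)

print(in_subdifferential(0.0))   # True: 0 in ∂f(2) certifies optimality
print(in_subdifferential(3.0))   # False: 3 is not in ∂f(2)
```

The decomposition separates the two sources of "slope" at a constrained point: $\partial h(x)$ carries the cost information and $N_X(x)$ carries the constraint geometry.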