Posted on Categories:Optimization Theory, 优化理论, 数学代写, 最优化

## avatest™ Helps You Pass Your Exams

avatest™'s experts across every subject have helped students pass thousands of exams. We guarantee fast, on-time completion of exams of any length and type, including in-class, take-home, online, and proctored exams. Our writers compile a wide range of resources, teach you from your school's own materials, create mock exams, and provide worked examples of every question type, so that your pass rate on the real exam is 85% or higher. Whatever weekly quiz, quarterly exam, midterm, or final you have coming up, we can help!

• Delivery in as fast as 12 hours

• 200+ native English tutors

• Full refund for scores below 70

## Math Homework Help|Optimization Theory Exam Help|Conjugate Direction, Variable Metric

As a motivation we consider the minimization of a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ having the following special form:
$$f\left(x_1, \ldots, x_n\right)=\sum_{i=1}^n f_i\left(x_i\right)$$
Note that each function $f_i$ in (10.1.1) is a function of only one variable. Then it is easily seen that $\bar{x} \in \mathbb{R}^n$ minimizes $f$ iff the component $\bar{x}_i$ minimizes $f_i$, $i=1, \ldots, n$. Consequently, the minimization of $f$ can be achieved by successively minimizing along the coordinate axes. Next, consider a quadratic function $f$: $$f(x)=\frac{1}{2} x^{\top} A x+b^{\top} x$$ where $A$ is a symmetric, positive definite $(n, n)$-matrix. Let $v_1, \ldots, v_n \in \mathbb{R}^n$ be a basis for $\mathbb{R}^n$. Putting $x=\sum_{i=1}^n \nu_i v_i$, we obtain:
\begin{aligned} f(x)=\varphi(\nu)=\sum_{i=1}^n \underbrace{\left(b^{\top} v_i\right)}_{\beta_i} \nu_i &+\frac{1}{2} \sum_{i=1}^n \underbrace{\left(v_i^{\top} A v_i\right)}_{\alpha_i} \nu_i^2 \\ &+\frac{1}{2} \sum_{\substack{i, j \\ i \neq j}}\left(v_i^{\top} A v_j\right) \nu_i \nu_j . \end{aligned}
If $v_i^{\top} A v_j=0, i \neq j$, then it follows:
$$f(x)=\varphi(\nu)=\sum_{i=1}^n\left(\frac{1}{2} \alpha_i \nu_i^2+\beta_i \nu_i\right)=: \sum_{i=1}^n \varphi_i\left(\nu_i\right),$$
and, hence, $\varphi(\nu)$ is a function of the type (10.1.1). This motivates the following definition.
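As a quick numerical check of this decoupling (a sketch with randomly generated data, not taken from the text), the following snippet builds $A$-conjugate directions by Gram–Schmidt in the $A$-inner product and recovers the global minimizer of the quadratic by $n$ independent one-dimensional minimizations:

```python
import numpy as np

# Sketch: verify that f(x) = 0.5 x^T A x + b^T x decouples in a
# basis of A-conjugate directions, so minimizing each phi_i(nu_i)
# separately yields the global minimizer (random example data).
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)           # symmetric positive definite
b = rng.standard_normal(n)

# Build A-conjugate directions v_1,...,v_n by Gram-Schmidt in the
# A-inner product <u, w>_A = u^T A w.
V = []
for e in np.eye(n):
    v = e.copy()
    for w in V:
        v -= (w @ A @ e) / (w @ A @ w) * w
    V.append(v)
V = np.array(V)                        # rows are the v_i

# Pairwise conjugacy: v_i^T A v_j = 0 for i != j.
G = V @ A @ V.T
assert np.allclose(G - np.diag(np.diag(G)), 0.0, atol=1e-8)

# Minimize each phi_i(nu_i) = 0.5*alpha_i*nu_i^2 + beta_i*nu_i,
# giving nu_i = -beta_i / alpha_i.
alpha = np.diag(G)                     # v_i^T A v_i
beta = V @ b                           # b^T v_i
nu = -beta / alpha
x = V.T @ nu                           # x = sum_i nu_i v_i

# Must agree with the exact minimizer -A^{-1} b of f.
assert np.allclose(x, -np.linalg.solve(A, b), atol=1e-6)
```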

## Math Homework Help|Optimization Theory Exam Help|Conjugate Gradient-, DFP-, BFGS-Method

For practical applications of the idea of conjugate directions it is important to construct algorithms that automatically generate new conjugate directions from the data known at a specific step in the optimization procedure. This will be studied in the present section.

Lemma 10.2.1 According to Algorithm $\mathcal{A}$, let $x^1, x^2, \ldots, x^{\ell}$, $\ell \leq n$, be generated, where $v_1, v_2, \ldots, v_{\ell} \in \mathbb{R}^n \backslash\{0\}$ are pairwise conjugate with respect to $A$. Then, it holds:
$$D f\left(x^{\ell}\right) v_i=0, \quad i=1,2, \ldots, \ell .$$

Proof. Obviously, we have $x^{\ell}=x^r+\sum_{j=r+1}^{\ell} \lambda_j v_j$. It follows:
$$D f\left(x^{\ell}\right)-D f\left(x^r\right)=\left(x^{\ell}-x^r\right)^{\top} A=\sum_{j=r+1}^{\ell} \lambda_j v_j^{\top} A .$$
Let $r \geq 1$. Since $x^r$ minimizes $f\left(x^{r-1}+\lambda v_r\right)$, we have $D f\left(x^r\right) v_r=0$. Hence, it follows for $1 \leq r<\ell$ :
$$D f\left(x^{\ell}\right) v_r=\left[D f\left(x^{\ell}\right)-D f\left(x^r\right)\right] v_r=\sum_{j=r+1}^{\ell} \lambda_j\left(v_j^{\top} A v_r\right)=0 .$$
Finally, the equation $D f\left(x^{\ell}\right) v_{\ell}=0$ follows from the fact that $x^{\ell}$ minimizes the function $f\left(x^{\ell-1}+\lambda v_{\ell}\right)$.
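The lemma is easy to observe numerically. The sketch below (random example data; Algorithm $\mathcal{A}$ is interpreted, as in the proof, as successive exact line searches along pairwise conjugate directions) checks that after step $\ell$ the gradient annihilates $v_1, \ldots, v_{\ell}$:

```python
import numpy as np

# Sketch of Algorithm A on f(x) = 0.5 x^T A x + b^T x: exact line
# searches along pairwise A-conjugate directions; after step ell,
# Df(x^ell) v_i = 0 for i = 1,...,ell (Lemma 10.2.1).
rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
Df = lambda x: A @ x + b               # gradient of f

# A-conjugate directions via Gram-Schmidt in the A-inner product.
V = []
for e in np.eye(n):
    v = e - sum((w @ A @ e) / (w @ A @ w) * w for w in V)
    V.append(v)

x = rng.standard_normal(n)             # arbitrary starting point x^0
for ell, v in enumerate(V, start=1):
    lam = -(Df(x) @ v) / (v @ A @ v)   # exact minimizer of f(x + lam*v)
    x = x + lam * v
    for vi in V[:ell]:                 # Lemma 10.2.1
        assert abs(Df(x) @ vi) < 1e-8

# After n steps the gradient vanishes: x^n minimizes f.
assert np.allclose(Df(x), 0.0, atol=1e-6)
```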


## MATLAB Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include mathematical and computational algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including the construction of graphical user interfaces. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This lets you solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or Fortran. The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to the matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in matrix computation software. MATLAB has evolved over years of development with input from many users. In university environments it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Very important to most users, toolboxes let you learn and apply specialized techniques. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.


## Math Homework Help|Optimization Theory Exam Help|Geometric Interpretation of Karmarkar’s Algorithm

Recall Karmarkar’s Algorithm and suppose that the iterate $x^k \in \stackrel{o}{\Sigma}$ has been generated.

The simplex $\Sigma$ will be transformed into itself and the point $x^k$ is shifted into the barycenter, all by means of the transformation $T_k$ :
$$T_k\left(x_1, \ldots, x_n\right)=\frac{n}{\sum_i\left(x_i / z_i\right)}\left(x_1 / z_1, \ldots, x_n / z_n\right), \text { with } z:=x^k$$

Note that $T_k$ maps each stratum of $\Sigma$ into itself; in particular, all vertices of $\Sigma$ are fixed points of $T_k$ (exercise).

The inverse of $T_k$ is easily computed (instead of dividing by $z_i$ we now have to multiply by $z_i$ ):
$$T_k^{-1}\left(y_1, \ldots, y_n\right)=\frac{n}{\sum_i y_i z_i}\left(y_1 z_1, \ldots, y_n z_n\right), \text { with } z:=x^k .$$
The equation $A x=0$ becomes in the $y$-variables: $A T_k^{-1}(y)=0$. From (8.2.2) it follows (after multiplication with $\sum_i y_i z_i / n$ ):
$$A x=0 \text { iff } A D_k y=0,$$
with $D_k$ as defined in (8.1.4).
The function $\left(x_1, \ldots, x_n\right) \longmapsto x_1$ to be minimized becomes a nonlinear function in the $y$-coordinates:
$$\left(y_1, \ldots, y_n\right) \longmapsto y_1 \cdot\left(\frac{n z_1}{\sum_i y_i z_i}\right) \text {, with } z:=x^k$$
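The stated properties of $T_k$ are easy to verify numerically. The sketch below assumes the normalization $\Sigma=\{x \geq 0 \mid \sum_i x_i = n\}$, whose barycenter is $(1, \ldots, 1)$ (consistent with $f(1,1,\ldots,1)=1$ used later); the function names are illustrative:

```python
import numpy as np

# Sketch: projective transformation T_k and its inverse on the
# simplex Sigma = {x >= 0 : sum(x) = n} (assumed normalization).
def T(x, z):
    y = x / z
    return len(x) * y / y.sum()

def T_inv(y, z):
    x = y * z
    return len(y) * x / x.sum()

rng = np.random.default_rng(2)
n = 6
z = rng.random(n); z = n * z / z.sum()   # interior iterate x^k
x = rng.random(n); x = n * x / x.sum()   # any interior point

# T_k maps Sigma into itself and shifts x^k to the barycenter.
assert np.isclose(T(x, z).sum(), n)
assert np.allclose(T(z, z), np.ones(n))

# T_k^{-1} really inverts T_k.
assert np.allclose(T_inv(T(x, z), z), x)

# Vertices n*e_i of Sigma are fixed points of T_k.
vertex = np.zeros(n); vertex[0] = n
assert np.allclose(T(vertex, z), vertex)
```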

## Math Homework Help|Optimization Theory Exam Help|Proof of Theorem 8.1.2 (Polynomiality)

In order to prove Theorem 8.1.2 we have to estimate how much the objective function decreases in each step of the algorithm. Recall that the linear function $\left(x_1, \ldots, x_n\right) \longmapsto x_1$ transforms awkwardly under the transformation $T_k$ (cf. (8.2.4)). Therefore, a comparable function $f$ is chosen that transforms nicely under $T_k$:
$$f\left(x_1, \ldots, x_n\right)=\frac{x_1^n}{x_1 \cdot x_2 \cdots x_n} \quad \text{(homogeneous of degree } 0\text{)}.$$
The function $f$ is well defined on $\stackrel{o}{\Sigma}$. On $\stackrel{o}{\Sigma}$ we obviously have $x_1 \cdot x_2 \cdots x_n \leq 1$, and, consequently,
$$x_1^n \leq f(x), \quad x \in \stackrel{o}{\Sigma}$$
For $f$ the following interesting transformation formula holds:
Lemma 8.3.1 For $x, y \in \stackrel{o}{\Sigma}$ it holds:
$$\frac{f\left(T_k(x)\right)}{f\left(T_k(y)\right)}=\frac{f(x)}{f(y)} .$$
Proof. (Exercise)
Note that $f(1,1, \ldots, 1)=1$, and, hence,
$$f\left(x^{k+1}\right) / f\left(x^k\right)=f(\widetilde{x})$$
where $\widetilde{x}$ is the minimizer in Step 1 of Karmarkar’s Algorithm in the $(k+1)$-th iteration. If $\alpha=\frac{1}{2}$, then we will show:
$$f(\widetilde{x}) \leq 2 e^{-1}$$


## Math Homework Help|Optimization Theory Exam Help|Parametric Aspects: The Unconstrained Case

In this section we study the dependence of local minima and their corresponding functional values on additional parameters. The appearance of parameters may represent perturbations of an optimization problem. The crucial tools in such investigations are theorems on implicit functions. For a basic reference see [65].

We start with unconstrained optimization problems. Let $f \in C^2\left(\mathbb{R}^n \times \mathbb{R}^r, \mathbb{R}\right)$. The general point $z \in \mathbb{R}^n \times \mathbb{R}^r$ will be represented as $z=(x, t)$, where $x$ is the state variable and $t$ plays the role of a parameter. In this way we may regard $f$ as an $r$-parametric family of functions of $n$ variables. Let $\bar{x} \in \mathbb{R}^n$ be a local minimum of $f(\cdot, \bar{t})$. The first-order necessary optimality condition reads
$$D_x f(\bar{x}, \bar{t})=0$$
where $D_x f$ denotes the row vector of first partial derivatives with respect to $x$.

Formula (3.1.1) represents $n$ equations in $n+r$ variables. If the Jacobian matrix $D D_x^{\top} f(\bar{x}, \bar{t})$, an $(n, n+r)$-matrix, has full rank $(=n)$, then by virtue of the implicit function theorem we can choose $n$ variables such that the equation $D_x f=0$ defines these variables as an implicit function of the remaining $r$ variables. With respect to the chosen $n$ variables, the corresponding $(n, n)$-submatrix of $D D_x^{\top} f(\bar{x}, \bar{t})$ has to be nonsingular. For example, let $\bar{x} \in \mathbb{R}^n$ be a local minimum of $f(\cdot, \bar{t})$ which is nondegenerate, i.e. $D_x^2 f(\bar{x}, \bar{t})$ is nonsingular (and, hence, positive definite). Then the implicit function theorem yields the existence of open neighborhoods $\mathcal{O}$ of $(\bar{x}, \bar{t})$ and $\mathcal{V}$ of $\bar{t}$, and a mapping $x(\cdot) \in C^1\left(\mathcal{V}, \mathbb{R}^n\right)$, such that for all $(x, t) \in \mathcal{O}$ we have:
$$D_x f(x, t)=0 \text { iff } x=x(t) .$$
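A concrete illustration (example data, not from the text): for the family $f(x, t)=\frac{1}{2} x^{\top} A x-t\, b^{\top} x$ the implicit function $x(t)$ is available in closed form, so the sensitivity formula $x^{\prime}(t)=-\left[D_x^2 f\right]^{-1} D_t D_x^{\top} f$ obtained by differentiating $D_x f(x(t), t)=0$ can be checked against a finite difference:

```python
import numpy as np

# Sketch: parametric family f(x, t) = 0.5 x^T A x - t * b^T x.
# The critical-point equation D_x f = A x - t b = 0 has the
# explicit solution x(t) = t A^{-1} b, and the implicit function
# theorem gives x'(t) = -[D_x^2 f]^{-1} D_t D_x^T f = A^{-1} b.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])             # positive definite Hessian
b = np.array([1.0, 2.0])

def x_of_t(t):
    return t * np.linalg.solve(A, b)   # solves A x = t b

# Sensitivity from the implicit function theorem.
dx_dt = np.linalg.solve(A, b)

# Compare with a central finite difference at t = 1.
t, h = 1.0, 1e-6
fd = (x_of_t(t + h) - x_of_t(t - h)) / (2 * h)
assert np.allclose(fd, dx_dt, atol=1e-6)
```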

## Math Homework Help|Optimization Theory Exam Help|Parametric Aspects: The Constrained Case

In this section we take constraints into account and again define the concept of a nondegenerate local minimum. This yields a (stable) system of nonlinear equations which enables us to study the sensitivity of a local minimum with regard to data perturbations.

Let $k \geq 2$ and $f, h_i, g_j \in C^k\left(\mathbb{R}^n \times \mathbb{R}^r, \mathbb{R}\right), i \in I, j \in J$ and $|I|+|J|<\infty$. For each $t \in \mathbb{R}^r$ we have $f(\cdot, t)$ as an objective function and $M(t)$ as a feasible set, where
$$M(t)=\left\{x \in \mathbb{R}^n \mid h_i(x, t)=0, i \in I, g_j(x, t) \geq 0, j \in J\right\} .$$
Definition 3.2.1 Let $f, h_i, g_j$ be as above. A (feasible) point $\bar{x} \in M(\bar{t})$ is called a nondegenerate local minimum for $f(\cdot, \bar{t})_{\mid M(\bar{t})}$ if the following conditions are satisfied: (1) LICQ holds at $\bar{x}$. (2) The point $\bar{x}$ is a critical point for $f(\cdot, \bar{t})_{\mid M(\bar{t})}$.
Let $\bar{\lambda}_i, \bar{\mu}_j$, $i \in I$, $j \in J_0(\bar{x}, \bar{t}):=\left\{j \in J \mid g_j(\bar{x}, \bar{t})=0\right\}$, be the corresponding Lagrange multipliers and $L$ the Lagrange function, i.e. \begin{aligned} D_x f &=\sum_{i \in I} \bar{\lambda}_i D_x h_i+\sum_{j \in J_0(\bar{x}, \bar{t})} \bar{\mu}_j D_x g_{j \mid(\bar{x}, \bar{t})}, \\ L(x, t) &=f-\sum_{i \in I} \bar{\lambda}_i h_i-\sum_{j \in J_0(\bar{x}, \bar{t})} \bar{\mu}_j g_{j \mid(x, t)} . \end{aligned}
(3) $\bar{\mu}_j>0$, $j \in J_0(\bar{x}, \bar{t})$. (4) $D_x^2 L(\bar{x}, \bar{t})$ is positive definite on $T_{\bar{x}} M(\bar{t})$, where (cf. (2.1.6))


## Math Homework Help|Optimization Theory Exam Help|A Summary of the Gradient Projection Iterative Procedure

Let us now formalize the iterative procedure that was used in Example 6.6-1. It will be assumed that the initial point $y^{(0)}$ is admissible and lies in the intersection $Q^{\prime}$ of $q$ linearly independent hyperplanes. To determine the constrained global minimum:

1. Calculate the projection matrix $\mathbf{P}_q$, the gradient vector at the point $\mathbf{y}^{(i)}$, $-\partial f\left(\mathbf{y}^{(i)}\right) / \partial \mathbf{y} \triangleq-\partial f^{(i)} / \partial \mathbf{y}$, the vector $\mathbf{r}$ given by Eq. (6.6-38), and the gradient projection $\mathbf{P}_q\left[-\partial f^{(i)} / \partial \mathbf{y}\right]$. If $\left|\mathbf{P}_q\left[-\partial f^{(i)} / \partial \mathbf{y}\right]\right| \leq \epsilon_1$ and $\mathbf{r} \leq \mathbf{0}$, then $\mathbf{y}^{(i)}$ is the constrained global minimum and the procedure is terminated; otherwise, go to step 2.
2. Determine whether or not a hyperplane should be dropped from $Q^{\prime}$. If $\left|\mathbf{P}_q\left[-\partial f^{(i)} / \partial \mathbf{y}\right]\right| \leq \epsilon_1$, drop the hyperplane $H_q$, which corresponds to $r_q>0$, form the projection matrix $\mathbf{P}_{q-1}$, and go to step 3.† The other alternative is that the norm of the gradient projection is greater than $\epsilon_1$. In this case, calculate $\beta$ given by (6.6-41). If $r_q>\beta$, drop the hyperplane $H_q$ from $Q^{\prime}$; if $r_q \leq \beta$, $Q^{\prime}$ remains unchanged.
3. Compute the normalized gradient projection $\mathbf{z}^{(i)}$ given by Eq. (6.6-19), and the maximum allowable step size $\tau_m$, where $\tau_m$ is the minimum positive value of the $\tau_j$’s found by evaluating
$$\tau_j=\frac{v_j-\mathbf{n}_j^T \mathbf{y}^{(i)}}{\mathbf{n}_j^T \mathbf{z}^{(i)}}$$
for $j$ corresponding to all hyperplanes not in the intersection $Q^{\prime}$. The tentative next point $\mathbf{y}^{\prime(i+1)}$ is found from
$$\mathbf{y}^{\prime(i+1)}=\mathbf{y}^{(i)}+\tau_m \mathbf{z}^{(i)}$$
4. Calculate the gradient at the point $\mathbf{y}^{\prime(i+1)}$. If
$$\mathbf{z}^{(i) T}\left[-\frac{\partial f}{\partial \mathbf{y}}\left(\mathbf{y}^{\prime(i+1)}\right)\right] \geq 0,$$
set $\mathbf{y}^{(i+1)}=\mathbf{y}^{\prime(i+1)}$; since $\mathbf{y}^{(i+1)}$ lies in the intersection of $Q^{\prime}$ and $H_m$ (the hyperplane which corresponds to the step size $\tau_m$ determined in step 3), add $H_m$ to $Q^{\prime}$, and return to step 1.
On the other hand, if
$$\mathbf{z}^{(i) T}\left[-\frac{\partial f}{\partial \mathbf{y}}\left(\mathbf{y}^{\prime(i+1)}\right)\right]<0,$$
find $\mathbf{y}^{(i+1)}$ by repeated linear interpolation as illustrated in Fig. 6-15; the appropriate equations are (6.6-30) and (6.6-31). The intersection $Q^{\prime}$ remains unchanged, and the computational algorithm begins another iteration by returning to step 1.
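A minimal sketch of the core computation in steps 1 and 3 (example data; Rosen's recurrence relations, the $\mathbf{r}$-vector test, and the interpolation step are omitted): form $\mathbf{P}_q=\mathbf{I}-\mathbf{N}_q\left[\mathbf{N}_q^T \mathbf{N}_q\right]^{-1} \mathbf{N}_q^T$ from the normals of the active hyperplanes and verify that a projected-gradient step stays in the intersection $Q^{\prime}$:

```python
import numpy as np

# Sketch: projection of the negative gradient onto the subspace
# parallel to the intersection Q' of the active hyperplanes.
def projection_matrix(N):
    # Columns of N are the (linearly independent) normals n_j.
    return np.eye(N.shape[0]) - N @ np.linalg.solve(N.T @ N, N.T)

# Two active hyperplanes in R^4 (illustrative normals).
N = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 2.0]])
P = projection_matrix(N)

grad = np.array([1.0, -2.0, 0.5, 3.0])   # gradient of f at y^(i)
d = P @ (-grad)                          # projected negative gradient

# The step direction is orthogonal to every active normal, so a
# move y + tau*d keeps n_j^T y unchanged for all active j.
assert np.allclose(N.T @ d, 0.0, atol=1e-10)
# P is an orthogonal projector: P^2 = P = P^T.
assert np.allclose(P @ P, P, atol=1e-10)
assert np.allclose(P, P.T, atol=1e-10)
```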

Before establishing the connection between gradient projection and the solution of optimal control problems, let us first mention some additional features of Rosen’s algorithm:†

1. Since at most one hyperplane is added or dropped at each stage of the iterative procedure, the matrix $\left[\mathbf{N}_q^T \mathbf{N}_q\right]^{-1}$, and hence $\mathbf{P}_q$, can be calculated from recurrence relations that do not require matrix inversion.
2. It may occur that a point calculated by the iterative procedure lies in the intersection of $i$ hyperplanes, only $q<i$ of which are linearly independent; the gradient projection method contains provisions for dealing with such situations.
3. The algorithm provides a starting procedure for generating an admissible point (if one exists) from an arbitrary initial guess $\mathbf{y}^{(0)}$.
4. If $f$ is a convex function in the admissible region of $E^K$ and has continuous second partial derivatives with respect to each of the components of $\mathbf{y}$ in the admissible region $R$, then the gradient projection algorithm converges to a global minimum of $f$. If $f$ is not convex in $R$, the algorithm will generally converge to a local minimum. To find the global minimum, one usually resorts to trying several different starting points in order to determine as many local minima as possible; the point $\mathbf{y}^*$ which corresponds to the local minimum having the smallest value of $f$ is then selected as the best possible point.


## Math Homework Help|Optimization Theory Exam Help|Features of the Quasilinearization Method

To conclude our discussion of quasilinearization, let us summarize the important features of the algorithm.

Initial Guess. An initial state-costate trajectory $\mathbf{x}^{(0)}(t), \mathbf{p}^{(0)}(t), t \in\left[t_0, t_f\right]$, must be selected to begin the iterative procedure. This initial trajectory, which is used for linearizing the nonlinear reduced differential equations, does not necessarily have to satisfy the specified boundary conditions; all subsequent iterates will do so, however. The primary requirement of the initial guess is that it not be so poor that it causes the algorithm to diverge. As usual, the initial guess is made primarily on the basis of whatever physical information is available about the particular problem being solved.

Storage Requirements. From Eq. (6.4-32) with $t=t_0$ it is apparent that $\mathbf{p}^{(i+1)}\left(t_0\right)=\mathbf{c}$ if the values of $\mathbf{p}^{H 1}\left(t_0\right), \ldots, \mathbf{p}^{H n}\left(t_0\right)$ and $\mathbf{p}^p\left(t_0\right)$ are selected as suggested; therefore, once $\mathbf{c}$ is known, the $(i+1)$st trajectory can be generated by reintegrating Eq. (6.4-29) with the initial conditions $\mathbf{x}^{(i+1)}\left(t_0\right)=\mathbf{x}_0$ and $\mathbf{p}^{(i+1)}\left(t_0\right)=\mathbf{c}$. By doing this, there is no necessity for storing (presumably in piecewise-constant fashion) the $n$ homogeneous solutions and the particular solution; hence we store only the linearizing state-costate trajectory, the specified value of $\mathbf{x}\left(t_0\right)$, the value of $\mathbf{c}$, $\mathbf{x}^p\left(t_f\right)$, $\mathbf{p}^p\left(t_f\right)$, and $\mathbf{x}^{H j}\left(t_f\right)$, $\mathbf{p}^{H j}\left(t_f\right)$, $j=1,2, \ldots, n$.

Convergence. McGill and Kenneth [M-4] have proved that the sequence of solutions of the linearized equations (6.4-29) converges (with a rate that is at least quadratic) to the solution of the nonlinear differential equations (6.4-25) and (6.4-26) if

1. The functions $\mathbf{a}$ and $\partial \mathscr{H} / \partial \mathbf{x}$ of Eqs. (6.4-25) and (6.4-26) are continuous.
2. The partial derivatives $\partial \mathbf{a} / \partial \mathbf{x}, \partial \mathbf{a} / \partial \mathbf{p}, \partial^2 \mathscr{H} / \partial \mathbf{x}^2$, and $\partial^2 \mathscr{H} / \partial \mathbf{x} \partial \mathbf{p}$ of Eqs. (6.4-27) and (6.4-28) exist and are continuous.
3. The partial derivative functions in condition 2 satisfy a Lipschitz condition with respect to $[\mathbf{x}(t) \mid \mathbf{p}(t)]^T$.
4. The norm of the deviation of the initial guess from the solutions of (6.4-25) and (6.4-26) is sufficiently small.
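The at-least-quadratic convergence can be observed in a simple setting. The following sketch applies quasilinearization to a scalar two-point boundary-value problem (an illustrative example, not the state-costate system above): $x^{\prime \prime}=1.5 x^2$, $x(0)=4$, $x(1)=1$, with exact solution $x(t)=4 /(1+t)^2$; each iterate solves the linearized equation $x_{k+1}^{\prime \prime}=1.5 x_k^2+3 x_k\left(x_{k+1}-x_k\right)$ by finite differences:

```python
import numpy as np

# Sketch: quasilinearization (Newton on the BVP) for
# x'' = 1.5 x^2, x(0) = 4, x(1) = 1; exact x(t) = 4/(1+t)^2.
N = 400                                # interior grid points
h = 1.0 / (N + 1)
t = np.linspace(0.0, 1.0, N + 2)
x = 4.0 - 3.0 * t                      # crude guess meeting the BCs

for _ in range(8):
    xk = x[1:-1]
    # Linearized FD equations:
    # (x_{i+1} - 2 x_i + x_{i-1})/h^2 - 3 xk_i x_i = -1.5 xk_i^2
    main = -2.0 / h**2 - 3.0 * xk
    rhs = -1.5 * xk**2
    rhs[0] -= x[0] / h**2              # boundary value x(0) = 4
    rhs[-1] -= x[-1] / h**2            # boundary value x(1) = 1
    Amat = (np.diag(main)
            + np.diag(np.ones(N - 1) / h**2, 1)
            + np.diag(np.ones(N - 1) / h**2, -1))
    x = np.concatenate(([4.0], np.linalg.solve(Amat, rhs), [1.0]))

exact = 4.0 / (1.0 + t)**2
assert np.max(np.abs(x - exact)) < 1e-2
```

In practice the iterates match the exact solution to discretization accuracy within a handful of iterations, which is the hallmark of the quadratic convergence asserted above.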

## Math Homework Help|Optimization Theory Exam Help|SUMMARY OF ITERATIVE TECHNIQUES FOR SOLVING TWO-POINT BOUNDARY-VALUE PROBLEMS

So far, in this chapter we have considered three iterative numerical methods for the solution of nonlinear two-point boundary-value problems. The assumption was made that the states and controls are not constrained by any boundaries; if this is not the case, the computational techniques we have discussed must be modified. $\dagger$

In each of the methods we have considered, the philosophy is to solve a sequence of problems in which one or more of the five necessary conditions [Eqs. (6.1-1) through (6.1-4)] is initially violated, but eventually satisfied if the iterative procedure converges. In the steepest descent method the algorithm terminates when $\partial \mathscr{H} / \partial \mathbf{u} \approx \mathbf{0}$ for all $t \in\left[t_0, t_f\right]$, the other four conditions having been satisfied throughout the iterative procedure. Convergence of the method of variation of extremals is indicated when the boundary condition $\mathbf{p}\left(t_f\right)=\partial h\left(\mathbf{x}\left(t_f\right)\right) / \partial \mathbf{x}$ is satisfied. In quasilinearization, trajectories are generated that satisfy the boundary conditions; when a trajectory is obtained that also is a solution of the reduced state-costate equations, the procedure has converged.

As bases for comparing the numerical techniques, we have used the initial guess requirement, storage requirements, convergence properties, computational requirements, stopping criteria, and modifications for fixed end point problems. Table 6-4 summarizes these and other characteristics of the three iterative methods.


## Math Homework Help|Optimization Theory Exam Help|The Steepest Descent Algorithm

The procedure we use to solve optimal control problems by the method of steepest descent is

1. Select a discrete approximation to the nominal control history $\mathbf{u}^{(0)}(t)$, $t \in\left[t_0, t_f\right]$, and store it in the memory of the digital computer. This can be done, for example, by subdividing the interval $\left[t_0, t_f\right]$ into $N$ subintervals (generally of equal duration) and considering the control

$\mathbf{u}^{(0)}$ as being piecewise-constant during each of these subintervals; that is,
$$\mathbf{u}^{(0)}(t)=\mathbf{u}^{(0)}\left(t_k\right), \quad t \in\left[t_k, t_{k+1}\right), \quad k=0,1, \ldots, N-1 .$$
Let the iteration index $i$ be zero.

2. Using the nominal control history $\mathbf{u}^{(i)}$, integrate the state equations from $t_0$ to $t_f$ with the initial conditions $\mathbf{x}\left(t_0\right)=\mathbf{x}_0$, and store the resulting state trajectory $\mathbf{x}^{(i)}$ as a piecewise-constant vector function.

3. Calculate $\mathbf{p}^{(i)}\left(t_f\right)$ by substituting $\mathbf{x}^{(i)}\left(t_f\right)$ from step 2 into Eq. (6.2-9b). Using this value of $\mathbf{p}^{(i)}\left(t_f\right)$ as the “initial condition” and the piecewise-constant values of $\mathbf{x}^{(i)}$ stored in step 2, integrate the costate equations from $t_f$ to $t_0$, evaluate $\partial \mathscr{H}^{(i)}(t) / \partial \mathbf{u}$, $t \in\left[t_0, t_f\right]$, and store this function in piecewise-constant fashion. The costate trajectory does not need to be stored.

4. If
$$\left|\frac{\partial \mathscr{H}^{(i)}}{\partial \mathbf{u}}\right| \leq \gamma$$
where $\gamma$ is a preselected positive constant and
$$\left|\frac{\partial \mathscr{H}^{(i)}}{\partial \mathbf{u}}\right|^2 \triangleq \int_{t_0}^{t_f}\left[\frac{\partial \mathscr{H}^{(i)}}{\partial \mathbf{u}}(t)\right]^T\left[\frac{\partial \mathscr{H}^{(i)}}{\partial \mathbf{u}}(t)\right] d t,$$
terminate the iterative procedure, and output the extremal state and control. If the stopping criterion (6.2-17) is not satisfied, generate a new piecewise-constant control function given by
$$\mathbf{u}^{(i+1)}\left(t_k\right)=\mathbf{u}^{(i)}\left(t_k\right)-\tau \frac{\partial \mathscr{H}^{(i)}}{\partial \mathbf{u}}\left(t_k\right), \quad k=0, \ldots, N-1,$$
where
$$\mathbf{u}^{(i)}(t)=\mathbf{u}^{(i)}\left(t_k\right) \quad \text { for } t \in\left[t_k, t_{k+1}\right), \quad k=0, \ldots, N-1 .$$
Replace $\mathbf{u}^{(i)}\left(t_k\right)$ by $\mathbf{u}^{(i+1)}\left(t_k\right)$, $k=0, \ldots, N-1$, and return to step 2.
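The four steps can be sketched end to end on a hypothetical scalar problem chosen for illustration: $\dot{x}=-x+u$, $x(0)=4$, $J=x^2(1)+\int_0^1 \frac{1}{2} u^2(t)\,dt$, for which $\mathscr{H}=\frac{1}{2} u^2+p(-x+u)$, $\dot{p}=p$, $p(1)=2 x(1)$, and $\partial \mathscr{H} / \partial u=u+p$:

```python
import numpy as np

# Sketch of the steepest descent loop (Euler integration,
# piecewise-constant control) on xdot = -x + u, x(0) = 4,
# J = x(1)^2 + int_0^1 0.5 u^2 dt.
N, tau, gamma = 100, 0.2, 1e-4
dt = 1.0 / N
u = np.ones(N)                         # step 1: nominal control

for _ in range(500):
    # Step 2: integrate the state equation forward.
    x = np.empty(N + 1); x[0] = 4.0
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    # Step 3: integrate the costate equation pdot = p backward
    # from p(1) = 2 x(1).
    p = np.empty(N + 1); p[N] = 2.0 * x[N]
    for k in range(N, 0, -1):
        p[k - 1] = p[k] - dt * p[k]
    dH_du = u + p[:N]                  # dH/du on each subinterval
    # Step 4: stop when ||dH/du|| <= gamma, else descend.
    if dt * np.sum(dH_du**2) <= gamma**2:
        break
    u = u - tau * dH_du

# The converged control satisfies dH/du = u + p ~ 0 everywhere.
assert np.max(np.abs(u + p[:N])) < 1e-2
```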

## Math Homework Help|Optimization Theory Exam Help|An Illustrative Example

To illustrate the mechanics of the steepest descent procedure we have discussed, let us partially solve a simple example. Since all calculations will be done analytically, the piecewise-constant approximations mentioned previously will not be required.

Example 6.2-1. A first-order system is described by the state equation
$$\dot{x}(t)=-x(t)+u(t)$$
with initial condition $x(0)=4.0$. It is desired to find $u(t), t \in[0,1]$, that minimizes the performance measure
$$J=x^2(1)+\int_0^1 \frac{1}{2} u^2(t) d t$$
Notice that this problem is of the linear regulator type discussed in Section 5.2, and, therefore, can be solved without using iterative numerical techniques. The costate equation is
$$\dot{p}(t)=p(t)$$
with the boundary condition $p(1)=2 x(1)$. In addition, the optimal control and its costate must satisfy the relation
$$\frac{\partial \mathscr{H}}{\partial u}=u(t)+p(t)=0$$
As an initial guess for the optimal control, let us select $u^{(0)}(t)=1.0$ throughout the interval $[0,1]$. Integrating the state equation, using this control and the initial condition $x(0)=4.0$, we obtain
$$x^{(0)}(t)=3 e^{-t}+1$$
hence, $p^{(0)}(1)=2 x^{(0)}(1)=2\left[3 e^{-1}+1\right]$. Using this value for $p^{(0)}(1)$ and integrating the costate equation backward in time, we obtain
$$p^{(0)}(t)=2 e^{-1}\left[3 e^{-1}+1\right] e^t,$$
which makes
$$\frac{\partial \mathscr{H}^{(0)}}{\partial u}(t)=1+2 e^{-1}\left[3 e^{-1}+1\right] e^t .$$
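These zeroth-iteration formulas can be checked numerically (a sketch, not part of the text): integrate the state equation with $u^{(0)}=1$ and compare against the closed-form $x^{(0)}$, $p^{(0)}$, and $\partial \mathscr{H}^{(0)} / \partial u$:

```python
import numpy as np

# Sketch: verify x^(0)(t) = 3 e^{-t} + 1 and the resulting
# costate/gradient formulas for the first steepest-descent pass.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

# Forward-Euler integration of xdot = -x + 1, x(0) = 4.
x = np.empty_like(t); x[0] = 4.0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (-x[k] + 1.0)
assert np.max(np.abs(x - (3.0 * np.exp(-t) + 1.0))) < 1e-2

# Closed-form costate and gradient of the Hamiltonian.
p = 2.0 * np.exp(-1.0) * (3.0 * np.exp(-1.0) + 1.0) * np.exp(t)
dH_du = 1.0 + p
assert np.isclose(p[-1], 2.0 * x[-1], atol=1e-2)   # p(1) = 2 x(1)
assert np.all(dH_du > 0)   # gradient nonzero: not yet extremal
```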


## 数学代写|优化理论代写Optimization Theory代考|MINIMUM-TIME PROBLEMS

In this section we shall consider problems in which the objective is to transfer a system from an arbitrary initial state to a specified target set in minimum time. The target set (which may be moving) will be denoted by $S(t)$, and the minimum time required to reach the target set by $t^*$. Mathematically, then, our problem is to transfer a system
$$\dot{\mathbf{x}}(t)=\mathbf{a}(\mathbf{x}(t), \mathbf{u}(t), t)$$
from an arbitrary initial state $\mathbf{x}_0$ to the target set $S(t)$ and minimize
$$J(\mathbf{u})=\int_{t_0}^{t_f} d t=t_f-t_0 .$$
Typically, the control variables may be constrained by requirements such as
$$\left|u_i(t)\right| \leq 1, \quad i=1,2, \ldots, m, \quad t \in\left[t_0, t^*\right] .$$
Our approach will be to use the minimum principle to determine the optimal control law. $\dagger$
To introduce several important aspects of minimum-time problems, let us consider the following simplified intercept problem.

$\dagger$ Performing the differentiation $\partial \mathscr{H} / \partial x_2$ formally also results in the presence of two unit impulse functions, which occur at $x_2^*(t)= \pm 2$; however, these terms are such that either the impulse functions or their coefficients are zero for all $t \in\left[t_0, t_f\right]$, so the impulses do not affect the solution.
Example 5.4-1. Figure 5-15 shows an aircraft that is initially at the point $x=0, y=0$ pursuing a ballistic missile that is initially at the point $x=a>0, y=0$. The missile flies the trajectory
\begin{aligned} & x_M(t)=a+0.1 t^3 \\ & y_M(t)=0 \end{aligned}
for $t \geq 0$; thus, in this example the target set $S(t)$ is the position of the missile given by (5.4-4).
Neglecting gravitational and aerodynamic forces, let us model the aircraft as a point mass. Normalizing the mass to unity, we find that the motion of the aircraft in the $x$ direction is described by
$$\ddot{x}(t)=u(t)$$
or, in state form,
\begin{aligned} & \dot{x}_1(t)=x_2(t) \\ & \dot{x}_2(t)=u(t), \end{aligned}
where $x_1(t) \triangleq x(t)$ and $x_2(t) \triangleq \dot{x}(t)$. The thrust $u(t)$ is constrained by the relationship
$$|u(t)| \leq 1.0 .$$
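Whether an intercept is possible at all can already be checked numerically. Under the assumption (not derived in this excerpt) that the aircraft starts at rest and applies full thrust $u(t)=+1$ throughout, its position is $x_1(t)=t^2/2$, so the earliest intercept time solves $t^2/2=a+0.1t^3$. Since $0.5t^2-0.1t^3$ peaks at $t=10/3$ with value $50/27 \approx 1.85$, an intercept at full thrust exists only for $a \lesssim 1.85$. The sketch below locates the earliest intercept time by bisection:

```python
# Sketch: earliest intercept time under full thrust u(t) = +1 from rest
# (an assumption for illustration; the optimal thrust program is derived
# later in the text).  gap(t) is the aircraft position minus the missile
# position; it is strictly increasing on (0, 10/3).

def gap(t: float, a: float) -> float:
    """Aircraft position t^2/2 minus missile position a + 0.1 t^3."""
    return 0.5 * t**2 - (a + 0.1 * t**3)

def min_intercept_time(a: float, t_hi: float = 10.0 / 3.0, tol: float = 1e-10) -> float:
    """Earliest t with gap(t, a) = 0, found by bisection on [0, t_hi].

    t_hi = 10/3 is where 0.5 t^2 - 0.1 t^3 peaks (value 50/27 ~ 1.85),
    so an intercept at full thrust exists only for a <= ~1.85.
    """
    if gap(t_hi, a) < 0:
        raise ValueError("missile cannot be intercepted at full thrust")
    lo, hi = 0.0, t_hi          # gap(0) = -a <= 0, gap(t_hi) >= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gap(mid, a) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(min_intercept_time(1.0))   # earliest intercept time for a = 1 (≈ 1.76)
```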

## 数学代写|优化理论代写Optimization Theory代考|The Set of Reachable States

If a system can be transferred from some initial state to a target set by applying admissible control histories, then an optimal control exists and may be found by determining the admissible control that causes the system to reach the target set most quickly. A description of the target set is assumed to be known; thus, to investigate the existence of an optimal control it is useful to introduce the concept of reachable states.
If a system with initial state $\mathbf{x}\left(t_0\right)=\mathbf{x}_0$ is subjected to all admissible control histories for a time interval $\left[t_0, t\right]$, the collection of state values $\mathbf{x}(t)$ is called the set of states that are reachable (from $\mathbf{x}_0$ ) at time $t$, or simply the set of reachable states.
Although the set of reachable states depends on $\mathbf{x}_0$, $t_0$, and on $t$, we shall denote this set by $R(t)$. The following example illustrates the concept of reachable states.
Example 5.4-2. Find the set of reachable states for the system
$$\dot{x}(t)=u(t)$$
where the admissible controls satisfy
$$-1 \leq u(t) \leq 1 .$$
The solution of Eq. (5.4-9) is
$$x(t)=x_0+\int_{t_0}^t u(\tau) d \tau .$$
If only admissible control values are used, Eq. (5.4-11) implies that
$$x_0-\left[t-t_0\right] \leq x(t) \leq x_0+\left[t-t_0\right] .$$
Figure 5-16 shows the reachable sets for $t=t_1, t_2$, and $t_3$, where $t_1<t_2<t_3$.
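A quick numerical sanity check of this band (a sketch; the grid size, horizon, and sample count are arbitrary choices, not from the text): sample random admissible control histories, confirm every resulting trajectory stays inside $[x_0-(t-t_0),\, x_0+(t-t_0)]$, and confirm the extreme control $u=+1$ attains the upper edge.

```python
import numpy as np

# Reachable set for x' = u, |u| <= 1 (Example 5.4-2): every admissible
# control history keeps x(t) inside x0 - (t - t0) <= x(t) <= x0 + (t - t0).

rng = np.random.default_rng(0)
t0, x0 = 0.0, 2.0
N = 200                          # integration steps (Riemann sum of the integral)
t = 1.5                          # horizon
dt = (t - t0) / N
bound = dt * np.arange(1, N + 1)   # k*dt = elapsed time after k steps

for _ in range(1000):
    u = rng.uniform(-1.0, 1.0, size=N)   # an admissible control history
    x = x0 + dt * np.cumsum(u)           # x at the grid points t0 + k*dt
    assert np.all(x <= x0 + bound + 1e-12)
    assert np.all(x >= x0 - bound - 1e-12)

# The extreme control u = +1 attains the upper edge of the band:
x_plus = x0 + dt * np.cumsum(np.ones(N))
assert abs(x_plus[-1] - (x0 + (t - t0))) < 1e-9
print("all sampled trajectories lie in [x0-(t-t0), x0+(t-t0)]")
```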


## 数学代写|优化理论代写Optimization Theory代考|Constrained Minimization of Functionals

We are now ready to consider the presence of constraints in variational problems. To simplify the variational equations, it will be assumed that the admissible curves are smooth.

Point Constraints. Let us determine a set of necessary conditions for a function $\mathbf{w}^*$ to be an extremal for a functional of the form
$$J(\mathbf{w})=\int_{t_0}^{t_f} g(\mathbf{w}(t), \dot{\mathbf{w}}(t), t) d t ;$$
$\mathbf{w}$ is an $(n+m) \times 1$ vector of functions $(n, m \geq 1)$ that is required to satisfy $n$ relationships of the form
$$f_i(\mathbf{w}(t), t)=0, \quad i=1,2, \ldots, n,$$
which are called point constraints. Constraints of this type would be present if, for example, the admissible trajectories were required to lie on a specified surface in the $(n+m+1)$-dimensional $\mathbf{w}(t)-t$ space. The presence of these $n$ constraining relations means that only $m$ of the $n+m$ components of $\mathbf{w}$ are independent.

We have previously found that the Euler equations must be satisfied regardless of the boundary conditions, so we will ignore, temporarily, terms that enter only into the determination of boundary conditions.

One way to attack this problem might be to solve Eqs. (4.5-25) for $n$ of the components of $\mathbf{w}(t)$ in terms of the remaining $m$ components (which can then be regarded as $m$ independent functions) and use these equations to eliminate the $n$ dependent components of $\mathbf{w}(t)$ and $\dot{\mathbf{w}}(t)$ from $J$. If this can be done, then the equations of Sections 4.2 and 4.3 apply. Unfortunately, the constraining equations (4.5-25) are generally nonlinear algebraic equations, which may be quite difficult to solve.

As an alternative approach we can use Lagrange multipliers. The first step is to form the augmented functional by adjoining the constraining relations to $J$, which yields
\begin{aligned} J_a(\mathbf{w}, \mathbf{p})= & \int_{t_0}^{t_f}\left\{g(\mathbf{w}(t), \dot{\mathbf{w}}(t), t)+p_1(t)\left[f_1(\mathbf{w}(t), t)\right]\right. \\ & \left.+p_2(t)\left[f_2(\mathbf{w}(t), t)\right]+\cdots+p_n(t)\left[f_n(\mathbf{w}(t), t)\right]\right\} d t \\ = & \int_{t_0}^{t_f}\left\{g(\mathbf{w}(t), \dot{\mathbf{w}}(t), t)+\mathbf{p}^T(t)[\mathbf{f}(\mathbf{w}(t), t)]\right\} d t . \end{aligned}
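Writing $g_a \triangleq g+\mathbf{p}^T \mathbf{f}$ for the augmented integrand, the Euler equations of Sections 4.2 and 4.3 can then be applied to $J_a$ with $\mathbf{w}$ and $\mathbf{p}$ treated as unconstrained. The excerpt breaks off before stating the result; the standard outcome, sketched here for completeness, is
$$\frac{\partial g_a}{\partial \mathbf{w}}(\mathbf{w}(t), \dot{\mathbf{w}}(t), \mathbf{p}(t), t)-\frac{d}{d t}\left[\frac{\partial g_a}{\partial \dot{\mathbf{w}}}(\mathbf{w}(t), \dot{\mathbf{w}}(t), \mathbf{p}(t), t)\right]=\mathbf{0},$$
while the Euler equation for $\mathbf{p}$ simply returns the point constraints, $\partial g_a / \partial \mathbf{p}=\mathbf{f}(\mathbf{w}(t), t)=\mathbf{0}$, since $\dot{\mathbf{p}}$ does not appear in $g_a$.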

## 数学代写|优化理论代写Optimization Theory代考|NECESSARY CONDITIONS FOR OPTIMAL CONTROL

Let us now employ the techniques introduced in Chapter 4 to determine necessary conditions for optimal control. As stated in Chapter 1, the problem is to find an admissible control $\mathbf{u}^*$ that causes the system
$$\dot{\mathbf{x}}(t)=\mathbf{a}(\mathbf{x}(t), \mathbf{u}(t), t)$$
to follow an admissible trajectory $\mathbf{x}^*$ that minimizes the performance measure
$$J(\mathbf{u})=h\left(\mathbf{x}\left(t_f\right), t_f\right)+\int_{t_0}^{t_f} g(\mathbf{x}(t), \mathbf{u}(t), t) d t . \dagger$$
We shall initially assume that the admissible state and control regions are not bounded, and that the initial conditions $\mathbf{x}\left(t_0\right)=\mathbf{x}_0$ and the initial time $t_0$ are specified. As usual, $\mathbf{x}$ is the $n \times 1$ state vector and $\mathbf{u}$ is the $m \times 1$ vector of control inputs.

In the terminology of Chapter 4, we have a problem involving $n+m$ functions that must satisfy the $n$ differential equation constraints (5.1-1). The $m$ control inputs are the independent functions.

The only difference between Eq. (5.1-2) and the functionals considered in Chapter 4 is the term involving the final states and final time. However, assuming that $h$ is a differentiable function, we can write
$$h\left(\mathbf{x}\left(t_f\right), t_f\right)=\int_{t_0}^{t_f} \frac{d}{d t}[h(\mathbf{x}(t), t)] d t+h\left(\mathbf{x}\left(t_0\right), t_0\right)$$
so that the performance measure can be expressed as
$$J(\mathbf{u})=\int_{t_0}^{t_f}\left\{g(\mathbf{x}(t), \mathbf{u}(t), t)+\frac{d}{d t}[h(\mathbf{x}(t), t)]\right\} d t+h\left(\mathbf{x}\left(t_0\right), t_0\right)$$
Since $\mathbf{x}\left(t_0\right)$ and $t_0$ are fixed, the minimization does not affect the $h\left(\mathbf{x}\left(t_0\right), t_0\right)$ term, so we need consider only the functional
$$J(\mathbf{u})=\int_{t_0}^{t_f}\left\{g(\mathbf{x}(t), \mathbf{u}(t), t)+\frac{d}{d t}[h(\mathbf{x}(t), t)]\right\} d t$$
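From here the usual development (not shown in this excerpt) adjoins the constraints (5.1-1) with Lagrange multiplier functions $\mathbf{p}(t)$, the costates, and defines the Hamiltonian
$$\mathscr{H}(\mathbf{x}(t), \mathbf{u}(t), \mathbf{p}(t), t) \triangleq g(\mathbf{x}(t), \mathbf{u}(t), t)+\mathbf{p}^T(t)[\mathbf{a}(\mathbf{x}(t), \mathbf{u}(t), t)],$$
in terms of which the necessary conditions for an extremal read
$$\dot{\mathbf{x}}^*(t)=\frac{\partial \mathscr{H}}{\partial \mathbf{p}}, \qquad \dot{\mathbf{p}}^*(t)=-\frac{\partial \mathscr{H}}{\partial \mathbf{x}}, \qquad \mathbf{0}=\frac{\partial \mathscr{H}}{\partial \mathbf{u}},$$
each evaluated along $\left(\mathbf{x}^*(t), \mathbf{u}^*(t), \mathbf{p}^*(t)\right)$ for all $t \in\left[t_0, t_f\right]$.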


## 数学代写|优化理论代写Optimization Theory代考|Closeness of Functions

If two points are said to be close to one another, a geometric interpretation springs immediately to mind. But what do we mean when we say two functions are close to one another? To give a precise meaning to the term “close” we next introduce the concept of a norm.
DEFINITION 4-5
The norm in $n$-dimensional Euclidean space is a rule of correspondence that assigns to each point $\mathbf{q}$ a real number. The norm of $\mathbf{q}$, denoted by $\|\mathbf{q}\|$, satisfies the following properties:

1. $\|\mathbf{q}\| \geq 0$, and $\|\mathbf{q}\|=0$ if and only if $\mathbf{q}=\mathbf{0}$.
2. $\|\alpha \mathbf{q}\|=|\alpha| \cdot\|\mathbf{q}\|$ for all real numbers $\alpha$.
3. $\left\|\mathbf{q}^{(1)}+\mathbf{q}^{(2)}\right\| \leq\left\|\mathbf{q}^{(1)}\right\|+\left\|\mathbf{q}^{(2)}\right\|$.

When we say that two points $\mathbf{q}^{(1)}$ and $\mathbf{q}^{(2)}$ are close together, we mean that $\left\|\mathbf{q}^{(1)}-\mathbf{q}^{(2)}\right\|$ is small.
Example 4.1-5. What is a suitable norm for two-dimensional Euclidean space? It is easily verified that
$$\|\mathbf{q}\|_2 \triangleq \sqrt{q_1^2+q_2^2}, \quad \text { or } \quad\|\mathbf{q}\|_1 \triangleq\left|q_1\right|+\left|q_2\right|,$$
satisfies properties (4.1-14). Now suppose that a point $\mathbf{q}^{(1)}$ is specified and it is required that $\left\|\mathbf{q}^{(2)}-\mathbf{q}^{(1)}\right\|<\delta$. What are the acceptable locations for $\mathbf{q}^{(2)}$? If $\|\mathbf{q}\|_2$ is used as the norm, $\mathbf{q}^{(2)}$ must lie within the circle centered at $\mathbf{q}^{(1)}$ having radius $\delta$, as shown in Fig. 4-2(a). On the other hand, if $\|\mathbf{q}\|_1$ is used as the norm, the acceptable locations for $\mathbf{q}^{(2)}$ are as shown in Fig. 4-2(b).
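The norm properties are easy to spot-check numerically. The sketch below (sampling ranges and counts are arbitrary choices) verifies properties (4.1-14) for both candidate norms at random points of the plane:

```python
import math
import random

# Numerically spot-checking the three norm properties for the two
# candidate norms of Example 4.1-5 on R^2.

def norm2(q):
    """Euclidean norm ||q||_2."""
    return math.sqrt(q[0] ** 2 + q[1] ** 2)

def norm1(q):
    """Sum-of-absolute-values norm ||q||_1."""
    return abs(q[0]) + abs(q[1])

random.seed(0)
for norm in (norm2, norm1):
    assert norm((0.0, 0.0)) == 0.0            # property 1 (zero vector)
    for _ in range(1000):
        q = (random.uniform(-5, 5), random.uniform(-5, 5))
        r = (random.uniform(-5, 5), random.uniform(-5, 5))
        a = random.uniform(-3, 3)
        assert norm(q) >= 0.0                 # property 1 (nonnegativity)
        # property 2: ||a q|| = |a| ||q||
        assert abs(norm((a * q[0], a * q[1])) - abs(a) * norm(q)) < 1e-9
        # property 3: triangle inequality
        assert norm((q[0] + r[0], q[1] + r[1])) <= norm(q) + norm(r) + 1e-12
print("norm properties hold at all sampled points")
```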

Next, let us define the norm of a function.
DEFINITION 4-6
The norm of a function is a rule of correspondence that assigns to each function $\mathbf{x} \in \Omega$, defined for $t \in\left[t_0, t_f\right]$, a real number. The norm of $\mathbf{x}$, denoted by $\|\mathbf{x}\|$, satisfies the following properties:

1. $\|\mathbf{x}\| \geq 0$, and $\|\mathbf{x}\|=0$ if and only if $\mathbf{x}(t)=\mathbf{0}$ for all $t \in\left[t_0, t_f\right]$. $\quad(4.1-15 a)$
2. $\|\alpha \mathbf{x}\|=|\alpha| \cdot\|\mathbf{x}\|$ for all real numbers $\alpha$. $\quad(4.1-15 b)$
3. $\left\|\mathbf{x}^{(1)}+\mathbf{x}^{(2)}\right\| \leq\left\|\mathbf{x}^{(1)}\right\|+\left\|\mathbf{x}^{(2)}\right\|$. $\quad(4.1-15 c)$

## 数学代写|优化理论代写Optimization Theory代考|The Increment of a Functional

In order to consider extreme values of a function, we now define the concept of an increment.
DEFINITION 4-7
If $\mathbf{q}$ and $\mathbf{q}+\Delta \mathbf{q}$ are elements for which the function $f$ is defined, then the increment of $f$, denoted by $\Delta f$, is
$$\Delta f \triangleq f(\mathbf{q}+\Delta \mathbf{q})-f(\mathbf{q}) .$$
Notice that $\Delta f$ depends on both $\mathbf{q}$ and $\Delta \mathbf{q}$, in general, so to be more explicit we would write $\Delta f(\mathbf{q}, \Delta \mathbf{q})$.
Example 4.1-7. Consider the function
$$f(\mathbf{q})=q_1^2+2 q_1 q_2 \text { for all real } q_1, q_2 .$$
The increment of $f$ is
\begin{aligned} \Delta f= & f(\mathbf{q}+\Delta \mathbf{q})-f(\mathbf{q})=\left[q_1+\Delta q_1\right]^2 \\ & +2\left[q_1+\Delta q_1\right]\left[q_2+\Delta q_2\right]-\left[q_1^2+2 q_1 q_2\right] \\ = & 2 q_1 \Delta q_1+\left[\Delta q_1\right]^2+2 \Delta q_1 q_2+2 \Delta q_2 q_1+2 \Delta q_1 \Delta q_2 . \end{aligned}
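The algebra above can be spot-checked numerically; the sketch below compares $\Delta f$ computed from the definition with the expanded form at random points (sampling ranges are arbitrary):

```python
import random

# Verifying the expansion of the increment of f(q) = q1^2 + 2 q1 q2:
#   Delta f = 2 q1 dq1 + dq1^2 + 2 dq1 q2 + 2 dq2 q1 + 2 dq1 dq2

def f(q1, q2):
    return q1 ** 2 + 2 * q1 * q2

random.seed(1)
for _ in range(1000):
    q1, q2 = random.uniform(-4, 4), random.uniform(-4, 4)
    d1, d2 = random.uniform(-4, 4), random.uniform(-4, 4)
    increment = f(q1 + d1, q2 + d2) - f(q1, q2)      # the definition
    expansion = 2 * q1 * d1 + d1 ** 2 + 2 * d1 * q2 + 2 * d2 * q1 + 2 * d1 * d2
    assert abs(increment - expansion) < 1e-9
print("increment matches the expanded form at all sampled points")
```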
In an analogous manner, we next define the increment of a functional.
DEFINITION 4-8
If $\mathbf{x}$ and $\mathbf{x}+\delta \mathbf{x}$ are functions for which the functional $J$ is defined, then the increment of $J$, denoted by $\Delta J$, is
$$\Delta J \triangleq J(\mathbf{x}+\delta \mathbf{x})-J(\mathbf{x}) .$$
Again, to be more explicit, we would write $\Delta J(\mathbf{x}, \delta \mathbf{x})$ to emphasize that the increment depends on the functions $\mathbf{x}$ and $\delta \mathbf{x}$. $\delta \mathbf{x}$ is called the variation of the function $\mathbf{x}$.

