
# Machine Learning | ENGG3300 Principal Component Analysis


## Machine Learning | Principal Component Analysis

Principal Component Analysis (PCA) is one of the most commonly used dimensionality reduction methods. Before introducing the technical details, let us consider the following question: for the samples in an orthogonal feature space, how can we use a hyperplane (i.e., the generalization of a line in two-dimensional space to high-dimensional space) to represent the samples? Intuitively, if such a hyperplane exists, it should have the following properties:

Minimum reconstruction error: the samples should have short distances to this hyperplane;

Maximum variance: the projections of samples onto the hyperplane should stay away from each other.

Interestingly, the above two properties lead to two equivalent derivations of PCA. First, let us derive PCA by minimizing the reconstruction error.

Suppose the samples are zero-centered (i.e., $\sum_i \boldsymbol{x}_i=\mathbf{0}$). Let $\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_d\}$ denote the new coordinate system after projection, where $\mathbf{w}_i$ is an orthonormal basis vector, that is, $\|\mathbf{w}_i\|_2=1$ and $\mathbf{w}_i^{\mathrm{T}} \mathbf{w}_j=0$ $(i \neq j)$. If some of the coordinates are removed from the new coordinate system (i.e., the dimension is reduced to $d^{\prime}<d$), then the projection of the sample $\boldsymbol{x}_i$ in the lower dimensional coordinate system is $\boldsymbol{z}_i=\left(z_{i 1} ; z_{i 2} ; \ldots ; z_{i d^{\prime}}\right)$, where $z_{i j}=\mathbf{w}_j^{\mathrm{T}} \boldsymbol{x}_i$ is the coordinate of $\boldsymbol{x}_i$ in the $j$th dimension of the lower dimensional coordinate system. If we reconstruct $\boldsymbol{x}_i$ from $\boldsymbol{z}_i$, then we have $\hat{\boldsymbol{x}}_i=\sum_{j=1}^{d^{\prime}} z_{i j} \mathbf{w}_j$.
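The projection and reconstruction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a definitive implementation; the function name `pca` and the toy data are assumptions for demonstration only.

```python
import numpy as np

def pca(X, d_prime):
    """Project rows of X onto the top d_prime principal directions.

    A minimal sketch: zero-center the data, eigendecompose the scatter
    matrix, keep the d' eigenvectors with the largest eigenvalues.
    """
    X_centered = X - X.mean(axis=0)          # zero-center: sum_i x_i = 0
    scatter = X_centered.T @ X_centered      # sum_i x_i x_i^T
    eigvals, eigvecs = np.linalg.eigh(scatter)   # ascending eigenvalues
    W = eigvecs[:, ::-1][:, :d_prime]        # orthonormal basis w_1..w_{d'}
    Z = X_centered @ W                       # coordinates z_ij = w_j^T x_i
    X_hat = Z @ W.T                          # reconstruction sum_j z_ij w_j
    return Z, X_hat, W

# Toy data: 100 points in 3-D with most variance in the first two axes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.diag([3.0, 1.0, 0.1])
Z, X_hat, W = pca(X, d_prime=2)
print(np.allclose(W.T @ W, np.eye(2)))       # basis is orthonormal
```

Minimizing the total reconstruction error $\sum_i \|\hat{\boldsymbol{x}}_i - \boldsymbol{x}_i\|^2$ is exactly what keeping the top eigenvectors achieves.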

## Machine Learning | Kernelized PCA

Linear dimensionality reduction methods transform a high-dimensional space into a low-dimensional space via a linear mapping. In practice, however, non-linear mappings are often needed to find the proper low-dimensional embedding. Figure $10.4$ shows an example of embedding data points onto an S-shaped surface in a three-dimensional space, where the data points are sampled from a square region of a two-dimensional space. If we apply linear dimensionality reduction methods in the three-dimensional space, we will lose the original low-dimensional structure. We call the original low-dimensional space, from which the data points are sampled, the intrinsic low-dimensional space.

A general approach to non-linear dimensionality reduction is to kernelize linear dimensionality reduction methods via the kernel trick. Next, we give a demonstration with the representative Kernelized PCA (KPCA) (Schölkopf et al. 1998).

Suppose that we project data from the high-dimensional feature space to a hyperplane spanned by $\mathbf{W}=\left(\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_d\right)$. Then, according to (10.17), we have the following for $\mathbf{w}_j$: $$\left(\sum_{i=1}^m z_i z_i^{\mathrm{T}}\right) \mathbf{w}_j=\lambda_j \mathbf{w}_j,$$

where $z_i$ is the image of $x_i$ in the high-dimensional feature space. Then, we have
$$\begin{aligned} \mathbf{w}_j &=\frac{1}{\lambda_j}\left(\sum_{i=1}^m z_i z_i^{\mathrm{T}}\right) \mathbf{w}_j=\sum_{i=1}^m z_i \frac{z_i^{\mathrm{T}} \mathbf{w}_j}{\lambda_j} \\ &=\sum_{i=1}^m z_i \alpha_i^j, \end{aligned}$$
where $\alpha_i^j=\frac{1}{\lambda_j} z_i^{\mathrm{T}} \mathbf{w}_j$ is the $j$th component of $\boldsymbol{\alpha}_i$. Suppose that $z_i$ is obtained by mapping the original sample $\boldsymbol{x}_i$ via $\phi$, that is, $z_i=\phi\left(\boldsymbol{x}_i\right), i=1,2, \ldots, m$. If the explicit form of $\phi$ is known, then we can use it to map the samples to the high-dimensional feature space, and then apply PCA. Equation (10.19) can be rewritten as $$\left(\sum_{i=1}^m \phi\left(\boldsymbol{x}_i\right) \phi\left(\boldsymbol{x}_i\right)^{\mathrm{T}}\right) \mathbf{w}_j=\lambda_j \mathbf{w}_j,$$ and $(10.20)$ can be rewritten as $$\mathbf{w}_j=\sum_{i=1}^m \phi\left(\boldsymbol{x}_i\right) \alpha_i^j .$$
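Because $\mathbf{w}_j$ is expressed entirely through the coefficients $\alpha_i^j$, we never need $\phi$ explicitly: it suffices to eigendecompose the kernel matrix $K$ with $K_{ij}=\phi(\boldsymbol{x}_i)^{\mathrm{T}}\phi(\boldsymbol{x}_j)$. The sketch below illustrates this with an RBF kernel; the function name `kpca`, the `gamma` value, and the toy data are assumptions for illustration, not part of the text.

```python
import numpy as np

def kpca(X, d_prime, gamma=1.0):
    """Sketch of Kernelized PCA with an RBF kernel.

    Eigendecomposes the centered kernel matrix instead of the covariance,
    exploiting w_j = sum_i alpha_i^j phi(x_i).
    """
    m = X.shape[0]
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq_dists)               # K_ij = phi(x_i)^T phi(x_j)
    one = np.ones((m, m)) / m
    Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)       # ascending order
    idx = np.argsort(eigvals)[::-1][:d_prime]   # top d' eigenpairs
    lam, A = eigvals[idx], eigvecs[:, idx]      # columns of A are alpha^j
    A = A / np.sqrt(lam)                        # normalize so ||w_j|| = 1
    return Kc @ A                               # projections of the samples

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kpca(X, d_prime=2)
```

The centering step is needed because the $\phi(\boldsymbol{x}_i)$ are generally not zero-centered in the feature space, whereas the derivation assumed they are.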

## Machine Learning | Boosting

Boosting is a family of algorithms that convert weak learners into strong learners. A Boosting algorithm starts by training a base learner, and then adjusts the distribution of the training samples according to the base learner's results so that misclassified samples receive more attention from subsequent base learners. After the first base learner is trained, the second base learner is trained on the adjusted training samples, and its results are again used to adjust the training sample distribution. This process is repeated until the number of base learners reaches a predefined value $T$; finally, the $T$ base learners are weighted and combined:

$$H(\boldsymbol{x})=\sum_{t=1}^T \alpha_t h_t(\boldsymbol{x})$$
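The loop described above can be sketched as a minimal AdaBoost with decision stumps as base learners. This is an illustrative sketch only; the names `stump` and `adaboost`, the stopping rule, and the toy data are assumptions, and the weight update shown is the standard AdaBoost one rather than anything specific to this text.

```python
import numpy as np

def stump(X, y, w):
    """Best threshold stump under sample weights w: (feature, thr, sign, err)."""
    best = (0, 0.0, 1, np.inf)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for s in (1, -1):
                pred = np.where(X[:, f] <= thr, s, -s)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, thr, s, err)
    return best

def adaboost(X, y, T=10):
    m = X.shape[0]
    w = np.full(m, 1.0 / m)                    # initial uniform distribution
    learners, alphas = [], []
    for _ in range(T):
        f, thr, s, err = stump(X, y, w)
        err = max(err, 1e-10)
        if err > 0.5:                          # no better than random: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this base learner
        pred = np.where(X[:, f] <= thr, s, -s)
        w *= np.exp(-alpha * y * pred)         # up-weight misclassified samples
        w /= w.sum()                           # renormalize the distribution
        learners.append((f, thr, s))
        alphas.append(alpha)

    def H(Xq):                                 # H(x) = sign(sum_t alpha_t h_t(x))
        score = sum(a * np.where(Xq[:, f] <= thr, s, -s)
                    for a, (f, thr, s) in zip(alphas, learners))
        return np.sign(score)
    return H

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1, 1, -1, -1])
H = adaboost(X, y, T=5)
```

Note how the weight update $w_i \leftarrow w_i e^{-\alpha_t y_i h_t(\boldsymbol{x}_i)}$ realizes the "more attention to misclassified samples" idea, and the final `H` is exactly the weighted combination $H(\boldsymbol{x})=\sum_{t=1}^T \alpha_t h_t(\boldsymbol{x})$ passed through a sign function.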
