
# Data Analysis | Matrix Factorization Problems


## Matrix Factorization Problems

There are a variety of data analysis problems that require estimating a low-rank matrix from some sparse collection of data. Such problems can be formulated as a natural extension of least squares to problems in which the data $a_j$ are naturally represented as matrices rather than vectors.

Changing notation slightly, we suppose that each $A_j$ is an $n \times p$ matrix, and we seek another $n \times p$ matrix $X$ that solves
$$\min_X \frac{1}{2 m} \sum_{j=1}^m\left(\left\langle A_j, X\right\rangle - y_j\right)^2, \tag{1.6}$$
where $\langle A, B\rangle:=\operatorname{trace}\left(A^T B\right)$. Here we can think of the $A_j$ as “probing” the unknown matrix $X$. Commonly considered types of observations are random linear combinations (where the elements of $A_j$ are selected i.i.d. from some distribution) or single-element observations (in which each $A_j$ has a 1 in a single location and zeros elsewhere). A regularized version of (1.6), leading to solutions $X$ that are low rank, is
$$\min_X \frac{1}{2 m} \sum_{j=1}^m\left(\left\langle A_j, X\right\rangle - y_j\right)^2+\lambda\|X\|_*, \tag{1.7}$$
where $\|X\|_*$ is the nuclear norm, which is the sum of singular values of $X$ (Recht et al., 2010). The nuclear norm plays a role analogous to the $\ell_1$ norm in (1.5): whereas the $\ell_1$ norm favors sparse vectors, the nuclear norm favors low-rank matrices. Although the nuclear norm is a somewhat complex nonsmooth function, it is at least convex, so that the formulation (1.7) is also convex. This formulation can be shown to yield a statistically valid solution when the true $X$ is low rank and the observation matrices $A_j$ satisfy a “restricted isometry property,” commonly satisfied by random matrices but not by matrices with just one nonzero element. The formulation is also valid in a different context, in which the true $X$ is incoherent (roughly speaking, it does not have a few elements that are much larger than the others), and the observations $A_j$ are of single elements (Candès and Recht, 2009).
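The objective (1.7) is straightforward to evaluate numerically. The following sketch (function and variable names are illustrative, not from the text) computes the data-fit term using $\langle A_j, X\rangle = \operatorname{trace}(A_j^T X)$ and the nuclear norm via NumPy, using single-element probes of a rank-one matrix as the observation model:

```python
import numpy as np

def nuclear_norm_objective(A_list, y, X, lam):
    """Evaluate (1.7): (1/2m) * sum_j (<A_j, X> - y_j)^2 + lam * ||X||_*."""
    m = len(A_list)
    residuals = [np.trace(A.T @ X) - yj for A, yj in zip(A_list, y)]
    fit = sum(r ** 2 for r in residuals) / (2 * m)
    nuc = np.linalg.norm(X, ord="nuc")  # nuclear norm: sum of singular values
    return fit + lam * nuc

rng = np.random.default_rng(0)
n, p, m = 4, 3, 5
# A rank-one "true" matrix, observed entry by entry.
X_true = np.outer(rng.standard_normal(n), rng.standard_normal(p))
A_list, y = [], []
for _ in range(m):
    A = np.zeros((n, p))          # single-element probe:
    A[rng.integers(n), rng.integers(p)] = 1.0  # a 1 in one location, zeros elsewhere
    A_list.append(A)
    y.append(np.trace(A.T @ X_true))  # <A_j, X_true> observes one entry of X_true

print(nuclear_norm_objective(A_list, y, X_true, lam=0.1))
```

At $X = X_{\text{true}}$ the residuals vanish, so the objective reduces to $\lambda \|X\|_*$ alone.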

## Support Vector Machines

Classification via support vector machines (SVM) is a classical optimization problem in machine learning, tracing its origins to the 1960s. Given the input data $\left(a_j, y_j\right)$ with $a_j \in \mathbb{R}^n$ and $y_j \in\{-1,+1\}$, SVM seeks a vector $x \in \mathbb{R}^n$ and a scalar $\beta \in \mathbb{R}$ such that
$$\begin{array}{ll} a_j^T x - \beta \geq +1 & \text { when } y_j=+1, \\ a_j^T x - \beta \leq -1 & \text { when } y_j=-1 . \end{array} \tag{1.9}$$
Any pair $(x, \beta)$ that satisfies these conditions defines a separating hyperplane in $\mathbb{R}^n$ that separates the “positive” cases $\left\{a_j \mid y_j=+1\right\}$ from the “negative” cases $\left\{a_j \mid y_j=-1\right\}$. Among all separating hyperplanes, the one that minimizes $\|x\|^2$ is the one that maximizes the margin between the two classes; that is, the hyperplane whose distance to the nearest point $a_j$ of either class is greatest.
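The two conditions above can be folded into the single inequality $y_j(a_j^T x - \beta) \geq 1$ for all $j$, which is easy to check in code. A minimal sketch (the data and function name are illustrative, not from the text):

```python
import numpy as np

def separates(x, beta, a, y):
    """Check the separating-hyperplane conditions:
    a_j^T x - beta >= +1 when y_j = +1, and <= -1 when y_j = -1,
    i.e. y_j * (a_j^T x - beta) >= 1 for every sample j."""
    margins = y * (a @ x - beta)
    return bool(np.all(margins >= 1.0))

# Toy 1-D data, separable at the origin (illustrative values).
a = np.array([[-2.0], [-1.5], [1.5], [2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

print(separates(np.array([1.0]), 0.0, a, y))  # margins 2, 1.5, 1.5, 2 -> True
print(separates(np.array([0.5]), 0.0, a, y))  # smallest margin 0.75 -> False
```

Scaling $x$ down shrinks every margin, which is why minimizing $\|x\|^2$ subject to these constraints picks out the maximum-margin hyperplane.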

We can formulate the problem of finding a separating hyperplane as an optimization problem by defining an objective with the summation form (1.2):
$$H(x, \beta)=\frac{1}{m} \sum_{j=1}^m \max \left(1-y_j\left(a_j^T x-\beta\right), 0\right) . \tag{1.10}$$
Note that the $j$th term in this summation is zero if the conditions (1.9) are satisfied, and it is positive otherwise. Even if no pair $(x, \beta)$ exists for which $H(x, \beta)=0$, a value of $(x, \beta)$ that minimizes (1.10) will be the one that comes as close as possible to satisfying (1.9) in some sense. A term $\frac{1}{2}\lambda\|x\|_2^2$ (for some parameter $\lambda>0$) is often added to (1.10), yielding the following regularized version:
$$H(x, \beta)=\frac{1}{m} \sum_{j=1}^m \max \left(1 - y_j\left(a_j^T x - \beta\right), 0\right)+\frac{1}{2} \lambda\|x\|_2^2 .$$
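The regularized hinge-loss objective can be evaluated in a few lines. A sketch, reusing illustrative toy data (names not from the text):

```python
import numpy as np

def svm_objective(x, beta, a, y, lam):
    """Regularized SVM objective:
    (1/m) * sum_j max(1 - y_j (a_j^T x - beta), 0) + (lam/2) * ||x||_2^2."""
    hinge = np.maximum(1.0 - y * (a @ x - beta), 0.0)  # per-sample hinge loss
    return hinge.mean() + 0.5 * lam * np.dot(x, x)

a = np.array([[-2.0], [-1.5], [1.5], [2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
x, beta = np.array([1.0]), 0.0

# Every margin y_j (a_j^T x - beta) is >= 1 here, so the hinge term
# vanishes and only the regularizer (lam/2) * ||x||^2 = 0.05 remains.
print(svm_objective(x, beta, a, y, lam=0.1))  # 0.05
```

The regularizer trades margin size against classification error when the data are not perfectly separable, which is exactly the role the text describes for the added term.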
