
# Linear Regression: Random Vectors


## Random Vectors

The concepts of a random vector, the expected value of a random vector, and the covariance of a random vector are needed before covering generalized least squares. Recall that for random variables $Y_i$ and $Y_j$, the covariance of $Y_i$ and $Y_j$ is $\operatorname{Cov}\left(Y_i, Y_j\right) \equiv \sigma_{i, j}=E\left[\left(Y_i-E\left(Y_i\right)\right)\left(Y_j-E\left(Y_j\right)\right)\right]=E\left(Y_i Y_j\right)-E\left(Y_i\right) E\left(Y_j\right)$ provided the second moments of $Y_i$ and $Y_j$ exist.

Definition 4.1. $\boldsymbol{Y}=\left(Y_1, \ldots, Y_n\right)^T$ is an $n \times 1$ random vector if $Y_i$ is a random variable for $i=1, \ldots, n$. $\boldsymbol{Y}$ is a discrete random vector if each $Y_i$ is discrete, and $\boldsymbol{Y}$ is a continuous random vector if each $Y_i$ is continuous. A random variable $Y_1$ is the special case of a random vector with $n=1$.
Definition 4.2. The population mean of a random $n \times 1$ vector $\boldsymbol{Y}=$ $\left(Y_1, \ldots, Y_n\right)^T$ is
$$E(\boldsymbol{Y})=\left(E\left(Y_1\right), \ldots, E\left(Y_n\right)\right)^T$$
provided that $E\left(Y_i\right)$ exists for $i=1, \ldots, n$. Otherwise the expected value does not exist. The $n \times n$ population covariance matrix
$$\operatorname{Cov}(\boldsymbol{Y})=E\left[(\boldsymbol{Y}-E(\boldsymbol{Y}))(\boldsymbol{Y}-E(\boldsymbol{Y}))^T\right]=\left(\sigma_{i, j}\right)$$
where the $i j$ entry of $\operatorname{Cov}(\boldsymbol{Y})$ is $\operatorname{Cov}\left(Y_i, Y_j\right)=\sigma_{i, j}$ provided that each $\sigma_{i, j}$ exists. Otherwise $\operatorname{Cov}(\boldsymbol{Y})$ does not exist.
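These definitions have direct sample analogues: the mean of many realizations of $\boldsymbol{Y}$ estimates $E(\boldsymbol{Y})$, and the sample covariance matrix estimates $\left(\sigma_{i,j}\right)$. A minimal NumPy sketch, using a hypothetical trivariate normal $\boldsymbol{Y}$ chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3x1 random vector Y: draw many realizations and
# estimate E(Y) and Cov(Y) by their sample analogues.
n_draws = 100_000
Y = rng.multivariate_normal(mean=[0.0, 1.0, -1.0],
                            cov=[[1.0, 0.4, 0.0],
                                 [0.4, 2.0, 0.3],
                                 [0.0, 0.3, 1.0]],
                            size=n_draws)

EY = Y.mean(axis=0)             # estimates (E(Y_1), ..., E(Y_n))^T
covY = np.cov(Y, rowvar=False)  # estimates the matrix (sigma_{i,j})

# A covariance matrix is symmetric positive semidefinite,
# so its eigenvalues are nonnegative.
print(np.allclose(covY, covY.T))
print(np.all(np.linalg.eigvalsh(covY) >= -1e-12))
```

The two printed checks confirm the symmetry and positive semidefiniteness noted below.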

The covariance matrix is also called the variance-covariance matrix or the variance matrix, and the notation $\operatorname{Var}(\boldsymbol{Y})$ is sometimes used. Note that $\operatorname{Cov}(\boldsymbol{Y})$ is a symmetric positive semidefinite matrix. If $\boldsymbol{Z}$ and $\boldsymbol{Y}$ are $n \times 1$ random vectors, $\boldsymbol{a}$ is a conformable constant vector, and $\boldsymbol{A}$ and $\boldsymbol{B}$ are conformable constant matrices, then
$$E(\boldsymbol{a}+\boldsymbol{Y})=\boldsymbol{a}+E(\boldsymbol{Y}) \text { and } E(\boldsymbol{Y}+\boldsymbol{Z})=E(\boldsymbol{Y})+E(\boldsymbol{Z})$$

and
$$E(\boldsymbol{A} \boldsymbol{Y})=\boldsymbol{A} E(\boldsymbol{Y}) \text { and } E(\boldsymbol{A} \boldsymbol{Y} \boldsymbol{B})=\boldsymbol{A} E(\boldsymbol{Y}) \boldsymbol{B}$$
Also
$$\operatorname{Cov}(\boldsymbol{a}+\boldsymbol{A} \boldsymbol{Y})=\operatorname{Cov}(\boldsymbol{A} \boldsymbol{Y})=\boldsymbol{A} \operatorname{Cov}(\boldsymbol{Y}) \boldsymbol{A}^T .$$
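These identities can be verified by Monte Carlo: the empirical mean and covariance of $\boldsymbol{Z} = \boldsymbol{a} + \boldsymbol{A}\boldsymbol{Y}$ should match $\boldsymbol{a} + \boldsymbol{A}E(\boldsymbol{Y})$ and $\boldsymbol{A}\operatorname{Cov}(\boldsymbol{Y})\boldsymbol{A}^T$. A sketch with hypothetical $\boldsymbol{a}$, $\boldsymbol{A}$, and Gaussian $\boldsymbol{Y}$, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population mean and covariance of a 3x1 random vector Y.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])   # 2x3 constant matrix
a = np.array([3.0, -1.0])          # 2x1 constant vector

# Draw many samples of Y and form Z = a + A Y (each row is one draw).
Y = rng.multivariate_normal(mu, Sigma, size=200_000)
Z = a + Y @ A.T

# Empirical moments should match
#   E(a + A Y) = a + A E(Y)  and  Cov(a + A Y) = A Cov(Y) A^T.
print(np.allclose(Z.mean(axis=0), a + A @ mu, atol=0.1))
print(np.allclose(np.cov(Z, rowvar=False), A @ Sigma @ A.T, atol=0.1))
```

Both checks print `True` up to Monte Carlo error; the constant shift $\boldsymbol{a}$ drops out of the covariance, as the identity states.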

## GLS, WLS, and FGLS

Definition 4.3. Suppose that the response variable and at least one of the predictor variables are quantitative. Then the generalized least squares (GLS) model is
$$\boldsymbol{Y}=\boldsymbol{X} \boldsymbol{\beta}+\boldsymbol{e},$$
where $\boldsymbol{Y}$ is an $n \times 1$ vector of dependent variables, $\boldsymbol{X}$ is an $n \times p$ matrix of predictors, $\boldsymbol{\beta}$ is a $p \times 1$ vector of unknown coefficients, and $\boldsymbol{e}$ is an $n \times 1$ vector of unknown errors. Also $E(\boldsymbol{e})=\mathbf{0}$ and $\operatorname{Cov}(\boldsymbol{e})=\sigma^2 \boldsymbol{V}$ where $\boldsymbol{V}$ is a known $n \times n$ positive definite matrix.
Definition 4.4. The GLS estimator
$$\hat{\boldsymbol{\beta}}_{GLS}=\left(\boldsymbol{X}^T \boldsymbol{V}^{-1} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^T \boldsymbol{V}^{-1} \boldsymbol{Y} .$$
The fitted values are $\hat{\boldsymbol{Y}}_{GLS}=\boldsymbol{X} \hat{\boldsymbol{\beta}}_{GLS}$.

Definition 4.5. Suppose that the response variable and at least one of the predictor variables are quantitative. Then the weighted least squares (WLS) model with weights $w_1, \ldots, w_n$ is the special case of the GLS model where $\boldsymbol{V}$ is diagonal: $\boldsymbol{V}=\operatorname{diag}\left(v_1, \ldots, v_n\right)$ and $w_i=1 / v_i$. Hence
$$\boldsymbol{Y}=\boldsymbol{X} \boldsymbol{\beta}+\boldsymbol{e}, \quad E(\boldsymbol{e})=\mathbf{0}, \quad \text{and} \quad \operatorname{Cov}(\boldsymbol{e})=\sigma^2 \operatorname{diag}\left(v_1, \ldots, v_n\right)=\sigma^2 \operatorname{diag}\left(1 / w_1, \ldots, 1 / w_n\right) .$$
Definition 4.6. The WLS estimator
$$\hat{\boldsymbol{\beta}}_{W L S}=\left(\boldsymbol{X}^T \boldsymbol{V}^{-1} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^T \boldsymbol{V}^{-1} \boldsymbol{Y}$$
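The WLS estimator can be computed either directly from the GLS formula or, equivalently, by ordinary least squares after multiplying each row of $\boldsymbol{X}$ and $\boldsymbol{Y}$ by $\sqrt{w_i}$. A sketch with simulated heteroskedastic data (the design, coefficients, and variance pattern below are hypothetical), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([2.0, -1.0, 0.5])

# WLS special case: V = diag(v_1, ..., v_n) known, errors heteroskedastic.
v = rng.uniform(0.5, 4.0, size=n)      # known variance pattern
e = rng.normal(scale=np.sqrt(v))       # Cov(e) = sigma^2 diag(v) with sigma = 1
Y = X @ beta + e

# GLS/WLS estimator: (X^T V^{-1} X)^{-1} X^T V^{-1} Y.
Vinv = np.diag(1.0 / v)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ Y)

# Equivalent form: OLS on rows rescaled by sqrt(w_i), where w_i = 1/v_i.
w = 1.0 / v
sw = np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], Y * sw, rcond=None)

print(np.allclose(beta_gls, beta_wls))  # the two forms agree
```

The square-root-weight form is how most software fits WLS in practice, since it avoids building the $n \times n$ matrix $\boldsymbol{V}^{-1}$ explicitly.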

