
# Linear Regression: ESTIMATING σ²


## ESTIMATING σ²

Since the variance $\sigma^2$ is essentially the average squared size of the $e_i$, we should expect that its estimator $\hat{\sigma}^2$ is obtained by averaging the squared residuals. Under the assumption that the errors are uncorrelated random variables with zero means and common variance $\sigma^2$, an unbiased estimate of $\sigma^2$ is obtained by dividing $R S S=\sum \hat{e}_i^2$ by its degrees of freedom (df), where the residual df equals the number of cases minus the number of parameters in the mean function. For simple regression, the residual df is $n-2$, so the estimate of $\sigma^2$ is given by
$$\hat{\sigma}^2=\frac{R S S}{n-2}$$
This quantity is called the residual mean square. In general, any sum of squares divided by its df is called a mean square. The residual sum of squares can be computed by squaring the residuals and adding them up. It can also be computed from the formula (Problem 2.9)
$$R S S=S Y Y-\frac{S X Y^2}{S X X}=S Y Y-\hat{\beta}_1^2 S X X$$
Using the summaries for Forbes’ data given at (2.6), we find
\begin{aligned} R S S & =427.79402-\frac{475.31224^2}{530.78235} \\ & =2.15493 \\ \hat{\sigma}^2 & =\frac{2.15493}{17-2}=0.14366 \end{aligned}
The square root of $\hat{\sigma}^2$, $\hat{\sigma}=\sqrt{0.14366}=0.37903$, is often called the standard error of regression. It is in the same units as the response variable.
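The arithmetic above can be checked directly from the three summary statistics. A minimal sketch in Python, using only the summaries $SYY$, $SXY$, $SXX$, and $n=17$ quoted from Forbes' data at (2.6):

```python
import math

# Summary statistics for Forbes' data, as given at (2.6)
SYY = 427.79402
SXY = 475.31224
SXX = 530.78235
n = 17

# RSS via the shortcut formula RSS = SYY - SXY^2 / SXX
RSS = SYY - SXY**2 / SXX

# Residual mean square: RSS divided by its degrees of freedom, n - 2
sigma2_hat = RSS / (n - 2)

# Standard error of regression, in the units of the response
sigma_hat = math.sqrt(sigma2_hat)

print(RSS)         # agrees with the text's 2.15493 to about four decimals
print(sigma2_hat)  # ≈ 0.14366
print(sigma_hat)   # ≈ 0.37903
```

The tiny discrepancy in the last decimal of $RSS$ comes from the summaries themselves being rounded to five decimals.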

## PROPERTIES OF LEAST SQUARES ESTIMATES

The OLS estimates depend on the data only through the statistics given in Table 2.1. This is both an advantage, making computing easy, and a disadvantage, since any two data sets for which these statistics are identical give the same fitted regression, even if a straight-line model is appropriate for one but not the other, as we have seen in Anscombe’s examples in Section 1.4. The estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ can both be written as linear combinations of $y_1, \ldots, y_n$; for example, writing $c_i=\left(x_i-\bar{x}\right) / S X X$ (see Appendix A.3),
$$\hat{\beta}_1=\sum\left(\frac{x_i-\bar{x}}{S X X}\right) y_i=\sum c_i y_i$$
The fitted value at $x=\bar{x}$ is
$$\widehat{\mathrm{E}}(Y \mid X=\bar{x})=\bar{y}-\hat{\beta}_1 \bar{x}+\hat{\beta}_1 \bar{x}=\bar{y}$$
so the fitted line must pass through the point $(\bar{x}, \bar{y})$, intuitively the center of the data. Finally, as long as the mean function includes an intercept, $\sum \hat{e}_i=0$. Mean functions without an intercept will usually have $\sum \hat{e}_i \neq 0$.
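These three properties are easy to verify numerically. A short sketch with a small made-up data set (the $x$ and $y$ values below are illustrative, not from the text):

```python
import numpy as np

# A small illustrative data set (not from the text)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

xbar, ybar = x.mean(), y.mean()
SXX = np.sum((x - xbar) ** 2)

# beta1_hat as a linear combination sum(c_i * y_i) with c_i = (x_i - xbar)/SXX
c = (x - xbar) / SXX
beta1 = np.sum(c * y)
beta0 = ybar - beta1 * xbar

# The fitted line passes through (xbar, ybar)
assert np.isclose(beta0 + beta1 * xbar, ybar)

# With an intercept in the mean function, the residuals sum to zero
resid = y - (beta0 + beta1 * x)
assert np.isclose(resid.sum(), 0.0)
```

Dropping the intercept (fitting $y = \beta_1 x$ alone) would generally leave `resid.sum()` nonzero, as the text notes.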

Since the estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ depend on the random $e_i$, the estimates are also random variables. If all the $e_i$ have zero mean and the mean function is correct, then, as shown in Appendix A.4, the least squares estimates are unbiased:
\begin{aligned} & E\left(\hat{\beta}_0\right)=\beta_0 \\ & E\left(\hat{\beta}_1\right)=\beta_1 \end{aligned}
The variances of the estimators, assuming $\operatorname{Var}\left(e_i\right)=\sigma^2, i=1, \ldots, n$, and $\operatorname{Cov}\left(e_i, e_j\right)=0, i \neq j$, are, from Appendix A.4,
\begin{aligned} & \operatorname{Var}\left(\hat{\beta}_1\right)=\sigma^2 \frac{1}{S X X} \\ & \operatorname{Var}\left(\hat{\beta}_0\right)=\sigma^2\left(\frac{1}{n}+\frac{\bar{x}^2}{S X X}\right) \end{aligned}
The two estimates are correlated, with covariance
$$\operatorname{Cov}\left(\hat{\beta}_0, \hat{\beta}_1\right)=-\sigma^2 \frac{\bar{x}}{S X X}$$
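A quick Monte Carlo check of the unbiasedness, variance, and covariance formulas. This is a sketch only: the true values $\beta_0 = 1$, $\beta_1 = 2$, $\sigma = 0.5$ and the $x$ values are arbitrary choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative true values (not from the text)
beta0_true, beta1_true, sigma = 1.0, 2.0, 0.5
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
n, xbar = len(x), x.mean()
SXX = np.sum((x - xbar) ** 2)

# Closed-form variances and covariance from the formulas above
var_b1 = sigma**2 / SXX
var_b0 = sigma**2 * (1.0 / n + xbar**2 / SXX)
cov_b0b1 = -sigma**2 * xbar / SXX

# Simulate many data sets from the model and re-fit each one
reps = 20000
Y = beta0_true + beta1_true * x + rng.normal(0.0, sigma, size=(reps, n))
b1s = Y @ (x - xbar) / SXX            # slope estimate for each replicate
b0s = Y.mean(axis=1) - b1s * xbar     # intercept estimate for each replicate

# Empirical means approximate beta0 and beta1 (unbiasedness);
# empirical variances/covariance approximate the closed-form expressions
print(b1s.mean(), b1s.var(), np.cov(b0s, b1s)[0, 1])
```

Note the covariance is negative whenever $\bar{x} > 0$: overestimating the slope pulls the fitted intercept down, since the line must still pass through $(\bar{x}, \bar{y})$.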

