
# Regression Analysis | Multicollinearity


## Multicollinearity

Multicollinearity $(\mathrm{MC})$ refers to the $X$ variables being “collinear” to varying degrees. In the case of two $X$ variables, $X_1$ and $X_2$, collinearity means that the two variables are close to linearly related. A “perfect” multicollinearity means that they are perfectly linearly related. See Figure 8.4.
Often, “multicollinearity” with just two $X$ variables is called simply “collinearity.” Figure 8.4, right panel, illustrates the meaning of the term “collinear.”
With more $X$ variables, it is not so easy to visualize multicollinearity. But if one of the $X$ variables, say $X_j$, is closely related to all the other $X$ variables via
$$X_j \cong a_0 X_0+a_1 X_1+\ldots+a_{j-1} X_{j-1}+a_{j+1} X_{j+1}+\ldots+a_k X_k$$
then there is multicollinearity. And if the "$\cong$" is, in fact, an "$=$" in the equation above, then there is a perfect multicollinearity. (Note that the variable $X_0$ is the intercept column containing all 1's.)
A perfect multicollinearity causes the $\mathbf{X}^{\mathrm{T}} \mathbf{X}$ matrix to be non-invertible, implying that there are no unique least-squares estimates. Equations 0 through $k$ shown in Section 7.1 can still be solved for estimates of the $\beta$ ‘s, but some equation or equations will be redundant with others, implying that there are infinitely many solutions for $\hat{\beta}_0, \hat{\beta}_1, \ldots$, and $\hat{\beta}_k$. Thus the effects of the individual $X_j$ variables on $Y$ are not identifiable when there is a perfect multicollinearity.
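A minimal sketch (illustrative data, not from the text) makes this concrete: when one column of the design matrix is an exact linear function of another, $\mathbf{X}^{\mathrm{T}} \mathbf{X}$ is singular, and `lm()` responds by reporting `NA` for the redundant coefficient.

```r
# Perfect multicollinearity: X2 = 3*X1 exactly, so X^T X is singular.
set.seed(1)
x1 <- rnorm(20)
x2 <- 3 * x1                 # x2 is an exact linear function of x1
y  <- 1 + 2 * x1 + rnorm(20)
X  <- cbind(1, x1, x2)       # design matrix with intercept column
det(t(X) %*% X)              # essentially 0: X^T X is not invertible
fit <- lm(y ~ x1 + x2)
coef(fit)                    # lm() reports NA for the aliased x2
```

Because the normal equations have infinitely many solutions, R's `lm()` picks one by dropping the aliased column rather than returning an arbitrary member of the solution set.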

To understand the notion that there can be an infinity of solutions for the estimated $\beta$ ‘s, consider the case where there is only one $X$ variable. A perfect multicollinearity, in this case, means that $X_1=a_0 X_0$, so that the $X_1$ column is all the same number, $a_0$. Figure 8.5 shows how data might look in this case, where $x_i=10$ for every $i=1, \ldots, n$, and also shows several possible least-squares fits, all of which have the same minimum sum of squared errors.
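A short hedged sketch (with made-up data) shows the non-uniqueness numerically: when $x_i = 10$ for every $i$, any $(\hat{\beta}_0, \hat{\beta}_1)$ pair satisfying $\hat{\beta}_0 = \bar{y} - 10\hat{\beta}_1$ attains the same minimum sum of squared errors.

```r
# With a constant predictor, the slope is not identifiable:
# every line through (10, mean(y)) has the same SSE.
set.seed(1)
x <- rep(10, 25)              # x_i = 10 for every observation
y <- 5 + rnorm(25)
sse <- function(b0, b1) sum((y - b0 - b1 * x)^2)
# Two different (b0, b1) pairs on the line b0 = mean(y) - 10*b1:
sse(mean(y) - 10 * 0, 0)      # slope 0
sse(mean(y) - 10 * 2, 2)      # slope 2 -- identical SSE
```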

## The Quadratic Model in One $X$ Variable

The simplest polynomial model is the quadratic model,
$$f(x)=\beta_0+\beta_1 x+\beta_2 x^2$$
These models are quite flexible; see Figure 9.1 for various examples.
R code for Figure 9.1 (the middle legend entries were truncated in the source and are reconstructed here from the coefficients of `EY2`–`EY4`):

```r
x <- seq(0.2, 10, 0.1)
EY1 <- -3 + 0.3 * x + 0.1 * x^2; EY2 <- 2 - 0.9 * x + 0.3 * x^2
EY3 <- -1 + 3.0 * x - 0.4 * x^2; EY4 <- 1 + 1.2 * x - 0.1 * x^2
plot(x, EY1, type = "l", lty = 1, ylab = "E(Y|X=x)")
points(x, EY2, type = "l", lty = 2); points(x, EY3, type = "l", lty = 3)
points(x, EY4, type = "l", lty = 4)
legend(0, 10,
       c("b0   b1   b2",
         "-3.0  0.3  0.1",
         " 2.0 -0.9  0.3",
         "-1.0  3.0 -0.4",
         " 1.0  1.2 -0.1"),
       lty = c(0, 1, 2, 3, 4))
```

As is the case for all models in this chapter, the "$\beta$" coefficients cannot be interpreted in the way discussed in Chapter 8, where you increase the value of one $X$ variable while keeping the others fixed, because there are functional relationships among the various terms in the model. Specifically, in the example of a quadratic polynomial function, you cannot increase $x^2$ while keeping $x$ fixed. But you can still interpret the parameters by understanding the graphs in Figure 9.1. In particular, $\beta_2$ measures curvature: when $\beta_2<0$, there is concave curvature; when $\beta_2>0$, there is convex curvature; and when $\beta_2=0$, there is no curvature. Further, the larger $\left|\beta_2\right|$ is, the more extreme the curvature.
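One way to see that $\beta_2$ alone governs curvature is that $f''(x) = 2\beta_2$ for every $x$. The sketch below (function names are illustrative, not from the text) checks this with a central second difference, which is exact for quadratics:

```r
# Curvature of f(x) = b0 + b1*x + b2*x^2 is constant: f''(x) = 2*b2.
fq <- function(x, b0, b1, b2) b0 + b1 * x + b2 * x^2
# Central second difference approximates (here, equals) f''(x):
second_diff <- function(x, h, ...) {
  (fq(x + h, ...) - 2 * fq(x, ...) + fq(x - h, ...)) / h^2
}
second_diff(3, 0.01, b0 = -3, b1 = 0.3, b2 = 0.1)  # 2 * 0.1 = 0.2
```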

The intercept term $\beta_0$ has the same meaning as before: it is the value of $f(x)$ when $x=0$. This interpretation is correct but, as always, it is not a useful interpretation when the range of the $x$ data does not cover 0. Still, the coefficient is needed in the model as a "fitting constant," which adjusts the function up or down as needed to match the observable data.

To interpret $\beta_1$, note that it is possible to increase $x$ by 1 while $x^2$ stays fixed, but the only way that can happen is when you move from $x=-0.5$ to $x=+0.5$ (since $(-0.5)^2 = (+0.5)^2$). Consider the solid graph shown in Figure 9.1: here, $f(x)=-3+0.3 x+0.1 x^2$, so that $f(-0.5)=-3+0.3(-0.5)+0.1(-0.5)^2=-3.125$, and $f(+0.5)=-2.825$; these values differ by exactly 0.3, the coefficient $\beta_1$ that multiplies $x$. While this math gives a correct way to interpret $\beta_1$ in the quadratic model, it is not useful if the range of the $X$ data does not cover zero.
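The arithmetic above can be verified in a couple of lines, using the solid curve $f(x)=-3+0.3x+0.1x^2$ from Figure 9.1:

```r
# Check: moving from x = -0.5 to x = +0.5 keeps x^2 fixed at 0.25,
# so f changes by exactly beta_1 = 0.3.
f <- function(x) -3 + 0.3 * x + 0.1 * x^2
f(-0.5)           # -3.125
f(0.5)            # -2.825
f(0.5) - f(-0.5)  # 0.3, the coefficient beta_1
```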
