
# STAT458 Generalized Linear Model | Iterative Weighted Least Squares


## Iterative Weighted Least Squares

In this and the next section, we discuss estimation in non-Gaussian longitudinal models. These models have been used in the analysis of longitudinal data (e.g., Diggle et al. 2002), where a traditional method of estimating the fixed effects is weighted least squares, or WLS. Suppose that the observations are collected from individuals over time. Let $y$ denote the vector of observations, which may be correlated, and $X$ a matrix of known covariates. Suppose that $\mathrm{E}(y)=X \beta$, where $\beta$ is a vector of unknown regression coefficients. The WLS estimator of $\beta$ is obtained by minimizing
$$(y-X \beta)^{\prime} W(y-X \beta),$$
where $W$ is a known symmetric weighting matrix. As before, suppose, without loss of generality, that $X$ is of full column rank $p$. Then, for any nonsingular $W$, the minimizer of $(1.33)$ is given by
$$\hat{\beta}_W=\left(X^{\prime} W X\right)^{-1} X^{\prime} W y .$$
As a special case, the ordinary least squares (OLS) estimator is obtained by choosing $W=I$, the identity matrix. This gives
$$\hat{\beta}_I=\left(X^{\prime} X\right)^{-1} X^{\prime} y .$$
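The two closed-form estimators above can be sketched numerically. This is a minimal illustration assuming numpy; the function names `wls` and `ols` and the simulated data are hypothetical, not from the text.

```python
import numpy as np

def wls(X, y, W):
    """WLS estimator (X' W X)^{-1} X' W y, minimizing (y - Xb)' W (y - Xb)."""
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)

def ols(X, y):
    """OLS estimator: the special case W = I."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Demo: with W = I the two estimators coincide.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=100)
beta_I = ols(X, y)
beta_W = wls(X, y, np.eye(100))
```

Solving the normal equations with `np.linalg.solve` avoids forming the matrix inverse explicitly, which is both faster and numerically safer than computing $(X'WX)^{-1}$ directly.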

## Balanced Case

Suppose that the observations are collected over a common set of times. Let $y_{ij}, j=1, \ldots, k$ be the measures collected from the $i$th individual over times $t_1, \ldots, t_k$, respectively, and $y_i=\left(y_{ij}\right)_{1 \leq j \leq k}$, $i=1, \ldots, m$. Suppose that the vectors $y_1, \ldots, y_m$ are independent with
$$\mathrm{E}\left(y_i\right)=X_i \beta \quad \text { and } \quad \operatorname{Var}\left(y_i\right)=V_0,$$
where $X_i$ is a matrix of known covariates and $V_0=\left(v_{qr}\right)_{1 \leq q, r \leq k}$ is an unknown covariance matrix. It follows that $V=\operatorname{diag}\left(V_0, \ldots, V_0\right)$. Now the good thing is that $V$ may be estimated consistently, if $k$ is fixed. In fact, if $\beta$ were known, a simple consistent estimator of $V$ would be obtained as
$$\hat{V}=\operatorname{diag}\left(\hat{V}_0, \ldots, \hat{V}_0\right), \quad \text { where } \quad \hat{V}_0=\frac{1}{m} \sum_{i=1}^m\left(y_i-X_i \beta\right)\left(y_i-X_i \beta\right)^{\prime}.$$
To summarize the main idea: if $V$ were known, one could use (1.36) to compute the BLUE of $\beta$; if $\beta$ were known, one could use (1.38) to obtain a consistent estimator of $V$. There is clearly a cycle here, which motivates the following algorithm when neither $V$ nor $\beta$ is known: start with the OLS estimator (1.35); compute $\hat{V}$ by (1.38) with $\beta$ replaced by $\hat{\beta}_I$; then replace $V$ on the right-hand side of (1.36) by the $\hat{V}$ just obtained to get the next-step estimator of $\beta$; and repeat the process. We call such a procedure iterative weighted least squares, or I-WLS.
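The I-WLS cycle for the balanced case can be sketched as follows. This is a minimal illustration assuming numpy; the function name `iwls`, the array shapes (`y` of shape `(m, k)`, `X` of shape `(m, k, p)`), the fixed iteration count, and the simulated data are all assumptions for the sketch, not part of the text.

```python
import numpy as np

def iwls(y, X, n_iter=10):
    """I-WLS for the balanced model E(y_i) = X_i beta, Var(y_i) = V_0.

    y : (m, k) array, one row per individual.
    X : (m, k, p) array, X[i] is the covariate matrix of individual i.
    A sketch of the cycle in the text, not a reference implementation.
    """
    m, k = y.shape
    p = X.shape[2]
    # Step 0: OLS start, i.e. the estimator (1.35) with V = I.
    Xs, ys = X.reshape(m * k, p), y.reshape(m * k)
    beta = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
    for _ in range(n_iter):
        # Update V_0 from the current beta, as in (1.38).
        R = y - np.einsum('ijk,k->ij', X, beta)      # residual vectors y_i - X_i beta
        V0 = R.T @ R / m
        # Update beta by WLS with block weight V_0^{-1}, as in (1.36).
        W = np.linalg.inv(V0)
        A = sum(X[i].T @ W @ X[i] for i in range(m))
        b = sum(X[i].T @ W @ y[i] for i in range(m))
        beta = np.linalg.solve(A, b)
    return beta, V0

# Demo on simulated balanced data with compound-symmetric errors.
rng = np.random.default_rng(1)
m, k, p = 200, 4, 2
X = rng.normal(size=(m, k, p))
beta_true = np.array([2.0, -1.0])
V0_true = 0.5 * np.eye(k) + 0.5
e = rng.multivariate_normal(np.zeros(k), V0_true, size=m)
y = np.einsum('ijk,k->ij', X, beta_true) + e
beta_hat, V0_hat = iwls(y, X)
```

Because $V$ is block-diagonal with a common block $V_0$, each WLS update only requires inverting the $k \times k$ matrix $\hat{V}_0$ rather than the full $mk \times mk$ weight matrix.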
