
## avatest™ Helps You Pass Your Exams

avatest™'s subject-matter experts have helped students pass thousands of exams. We guarantee fast, on-time completion of exams of every length and format, including in-class, take-home, online, and proctored exams. Our writers gather a wide range of resources, or work from your school's own materials, create mock exams, and provide worked examples of every question type, ensuring a pass rate above 85% on the real exam. Whether you have an upcoming weekly quiz, term test, midterm, or final, we can help!

• Delivery in as little as 12 hours

• 200+ native English-speaking tutors

• Full refund for scores below 70

## Regularization methods

Let $\mathbf{Z}_t=\left[Z_{1, t}, Z_{2, t}, \ldots, Z_{m, t}\right]^{\prime}, t=1,2, \ldots, n$, be a zero-mean $m$-dimensional time series with $n$ observations. It is well known that the least squares method can be used to fit the VAR$(p)$ model by minimizing
$$\sum_{t=1}^n\left\|\mathbf{Z}_t-\sum_{k=1}^p \boldsymbol{\Phi}_k \mathbf{Z}_{t-k}\right\|_2,$$
where $\|\cdot\|_2$ is the Euclidean $\left(L^2\right)$ norm of a vector. In practice, more compactly, with data $\mathbf{Z}_t=\left[Z_{1, t}, Z_{2, t}, \ldots, Z_{m, t}\right]^{\prime}, t=1,2, \ldots, n$, we can present the VAR$(p)$ model in Eq. (10.2) in the matrix form,
$$\underset{(n \times m)}{\mathbf{Y}}=\underset{(n \times m p)}{\mathbf{X}} \underset{(m p \times m)}{\boldsymbol{\Phi}}+\underset{(n \times m)}{\boldsymbol{\xi}},$$
where
$$\mathbf{Y}=\left[\begin{array}{c} \mathbf{Z}_1^{\prime} \\ \mathbf{Z}_2^{\prime} \\ \vdots \\ \mathbf{Z}_n^{\prime} \end{array}\right], \quad \mathbf{X}=\left[\begin{array}{c} \mathbf{X}_1^{\prime} \\ \mathbf{X}_2^{\prime} \\ \vdots \\ \mathbf{X}_n^{\prime} \end{array}\right], \quad \boldsymbol{\Phi}=\left[\begin{array}{c} \boldsymbol{\Phi}_1^{\prime} \\ \boldsymbol{\Phi}_2^{\prime} \\ \vdots \\ \boldsymbol{\Phi}_p^{\prime} \end{array}\right], \quad \boldsymbol{\xi}=\left[\begin{array}{c} \mathbf{a}_1^{\prime} \\ \mathbf{a}_2^{\prime} \\ \vdots \\ \mathbf{a}_n^{\prime} \end{array}\right],$$ and $$\mathbf{X}_t^{\prime}=\left[\mathbf{Z}_{t-1}^{\prime}, \mathbf{Z}_{t-2}^{\prime}, \ldots, \mathbf{Z}_{t-p}^{\prime}\right].$$
So, minimizing Eq. (10.3) is equivalent to
$$\underset{\boldsymbol{\Phi}}{\operatorname{argmin}}\|\mathbf{Y}-\mathbf{X} \boldsymbol{\Phi}\|_F,$$
where $\|\cdot\|_F$ is the Frobenius norm of a matrix.
For a VAR model in a high-dimensional setting, many regularization methods have been developed; they assume sparse structure on the coefficient matrices $\boldsymbol{\Phi}_k$ and estimate the parameters by a regularization procedure. These methods include the lasso (least absolute shrinkage and selection operator) method, the lag-weighted lasso method, and the hierarchical vector autoregression method, among others.
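To make the matrix form above concrete, here is a minimal least-squares sketch in Python/NumPy. The function name and array layout are our own choices: rows of `Z` are the observations $\mathbf{Z}_1^{\prime}, \ldots, \mathbf{Z}_n^{\prime}$, and the regression drops the first $p$ rows for which lagged values are unavailable.

```python
import numpy as np

def fit_var_ols(Z, p):
    """Least-squares fit of a zero-mean VAR(p) via the matrix form
    Y = X Phi + xi described in the text (a sketch).

    Z : (n, m) array whose rows are Z_1', ..., Z_n'.
    Returns an array of shape (p, m, m) with Phi_hat[k-1] ~ Phi_k.
    """
    n, m = Z.shape
    # Rows of X are X_t' = [Z_{t-1}', ..., Z_{t-p}'] for t = p+1, ..., n.
    X = np.hstack([Z[p - k : n - k] for k in range(1, p + 1)])  # (n-p, m*p)
    Y = Z[p:]                                                   # (n-p, m)
    # Solve min ||Y - X D||_F; D stacks Phi_1', ..., Phi_p' vertically.
    D, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Block k of D (rows k*m:(k+1)*m) is Phi_{k+1}', so transpose each block.
    return np.stack([D[k * m : (k + 1) * m].T for k in range(p)])
```

On simulated data from a known VAR(1), the estimate recovers the coefficient matrix up to sampling error.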

## The lasso method

One of the most commonly used regularization methods is the lasso method proposed by Tibshirani (1996) and extended to the vector time series setting by Hsu et al. (2008). Formally, the estimation procedure for the VAR model is through
$$\underset{\boldsymbol{\Phi}}{\operatorname{argmin}}\left\{\|\mathbf{Y}-\mathbf{X} \boldsymbol{\Phi}\|_F+\lambda\|\operatorname{vec}(\boldsymbol{\Phi})\|_1\right\},$$
where the second term is the regularization through the $L_1$ penalty, with $\lambda$ as its control parameter; $\lambda$ can be determined by cross-validation. The lasso method does not impose any special assumption on the relationship among lag orders and tends to overselect the lag order $p$ of the VAR model. This has led to the development of several modified methods.
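As an illustration, because the $L_1$ penalty on $\operatorname{vec}(\boldsymbol{\Phi})$ separates across the $m$ regression equations, the criterion can be fit column-by-column with an off-the-shelf solver. A sketch using scikit-learn's `Lasso` (the helper is our own; `alpha` plays the role of $\lambda$, though scikit-learn scales the squared loss by the sample size, so the correspondence is only up to a constant):

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_var_lasso(Z, p, lam):
    """Sketch of a lasso-VAR estimate: fit each of the m equations by an
    ordinary lasso regression on the stacked lag matrix X."""
    n, m = Z.shape
    X = np.hstack([Z[p - k : n - k] for k in range(1, p + 1)])  # (n-p, m*p)
    Y = Z[p:]
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    # Column j of D holds the coefficients of the j-th equation.
    D = np.column_stack([model.fit(X, Y[:, j]).coef_ for j in range(m)])
    return np.stack([D[k * m : (k + 1) * m].T for k in range(p)])
```

A large `lam` shrinks every coefficient to zero, while a tiny `lam` reproduces a near-OLS fit, which is the overselection behavior the text describes when $\lambda$ is chosen too small.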
The lag-weighted lasso method
Song and Bickel (2011) proposed a method that incorporates the lag-weighted lasso (lasso and group-lasso structures) for the high-dimensional VAR model. They placed group-lasso penalties, introduced by Yuan and Lin (2006), on the off-diagonal terms and lasso penalties on the diagonal terms. More specifically, if we denote by $\boldsymbol{\Phi}_k(j,-j)$ the vector composed of the off-diagonal elements $\left\{\phi_{j, i}\right\}_{i \neq j}$ of the $j$th row of $\boldsymbol{\Phi}_k$, and by $\boldsymbol{\Phi}_k(j, j)$ the $j$th diagonal element of $\boldsymbol{\Phi}_k$, then the regularization for $\boldsymbol{\Phi}_k$ is $$\sum_{j=1}^m\left\|\boldsymbol{\Phi}_k(j,-j) \mathbf{W}(-j)\right\|_2+\lambda \sum_{j=1}^m w_j\left|\boldsymbol{\Phi}_k(j, j)\right|,$$ where $\mathbf{W}(-j)=\operatorname{diag}\left(w_1, \ldots, w_{j-1}, w_{j+1}, \ldots, w_m\right)$ is an $(m-1) \times(m-1)$ diagonal matrix, and $w_j$ is the positive real-valued weight associated with the $j$th variable for $1 \leq j \leq m$, chosen to be the standard deviation of $Z_{j, t}$. $\lambda$ is the control parameter that governs the extent to which other series' lags are less informative than a series' own lags. The first term of Eq. (10.7) is the group-lasso penalty and the second term is the lasso penalty; they impose regularization on the other series' lags and on a series' own lags, respectively. Let $0<\alpha<1$, so that $k^\alpha$ serves as the other control parameter, allowing different degrees of regularization for different lags; the estimation procedure is based on
$$\underset{\boldsymbol{\Phi}_1, \ldots, \boldsymbol{\Phi}_p}{\operatorname{argmin}}\left\{\|\mathbf{Y}-\mathbf{X} \boldsymbol{\Phi}\|_F+\sum_{k=1}^p k^\alpha\left[\sum_{j=1}^m\left\|\boldsymbol{\Phi}_k(j,-j) \mathbf{W}(-j)\right\|_2+\lambda \sum_{j=1}^m w_j\left|\boldsymbol{\Phi}_k(j, j)\right|\right]\right\}.$$
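To make the penalty in this criterion concrete, a small evaluator (our own helper; it simply computes the bracketed term for each lag, weighted by $k^\alpha$, for a given list of coefficient matrices):

```python
import numpy as np

def lag_weighted_penalty(Phis, w, lam, alpha):
    """Evaluate sum_k k^alpha [ group-lasso term on off-diagonals
    + lam * weighted lasso term on diagonals ], as in the text.

    Phis : list of (m, m) coefficient matrices Phi_1, ..., Phi_p.
    w    : weights w_j (e.g. the standard deviation of each series).
    """
    m = Phis[0].shape[0]
    total = 0.0
    for k, Phi in enumerate(Phis, start=1):
        group = 0.0
        for j in range(m):
            # Row j without its diagonal entry, scaled by W(-j).
            off = np.array([Phi[j, i] * w[i] for i in range(m) if i != j])
            group += np.linalg.norm(off)          # ||Phi_k(j,-j) W(-j)||_2
        diag = sum(w[j] * abs(Phi[j, j]) for j in range(m))
        total += k**alpha * (group + lam * diag)  # heavier penalty on larger lags
    return total
```

For an identity coefficient matrix only the diagonal (own-lag) term contributes; for an anti-diagonal matrix only the group term does, which mirrors the split the text describes.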


## MATLAB Assignment Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include mathematical and computational algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including the construction of graphical user interfaces. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This lets you solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or Fortran. The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to the matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in matrix-computation software. MATLAB has evolved over many years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Very important to most users, toolboxes let you learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.


## Spectrum analysis of a nonstationary vector time series

For a nonstationary time series, we normally use a transformation such as variance stabilization and/or differencing to reduce it to stationarity before performing spectral matrix estimation. However, there are many kinds of nonstationary time series that cannot be reduced to stationarity by these transformations. Let us first consider the univariate case. There are many univariate nonstationary processes $Z_t$ that cannot be represented by $Z_t=\int_{-\pi}^\pi e^{i \omega t} d U(\omega)$ as given in Eq. (9.1), because the function $\phi(\omega)=e^{i \omega t}$, being a combination of sine and cosine waves, is stationary. Priestley (1965, 1966, 1967) pointed out that in this case, instead of using $\phi(\omega)=e^{i \omega t}$, we need to consider an oscillatory function, which is a generalized Fourier transform,
$$\phi(t, \omega)=A(t, \omega) e^{i \omega t},$$
so that
$$Z_t=\int_{-\pi}^\pi A(t, \omega) e^{i \omega t} d U(\omega)$$
where $A(t, \omega)$ is a time-varying modulating (or transfer) function with its absolute maximum at zero frequency. In other words, $Z_t$ is an oscillatory process with an evolutionary spectrum, which has the same type of physical interpretation as the spectrum of a stationary process. The main difference is that while the spectrum of a stationary process describes the power distribution across frequencies over all time, the evolutionary spectrum describes the power distribution over frequency at an instant in time. However, within this framework, letting the length of the series $n \rightarrow \infty$ does not increase our knowledge about the local behavior of the spectrum, since the pattern of the forthcoming series is different. As a result, the formulation does not allow the development of a rigorous asymptotic theory of statistical inference. To overcome this problem, Dahlhaus (1996, 2000) introduced locally stationary time series, which allow theoretical asymptotic analysis of the evolutionary spectrum in the univariate case, and further extended the notion to the multivariate case.

## Spectrum representations of a nonstationary multivariate process

Let $\left\{\mathbf{Z}_t: t=1, \ldots, n\right\}$ be an $m$-dimensional time series. The idea of a locally stationary process is to rescale the transfer function to the unit time interval, such that $$\mathbf{Z}_{t, n}=\int_{-\pi}^\pi \mathbf{A}(t / n, \omega) e^{i \omega t} d \mathbf{U}(\omega).$$
More rigorously, we have the following definition.
Definition 9.1 The $m$-dimensional zero-mean time series $\left\{\mathbf{Z}_t: t=1, \ldots, n\right\}$ is called locally stationary with an $(m \times m)$ matrix-valued transfer function $\mathbf{A}^0=\left[A_{i, j}^0(t / n, \omega)\right]$ and mean function vector $\boldsymbol{\mu}$ if there exists a representation
$$\mathbf{Z}_t=\boldsymbol{\mu}(t / n)+\int_{-\pi}^\pi \mathbf{A}^0(t / n, \omega) \exp (i \omega t) d \mathbf{U}(\omega),$$
with the following properties:

$\mathbf{U}(\omega)$ is a complex-valued zero-mean vector process with $E\left\{d \mathbf{U}(\omega) d \mathbf{U}^*(\zeta)\right\}$ equal to the identity matrix if $\omega=\zeta$ and zero otherwise.

There exist a constant $K$ and an $(m \times m)$ matrix-valued function $\mathbf{A}(u, \omega)=\left[A_{i, j}(u, \omega)\right]$, with $\mathbf{A}(u, \omega)=\mathbf{A}(u,-\omega)^*$ and $$\sup _{t, \omega}\left|A_{j, k}^0(t / n, \omega)-A_{j, k}(t / n, \omega)\right| \leq K n^{-1}$$
for all $j, k=1, \ldots, m$. Also, $\mathbf{A}(u, \omega)$ is assumed to be continuous in $u$.
Based on Definition 9.1, the time-varying power spectrum of the process is given by
$$\mathbf{f}(u, \omega)=\mathbf{A}(u, \omega) \mathbf{A}(u, \omega)^* .$$



## The smoothed spectrum matrix

Given a zero-mean $m$-dimensional time series $\mathbf{Z}_1, \mathbf{Z}_2, \ldots, \mathbf{Z}_n$, its Fourier transform at the Fourier frequencies $\omega_p=2 \pi p / n,-[(n-1) / 2] \leq p \leq[n / 2]$, is $$\mathbf{Y}\left(\omega_p\right)=\frac{1}{\sqrt{2 \pi n}} \sum_{t=1}^n \mathbf{Z}_t \exp \left(-i \omega_p t\right) .$$
Then, the $m \times m$ sample spectrum matrix, which is also known as periodogram matrix, is simply the extension of Eqs. (9.9)-(9.11). Thus,

\begin{aligned} \widetilde{\mathbf{f}}\left(\omega_p\right) & =\mathbf{Y}\left(\omega_p\right) \mathbf{Y}^*\left(\omega_p\right)=\left|\mathbf{Y}\left(\omega_p\right)\right|^2=\frac{1}{2 \pi n}\left|\sum_{t=1}^n \mathbf{Z}_t \exp \left(-i \omega_p t\right)\right|^2 \\ & =\frac{1}{2 \pi n}\left[\sum_{t=1}^n \mathbf{Z}_t \exp \left(-i \omega_p t\right)\right]\left[\sum_{r=1}^n \mathbf{Z}_r^{\prime} \exp \left(i \omega_p r\right)\right] \\ & =\frac{1}{2 \pi n} \sum_{t=1}^n \sum_{r=1}^n \mathbf{Z}_t \mathbf{Z}_r^{\prime} e^{-i \omega_p(t-r)} \\ & =\frac{1}{2 \pi} \sum_{k=-(n-1)}^{(n-1)} \hat{\boldsymbol{\Gamma}}(k) e^{-i \omega_p k} \\ & =\frac{1}{2 \pi}\left[\hat{\boldsymbol{\Gamma}}(0)+2 \sum_{k=1}^{(n-1)} \hat{\boldsymbol{\Gamma}}(k) e^{-i \omega_p k}\right]=\left[\widetilde{f}_{i, j}\left(\omega_p\right)\right], \end{aligned} where $$\begin{gathered} \hat{\boldsymbol{\Gamma}}(k)=\left[\hat{\gamma}_{i, j}(k)\right], \\ \tilde{f}_{i, j}\left(\omega_p\right)=\frac{1}{2 \pi} \sum_{k=-(n-1)}^{(n-1)} \hat{\gamma}_{i, j}(k) e^{-i \omega_p k}=y_i\left(\omega_p\right) y_j^*\left(\omega_p\right), \end{gathered}$$
and
$$y_i\left(\omega_p\right)=\frac{1}{\sqrt{2 \pi n}} \sum_{t=1}^n Z_{i, t} e^{-i \omega_p t} .$$
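The periodogram matrix translates directly into code. A sketch (the function name and layout are our own; rows of `Z` are the observations), returning the rank-one Hermitian matrix $\mathbf{Y}(\omega_p) \mathbf{Y}^*(\omega_p)$:

```python
import numpy as np

def periodogram_matrix(Z, p):
    """Sample spectrum (periodogram) matrix at the Fourier frequency
    w_p = 2*pi*p/n, following the definition in the text (a sketch)."""
    n, m = Z.shape
    w = 2 * np.pi * p / n
    t = np.arange(1, n + 1)
    # m-dimensional finite Fourier transform Y(w_p).
    Y = (Z.T @ np.exp(-1j * w * t)) / np.sqrt(2 * np.pi * n)
    # Outer product Y(w_p) Y*(w_p)': an (m, m) Hermitian, rank-one matrix.
    return np.outer(Y, np.conj(Y))
```

By construction the result is Hermitian with a real, nonnegative diagonal, which matches the interpretation of its diagonal entries as univariate sample spectra.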

## Multitaper smoothing

Developed by Thomson (1982) for univariate processes and extended by Walden (2000) to multivariate processes, multitaper smoothing is another useful way to estimate the power spectral density that balances the bias and variance of nonparametric spectral estimation. The multitaper approach reduces estimation bias by averaging modified periodograms obtained from the same sample data using a family of mutually orthogonal tapers. Let $h_j(t)$, for $t=1, \ldots, n$ and $j=1, \ldots, n$, be $n$ orthonormal tapers such that
\begin{aligned} & \sum_{t=1}^n h_j^2(t)=1, \text { and } \\ & \sum_{t=1}^n h_i(t) h_j(t)=0, \quad(i \neq j) . \end{aligned}
From Eq. (9.45), we note that
\begin{aligned} \widetilde{\mathbf{f}}(\omega) & =\frac{1}{2 \pi} \sum_{k=-(n-1)}^{(n-1)} \hat{\boldsymbol{\Gamma}}(k) e^{-i \omega k} \\ & =\frac{1}{\sqrt{2 \pi n}} \sum_{t=1}^n \mathbf{Z}_t e^{-i \omega t} \frac{1}{\sqrt{2 \pi n}} \sum_{t=1}^n \mathbf{Z}_t^{\prime} e^{i \omega t} \\ & =\widetilde{\mathbf{Y}}(\omega) \widetilde{\mathbf{Y}}^*(\omega), \end{aligned} where $\widetilde{\mathbf{Y}}(\omega)=\frac{1}{\sqrt{2 \pi n}} \sum_{t=1}^n \mathbf{Z}_t e^{-i \omega t}$ is the discrete Fourier transform of $\mathbf{Z}_t$. The multitaper power spectral estimator at frequency $\omega$ is $$\hat{\mathbf{f}}_M(\omega)=\frac{1}{K} \sum_{j=1}^K \hat{\mathbf{Y}}_j(\omega) \hat{\mathbf{Y}}_j^*(\omega),$$
where $K$ is chosen through the method shown below and $\hat{\mathbf{Y}}_j(\omega)$ is the tapered Fourier transform such that
$$\hat{\mathbf{Y}}_j(\omega)=\frac{1}{\sqrt{2 \pi n}} \sum_{t=1}^n h_j(t) \mathbf{Z}_t \exp (-i \omega t) .$$
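A sketch of the estimator using sine tapers, one standard orthonormal family satisfying the conditions above (the classical construction uses discrete prolate spheroidal tapers instead; the function names here are our own):

```python
import numpy as np

def sine_tapers(n, K):
    """K orthonormal sine tapers h_j(t) = sqrt(2/(n+1)) sin(pi*j*t/(n+1)),
    a concrete family satisfying sum h_j^2 = 1 and cross-orthogonality."""
    t = np.arange(1, n + 1)
    return np.array([np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * t / (n + 1))
                     for j in range(1, K + 1)])          # (K, n)

def multitaper_spectrum(Z, w, K):
    """Multitaper estimate f_M(w): average of K tapered periodogram
    matrices, following the formulas in the text (a sketch)."""
    n, m = Z.shape
    H = sine_tapers(n, K)
    t = np.arange(1, n + 1)
    e = np.exp(-1j * w * t)
    F = np.zeros((m, m), dtype=complex)
    for j in range(K):
        # Tapered Fourier transform Y_j(w) of the tapered series h_j(t) Z_t.
        Yj = (Z.T * H[j] * e).sum(axis=1) / np.sqrt(2 * np.pi * n)
        F += np.outer(Yj, np.conj(Yj))
    return F / K
```

Averaging the `K` rank-one tapered periodograms yields a Hermitian estimate of the spectrum matrix with reduced variability relative to a single periodogram.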



## Spectral representations of multivariate time series processes

These univariate time series results can be readily generalized to the $m$-dimensional vector process. Let $\mathbf{Z}_t=\left[Z_{1, t}, Z_{2, t}, \ldots, Z_{m, t}\right]^{\prime}$ be a zero-mean, jointly stationary $m$-dimensional vector process with covariance matrix function $\boldsymbol{\Gamma}(k)=\left[\gamma_{i, j}(k)\right]$. The spectral representation of $\mathbf{Z}_t$ is given by
$$\mathbf{Z}_t=\int_{-\pi}^\pi e^{i \omega t} d \mathbf{U}(\omega),$$
where $d \mathbf{U}(\omega)=\left[d U_1(\omega), d U_2(\omega), \ldots, d U_m(\omega)\right]^{\prime}$ is an $m$-dimensional complex-valued process with the $d U_i(\omega)$, for $i=1,2, \ldots, m$, being both orthogonal and cross-orthogonal, such that
$$E[d \mathbf{U}(\omega)]=\mathbf{0},-\pi \leq \omega \leq \pi$$
and
$$E\left\{d \mathbf{U}(\omega)\left[d \mathbf{U}^*(\lambda)\right]^{\prime}\right\}=\mathbf{0}, \text { for all } \omega \neq \lambda .$$ The spectral representation of the covariance matrix function is given by $$\boldsymbol{\Gamma}(k)=\int_{-\pi}^\pi e^{i \omega k} d \mathbf{F}(\omega),$$ where \begin{aligned} d \mathbf{F}(\omega) & =E\left\{d \mathbf{U}(\omega)\left[d \mathbf{U}^*(\omega)\right]^{\prime}\right\} \\ & =\left[E\left\{d U_i(\omega) d U_j^*(\omega)\right\}\right] \\ & =\left[d F_{i, j}(\omega)\right], \end{aligned}
and $\mathbf{F}(\omega)$ is the spectral distribution matrix function of $\mathbf{Z}_t$. The diagonal elements $F_{i, i}(\omega)$ are the spectral distribution functions of the $Z_{i, t}$, and the off-diagonal elements $F_{i, j}(\omega)$ are the cross-spectral distribution functions between $Z_{i, t}$ and $Z_{j, t}$.

If the covariance matrix function is absolutely summable, in the sense that each of the $m \times m$ sequences $\gamma_{i, j}(k)$ is absolutely summable, then the spectrum matrix, or spectral density matrix function, exists and is given by
\begin{aligned} \mathbf{f}(\omega) d \omega & =d \mathbf{F}(\omega) \\ & =\left[d F_{i, j}(\omega)\right] \\ & =\left[f_{i, j}(\omega) d \omega\right] . \end{aligned}

## The smoothed spectrum matrix

When $\mathbf{Z}_t$ is a multivariate Gaussian process with mean vector $\mathbf{0}$ and variance-covariance matrix $\boldsymbol{\Sigma}$, $\widetilde{\mathbf{f}}\left(\omega_p\right)$ has a distribution, related to that of the sample variance-covariance matrix, known as the Wishart distribution with $n$ degrees of freedom, which is the multivariate analog of the chi-square distribution. We refer readers to Goodman (1963), Hannan (1970), and Brillinger (2002) for further discussion of the properties of the periodogram and the Wishart distribution.

Similar to the univariate sample spectral density discussed in Section 9.1, the sample spectrum matrix, or periodogram matrix, is a poor estimate. So we replace it by the following smoothed spectrum matrix:
$$\hat{\mathbf{f}}(\omega)=\left[\hat{f}_{i, j}(\omega)\right],$$ where $$\hat{f}_{i, i}\left(\omega_p\right)=\sum_{k=-M_i}^{M_i} W_i\left(\omega_k\right) \widetilde{f}_{i, i}\left(\omega_p-\omega_k\right),$$

$W_i(\omega)$ is a smoothing function, also known as kernel or spectral window, and $M_i$ is the bandwidth of the spectral window, and
$$\hat{f}_{i, j}\left(\omega_p\right)=\sum_{k=-M_{i, j}}^{M_{i, j}} W_{i, j}\left(\omega_k\right) \widetilde{f}_{i, j}\left(\omega_p-\omega_k\right),$$ where $W_{i, j}(\omega)$ is a spectral window and $M_{i, j}$ is the corresponding bandwidth. Similar to the univariate smoothed spectrum, the smoothed spectrum matrix can also be approximated by the Wishart distribution.
Once $f_{i, i}(\omega)$ and $f_{i, j}(\omega)$ are estimated, we can estimate the co-spectrum $c_{i, j}(\omega)$, the quadrature spectrum $q_{i, j}(\omega)$, the cross-amplitude spectrum $\alpha_{i, j}(\omega)$, the phase spectrum $\phi_{i, j}(\omega)$, the gain function $G_{i, j}(\omega)$, and the squared coherency function $K_{i, j}^2(\omega)$.
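For instance, with a flat (Daniell) window, one simple choice for the spectral window, the smoothing and the squared coherency can be sketched as follows (helper names are ours; the text leaves the window and bandwidth general, and the standard identity $K_{i,j}^2(\omega)=|f_{i,j}(\omega)|^2 /\left(f_{i,i}(\omega) f_{j,j}(\omega)\right)$ is assumed):

```python
import numpy as np

def smoothed_cross_spectrum(ftilde, M):
    """Daniell-type smoothing: running average of periodogram matrices
    over 2M+1 neighboring Fourier frequencies (a sketch).

    ftilde : (P, m, m) periodogram matrices at consecutive frequencies.
    """
    P = ftilde.shape[0]
    out = np.empty_like(ftilde)
    for p in range(P):
        idx = [(p + k) % P for k in range(-M, M + 1)]  # wrap around the grid
        out[p] = ftilde[idx].mean(axis=0)
    return out

def squared_coherency(f, i, j):
    """K^2_{ij} = |f_ij|^2 / (f_ii f_jj) from one smoothed spectrum matrix."""
    return np.abs(f[i, j]) ** 2 / (f[i, i].real * f[j, j].real)
```

Note that the squared coherency of an unsmoothed rank-one periodogram matrix is identically one, which is one reason smoothing across frequencies is needed before coherency is interpretable.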

Note that the spectrum matrix is the Fourier transform of the covariance function, $\mathbf{\Gamma}(k)=$ $\left[\gamma_{i, j}(k)\right]$, and the sample spectrum matrix is
$$\widetilde{\mathbf{f}}\left(\omega_p\right)=\frac{1}{2 \pi} \sum_{k=-(n-1)}^{(n-1)} \hat{\mathbf{\Gamma}}(k) e^{-i \omega_p k}=\left[\widetilde{f}_{i, j}\left(\omega_p\right)\right] .$$


## The principal component method

With $\hat{\lambda}_1 \geq \hat{\lambda}_2 \geq \cdots$ and $\hat{\boldsymbol{\alpha}}_1, \hat{\boldsymbol{\alpha}}_2, \ldots$ denoting the eigenvalues and eigenvectors of the sample covariance matrix, the principal component estimate of the $(m \times k)$ loading matrix is
$$\underset{m \times k}{\hat{\mathbf{L}}}=\left[\sqrt{\hat{\lambda}_1} \hat{\boldsymbol{\alpha}}_1, \sqrt{\hat{\lambda}_2} \hat{\boldsymbol{\alpha}}_2, \ldots, \sqrt{\hat{\lambda}_k} \hat{\boldsymbol{\alpha}}_k\right],$$

the estimated specific variances are collected in the diagonal matrix
$$\hat{\boldsymbol{\Sigma}}=\left[\begin{array}{cccc} \hat{\sigma}_1^2 & 0 & \cdots & 0 \\ 0 & \hat{\sigma}_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \hat{\sigma}_m^2 \end{array}\right],$$

with $$\hat{\sigma}_i^2=\hat{\gamma}_{i, i}-\left(\hat{\ell}_{i, 1}^2+\hat{\ell}_{i, 2}^2+\cdots+\hat{\ell}_{i, k}^2\right),$$ where the sum of squares is the estimate of the $i$th communality, $$\hat{c}_i^2=\hat{\ell}_{i, 1}^2+\hat{\ell}_{i, 2}^2+\cdots+\hat{\ell}_{i, k}^2 .$$
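The three quantities above can be computed together from one eigen-decomposition. A sketch (the function is our own; `S` stands for the sample covariance matrix $[\hat{\gamma}_{i,j}]$):

```python
import numpy as np

def pc_factor_estimates(S, k):
    """Principal-component estimates: loadings L_hat with columns
    sqrt(lambda_j) * alpha_j, communalities c_i^2, and specific
    variances sigma_i^2 = gamma_ii - c_i^2 (a sketch)."""
    vals, vecs = np.linalg.eigh(S)          # eigh returns ascending order
    order = np.argsort(vals)[::-1]          # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    L = vecs[:, :k] * np.sqrt(vals[:k])     # (m, k) loading matrix
    comm = (L ** 2).sum(axis=1)             # communalities c_i^2
    spec = np.diag(S) - comm                # specific variances sigma_i^2
    return L, comm, spec
```

On a diagonal covariance matrix with one dominant variance, the single retained factor loads entirely on that variable and the remaining variance is all specific, which matches the decomposition above.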


## The PCA based on the sample covariance matrix

The eigenvalues and eigenvectors of the sample variance-covariance matrix, often known as the variances and component loadings, are given in Table 4.3.
The two sample principal components are
\begin{aligned} & \hat{Y}_1=0.516 Z_1+0.003 Z_2+0.227 Z_3+0.461 Z_4+0.686 Z_5 \\ & \hat{Y}_2=0.081 Z_1-0.057 Z_2-0.331 Z_3-0.755 Z_4+0.557 Z_5 \end{aligned}
The first component explains $95.76 \%$ of the total sample variance, and the first two explain $99.04 \%$. Thus, the sample variation is very well summarized by the first principal component, or the first two principal components. Figure 4.5 shows the scree plot, where the vertex of the elbow is easily seen to be at $k=1$.

Now, let us examine component 1 more carefully. In this component, the loadings are all positive, and the component can be regarded as the CPI growth component over the time period that we observed. The five variables are combined into a composite score, plotted in Figure 4.6, which mainly follows the patterns observed for gasoline and energy in Figure 4.4.

Thus, the PCA has provided us with a single component that contains the vast majority of the information in the five individual variables. From this, we can conclude that gasoline and energy were the true drivers of the overall economy of the Greater New York City area during the period between 1986 and 2014.
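The covariance-based analysis above, and the correlation-based variant in the next subsection, follow the same recipe. A sketch (our own helper; the CPI data of Tables 4.3 and 4.4 are not reproduced here, so the test below uses synthetic data):

```python
import numpy as np

def sample_pca(Z, use_correlation=False):
    """PCA sketch: eigen-decompose the sample covariance (or correlation)
    matrix, report each component's share of total variance, and form
    the composite scores."""
    X = Z - Z.mean(axis=0)
    if use_correlation:
        X = X / X.std(axis=0, ddof=1)       # standardizing gives the correlation PCA
    S = np.cov(X, rowvar=False, ddof=1)
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]          # largest eigenvalue first
    vals, vecs = vals[order], vecs[:, order]
    share = vals / vals.sum()               # proportion of variance explained
    scores = X @ vecs                       # composite scores Y_hat
    return vals, vecs, share, scores
```

The `share` vector is what statements like "the first component explains 95.76% of the total sample variance" refer to, and `scores` are the composite series plotted in figures such as Figure 4.6.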

## The PCA based on the sample correlation matrix

Now let us try the PCA using the sample correlation matrix. The eigenvalues and eigenvectors, which are also known as variances and component loadings, of the sample correlation matrix are given in Table 4.4.

The two sample principal components are
\begin{aligned} & \hat{Y}_1=0.503 Z_1+0.044 Z_2+0.501 Z_3+0.499 Z_4+0.495 Z_5 \\ & \hat{Y}_2=0.100 Z_1-0.987 Z_2-0.107 Z_3+0.032 Z_4+0.061 Z_5 \end{aligned}
The first component explains $76.74 \%$ of the total sample variance, and the first two explain $97.1 \%$. Thus, the sample variation of the five industries is primarily summarized by the first two principal components. Figure 4.7 shows the scree plot, which clearly indicates $k=2$.
From Table 4.4, we see that the loadings in component 1 are all positive and almost equal for energy, commodities, housing, and gas, which have strong positive correlations among them. It represents the CPI growth over the time period that we observed. The loadings in component 2 are relatively small positive numbers for energy, housing, and gas, and negative for apparel and commodities. It represents the market contrast between consumer goods and utilities and housing. Since the loading for apparel especially dominates, component 2 can also simply be regarded as representing the apparel sector.

The five variables are combined into two composite scores, which are plotted in Figure 4.8.



## The PCA based on the sample covariance matrix

The eigenvalues and eigenvectors of the sample variance-covariance matrix, often known as the variances and component loadings, are given in Table 4.1.

Figure 4.2 shows a useful plot, often known as a scree plot (or screeplot): the eigenvalues $\hat{\lambda}_i$ plotted against their index $i$, that is, the magnitude of each eigenvalue versus its number. To determine a suitable number of components, $k$, we look for an elbow in the plot and take the cutoff $k$ at the component just before the elbow. In our example, we would choose $k$ to be 3 or 4. The eigenvalues of the first four components account for
$$\left(\frac{\hat{\lambda}_1+\hat{\lambda}_2+\hat{\lambda}_3+\hat{\lambda}_4}{\hat{\lambda}_1+\hat{\lambda}_2+\hat{\lambda}_3+\hat{\lambda}_4+\hat{\lambda}_5+\hat{\lambda}_6+\hat{\lambda}_7+\hat{\lambda}_8+\hat{\lambda}_9+\hat{\lambda}_{10}}\right) 100\% =\left(\frac{0.00058+0.0003+0.00018+0.00011}{0.00151}\right) 100\% = 77.5\%$$
of the total sample variance.
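The percentage computation above can be checked directly from the quoted eigenvalues:

```python
# Eigenvalues of the sample covariance matrix, as reported in the text:
lam = [0.00058, 0.0003, 0.00018, 0.00011]  # first four components
total = 0.00151                             # sum of all ten eigenvalues

# Proportion of total sample variance explained by the first four components.
pct = sum(lam) / total * 100
print(round(pct, 1))  # -> 77.5
```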
The four sample principal components are
$$\begin{aligned} \hat{Y}_1 &= \hat{\alpha}_1^{\prime} \mathbf{Z}=-0.287 Z_1-0.354 Z_2-0.47 Z_6-0.369 Z_7-0.539 Z_8-0.391 Z_9-0.324 Z_{10} \\ \hat{Y}_2 &= \hat{\alpha}_2^{\prime} \mathbf{Z}=-0.104 Z_1-0.126 Z_2-0.431 Z_3-0.513 Z_4-0.362 Z_5-0.329 Z_6-0.167 Z_7+0.289 Z_8+0.175 Z_9+0.379 Z_{10} \\ \hat{Y}_3 &= \hat{\alpha}_3^{\prime} \mathbf{Z}=-0.407 Z_1-0.243 Z_2-0.173 Z_3-0.319 Z_4-0.286 Z_5+0.596 Z_6+0.316 Z_7-0.191 Z_8-0.251 Z_{10} \\ \hat{Y}_4 &= \hat{\alpha}_4^{\prime} \mathbf{Z}=0.576 Z_1+0.552 Z_2-0.354 Z_3-0.231 Z_4-0.197 Z_5+0.126 Z_7-0.222 Z_8-0.256 Z_{10} \end{aligned}$$

Now, let us examine the four components more carefully. The first component represents the general market apart from the communication technology sector. The second component represents the contrast between the financial and non-financial sectors. The third component represents the contrast between the health and non-health sectors. The fourth component represents the contrast between the oil and non-oil industries. Thus, the PCA has provided us with four components that capture a large amount of the information in the 10 stock returns traded on the New York Stock Exchange.

## The PCA based on the sample correlation matrix

Now let us try the PCA using the sample correlation matrix, which is based on the standardized series. The eigenvalues and eigenvectors, also known as the variances and component loadings, of the sample correlation matrix are given in Table 4.2. The scree plot is shown in Figure 4.3.

The scree plot again indicates $k=4$. The eigenvalues of the first four components account for
$$\left(\frac{\hat{\lambda}_1+\hat{\lambda}_2+\hat{\lambda}_3+\hat{\lambda}_4}{m}\right) 100\% =\left(\frac{3.393+2.21+1.196+0.939}{10}\right) 100\% = 77.38\%,$$
which is almost the same as the proportion obtained using the covariance matrix. The four sample principal components are now
$$\begin{aligned} \hat{Y}_1 &= \hat{\alpha}_1^{\prime} \hat{\mathbf{U}}=-0.287 U_1-0.354 U_2-0.143 U_4-0.15 U_5-0.346 U_6-0.364 U_7-0.433 U_8-0.458 U_9-0.31 U_{10} \\ \hat{Y}_2 &= \hat{\alpha}_2^{\prime} \hat{\mathbf{U}}=-0.155 U_1-0.137 U_2-0.491 U_3-0.506 U_4-0.49 U_5+0.236 U_8+0.231 U_9+0.32 U_{10} \\ \hat{Y}_3 &= \hat{\alpha}_3^{\prime} \hat{\mathbf{U}}=0.654 U_1+0.464 U_2-0.199 U_3-0.418 U_6-0.354 U_7 \\ \hat{Y}_4 &= \hat{\alpha}_4^{\prime} \hat{\mathbf{U}}=-0.136 U_1-0.32 U_2+0.233 U_3+0.171 U_4+0.341 U_5-0.405 U_6-0.429 U_7+0.281 U_8+0.206 U_9+0.458 U_{10} \end{aligned}$$
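Because the eigenvalues of a correlation matrix sum to $m$, the denominator here is simply 10. A quick check of the cumulative proportions, and of the choice $k=4$ against a 75% threshold (the threshold is an illustrative choice, not one fixed by the text):

```python
import numpy as np

# Leading eigenvalues of the sample correlation matrix, from the text.
lam = np.array([3.393, 2.21, 1.196, 0.939])
m = 10  # trace of an m x m correlation matrix equals m

# Running percentage of total variance explained.
cumulative = np.cumsum(lam) / m * 100
print(cumulative)

# Smallest number of components explaining at least 75% of the variance.
k = int(np.searchsorted(cumulative, 75) + 1)
print(k)  # -> 4
```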

Now, let us examine the four components. The first component now represents the general stock market. The second component represents the contrast mainly between the financial and non-health-related sectors. The third component represents the contrast between the oil and health sectors. The fourth component now represents the contrast between the financial/technology and non-financial/non-technology industries. For this data set, the PCA results from the covariance matrix and the correlation matrix are largely equivalent.


## The classical multiple regression model

In a multiple regression model, a response variable $Y$ is related to $k$ predictor variables, $X_1, X_2, \ldots, X_k$, as follows,
$$Y=\beta_0+\beta_1 X_1+\cdots+\beta_k X_k+\xi,$$

where $\xi$ is assumed to be uncorrelated white noise, often taken to be i.i.d. $N\left(0, \sigma^2\right)$. When time series data are used to fit a multiple regression model, we often write Eq. (3.1) as
$$\begin{aligned} Y_t &= \beta_0+\beta_1 X_{1, t}+\cdots+\beta_k X_{k, t}+\xi_t \\ &= \mathbf{X}_t^{\prime} \boldsymbol{\beta}+\xi_t, \end{aligned}$$ where $t$ refers to time, $$\begin{aligned} \mathbf{X}_t^{\prime} &= \left[1, X_{1, t}, X_{2, t}, \ldots, X_{k, t}\right], \\ \boldsymbol{\beta} &= \left[\beta_0, \beta_1, \beta_2, \ldots, \beta_k\right]^{\prime}, \end{aligned}$$
and in time series regression $\xi_t$ is normally assumed to follow a time series model such as $\operatorname{AR}(p)$.
When we have time series data from time $t=1$ to $t=n$, we can present Eq. (3.2) in the matrix form,
$$\underset{n \times 1}{\mathbf{Y}}=\underset{(n \times(k+1))}{\mathbf{X}} \underset{(k+1) \times 1}{\boldsymbol{\beta}}+\underset{(n \times 1)}{\boldsymbol{\xi}}$$
where
$$\mathbf{Y}=\left[\begin{array}{c} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{array}\right], \quad \mathbf{X}=\left[\begin{array}{c} \mathbf{X}_1^{\prime} \\ \mathbf{X}_2^{\prime} \\ \vdots \\ \mathbf{X}_n^{\prime} \end{array}\right]=\left[\begin{array}{ccccc} 1 & X_{1,1} & X_{2,1} & \cdots & X_{k, 1} \\ 1 & X_{1,2} & X_{2,2} & \cdots & X_{k, 2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & X_{1, n} & X_{2, n} & \cdots & X_{k, n} \end{array}\right], \quad \boldsymbol{\beta}=\left[\begin{array}{c} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{array}\right], \quad \boldsymbol{\xi}=\left[\begin{array}{c} \xi_1 \\ \xi_2 \\ \vdots \\ \xi_n \end{array}\right],$$
and $\boldsymbol{\xi}$ follows an $n$-dimensional multivariate normal distribution $N(\mathbf{0}, \mathbf{\Sigma})$. Given $\boldsymbol{\Sigma}$, the generalized least squares (GLS) estimator
$$\hat{\boldsymbol{\beta}}=\left(\mathbf{X}^{\prime} \mathbf{\Sigma}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{\Sigma}^{-1} \mathbf{Y}$$
has the smallest possible variance among all unbiased estimators $\widetilde{\boldsymbol{\beta}}$ of $\boldsymbol{\beta}$ when compared through linear combinations of the form
$$\boldsymbol{c}^{\prime} \tilde{\boldsymbol{\beta}}=c_0 \tilde{\beta}_0+c_1 \tilde{\beta}_1+\cdots+c_k \tilde{\beta}_k .$$
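The GLS formula is a direct matrix computation. A numerical sketch with simulated data; the AR(1)-type covariance $\Sigma_{ij}=\phi^{|i-j|}$ with $\phi=0.6$ is an assumed illustration, not a choice made in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Design matrix with an intercept and two predictors (simulated).
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])

# AR(1)-style error covariance Sigma_ij = phi^|i-j| (assumed phi).
phi = 0.6
Sigma = phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Draw serially correlated errors and form the response.
L = np.linalg.cholesky(Sigma)
y = X @ beta_true + L @ rng.standard_normal(n)

# GLS: beta_hat = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} Y
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# With Sigma = I the same formula reduces to ordinary least squares.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

Using `solve` instead of forming the inverse of `X' Sigma^{-1} X` explicitly is the usual numerically safer choice.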

## Multivariate multiple regression model

Now, suppose that instead of one response variable in Eq. (3.2), we have $m$ response time series variables related to these $k$ predictor time series variables, that is,
$$\begin{aligned} Y_{1, t} &= \beta_{1,0}+\beta_{1,1} X_{1, t}+\cdots+\beta_{1, k} X_{k, t}+\xi_{1, t}=\mathbf{X}_t^{\prime} \boldsymbol{\beta}_{(1)}+\xi_{1, t} \\ Y_{2, t} &= \beta_{2,0}+\beta_{2,1} X_{1, t}+\cdots+\beta_{2, k} X_{k, t}+\xi_{2, t}=\mathbf{X}_t^{\prime} \boldsymbol{\beta}_{(2)}+\xi_{2, t} \\ &\;\;\vdots \\ Y_{m, t} &= \beta_{m, 0}+\beta_{m, 1} X_{1, t}+\cdots+\beta_{m, k} X_{k, t}+\xi_{m, t}=\mathbf{X}_t^{\prime} \boldsymbol{\beta}_{(m)}+\xi_{m, t}, \end{aligned}$$
or
$$\mathbf{Y}_t^{\prime}=\mathbf{X}_t^{\prime}\left[\boldsymbol{\beta}_{(1)}, \boldsymbol{\beta}_{(2)}, \ldots, \boldsymbol{\beta}_{(m)}\right]+\boldsymbol{\xi}_t^{\prime},$$ where $$\begin{aligned} \mathbf{Y}_t^{\prime} &= \left[Y_{1, t}, Y_{2, t}, \ldots, Y_{m, t}\right], \\ \mathbf{X}_t^{\prime} &= \left[1, X_{1, t}, X_{2, t}, \ldots, X_{k, t}\right], \\ \boldsymbol{\beta}_{(i)} &= \left[\beta_{i, 0}, \beta_{i, 1}, \ldots, \beta_{i, k}\right]^{\prime}, \quad i=1,2, \ldots, m, \end{aligned}$$
and
$$\boldsymbol{\xi}_t^{\prime}=\left[\xi_{1, t}, \xi_{2, t}, \ldots, \xi_{m, t}\right].$$
For $i=1,2, \ldots, m$ and time $t=1$ to $t=n$, let
$$\mathbf{Y}_{(i)}=\left[\begin{array}{c} Y_{i, 1} \\ Y_{i, 2} \\ \vdots \\ Y_{i, n} \end{array}\right], \quad \mathbf{X}=\left[\begin{array}{c} \mathbf{X}_1^{\prime} \\ \mathbf{X}_2^{\prime} \\ \vdots \\ \mathbf{X}_n^{\prime} \end{array}\right]=\left[\begin{array}{ccccc} 1 & X_{1,1} & X_{2,1} & \cdots & X_{k, 1} \\ 1 & X_{1,2} & X_{2,2} & \cdots & X_{k, 2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & X_{1, n} & X_{2, n} & \cdots & X_{k, n} \end{array}\right], \text { and } \boldsymbol{\xi}_{(i)}=\left[\begin{array}{c} \xi_{i, 1} \\ \xi_{i, 2} \\ \vdots \\ \xi_{i, n} \end{array}\right]$$
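Because all $m$ responses share the same design matrix $\mathbf{X}$, the $m$ equation-by-equation least squares fits stack into a single matrix solve. A sketch with simulated data (dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 40, 2, 3
# Shared design matrix with an intercept column (simulated).
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
B_true = rng.standard_normal((k + 1, m))          # column i is beta_(i)
Y = X @ B_true + 0.1 * rng.standard_normal((n, m))

# All m single-equation OLS fits at once:
# B_hat = (X'X)^{-1} X'Y, with column i estimating beta_(i).
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Equivalent single-equation fit for the first response Y_(1).
b1 = np.linalg.solve(X.T @ X, X.T @ Y[:, 0])
```

Column $i$ of the stacked estimate agrees exactly with regressing $\mathbf{Y}_{(i)}$ on $\mathbf{X}$ alone, which is why the multivariate model adds no new estimation machinery here.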
