
# Machine Learning: The Viterbi Algorithm


## Viterbi Algorithm

We begin by considering the problem of computing the most-likely sequence of states given a data set $\mathbf{y}_{1:T}$ and a known HMM model. That is, we wish to compute
$$s_{1:T}^* = \arg\max_{s_{1:T}} P\left(s_{1:T} \mid \theta, \mathbf{y}_{1:T}\right)$$
The naive approach is to simply enumerate every possible state sequence and choose the one that maximizes the above conditional probability. Since there are $K^T$ possible state-sequences, this approach is clearly infeasible for sequences of more than a few steps.

Fortunately, we can take advantage of the Markov property to perform this computation much more efficiently. The Viterbi algorithm is a dynamic programming approach to finding the most likely sequence of states $s_{1:T}$ given $\theta$ and a sequence of observations $\mathbf{y}_{1:T}$. We begin by defining the following quantity for each state and each time-step:
$$\delta_t(i) \equiv \max_{s_{1:t-1}} p\left(s_{1:t-1}, s_t=i, \mathbf{y}_{1:t}\right)$$

(Henceforth, we omit $\theta$ from these equations for brevity.) This quantity tells us the likelihood that the most-likely sequence up to time $t$ ends at state $i$, given the data up to time $t$. We will compute this quantity recursively. The base case is simply:
$$\delta_1(i)=p\left(s_1=i, \mathbf{y}_1\right)=p\left(\mathbf{y}_1 \mid s_1=i\right) P\left(s_1=i\right)$$
for all $i$. The recursive case is:
$$\begin{aligned}
\delta_t(i) &= \max_{s_{1:t-1}} p\left(s_{1:t-1}, s_t=i, \mathbf{y}_{1:t}\right) \\
&= \max_{s_{1:t-2},\, j} p\left(s_t=i \mid s_{t-1}=j\right) p\left(\mathbf{y}_t \mid s_t=i\right) p\left(s_{1:t-2}, s_{t-1}=j, \mathbf{y}_{1:t-1}\right) \\
&= p\left(\mathbf{y}_t \mid s_t=i\right) \max_j \left[ p\left(s_t=i \mid s_{t-1}=j\right) \max_{s_{1:t-2}} p\left(s_{1:t-2}, s_{t-1}=j, \mathbf{y}_{1:t-1}\right) \right] \\
&= p\left(\mathbf{y}_t \mid s_t=i\right) \max_j A_{ji}\, \delta_{t-1}(j)
\end{aligned}$$
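The recursion above, together with back-pointers that record the maximizing $j$ at each step, can be sketched in code. The following is a minimal NumPy sketch (the function name `viterbi`, the array layout, and the toy parameters are illustrative assumptions, not from the original text):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most-likely HMM state sequence via the Viterbi recursion.

    pi  : (K,)  initial distribution, pi[i] = P(s_1 = i)
    A   : (K,K) transitions, A[j, i] = P(s_t = i | s_{t-1} = j)
    B   : (K,M) emissions,   B[i, y] = p(y_t = y | s_t = i)
    obs : length-T list of observation indices
    """
    T, K = len(obs), len(pi)
    delta = np.zeros((T, K))            # delta[t, i] as defined above
    psi = np.zeros((T, K), dtype=int)   # back-pointers: the maximizing j
    delta[0] = B[:, obs[0]] * pi        # base case: p(y_1 | s_1=i) P(s_1=i)
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A       # scores[j, i] = delta_{t-1}(j) A_{ji}
        psi[t] = scores.argmax(axis=0)           # best predecessor for each i
        delta[t] = B[:, obs[t]] * scores.max(axis=0)
    path = [int(delta[-1].argmax())]    # best final state, then backtrack
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy 2-state, 3-symbol example (hypothetical numbers, for illustration only)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
best = viterbi(pi, A, B, [0, 1, 2])
```

In practice one works with log-probabilities (replacing products with sums) to avoid numerical underflow on long sequences.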

## The Forward-Backward Algorithm

We may be interested in computing quantities such as $p\left(\mathbf{y}_{1:T} \mid \theta\right)$ or $P\left(s_t \mid \mathbf{y}_{1:T}, \theta\right)$; these distributions are useful for learning and analyzing models. Again, the naive approach to computing these quantities involves summing over all possible sequences of hidden states, and thus is intractable to compute. The Forward-Backward Algorithm allows us to compute these quantities in polynomial time, using dynamic programming.
In the Forward Recursion, we compute:
$$\alpha_t(i) \equiv p\left(\mathbf{y}_{1: t}, s_t=i\right)$$
The base case is:
$$\alpha_1(i)=p\left(\mathbf{y}_1 \mid s_1=i\right) p\left(s_1=i\right)$$

and the recursive case is:
$$\begin{aligned}
\alpha_t(i) &= \sum_j p\left(\mathbf{y}_{1:t}, s_t=i, s_{t-1}=j\right) \\
&= \sum_j p\left(\mathbf{y}_t \mid s_t=i\right) P\left(s_t=i \mid s_{t-1}=j\right) p\left(\mathbf{y}_{1:t-1}, s_{t-1}=j\right) \\
&= p\left(\mathbf{y}_t \mid s_t=i\right) \sum_{j=1}^K A_{ji}\, \alpha_{t-1}(j)
\end{aligned}$$
Note that this is identical to the Viterbi algorithm, except that maximization over $j$ has been replaced by summation.
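The forward recursion can be sketched as follows; this is a minimal NumPy sketch (the function name `forward` and the toy parameters are illustrative assumptions):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward recursion: alpha[t, i] = p(y_{1:t}, s_t = i)."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    alpha[0] = B[:, obs[0]] * pi                 # base case
    for t in range(1, T):
        # alpha[t-1] @ A computes sum_j alpha_{t-1}(j) A_{ji} for every i
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
    return alpha

# Toy 2-state example (hypothetical numbers)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
alpha = forward(pi, A, B, [0, 1, 2])
likelihood = alpha[-1].sum()                     # p(y_{1:T}) by marginalizing s_T
```

Note that summing the final row of $\alpha$ marginalizes out $s_T$, yielding the data likelihood $p(\mathbf{y}_{1:T})$.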
In the Backward Recursion we compute:
$$\beta_t(i) \equiv p\left(\mathbf{y}_{t+1: T} \mid s_t=i\right)$$
The base case is:
$$\beta_T(i)=1$$
The recursive case is:
$$\beta_t(i)=\sum_{j=1}^K A_{ij}\, p\left(\mathbf{y}_{t+1} \mid s_{t+1}=j\right) \beta_{t+1}(j)$$
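The backward recursion runs from $t = T-1$ down to $t = 1$. A minimal NumPy sketch, using the same toy parameters as the forward example (all names and numbers are illustrative assumptions):

```python
import numpy as np

def backward(A, B, obs):
    """Backward recursion: beta[t, i] = p(y_{t+1:T} | s_t = i)."""
    T = len(obs)
    K = A.shape[0]
    beta = np.ones((T, K))                       # base case: beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        # A @ v computes sum_j A_{ij} v_j, with v_j = p(y_{t+1}|s_{t+1}=j) beta_{t+1}(j)
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# Same toy 2-state model as above (hypothetical numbers)
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
beta = backward(A, B, [0, 1, 2])
```

A useful consistency check: for every $t$, $\sum_i \alpha_t(i)\,\beta_t(i) = p(\mathbf{y}_{1:T})$, since the product marginalizes $s_t$ out of the full joint.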
From these quantities, we can easily compute several useful quantities.
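The source's list of these quantities appears to be truncated; as a sketch, the standard identities obtained from $\alpha$ and $\beta$ are the data likelihood and the posterior over the state at each time-step:
$$p\left(\mathbf{y}_{1:T}\right) = \sum_{i=1}^K \alpha_T(i), \qquad P\left(s_t=i \mid \mathbf{y}_{1:T}\right) = \frac{\alpha_t(i)\, \beta_t(i)}{p\left(\mathbf{y}_{1:T}\right)}$$
The second identity follows because $\alpha_t(i)\,\beta_t(i) = p\left(\mathbf{y}_{1:t}, s_t=i\right) p\left(\mathbf{y}_{t+1:T} \mid s_t=i\right) = p\left(\mathbf{y}_{1:T}, s_t=i\right)$.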

