
# ENGG3300 Probability Density Functions (PDFs)


## Probability Density Functions (PDFs)

In many cases, we wish to handle data that can be represented as a real-valued random variable, or a real-valued vector $\mathbf{x}=\left[x_1, x_2, \ldots, x_n\right]^T$. Most of the intuitions from discrete variables transfer directly to the continuous case, although there are some subtleties.

We describe the probabilities of a real-valued scalar variable $x$ with a Probability Density Function (PDF), written $p(x)$. Any real-valued function $p(x)$ that satisfies:
$$\begin{aligned} p(x) & \geq 0 \quad \text{for all } x \\ \int_{-\infty}^{\infty} p(x)\, dx & = 1 \end{aligned}$$
is a valid PDF. I will use the convention of upper-case $P$ for discrete probabilities, and lower-case $p$ for PDFs.

With the PDF we can specify the probability that the random variable $x$ falls within a given range:
$$P\left(x_0 \leq x \leq x_1\right)=\int_{x_0}^{x_1} p(x) d x$$
This can be visualized by plotting the curve $p(x)$. Then, to determine the probability that $x$ falls within a range, we compute the area under the curve for that range.
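As a concrete check, the integral above can be approximated numerically. The sketch below assumes a standard normal density, purely as an example, and estimates $P(-1 \leq x \leq 1)$ with the midpoint rule:

```python
import math

def p(x):
    # Standard normal PDF, chosen here purely as an example density.
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

def prob_in_range(pdf, x0, x1, n=100_000):
    # Midpoint-rule approximation of the area under pdf on [x0, x1].
    dx = (x1 - x0) / n
    return sum(pdf(x0 + (i + 0.5) * dx) for i in range(n)) * dx

# For a standard normal, P(-1 <= x <= 1) is about 0.6827.
print(round(prob_in_range(p, -1.0, 1.0), 4))
```

Any valid PDF could be substituted for `p` here; only the area computation matters.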

The PDF can be thought of as the infinite limit of a discrete distribution, i.e., a discrete distribution with an infinite number of possible outcomes. Specifically, suppose we create a discrete distribution with $N$ possible outcomes, each corresponding to a range on the real number line. Then, suppose we increase $N$ towards infinity, so that each outcome shrinks to a single real number; a PDF is defined as the limiting case of this discrete distribution.
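This limiting construction can be sketched numerically: discretize the density into $N$ bins, give each bin the mass $p(\text{midpoint}) \cdot \Delta x$, and observe that the total mass stays near 1 while each individual mass shrinks as $N$ grows. A standard normal is assumed as the example density:

```python
import math

def p(x):
    # Standard normal PDF, used as the example density.
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

def discretize(pdf, lo, hi, n):
    # One discrete outcome per bin, with mass p(midpoint) * bin width.
    dx = (hi - lo) / n
    return [pdf(lo + (i + 0.5) * dx) * dx for i in range(n)]

for n in (10, 100, 1000):
    masses = discretize(p, -5.0, 5.0, n)
    # Total mass stays near 1; the largest single mass shrinks toward 0.
    print(n, round(sum(masses), 4), round(max(masses), 6))
```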

There is an important subtlety here: a probability density is not a probability per se. For one thing, there is no requirement that $p(x) \leq 1$. Moreover, the probability that $x$ attains any one specific value out of the infinite set of possible values is always zero, e.g. $P(x=5)=$ $\int_5^5 p(x) d x=0$ for any PDF $p(x)$. People (myself included) are sometimes sloppy in referring to $p(x)$ as a probability, but it is not a probability – rather, it is a function that can be used in computing probabilities.
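A quick illustration that a density can exceed 1 while remaining a valid PDF: a uniform density on $[0, 0.1]$ (assumed here as the example) has height 10 everywhere on its support, yet its total area is exactly 1.

```python
def p(x):
    # Uniform density on [0, 0.1]: height 10, width 0.1, total area 1.
    return 10.0 if 0.0 <= x <= 0.1 else 0.0

# The density at x = 0.05 is 10, far above 1, yet the total probability
# is height * width = 10 * 0.1 = 1, so this is a perfectly valid PDF.
print(p(0.05), p(0.05) * 0.1)  # prints 10.0 1.0
```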

## Mathematical expectation, mean, and variance

Some very brief definitions of ways to describe a PDF:
Given a function $f(\mathbf{x})$ of an unknown variable $\mathbf{x}$, the expected value of the function with respect to a PDF $p(\mathbf{x})$ is defined as:
$$E_{p(\mathbf{x})}[f(\mathbf{x})] \equiv \int f(\mathbf{x}) p(\mathbf{x}) d \mathbf{x}$$
Intuitively, this is the value that we roughly "expect" $f(\mathbf{x})$ to have when $\mathbf{x}$ is drawn from $p(\mathbf{x})$.
The mean $\boldsymbol{\mu}$ of a distribution $p(\mathbf{x})$ is the expected value of $\mathbf{x}$ :
$$\boldsymbol{\mu}=E_{p(\mathbf{x})}[\mathbf{x}]=\int \mathbf{x} p(\mathbf{x}) d \mathbf{x}$$
The variance of a scalar variable $x$ is the expected squared deviation from the mean:
$$E_{p(x)}\left[(x-\mu)^2\right]=\int(x-\mu)^2 p(x) d x$$
The variance of a distribution tells us how uncertain, or “spread-out” the distribution is. For a very narrow distribution $E_{p(x)}\left[(x-\mu)^2\right]$ will be small.
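These definitions can be checked numerically. The sketch below assumes an exponential density with rate $\lambda = 2$, purely as an example, and recovers the known mean $1/\lambda = 0.5$ and variance $1/\lambda^2 = 0.25$ by midpoint-rule integration of the defining integrals:

```python
import math

LAM = 2.0  # rate of the example exponential distribution

def p(x):
    # Exponential PDF: lam * exp(-lam * x) for x >= 0, else 0.
    return LAM * math.exp(-LAM * x) if x >= 0 else 0.0

def expect(f, pdf, lo=0.0, hi=50.0, n=200_000):
    # E[f(x)] = integral of f(x) p(x) dx, midpoint rule on [lo, hi].
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) * pdf(lo + (i + 0.5) * dx)
               for i in range(n)) * dx

mu = expect(lambda x: x, p)              # closed form: 1/LAM = 0.5
var = expect(lambda x: (x - mu)**2, p)   # closed form: 1/LAM**2 = 0.25
print(round(mu, 3), round(var, 3))
```

Note that the mean is itself just an expectation with $f(x) = x$, and the variance an expectation with $f(x) = (x-\mu)^2$, mirroring the definitions above.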
The covariance of a random vector $\mathbf{x}$ is a matrix:
$$\boldsymbol{\Sigma}=\operatorname{cov}(\mathbf{x})=E_{p(\mathbf{x})}\left[(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^T\right]=\int(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^T p(\mathbf{x}) d \mathbf{x}$$
By inspection, we can see that the diagonal entries of the covariance matrix are the variances of the individual entries of the vector:
$$\boldsymbol{\Sigma}_{ii}=\operatorname{var}\left(x_i\right)=E_{p(\mathbf{x})}\left[\left(x_i-\mu_i\right)^2\right]$$
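The diagonal property can be verified with a sample estimate of the covariance matrix. The sketch below assumes two independent Gaussian components with standard deviations 1 and 2 (an example choice): the diagonal lands near the variances 1 and 4, and the off-diagonals near 0 since the components are independent.

```python
import random

random.seed(0)

# Samples of a 2-vector with independent components: std dev 1 and 2.
n = 200_000
data = [(random.gauss(0.0, 1.0), random.gauss(0.0, 2.0)) for _ in range(n)]

def covariance(rows):
    d = len(rows[0])
    mu = [sum(r[i] for r in rows) / len(rows) for i in range(d)]
    # Sigma_ij = E[(x_i - mu_i)(x_j - mu_j)], estimated from the samples.
    return [[sum((r[i] - mu[i]) * (r[j] - mu[j]) for r in rows) / len(rows)
             for j in range(d)] for i in range(d)]

S = covariance(data)
# Diagonal approximates the variances (about 1 and 4); off-diagonals near 0.
print([[round(v, 2) for v in row] for row in S])
```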

