
Stochastic Calculus (MA547): Non-EUT Preferences


Non-EUT Preferences

There is abundant empirical and experimental evidence showing that when making choices under uncertainty, individuals do not maximize expected utility (EU); see for instance the survey by Starmer (2000). Various alternatives to the EU model, generally referred to as non-EU models, have been proposed in the literature. Some of these models employ probability weighting functions to describe the tendency to overweight extreme outcomes that occur with small probabilities, examples being prospect theory (PT) (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992) and rank-dependent utility (RDU) theory (Quiggin, 1982).
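To make the overweighting concrete, here is a minimal sketch of the weighting function proposed by Tversky and Kahneman (1992), $w(p) = p^\gamma / \left(p^\gamma + (1-p)^\gamma\right)^{1/\gamma}$. The parameter value $\gamma = 0.61$ is their median estimate for gains; the probe probabilities below are illustrative.

```python
import math

def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function.

    Overweights small probabilities and underweights moderate-to-large
    ones; gamma = 0.61 is their median estimate for gains.
    """
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

# A 2**-10 chance (e.g. winning ten fair bets in a row) receives a
# decision weight more than ten times its objective probability,
# while a 1/2 chance is slightly underweighted.
print(tk_weight(2.0 ** -10), 2.0 ** -10)
print(tk_weight(0.5), 0.5)
```

The distortion at the tails is what drives the behavior discussed next: a vanishingly small probability of an extreme outcome can dominate the agent's evaluation.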

It has been noted that when applied to dynamic choice problems, non-EU models can lead to time inconsistency; see Machina (1989) for a review of early works discussing this issue. For illustration, consider a casino gambling problem studied by Barberis (2012): a gambler is offered 10 independent bets with equal probabilities of winning and losing $\$1$, plays these bets sequentially, and decides when to stop playing. Suppose that at each time, the gambler's objective is to maximize the preference value of the payoff at the end of the game, and that the preferences are represented by a non-EU model involving a probability weighting function. We represent the cumulative payoff of playing the bets by a binomial tree, with up and down movements standing for winning and losing, respectively. At time 0, the topmost state (TMS) of the tree at $t=10$ represents the largest possible payoff achievable, and the probability of reaching this state is extremely small $\left(2^{-10}\right)$. The gambler overweights this state due to probability weighting and aspires to reach it. Hence, at time 0, her plan is to play the 10-th bet if and when she has won all the previous 9 bets. Now, suppose she has played and indeed won the first 9 bets. If she has a chance to reconsider her decision of whether to play the 10-th bet at that time, she may find it no longer favorable to play, because the probability of reaching the TMS at time 10 is $1/2$ and thus this state is not overweighted. Consequently, when deciding whether to play the 10-th bet conditional on having won the first 9 bets, the gambler may choose differently at time 0 and at time 9, showing time inconsistency. In a continuous-time, complete market, Hu et al.
(2021) study a portfolio selection problem in which an agent maximizes the following RDU of her wealth $X$ at a terminal time: $$\int_{\mathbb{R}} u(x)\, \mathrm{d}\left[-w\left(1-F_X(x)\right)\right],$$ where $u$ is a utility function, $w$ is a probability weighting function, and $F_X$ is the cumulative distribution function of $X$. The authors derive an open-loop intra-personal equilibrium and show that it takes the same form as in the classical Merton model, but with a properly scaled market price of risk. He et al. (2020) consider median and quantile maximization for portfolio selection, where the objective function, namely a quantile of the terminal wealth, can be regarded as a special case of RDU with a particular probability weighting function $w$. The authors study closed-loop intra-personal equilibria and find that an affine trading strategy is an equilibrium if and only if it is a portfolio insurance strategy. Ebert and Strack (2017) consider the optimal time to stop a diffusion process with the objective of maximizing the value of the process at the stopping time under a PT model. Using the notion of mild intra-personal equilibrium discussed in Section 5, the authors show that, under reasonable assumptions on the probability weighting functions, the only equilibrium among all two-threshold stopping rules is to stop immediately. Huang et al. (2020) study mild intra-personal equilibrium stopping rules for an agent who wants to stop a geometric Brownian motion with the objective of maximizing the RDU value at the stopping time.

Dynamically Consistent Preferences

Machina (1989) notes that, in many discussions of time inconsistency in the literature, a hidden assumption is consequentialism: at any intermediate time $t$ of a dynamic decision process, the agent employs the same preference model as used at the initial time to evaluate the choices in the continuation of the dynamic decision process from time $t$, conditional on the circumstances at time $t$.
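For intuition, the RDU of a discrete payoff can be evaluated by giving each outcome a decision weight equal to an increment of the weighted decumulative distribution $w(1-F_X)$. The sketch below is illustrative only: the square-root utility, the Tversky-Kahneman weighting with $\gamma = 0.61$, and the two-point lottery are assumptions, not taken from the papers cited above.

```python
import math

def tk_weight(p: float, gamma: float = 0.61) -> float:
    # Tversky-Kahneman weighting; overweights small probabilities.
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def rdu(outcomes, probs, u=math.sqrt, w=tk_weight):
    """Rank-dependent utility of a discrete payoff.

    Discretizes the RDU functional: each outcome x_i (sorted ascending)
    receives the decision weight w(P(X >= x_i)) - w(P(X > x_i)).
    """
    total = 0.0
    tail = 1.0  # P(X >= current outcome)
    for x, p in sorted(zip(outcomes, probs)):
        total += u(x) * (w(tail) - w(tail - p))
        tail -= p
    return total

# Lottery: $100 with probability 1/100, else $1.  Probability weighting
# inflates the decision weight on the rare large payoff, so the RDU
# value exceeds the unweighted expected utility.
print(rdu([1.0, 100.0], [0.99, 0.01]))
print(0.99 * math.sqrt(1.0) + 0.01 * math.sqrt(100.0))
```

With the identity weighting $w(p) = p$, the decision weights collapse to the objective probabilities and the functional reduces to expected utility.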
For example, consider a dynamic consumption problem for an agent with present-bias preferences, and suppose that at the initial time 0 the agent's preference value for a consumption stream $\left(C_s\right)_{s \geq 0}$ is represented by $\mathbb{E}\left[\int_0^{\infty} h(s) u\left(C_s\right) d s\right]$, where the discount function $h$ models the agent's time preferences at the initial time 0 and $u$ is the agent's utility function. The consequentialism assumption implies that at any intermediate time $t$, the agent's preferences for the continuation of the consumption stream, i.e., $\left(C_s\right)_{s \geq t}$, are represented by the same preference model as at the initial time 0, conditional on the circumstances at time $t$, i.e., by $\mathbb{E}_t\left[\int_t^{\infty} h(s-t) u\left(C_s\right) d s\right]$, where the discount function $h$ and the utility function $u$ are the same as in the preference model at the initial time 0. Similarly, for a dynamic choice problem with RDU preferences over the payoff at a terminal time, the consequentialism assumption stipulates that the agent uses the same utility function $u$ and probability weighting function $w$ at all intermediate times $t$ when evaluating the terminal payoff at those times. The consequentialism assumption, however, has not been broadly validated, because there are few experimental or empirical studies on how individuals dynamically update their preferences. Machina (1989) considers a class of non-EU maximizers, referred to as $\gamma$-people, who adjust their preferences dynamically over time so as to remain time consistent. The idea in Machina (1989) was further developed by Karnam et al. (2017), who propose the notion of time-consistent dynamic preference models. The idea of considering time-consistent dynamic preferences is also central to the theory of forward performance criteria proposed and developed by Musiela and Zariphopoulou (2006, 2008, 2009, 2010a,b, 2011); see also He et al. (2021) for a related discussion.
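The time inconsistency produced by present-bias preferences under consequentialism can be sketched numerically. For two lump-sum rewards, the consumption functional above reduces to comparing discounted values $h(s-t)\,u(C_s)$; assuming a hypothetical hyperbolic discount function $h(s) = 1/(1+ks)$ and linear utility (the rewards, dates, and $k$ below are made up for illustration), the agent's ranking of a smaller-sooner versus a larger-later reward flips as the rewards draw near:

```python
def present_value(reward: float, delay: float, k: float = 0.15) -> float:
    """Hyperbolic discounting h(s) = 1 / (1 + k*s), a stand-in for the
    present-bias discount function h; parameters are illustrative."""
    return reward / (1.0 + k * delay)

def preferred(now: float) -> str:
    # Smaller-sooner: 10 paid at time 5; larger-later: 15 paid at time 10.
    ss = present_value(10.0, 5.0 - now)
    ll = present_value(15.0, 10.0 - now)
    return "larger-later" if ll > ss else "smaller-sooner"

print(preferred(0.0))  # at time 0, the plan is to wait for the larger reward
print(preferred(4.0))  # re-evaluating at time 4, the agent reverses her plan
```

Under exponential discounting $h(s) = e^{-\rho s}$ the ratio of the two discounted values is independent of the evaluation time, so no such reversal occurs; the reversal is exactly the time inconsistency that consequentialist re-evaluation exposes.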

