Let $\mathrm{A}$ and $\mathrm{B}$ be two non-empty sets. A relation $f$ from the set $\mathrm{A}$ to the set $\mathrm{B}$ is said to be a function if it satisfies the following two conditions: (i) $\mathrm{D}(f)=\mathrm{A}$, and (ii) if $\left(x, y_1\right) \in f$ and $\left(x, y_2\right) \in f$, then $y_1=y_2$.
In other words, a relation $f$ from the set $\mathrm{A}$ to the set $\mathrm{B}$ is said to be a function if for each element $x$ in $\mathrm{A}$ there exists a unique element $y$ in $\mathrm{B}$ with $(x, y) \in f$. A function from $\mathrm{A}$ to $\mathrm{B}$ is sometimes denoted as $f: \mathrm{A} \rightarrow \mathrm{B}$.
Consider the following relations from the set $\mathrm{A}=\{1,2,3,4\}$ to the set $\mathrm{B}=\{1,4,6,9,16,18\}$:
$$
\begin{aligned}
f_1 &=\{(1,1),(2,6),(4,9),(4,18)\} \\
f_2 &=\{(1,1),(2,6),(3,9),(4,9),(4,16)\} \\
f_3 &=\{(1,1),(2,4),(3,9),(4,16)\} \\
f_4 &=\{(1,1),(2,4),(3,9),(4,9)\}
\end{aligned}
$$
Now, $\mathrm{D}\left(f_1\right)=\{1,2,4\} \neq \mathrm{A}$. Therefore $f_1$ is not a function from the set $\mathrm{A}$ to the set $\mathrm{B}$. Further, $\mathrm{D}\left(f_2\right)=\{1,2,3,4\}=\mathrm{A}$, but $(4,9) \in f_2$ and $(4,16) \in f_2$ with $9 \neq 16$. This implies $f_2$ cannot be a function from the set $\mathrm{A}$ to the set $\mathrm{B}$.
Again, $\mathrm{D}\left(f_3\right)=\{1,2,3,4\}=\mathrm{A}$, and for every element $x \in \mathrm{A}$ there exists a unique $y \in \mathrm{B}$. Therefore $f_3$ is a function from the set $\mathrm{A}$ to the set $\mathrm{B}$. Similarly, $f_4$ is also a function. The arrow diagrams are given below.
Note: From the above discussion it is clear that one-to-many and many-to-many relations are not functions.
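To make the two conditions concrete, the following short Python sketch (the function and variable names are ours, not the text's) tests whether a relation given as a set of ordered pairs is a function from A to B; it reproduces the verdicts for $f_1$ and $f_3$ above.

```python
def is_function(relation, A, B):
    """Check whether `relation` (a set of (x, y) pairs) is a function from A to B."""
    # Condition (i): the domain of the relation must be all of A.
    domain = {x for (x, y) in relation}
    if domain != set(A):
        return False
    # Condition (ii): no element of A may have two different images.
    images = {}
    for x, y in relation:
        if y not in B:
            return False          # every pair must land in the co-domain B
        if x in images and images[x] != y:
            return False          # x has two distinct images, so not a function
        images[x] = y
    return True

A = {1, 2, 3, 4}
B = {1, 4, 6, 9, 16, 18}
f1 = {(1, 1), (2, 6), (4, 9), (4, 18)}
f3 = {(1, 1), (2, 4), (3, 9), (4, 16)}
print(is_function(f1, A, B))  # False: D(f1) != A
print(is_function(f3, A, B))  # True
```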
Domain and co-domain of a Function
Suppose that $f$ is a function from the set $\mathrm{A}$ to the set $\mathrm{B}$. The set $\mathrm{A}$ is called the domain of the function $f$, whereas the set $\mathrm{B}$ is called the co-domain of the function $f$. Consider the function $f$ from the set $\mathrm{A}=\{a, b, c, d\}$ to the set $\mathrm{B}=\{1,2,3,4\}$ given by
$$
f=\{(a, 1),(b, 2),(c, 2),(d, 4)\}
$$
Therefore, the domain of $f$ is $\{a, b, c, d\}$ and the co-domain of $f$ is $\{1,2,3,4\}$, i.e. $\mathrm{D}(f)=\{a, b, c, d\}$ and co-domain of $f=\{1,2,3,4\}$.
4.1.2 Range of a Function
Let $f$ be a function from the set $\mathrm{A}$ to the set $\mathrm{B}$. The element $y \in \mathrm{B}$ which the function $f$ associates to an element $x \in \mathrm{A}$ is called the image of $x$, or the value of the function $f$ at $x$. From the definition of a function it is clear that each element of $\mathrm{A}$ has a unique image in $\mathrm{B}$. Therefore the range of a function $f: \mathrm{A} \rightarrow \mathrm{B}$ is defined as the image of its domain $\mathrm{A}$. Mathematically,
$$
\mathrm{R}(f) \text{ or } \operatorname{rng}(f)=\{y=f(x): x \in \mathrm{A}\}
$$
It is clear that $\mathrm{R}(f) \subseteq \mathrm{B}$. Consider the function $f$ from $\mathrm{A}=\{a, b, c\}$ to $\mathrm{B}=\{1,3,5,7,9\}$ given by $f=\{(a, 3),(b, 5),(c, 5)\}$. Therefore $\mathrm{R}(f)=\{3,5\}$.
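As an illustration only, this small Python fragment computes $\mathrm{D}(f)$ and $\mathrm{R}(f)$ for the example function above and confirms that the range is contained in the co-domain.

```python
f = {('a', 1), ('b', 2), ('c', 2), ('d', 4)}
codomain = {1, 2, 3, 4}

domain = {x for (x, y) in f}     # D(f)
rng = {y for (x, y) in f}        # R(f), the set of images

print(domain)                    # {'a', 'b', 'c', 'd'}
print(rng)                       # {1, 2, 4}
print(rng <= codomain)           # True: R(f) is a subset of the co-domain
```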
TYPES OF RELATIONS AND RELATION MATRIX
Let $\mathrm{A}=\left\{a_1, a_2, \ldots, a_i, \ldots, a_j, \ldots, a_n\right\}$ be a non-empty set and $\mathrm{R}$ be a relation defined on the set $\mathrm{A}$. Then the matrix of the relation $\mathrm{R}$ relative to the ordering $a_1, a_2, \ldots, a_i, \ldots, a_j, \ldots, a_n$ is defined as
$$
\mathrm{M}(\mathrm{R})=\left[m_{ij}\right]_{n \times n}, \qquad m_{ij}= \begin{cases}1 & \text{if } a_i \mathrm{R}\, a_j \\ 0 & \text{if } a_i \not\mathrm{R}\, a_j\end{cases}
$$
3.12.1 Reflexive Relations
The relation $\mathrm{R}$ is said to be reflexive if $m_{ii}=1$ for all $1 \leq i \leq n$, i.e. all elements of the main diagonal of the relation matrix $\mathrm{M}(\mathrm{R})$ are 1.
3.12.2 Symmetric Relations
The relation $\mathrm{R}$ is said to be symmetric if $m_{ij}=m_{ji}$ for all $1 \leq i \leq n$ and $1 \leq j \leq n$. In other words, the relation $\mathrm{R}$ is said to be symmetric if $\mathrm{M}(\mathrm{R})=[\mathrm{M}(\mathrm{R})]^{\mathrm{T}}$, where $[\mathrm{M}(\mathrm{R})]^{\mathrm{T}}$ represents the transpose of the relation matrix $\mathrm{M}(\mathrm{R})$.
3.12.3 Transitive Relations
The relation $\mathrm{R}$ is said to be transitive if whenever $m_{ij}=1$ and $m_{jk}=1$, then $m_{ik}=1$, for $1 \leq i \leq n$, $1 \leq j \leq n$ and $1 \leq k \leq n$.
In other words, the relation $\mathrm{R}$ is transitive if and only if $\mathrm{R}^2 \subseteq \mathrm{R}$, i.e. whenever entry $(i, j)$ of $[\mathrm{M}(\mathrm{R})]^2$ is non-zero, entry $(i, j)$ of $\mathrm{M}(\mathrm{R})$ is also non-zero. To prove this, let $\mathrm{R}$ be a transitive relation on the set $\mathrm{A}$, and let
$$
(x, z) \in \mathrm{R}^2=\mathrm{R} . \mathrm{R}
$$
So there exists $y \in \mathrm{A}$ such that $(x, y) \in \mathrm{R}$ and $(y, z) \in \mathrm{R}$. Thus $(x, z) \in \mathrm{R}$ $[\because \mathrm{R}$ is transitive$]$, i.e.
$$
(x, z) \in \mathrm{R}^2 \Rightarrow(x, z) \in \mathrm{R}
$$
Therefore
$$
\mathrm{R}^2 \subseteq \mathrm{R}
$$
Conversely, suppose that $\mathrm{R}^2 \subseteq \mathrm{R}$. Let $(x, y) \in \mathrm{R}$ and $(y, z) \in \mathrm{R}$. This implies $(x, z) \in \mathrm{R} . \mathrm{R}=\mathrm{R}^2$, i.e. $(x, z) \in \mathrm{R}^2 \subseteq \mathrm{R}$, i.e. $(x, z) \in \mathrm{R}$. Therefore $\mathrm{R}$ is transitive.
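A minimal Python sketch of these matrix tests is given below (identifier names are ours, and the sample relation is chosen only for illustration). The transitivity check uses exactly the criterion just proved: every non-zero entry of $[\mathrm{M}(\mathrm{R})]^2$ must correspond to a non-zero entry of $\mathrm{M}(\mathrm{R})$.

```python
def relation_matrix(A, R):
    """Matrix of relation R on the ordered list A: m[i][j] = 1 iff (A[i], A[j]) in R."""
    n = len(A)
    return [[1 if (A[i], A[j]) in R else 0 for j in range(n)] for i in range(n)]

def is_reflexive(M):
    return all(M[i][i] == 1 for i in range(len(M)))

def is_symmetric(M):
    n = len(M)
    return all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

def is_transitive(M):
    # R is transitive iff every non-zero entry of M(R)^2 is also non-zero in M(R).
    n = len(M)
    for i in range(n):
        for j in range(n):
            m2 = sum(M[i][k] * M[k][j] for k in range(n))
            if m2 > 0 and M[i][j] == 0:
                return False
    return True

A = [1, 2, 3]
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}   # illustrative relation on A
M = relation_matrix(A, R)
print(is_reflexive(M), is_symmetric(M), is_transitive(M))  # True True True
```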
RELATION MATRIX (MATRIX OF THE RELATION)
A matrix is a convenient way to represent a relation $\mathrm{R}$. Such a representation can be used by a computer to analyze the relation. Let
$$
\mathrm{A}=\left\{a_1, a_2, a_3, \ldots, a_i, \ldots, a_k\right\} \quad \text{and} \quad \mathrm{B}=\left\{b_1, b_2, b_3, \ldots, b_j, \ldots, b_l\right\}
$$
be two finite sets and $\mathrm{R}$ be a relation from the set $\mathrm{A}$ to the set $\mathrm{B}$. Then the matrix of the relation $\mathrm{R}$, i.e. $\mathrm{M}(\mathrm{R})$, is defined as
$$
\mathrm{M}(\mathrm{R})=\left[m_{ij}\right] \text{ of order } (k \times l), \qquad m_{ij}= \begin{cases}1 & \text{if } a_i \mathrm{R}\, b_j \\ 0 & \text{if } a_i \not\mathrm{R}\, b_j\end{cases}
$$
In other words, label the rows of a rectangular array by the elements of $\mathrm{A}$ and the columns by the elements of $\mathrm{B}$. Each position of the array is filled with a 1 (one) or a 0 (zero) according as $a \in \mathrm{A}$ is related or not related to $b \in \mathrm{B}$. Consider the example.
Let $\mathrm{A}=\{1,2,3\}$, $\mathrm{B}=\{a, b, c, d, e\}$ and $\mathrm{R} \subseteq(\mathrm{A} \times \mathrm{B})$ such that $\mathrm{R}=\{(1, a),(1, d),(2, b),(3, c),(3, d)\}$. So the matrix of the above relation $\mathrm{R}$ is given as
$$
\mathrm{M}(\mathrm{R})=
\begin{array}{c|ccccc}
 & a & b & c & d & e \\ \hline
1 & 1 & 0 & 0 & 1 & 0 \\
2 & 0 & 1 & 0 & 0 & 0 \\
3 & 0 & 0 & 1 & 1 & 0
\end{array}
$$
COMPOSITION OF RELATIONS
Let $\mathrm{R}_1$ be a relation from the set $\mathrm{A}$ to the set $\mathrm{B}$ and $\mathrm{R}_2$ be a relation from the set $\mathrm{B}$ to the set $\mathrm{C}$. That is, $\mathrm{R}_1$ is a subset of $(\mathrm{A} \times \mathrm{B})$ and $\mathrm{R}_2$ is a subset of $(\mathrm{B} \times \mathrm{C})$. Then the composition of $\mathrm{R}_1$ and $\mathrm{R}_2$ is denoted by $\mathrm{R}_1 \mathrm{R}_2$ and is defined by
$$
\mathrm{R}_1 \mathrm{R}_2=\left\{(x, z) \in(\mathrm{A} \times \mathrm{C}) \mid \text{for some } y \in \mathrm{B},\ (x, y) \in \mathrm{R}_1 \text{ and } (y, z) \in \mathrm{R}_2\right\}
$$
Consider the example: let $\mathrm{A}=\{1,2,4,5,7\}$, $\mathrm{B}=\{a, b, c, d, e\}$ and $\mathrm{C}=\{1,4,16,25\}$, and consider the relations $\mathrm{R}_1: \mathrm{A} \rightarrow \mathrm{B}$ and $\mathrm{R}_2: \mathrm{B} \rightarrow \mathrm{C}$ as
$\mathrm{R}_1=\{(1, a),(1, c),(2, d),(2, e),(5, d)\}$ and $\mathrm{R}_2=\{(c, 1),(d, 4),(e, 25)\}$. The arrow diagram is given below. So,
$$
\mathrm{R}_1 \mathrm{R}_2=\{(1,1),(2,4),(2,25),(5,4)\}
$$
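A compact Python version of the composition rule, applied to the relations $\mathrm{R}_1$ and $\mathrm{R}_2$ above, is sketched here (the helper name `compose` is ours):

```python
def compose(R1, R2):
    """Composition R1 R2 = {(x, z) : (x, y) in R1 and (y, z) in R2 for some y}."""
    return {(x, z) for (x, y1) in R1 for (y2, z) in R2 if y1 == y2}

R1 = {(1, 'a'), (1, 'c'), (2, 'd'), (2, 'e'), (5, 'd')}
R2 = {('c', 1), ('d', 4), ('e', 25)}
print(compose(R1, R2))   # {(1, 1), (2, 4), (2, 25), (5, 4)}
```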
APPLICATION OF SET THEORY
Let $\mathrm{A}$ and $\mathrm{B}$ be finite sets and let $n(\mathrm{A})$ be the number of distinct elements of the set $\mathrm{A}$. Then
$$
n(\mathrm{A} \cup \mathrm{B})=n(\mathrm{A})+n(\mathrm{B})-n(\mathrm{A} \cap \mathrm{B}).
$$
Further, if $\mathrm{A}$ and $\mathrm{B}$ are disjoint, then
$$
n(\mathrm{A} \cup \mathrm{B})=n(\mathrm{A})+n(\mathrm{B})
$$
Proof: Let $\mathrm{A}$ and $\mathrm{B}$ be finite sets and let $n(\mathrm{A})$ represent the number of distinct elements of the set $\mathrm{A}$. From the Venn diagram of $\mathrm{A}$ and $\mathrm{B}$ it is clear that
$$
\begin{aligned}
n(\mathrm{A}) &=n(\mathrm{A}-\mathrm{B})+n(\mathrm{A} \cap \mathrm{B}) \\
n(\mathrm{B}) &=n(\mathrm{B}-\mathrm{A})+n(\mathrm{A} \cap \mathrm{B}) \\
n(\mathrm{A} \cup \mathrm{B}) &=n(\mathrm{A}-\mathrm{B})+n(\mathrm{A} \cap \mathrm{B})+n(\mathrm{B}-\mathrm{A}) \\
&=n(\mathrm{A})-n(\mathrm{A} \cap \mathrm{B})+n(\mathrm{A} \cap \mathrm{B})+n(\mathrm{B})-n(\mathrm{A} \cap \mathrm{B}) \\
&=n(\mathrm{A})+n(\mathrm{B})-n(\mathrm{A} \cap \mathrm{B})
\end{aligned}
$$
i.e.
$$
n(\mathrm{A} \cup \mathrm{B})=n(\mathrm{A})+n(\mathrm{B})-n(\mathrm{A} \cap \mathrm{B})
$$
If $\mathrm{A}$ and $\mathrm{B}$ are disjoint, then $(\mathrm{A} \cap \mathrm{B})=\phi$, i.e. $n(\mathrm{A} \cap \mathrm{B})=0$. Therefore $n(\mathrm{A} \cup \mathrm{B})=n(\mathrm{A})+n(\mathrm{B})$.
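The identity is easy to check numerically; the following Python snippet uses two illustrative sets (not taken from the text) and compares both sides of the formula.

```python
A = {1, 2, 3, 4, 5}
B = {4, 5, 6, 7}

lhs = len(A | B)                        # n(A ∪ B)
rhs = len(A) + len(B) - len(A & B)      # n(A) + n(B) - n(A ∩ B)
print(lhs, rhs)                         # 7 7
```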
PRODUCT OF SETS
The product of sets is defined with the help of an ordered pair. An ordered pair is usually denoted by $(x, y)$, where $(x, y) \neq(y, x)$ whenever $x \neq y$. The product of two sets $\mathrm{A}$ and $\mathrm{B}$ is the set of all those ordered pairs whose first coordinate is an element of $\mathrm{A}$ and whose second coordinate is an element of $\mathrm{B}$. The set is denoted by $(\mathrm{A} \times \mathrm{B})$. Mathematically,
$$
(\mathrm{A} \times \mathrm{B})=\{(x, y) \mid x \in \mathrm{A} \text{ and } y \in \mathrm{B}\}
$$
Consider the example: let $\mathrm{A}=\{1,2,3,5,7\}$ and $\mathrm{B}=\{4,9,25\}$. So,
$$
(\mathrm{A} \times \mathrm{B})=\{(1,4),(1,9),(1,25),(2,4),(2,9),(2,25),(3,4),(3,9),(3,25),(5,4),(5,9),(5,25),(7,4),(7,9),(7,25)\}
$$
Note: The product of sets can be extended to $n$ sets $\mathrm{A}_1, \mathrm{A}_2, \mathrm{A}_3, \ldots, \mathrm{A}_n$. Thus $\mathrm{A}_1 \times \mathrm{A}_2 \times \mathrm{A}_3 \times \cdots \times \mathrm{A}_n$ can be defined as
$$
\mathrm{A}_1 \times \mathrm{A}_2 \times \mathrm{A}_3 \times \cdots \times \mathrm{A}_n=\left\{\left(x_1, x_2, x_3, \ldots, x_n\right) \mid x_1 \in \mathrm{A}_1 \text{ and } x_2 \in \mathrm{A}_2 \text{ and } x_3 \in \mathrm{A}_3 \text{ and } \ldots \text{ and } x_n \in \mathrm{A}_n\right\}
$$
where $\left(x_1, x_2, x_3, \ldots, x_n\right)$ is called an $n$-tuple of $x_1, x_2, x_3, \ldots, x_n$. To explain this, consider the example in which $\mathrm{A}=\{a, b, c\}$, $\mathrm{B}=\{1,2\}$ and $\mathrm{C}=\{\alpha, \beta\}$. Therefore
$$
\mathrm{A} \times \mathrm{B} \times \mathrm{C}=\{(a, 1, \alpha),(a, 1, \beta),(a, 2, \alpha),(a, 2, \beta),(b, 1, \alpha),(b, 1, \beta),(b, 2, \alpha),(b, 2, \beta),(c, 1, \alpha),(c, 1, \beta),(c, 2, \alpha),(c, 2, \beta)\}.
$$
From the above example it is very clear that $|\mathrm{A} \times \mathrm{B} \times \mathrm{C}|=|\mathrm{A}| \times|\mathrm{B}| \times|\mathrm{C}|$. In general, $\left|\mathrm{A}_1 \times \mathrm{A}_2 \times \mathrm{A}_3 \times \ldots \ldots \times \mathrm{A}_n\right|=\left|\mathrm{A}_1\right| \times\left|\mathrm{A}_2\right| \times\left|\mathrm{A}_3\right| \times \ldots \times\left|\mathrm{A}_n\right|$.
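The cardinality rule can be verified with Python's itertools.product, as in this illustrative sketch (the set contents mirror the example above, with the strings `'alpha'` and `'beta'` standing in for $\alpha, \beta$):

```python
from itertools import product

A = {'a', 'b', 'c'}
B = {1, 2}
C = {'alpha', 'beta'}

triples = list(product(A, B, C))        # all ordered 3-tuples (x1, x2, x3)
print(len(triples))                     # 12
print(len(A) * len(B) * len(C))         # 12, confirming |A x B x C| = |A|·|B|·|C|
```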
Modern game theory began with John von Neumann's idea of mixed-strategy equilibria in two-person zero-sum games and his proof of their existence. Von Neumann's original proof used Brouwer's fixed-point theorem for continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. Following his paper, in 1944 he co-authored the book Theory of Games and Economic Behavior with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Let $\mathrm{P}$ and $\mathrm{Q}$ be any two statements. Then the statement $\mathrm{P} \rightarrow \mathrm{Q}$ is called a conditional statement. This can be put in any one of the following forms: (a) if $\mathrm{P}$, then $\mathrm{Q}$; (b) $\mathrm{P}$ only if $\mathrm{Q}$; (c) $\mathrm{P}$ implies $\mathrm{Q}$; (d) $\mathrm{Q}$ if $\mathrm{P}$. In an implication $\mathrm{P} \rightarrow \mathrm{Q}$, $\mathrm{P}$ is called the antecedent (hypothesis) and $\mathrm{Q}$ is called the consequent (conclusion). To explain the conditional statement, consider this example. A boy promises a girl, “I will take you boating on Sunday if it is not raining”. Now if it is raining, then the boy would not be deemed to have broken his promise. The boy would be deemed to have broken his promise only when it is not raining and he did not take the girl boating on Sunday. Let us reduce the above conditional statement to symbolic form. P: It is not raining. Q: I will take you boating on Sunday. So the above statement reduces to $\mathrm{P} \rightarrow \mathrm{Q}$. From the above discussion it is clear that if $\mathrm{P}$ is false then $\mathrm{P} \rightarrow \mathrm{Q}$ is true, whatever the truth value of $\mathrm{Q}$. The conditional $\mathrm{P} \rightarrow \mathrm{Q}$ is false if $\mathrm{P}$ is true and $\mathrm{Q}$ is false.
Rule: An implication (conditional) $\mathrm{P} \rightarrow \mathrm{Q}$ is False only when the hypothesis $(\mathrm{P})$ is true and conclusion (Q) is false, otherwise True.
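The rule can be encoded directly; the short Python sketch below (the function name is ours) prints the four rows of the truth table for $\mathrm{P} \rightarrow \mathrm{Q}$.

```python
def implies(p, q):
    """Truth value of P -> Q: false only when P is true and Q is false."""
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
# True True True / True False False / False True True / False False True
```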
BI-CONDITIONAL
Let $\mathrm{P}$ and $\mathrm{Q}$ be any two statements. Then the statement $\mathrm{P} \leftrightarrow \mathrm{Q}$ is called a bi-conditional statement. This $\mathrm{P} \leftrightarrow \mathrm{Q}$ can be put in any one of the following forms: (a) $\mathrm{P}$ if and only if $\mathrm{Q}$; (b) $\mathrm{P}$ iff $\mathrm{Q}$; (c) $\mathrm{P}$ is necessary and sufficient for $\mathrm{Q}$; (d) $\mathrm{P}$ implies and is implied by $\mathrm{Q}$. The bi-conditional (double implication) $\mathrm{P} \leftrightarrow \mathrm{Q}$ is defined as
$$
(\mathrm{P} \leftrightarrow \mathrm{Q}):(\mathrm{P} \rightarrow \mathrm{Q}) \wedge(\mathrm{Q} \rightarrow \mathrm{P})
$$
From the truth table discussed below it is clear that $\mathrm{P} \leftrightarrow \mathrm{Q}$ has the truth value $\mathrm{T}$ whenever both $\mathrm{P}$ and $\mathrm{Q}$ have identical truth values.
Rule: $(\mathrm{P} \leftrightarrow \mathrm{Q})$ is True only when both $\mathrm{P}$ and $\mathrm{Q}$ have identical truth values, otherwise False.
Let $\mathrm{P}$ and $\mathrm{Q}$ be any two statements. The converse statement of the conditional $\mathrm{P} \rightarrow \mathrm{Q}$ is given as $\mathrm{Q} \rightarrow \mathrm{P}$.
Consider the example “all congruent triangles are similar”. The above statement can also be written as “if triangles are congruent, then they are similar”. Let P: Triangles are congruent. Q: Triangles are similar. So the statement becomes $\mathrm{P} \rightarrow \mathrm{Q}$. The converse statement is given as “if triangles are similar, then they are congruent”, or “all similar triangles are congruent”.
‘‘Not’’
The statement “not $\mathbf{A}$,” written $\sim \mathbf{A}$, is true whenever $\mathbf{A}$ is false. For example, the statement “Charles is not happily married” is true provided the statement “Charles is happily married” is false. The truth table for $\sim \mathbf{A}$ is as follows:
\begin{tabular}{cc}
\hline $\mathbf{A}$ & $\sim \mathbf{A}$ \\
\hline $\mathrm{T}$ & $\mathrm{F}$ \\
$\mathrm{F}$ & $\mathrm{T}$ \\
\hline
\end{tabular}
Greater understanding is obtained by combining the connectives.
EXAMPLE 1.6 We examine the truth table for $\sim(\mathbf{A} \wedge \mathbf{B})$:
\begin{tabular}{lccc}
\hline $\mathbf{A}$ & $\mathbf{B}$ & $\mathbf{A} \wedge \mathbf{B}$ & $\sim(\mathbf{A} \wedge \mathbf{B})$ \\
\hline $\mathrm{T}$ & $\mathrm{T}$ & $\mathrm{T}$ & $\mathrm{F}$ \\
$\mathrm{T}$ & $\mathrm{F}$ & $\mathrm{F}$ & $\mathrm{T}$ \\
$\mathrm{F}$ & $\mathrm{T}$ & $\mathrm{F}$ & $\mathrm{T}$ \\
$\mathrm{F}$ & $\mathrm{F}$ & $\mathrm{F}$ & $\mathrm{T}$ \\
\hline
\end{tabular}
EXAMPLE 1.7 Now we look at the truth table for $(\sim \mathbf{A}) \vee(\sim \mathbf{B})$:
\begin{tabular}{ccccc}
\hline $\mathbf{A}$ & $\mathbf{B}$ & $\sim \mathbf{A}$ & $\sim \mathbf{B}$ & $(\sim \mathbf{A}) \vee(\sim \mathbf{B})$ \\
\hline $\mathrm{T}$ & $\mathrm{T}$ & $\mathrm{F}$ & $\mathrm{F}$ & $\mathrm{F}$ \\
$\mathrm{T}$ & $\mathrm{F}$ & $\mathrm{F}$ & $\mathrm{T}$ & $\mathrm{T}$ \\
$\mathrm{F}$ & $\mathrm{T}$ & $\mathrm{T}$ & $\mathrm{F}$ & $\mathrm{T}$ \\
$\mathrm{F}$ & $\mathrm{F}$ & $\mathrm{T}$ & $\mathrm{T}$ & $\mathrm{T}$ \\
\hline
\end{tabular}
Notice that the statements $\sim(\mathbf{A} \wedge \mathbf{B})$ and $(\sim \mathbf{A}) \vee(\sim \mathbf{B})$ have the same truth table. As previously noted, such pairs of statements are called logically equivalent. The logical equivalence of $\sim(\mathbf{A} \wedge \mathbf{B})$ with $(\sim \mathbf{A}) \vee(\sim \mathbf{B})$ makes good intuitive sense: the statement $\mathbf{A} \wedge \mathbf{B}$ fails [that is, $\sim(\mathbf{A} \wedge \mathbf{B})$ is true] precisely when either $\mathbf{A}$ is false or $\mathbf{B}$ is false. That is, $(\sim \mathbf{A}) \vee(\sim \mathbf{B})$. Since in mathematics we cannot rely on our intuition to establish facts, it is important to have the truth table technique for establishing logical equivalence. The exercise set will give you further practice with this notion.
One of the main reasons that we use the inclusive definition of “or” rather than the exclusive one is so that the connectives “and” and “or” have the nice relationship just discussed. It is also the case that $\sim(\mathbf{A} \vee \mathbf{B})$ and $(\sim \mathbf{A}) \wedge(\sim \mathbf{B})$ are logically equivalent. These logical equivalences are sometimes referred to as de Morgan’s laws.
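These equivalences are easy to verify by brute force; the following Python sketch enumerates all truth assignments and asserts both of de Morgan's laws.

```python
from itertools import product

# Check both of de Morgan's laws by enumerating every truth assignment.
for a, b in product((True, False), repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
print("de Morgan's laws hold for every truth assignment")
```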
‘‘If-Then’’
A statement of the form “If $\mathbf{A}$ then $\mathbf{B}$” asserts that whenever $\mathbf{A}$ is true then $\mathbf{B}$ is also true. This assertion (or “promise”) is tested when $\mathbf{A}$ is true, because it is then claimed that something else (namely $\mathbf{B}$) is true as well. However, when $\mathbf{A}$ is false then the statement “If $\mathbf{A}$ then $\mathbf{B}$” claims nothing. Using the symbols $\mathbf{A} \Rightarrow \mathbf{B}$ to denote “If $\mathbf{A}$ then $\mathbf{B}$”, we obtain the following truth table:
\begin{tabular}{ccc}
\hline $\mathbf{A}$ & $\mathbf{B}$ & $\mathbf{A} \Rightarrow \mathbf{B}$ \\
\hline $\mathrm{T}$ & $\mathrm{T}$ & $\mathrm{T}$ \\
$\mathrm{T}$ & $\mathrm{F}$ & $\mathrm{F}$ \\
$\mathrm{F}$ & $\mathrm{T}$ & $\mathrm{T}$ \\
$\mathrm{F}$ & $\mathrm{F}$ & $\mathrm{T}$ \\
\hline
\end{tabular}
Notice that we use here an important principle of Aristotelian logic: every sensible statement is either true or false. There is no “in between” status. When $\mathbf{A}$ is false we can hardly assert that $\mathbf{A} \Rightarrow \mathbf{B}$ is false. For $\mathbf{A} \Rightarrow \mathbf{B}$ asserts that “whenever $\mathbf{A}$ is true then $\mathbf{B}$ is true”, and $\mathbf{A}$ is not true!
Put in other words, when $\mathbf{A}$ is false then the statement $\mathbf{A} \Rightarrow \mathbf{B}$ is not tested. It therefore cannot be false. So it must be true. We refer to $\mathbf{A}$ as the hypothesis of the implication and to $\mathbf{B}$ as the conclusion of the implication. When the if-then statement is true, the hypothesis implies the conclusion.
EXAMPLE 1.8 The statement “If $2=4$ then Calvin Coolidge was our greatest president” is true. This is the case no matter what you think of Calvin Coolidge. The point is that the hypothesis $(2=4)$ is false; thus it doesn't matter what the truth value of the conclusion is. According to the truth table for implication, the sentence is true. The statement “If fish have hair then chickens have lips” is true. Again, the hypothesis is false, so the sentence is true.
The statement “If $9>5$ then dogs don’t fly” is true. In this case the hypothesis is certainly true and so is the conclusion. Therefore the sentence is true. (Notice that the “if” part of the sentence and the “then” part of the sentence need not be related in any intuitive sense. The truth or falsity of an “if-then” statement is simply a fact about the logical values of its hypothesis and of its conclusion.)
Recalling the Strategy Definition
Before beginning to analyze sequential-move games, it is critically important to make sure that you completely understand the definition of “strategy.” Otherwise, you’re in for some major discomfort as you try to learn the material in this and the next parts of this book.
Consider the ultimatum-offer bargaining game just described. In this game, player 1’s strategy is simply a number $p$, which we can assume is between 0 and 100. Thus, the strategy space for player 1 is $S_1=[0,100]$. Player 2’s strategy is from a more complicated space. Note that player 2 has an infinite number of information sets, one for each of the feasible offers of player 1. For instance, one information set corresponds to player 1 having just made the offer $p=28$; another information set follows the offer $p=30.75$; another follows the offer $p=62$; and so on. Because there is an infinite number of points in the interval $[0,100]$, player 2 has an infinite number of information sets.
Remember that a strategy for a player is a complete contingent plan. Thus, player 2's strategy must specify player 2's choice between Yes and No at every one of player 2's information sets. In other words, player 2's strategy describes whether she will accept an offer of $p=28$, whether she will accept an offer of $p=30.75$, whether she will accept an offer of $p=62$, and so on. Formally, player 2's strategy in this game can be expressed as a function that maps player 1's price offer $p$ to the set $\{\text{Yes}, \text{No}\}$. That is, considering $p \in[0,100]$, we can write player 2's strategy as some function $s_2:[0,100] \rightarrow\{\text{Yes}, \text{No}\}$. Then, for whatever offer $p$ that player 1 makes, player 2's response is $s_2(p)$.
Here are some examples of strategies for player 2 in the ultimatum-offer bargaining game. A really simple strategy is a constant function. One such strategy specifies $s_2(p)=$ Yes for all $p$; this strategy accepts whatever player 1 offers. Another type of strategy for player 2 is a “cutoff rule,” which would accept any price at or below some cutoff value $\underline{p}$ and otherwise would reject. For a given number $\underline{p}$, this strategy is defined by
$$
s_2(p)= \begin{cases}\text{Yes} & \text{if } p \leq \underline{p} \\ \text{No} & \text{if } p>\underline{p}\end{cases}
$$
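A possible way to express these two strategies in code is sketched below in Python (the cutoff value 40 and all identifier names are purely illustrative, not part of the text's model).

```python
def make_cutoff_strategy(cutoff):
    """Return a cutoff-rule strategy s2: accept any offer p at or below `cutoff`."""
    def s2(p):
        return "Yes" if p <= cutoff else "No"
    return s2

accept_all = lambda p: "Yes"        # the constant strategy that accepts every offer
s2 = make_cutoff_strategy(40)       # cutoff value chosen only for illustration

print(s2(28), s2(30.75), s2(62))    # Yes Yes No
print(accept_all(99))               # Yes
```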
Incredible Threats in the Stackelberg Duopoly Game
Here is another example. Consider the Stackelberg duopoly game described in exercise 6 of Chapter 14. In this game, firm 1 selects a quantity $q_1 \in[0,12]$, which is observed by firm 2, and then firm 2 selects its quantity $q_2 \in[0,12]$. Firm 1's payoff is $\left(12-q_1-q_2\right) q_1$, and firm 2's payoff is $\left(12-q_1-q_2\right) q_2$. Remember that firm 2's strategy can be expressed as a function $s_2:[0,12] \rightarrow[0,12]$ that maps firm 1's quantity into firm 2's quantity in response. This is because each value of $q_1$ yields a distinct information set for firm 2. Part (c) of the exercise asked you to confirm that for any $x \in[0,12]$, there is a Nash equilibrium of the game in which $q_1=x$ and $s_2(x)=(12-x) / 2$. Let us check this assertion for $x=0$, the case in which firm 1 is supposed to produce $q_1=0$ and firm 2 is supposed to follow with the quantity $q_2=6$. In words, by producing nothing, firm 1 leaves the entire market to firm 2. First verify that $q_2=6$ is the payoff-maximizing quantity for firm 2 when firm 1 produces 0; clearly $q_2=6$ maximizes $\left(12-q_2\right) q_2$. Next, note that this calculation is not enough to verify Nash equilibrium because we have not yet specified the strategy for firm 2. We have so far only specified that $s_2(0)=6$; we have not yet defined $s_2\left(q_1\right)$ for $q_1 \neq 0$. Furthermore, we need to check whether firm 1 would have the incentive to deviate from $q_1=0$.
Consider the following strategy for firm 2: $s_2(0)=6$ and $s_2\left(q_1\right)=12-q_1$ for every $q_1 \neq 0$. Note that if firm 1 produces a positive amount, then firm 2 will produce exactly the amount that pushes the price (and therefore firm 1's payoff) down to 0. Clearly, against strategy $s_2$, firm 1 cannot gain by deviating from $q_1=0$. Furthermore, against $q_1=0$, firm 2 has no incentive to deviate from $s_2$. To see this, observe that by changing the specification $s_2(0)=6$, firm 2's payoff would decrease. Moreover, changing the specification of $s_2(x)$ for any $x \neq 0$ would have no effect on firm 2's payoff, as player 1's strategy is $q_1=0$. Thus, $\left(0, s_2\right)$ is a Nash equilibrium.
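As a rough numerical check, the following Python sketch (our own construction, using a discrete grid over $[0,12]$ rather than the continuum) evaluates the two no-deviation conditions for the profile $\left(0, s_2\right)$.

```python
def payoff1(q1, q2):
    return (12 - q1 - q2) * q1

def payoff2(q1, q2):
    return (12 - q1 - q2) * q2

def s2(q1):
    """Firm 2's strategy from the text: s2(0) = 6, s2(q1) = 12 - q1 otherwise."""
    return 6 if q1 == 0 else 12 - q1

qs = [q / 10 for q in range(0, 121)]          # grid over [0, 12], illustration only

# Against s2, any positive q1 drives the price, and hence firm 1's payoff, to 0,
# so firm 1 has no profitable deviation from q1 = 0.
print(max(payoff1(q1, s2(q1)) for q1 in qs))  # 0.0

# Given q1 = 0, the quantity q2 = 6 maximizes firm 2's payoff (12 - q2) * q2 on the grid.
print(max(qs, key=lambda q2: payoff2(0, q2))) # 6.0
```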
Randomization in Sports
For another example, take the tennis-service game of Chapter 7's Guided Exercise, whose payoff matrix is reproduced in Figure 11.2. Recall that each player's strategy $\mathrm{F}$ is removed in the iterated-dominance procedure, so the set of rationalizable strategies for each player is $\{\mathrm{C}, \mathrm{B}\}$. The game has no Nash equilibrium in pure strategies. In any mixed-strategy equilibrium, the players will put positive probability only on rationalizable strategies. Thus, we know a mixed-strategy equilibrium will specify a strategy $(0, p, 1-p)$ for player 1 and a strategy $(0, q, 1-q)$ for player 2. In this strategy profile, $p$ is the probability that player 1 selects C, and $1-p$ is the probability that he selects B; likewise, $q$ is the probability that player 2 selects $\mathrm{C}$, and $1-q$ is the probability that she selects B. To calculate the mixed-strategy equilibrium in the tennis example, observe that against player 2's mixed strategy, player 1 would get an expected payoff of
$$
q \cdot 0+(1-q) \cdot 3=3-3 q
$$
if he selects $\mathrm{C}$; whereas by choosing $\mathrm{B}$, he would expect
$$
q \cdot 3+(1-q) \cdot 2=2+q
$$
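Setting these two expected payoffs equal, as required for player 1 to be willing to randomize between C and B, pins down player 2's mixing probability:
$$
3-3 q=2+q \;\Longrightarrow\; 4 q=1 \;\Longrightarrow\; q=\frac{1}{4}.
$$
Player 1's probability $p$ is obtained in the same way from player 2's indifference condition between C and B; since Figure 11.2 is not reproduced in this excerpt, that computation is not carried out here.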
TECHNICAL NOTES
The following summarizes the steps required to calculate mixed-strategy Nash equilibria for simple two-player games. Procedure for finding mixed-strategy equilibria:
Calculate the set of rationalizable strategies by performing the iterated-dominance procedure.
Restricting attention to rationalizable strategies, write equations for each player to characterize mixing probabilities that make the other player indifferent between the relevant pure strategies.
Solve these equations to determine equilibrium mixing probabilities.
If each player has exactly two rationalizable strategies, this procedure is quite straightforward. If a player has more than two rationalizable strategies, then there are several cases to consider; the various cases amount to trying different combinations of pure strategies over which the players may randomize. For example, suppose that $\mathrm{A}, \mathrm{B}$, and $\mathrm{C}$ are all rationalizable for a particular player. Then, in a mixed-strategy equilibrium, it may be that this player mixes between $\mathrm{A}$ and $\mathrm{B}$ (putting zero probability on $\mathrm{C}$ ), mixes between $\mathrm{A}$ and $\mathrm{C}$ (putting zero probability on $B$ ), mixes between $\mathrm{B}$ and $\mathrm{C}$ (putting zero probability on $\mathrm{A}$ ), or mixes between $\mathrm{A}, \mathrm{B}$, and C. There are also cases in which only one of the players mixes.
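For the common case in which each player has exactly two rationalizable strategies, the indifference equations can be solved mechanically. The Python sketch below (our own helper, with matching pennies used only as an illustration) implements steps 2 and 3 for a generic 2 x 2 game in which both players mix.

```python
from fractions import Fraction

def mix_2x2(u1, u2):
    """
    Mixing probabilities for a 2x2 game in which both players randomize.
    u1[i][j]: player 1's payoff when 1 plays row i and 2 plays column j.
    u2[i][j]: player 2's payoff for the same profile.
    Returns (p, q): p = prob. player 1 plays row 0, q = prob. player 2 plays column 0.
    """
    # Player 2's q makes player 1 indifferent between the two rows:
    #   q*u1[0][0] + (1-q)*u1[0][1] = q*u1[1][0] + (1-q)*u1[1][1]
    q = Fraction(u1[1][1] - u1[0][1],
                 u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])
    # Player 1's p makes player 2 indifferent between the two columns:
    p = Fraction(u2[1][1] - u2[1][0],
                 u2[0][0] - u2[1][0] - u2[0][1] + u2[1][1])
    return p, q

# Matching pennies, used only as an illustration:
u1 = [[1, -1], [-1, 1]]
u2 = [[-1, 1], [1, -1]]
print(mix_2x2(u1, u2))   # (Fraction(1, 2), Fraction(1, 2))
```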
Note that every pure-strategy equilibrium can also be considered a mixed-strategy equilibrium, one in which all probability is put on a single pure strategy. All of the games analyzed thus far have at least one equilibrium (in pure or mixed strategies). In fact, this is a general theorem. Result: Every finite game (having a finite number of players and a finite strategy space) has at least one Nash equilibrium in pure or mixed strategies.
Remember that the Nash equilibrium concept represents the extreme version of congruity in which the players coordinate on a single strategy profile. In some settings, it may not be reasonable to expect such an extreme form of coordination. One reason is that there may not be a social institution that serves to coordinate beliefs and behavior. Another reason is that coordination on a single strategy profile may be inconsistent with best-response behavior in some games. For an interesting example, consider the game shown in Figure 9.3. Suppose the players can communicate before the game to discuss how to coordinate their play. Would they coordinate on the Nash equilibrium strategy profile $(\mathrm{z}, \mathrm{m})$? Perhaps, but it would be a shame, for the players would get higher payoffs if they could coordinate on not playing strategies $\mathrm{z}$ and $\mathrm{m}$. Unfortunately, this kind of coordination cannot be captured by the equilibrium notion, as $(\mathrm{z}, \mathrm{m})$ is the only Nash equilibrium of the game.

One can define a more general notion of congruity that lies between rationalizability and Nash equilibrium, in which strategic uncertainty is reduced but not always eliminated. The key is to associate the congruity idea with sets of strategy profiles. For instance, for the game shown in Figure 9.3, consider the set of strategy profiles $X \equiv\{\mathrm{w}, \mathrm{y}\} \times\{\mathrm{k}, \mathrm{l}\}$. Notice that if player 1 is convinced that player 2 will select either $\mathrm{k}$ or $\mathrm{l}$ (but not $\mathrm{m}$), then player 1's best response must be $\mathrm{w}$ or $\mathrm{y}$. Likewise, if player 2 thinks player 1 will select either $\mathrm{w}$ or $\mathrm{y}$, then player 2's best responses are only strategies $\mathrm{k}$ and $\mathrm{l}$. We can say that the set $X$ is a congruous set because coordinating on $X$ is consistent with common knowledge of best-response behavior.

Here is a precise and general definition: Consider a set of strategy profiles $X=X_1 \times X_2 \times \cdots \times X_n$, where $X_i \subset S_i$ for each player $i$. The set $X$ is called congruous if, for each player $i$, a strategy $s_i$ is included in $X_i$ if and only if there is a belief $\theta_{-i} \in \Delta X_{-i}$ (putting probability only on strategies in $X_{-i}$) such that $s_i \in B R_i\left(\theta_{-i}\right)$. The set $X$ is called weakly congruous if, for each player $i$ and each strategy $s_i \in X_i$, there is a belief $\theta_{-i} \in \Delta X_{-i}$ such that $s_i \in B R_i\left(\theta_{-i}\right)$.
Aside: Experimental Game Theory
At this point in our tour of game theory, it is worthwhile to pause and reflect on the purpose and practicality of the theory. As I have already emphasized (and will continue to emphasize) in this book, game theory helps us to organize our thinking about strategic situations. It provides discipline for our analysis of the relation between the outcome of strategic interaction and our underlying assumptions about technology and behavior. Furthermore, the theory gives us tools for prescribing how people ought to behave (or, at least, what things people ought to consider) in strategic settings. You might start to ask, however, whether the theory accurately describes and predicts real behavior. The answer is not so straightforward. There are two ways of evaluating whether game theory is successful in this regard.

First, you might gather data about how people behave in real strategic situations. For example, you can observe where competing firms locate in a city, how team members interact within a firm, how managers contract with workers, and so forth. Then you can construct game-theoretic models in an attempt to make sense of the data. You can even perform statistical tests of the models. In fact, many empirical economists dedicate themselves to this line of work. These economists are constantly challenged by how to reconcile the complexities of the real world with necessarily abstract and unadorned theoretical models.

The second way of evaluating game theory's predictive power is to bring the real world closer to the simple models. You can, for example, run laboratory experiments in which subjects are asked to play some simple matrix games. In fact, this sort of research, which is called experimental game theory, has become a little industry in itself. In many universities throughout the world, experimental economists herd students into laboratories that are filled with computer stations, attracting the students with the prospect of winning significant amounts of money. In comparison with experimental work done by researchers in other disciplines, the economists certainly have gotten one thing right: they pay well. By paying the subjects according to their performance in games, experimenters give them a strong incentive to think about how best to play.
Searching is the process of locating an element in a list. A search algorithm is an algorithm that solves a search problem. Searching a database employs a systematic procedure to find an entry with a key designated as the objective of the search. A search algorithm locates an element $x$ in a list of distinct elements or determines that it is not in the list. The solution to the search is either the location of the element $x$ in the list or 0 if $x$ is not in the list. We now briefly introduce two well-known search algorithms whose worst-case and average time complexities are presented in Table 12.2.
The linear search, also known as the sequential search, is the simplest search algorithm. It is an algorithm, based on the brute-force algorithmic paradigm, that scans the elements of a list in sequence in search of $x$, the element that needs to be located. A comparison is made between $x$ and the first element in the list; if they are the same, then the solution is 1. Otherwise, a comparison is made between $x$ and the second element in the list; if they are the same, then the solution is 2. This process continues until a match is found, and the solution is the location of the element sought. If no match is found, then the solution is 0. Linear search is applied to unsorted or unordered lists consisting of a small number of elements. Because up to $n$ comparisons may be required to find $x$, the linear search has a time complexity of $O(n)$, which means the time is linearly dependent on the number of elements in the list.
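A straightforward Python rendering of linear search, returning the 1-based position or 0 as described above, might look like this (the sample list is illustrative):

```python
def linear_search(items, x):
    """Return the 1-based position of x in `items`, or 0 if x is not present."""
    for i, value in enumerate(items, start=1):
        if value == x:
            return i
    return 0

print(linear_search([7, 3, 9, 1, 5], 9))   # 3
print(linear_search([7, 3, 9, 1, 5], 4))   # 0
```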
The list of data in a binary search must be in sorted order for it to work, such as ascending order. This search algorithm, which is quite effective for large sorted arrays, is based on the divide-and-conquer algorithmic paradigm. A binary search works by comparing the element to be searched with the element in the middle of the array of elements. If we get a match, the position of the middle element is returned. If the target element is less than the middle element, the search continues in the first half of the array (i.e., the target element is compared to the element in the middle of that subarray), and the process repeats itself. If the target element is greater than the middle element, the search continues in the second half of the array (i.e., the target element is compared to the element in the middle of that subarray), and the process repeats itself. By doing this, the algorithm eliminates, in each iteration, the half in which the target element cannot lie. Assuming the number of elements is $n=2^k$ (i.e., $k=\log _2 n$), at most $2 k+2=2 \log _2 n+2$ comparisons are required to perform a binary search. Binary search is thus more efficient than linear search, as it has a time complexity of $O(\log n)$. The worst case occurs when $x$ is not in the list.
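A corresponding Python sketch of binary search on an ascending list is shown below (again with an illustrative list); it returns the 1-based position or 0, mirroring the convention above.

```python
def binary_search(sorted_items, x):
    """Return the 1-based position of x in an ascending list, or 0 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == x:
            return mid + 1
        elif x < sorted_items[mid]:
            high = mid - 1       # target is smaller: keep searching the first half
        else:
            low = mid + 1        # target is larger: keep searching the second half
    return 0

data = [1, 3, 5, 7, 9, 11, 13, 15]
print(binary_search(data, 11))   # 6
print(binary_search(data, 4))    # 0
```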
Deductive Reasoning and Inductive Reasoning
Deductive reasoning, which is top-down logic, contrasts with inductive reasoning, which is bottom-up logic. While the conclusion of a deductive argument is certain, based on the facts provided, the truth of the conclusion of an inductive argument may be probable based upon the evidence given.
Deductive reasoning refers to the process of concluding that something must be true because it is a specific case of a general principle that is already known to be true. Deductive reasoning is the process of reasoning from premises to reach a logically certain conclusion; it is logically valid and is the fundamental method by which mathematical facts are shown to be true. Deductive reasoning provides a guarantee of the truth of the conclusion if the premises (assumptions) are true. In other words, in a deductive argument, the premises are intended to provide such strong support for the conclusion that, if the premises are true, then it would be impossible for the conclusion to be false. For example, a general principle in plane geometry states that the sum of the angles in any triangle is 180 degrees; from this one can conclude that the sum of the angles in an isosceles right triangle is also 180 degrees. Another example: if the colonial powers systematically colonized countries and oppressed their people, then one can conclude that the British Empire, as it was a major colonial power, also colonized countries and oppressed people in a systematic manner. In summary, deductive reasoning requires one to start with a few general ideas, called premises, and apply them to a specific situation. Recognized rules, laws, theories, and other widely accepted truths are used to prove that a conclusion is right.
Inductive reasoning is the process of reasoning that a general principle is true because the special cases are true. Inductive reasoning makes broad generalizations from specific observations. Basically, there is data, and then conclusions are drawn from the data. Inductive reasoning is a process of reasoning in which the premises are viewed as supplying some evidence for the truth of the conclusion. It is also described as a method where one’s experiences and observations, including what are learned from others, are synthesized to come up with a general truth. For example, if all the people one has ever met from a particular country have been racist, one might then conclude all the citizens of that country are racist. Inductive reasoning is not logically valid. Just because all the people one happens to have met from a country were racist is no guarantee at all that all the people from that country are racist. Therefore this form of reasoning has no part in a mathematical proof. Even if all of the premises are true in a statement, inductive reasoning allows for the conclusion to be false. For instance, my neighbor is a grandfather. My neighbor is bald. Therefore all grandfathers are bald. The conclusion does not follow logically from the statements. In summary, inductive reasoning uses a set of specific observations to reach an overarching conclusion. Therefore a few particular premises create a pattern that gives way to a broad idea that is possibly true.
Inductive reasoning is part of the discovery process whereby the observation of special cases leads one to suspect very strongly (though not know with absolute logical certainty) that some general principle is true. Deductive reasoning, on the other hand, is the method you would use to demonstrate with logical certainty that the special case is true. In other words, inductive reasoning is used to formulate hypotheses and theories, and deductive reasoning is employed when applying them to specific situations. The difference between the two kinds of reasoning lies in the relationship between the premises and the conclusion. If the truth of the premises definitely establishes the truth of the conclusion (due to definition, logical structure, or mathematical necessity), then it is deductive reasoning. If the truth of the premises does not definitely establish the truth of the conclusion but nonetheless provides a reason to believe the conclusion may be true, then the argument is inductive.