The Anscombe-Aumann Approach


As we have seen, Savage's axiomatization of subjective expected utility theory is a rather involved affair. A simpler derivation was famously provided by F.J. Anscombe and Robert J. Aumann (1963). However, Anscombe and Aumann's derivation can be regarded as an intermediate theory, as it requires the presence of lotteries with objective probabilities. What they assume is that an action f is no longer merely a function from states S to outcomes X, but rather f: S → Δ(X), where Δ(X) is the set of simple probability distributions on the set X. Thus, a consequence is no longer a particular x, but rather a distribution p ∈ Δ(X). The consequences in Δ(X) are themselves lotteries - but now lotteries with "objective" probabilities.

As a result, the components of the Anscombe and Aumann (1963) theory are the following:

S is a set of states

Δ(X) is a set of consequences (objective lotteries on outcomes)

f: S → Δ(X) is an action (a horse race/roulette lottery combination)

F = {f | f: S → Δ(X)} is the set of actions

≥_h ⊂ F × F are preferences on actions

Thus, an agent's preference relation ≥_h is a binary relation on actions F that fulfills the following axioms:

(A.1) ≥_h is complete, i.e. either f ≥_h g or g ≥_h f for all f, g ∈ F.

(A.2) ≥_h is transitive, i.e. if f ≥_h g and g ≥_h h then f ≥_h h for all f, g, h ∈ F.

(A.3) Archimedean Axiom: if f, g, h ∈ F are such that f >_h g >_h h, then there are α, β ∈ (0, 1) such that αf + (1-α)h >_h g and g >_h βf + (1-β)h.

(A.4) Independence Axiom: for all f, g, h ∈ F and any α ∈ (0, 1], f ≥_h g if and only if αf + (1-α)h ≥_h αg + (1-α)h.

which, of course, are merely analogues of axioms (A.1)-(A.4) set out earlier in the von Neumann-Morgenstern structure. As before, F is a "mixture set", i.e. for any f, g ∈ F and for any α ∈ [0, 1], we can associate another element αf + (1-α)g ∈ F defined pointwise as (αf + (1-α)g)(s) = αf(s) + (1-α)g(s) for all s ∈ S.
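To make the mixture-set structure concrete, here is a minimal Python sketch. All state names, outcomes, and probabilities below are illustrative assumptions, not taken from the original text:

```python
# A minimal sketch (assumed names and numbers) of an Anscombe-Aumann action:
# a dict mapping each state to a simple lottery, itself a dict of
# outcome -> objective probability.  Mixtures of actions are taken pointwise.

def mix(f, g, alpha):
    """Pointwise mixture: (alpha*f + (1-alpha)*g)(s) = alpha*f(s) + (1-alpha)*g(s)."""
    return {s: {x: alpha * f[s].get(x, 0.0) + (1 - alpha) * g[s].get(x, 0.0)
                for x in set(f[s]) | set(g[s])}
            for s in f}

# Two illustrative actions on states {1, 2} and outcomes {x1, x2, x3}:
f = {1: {"x1": 0.5, "x2": 0.5}, 2: {"x3": 1.0}}
g = {1: {"x1": 1.0},            2: {"x2": 0.5, "x3": 0.5}}

h = mix(f, g, 0.5)   # h is again an action: each h[s] is a probability distribution
```

Note that the mixture is taken state by state, exactly as in the pointwise definition above.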

Heuristically, as Anscombe and Aumann (1963) indicate, we can think of this as a combination of "horse race lotteries" (i.e. with subjective probabilities) and "roulette lotteries" (i.e. with objective probabilities). Or, more simply, f: S → Δ(X) is a horse race where the bettor, instead of receiving the winnings on his bet in cold cash, is given a voucher for a roulette bet, or a ticket for a lottery with objective probabilities. This can be visualized in Figure 1, where we have a tree diagram for a particular action f with S = {1, 2} and X = {x1, x2, x3}. As Nature chooses states, then depending on which s ∈ S occurs, we obtain f_1 or f_2. However, recall that f_s is a lottery ticket, thus f_s ∈ Δ(X) is a probability distribution over X, i.e. f_s = [f_s(x1), f_s(x2), f_s(x3)].

Figure 1 - An Anscombe-Aumann action f: S → Δ(X)

This helps our analysis as, immediately, we know that we can evaluate different (objective) lotteries with the old von Neumann-Morgenstern expected utility function. However, as the lottery is only played after a particular state s ∈ S occurs, the von Neumann-Morgenstern expected utility function will be dependent on the state, i.e. U_s: Δ(X) → R. We also know that U_s(f_s) has an expected utility form:

U_s(f_s) = ∑_{x∈X} f_s(x)u_s(x)

where u_s: X → R is the elementary utility function corresponding to the particular von Neumann-Morgenstern expected utility function U_s: Δ(X) → R that obtains in state s ∈ S. Thus, note that u_s: X → R is a state-dependent elementary utility function. In terms of Figure 1, if state s = 1 obtains, then the expected utility of f_1 is U_1(f_1) = f_1(x1)u_1(x1) + f_1(x2)u_1(x2) + f_1(x3)u_1(x3), and if state s = 2 obtains, then the expected utility of f_2 is U_2(f_2) = f_2(x1)u_2(x1) + f_2(x2)u_2(x2) + f_2(x3)u_2(x3).

As we can see immediately, U_s(f_s) can be thought of as the expected utility of state s ∈ S given that a particular action f: S → Δ(X) is chosen. If S is finite, then obviously the utility of the action f is:

U(f) = ∑_{s∈S} U_s(f_s)

where, notice, we are not multiplying U_s(f_s) by the probability that state s occurs - because we do not know what those probabilities are. That is, after all, the purpose of this subjective expected utility theory - otherwise it would be merely a case of compound lotteries and we would simply apply von Neumann-Morgenstern. However, as we have expressions for U_s(f_s), we can write out the utility from the act f as:

U(f) = ∑_{s∈S} ∑_{x∈X} f_s(x)u_s(x).

We can thus call this a state-dependent expected utility representation of the utility of act f. The next question should be obvious: does this represent preferences over actions? To formalize this intuition and prove this last result, let us state the first theorem:
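The state-dependent computation just described can be sketched in Python. The utilities and lotteries below are assumed purely for illustration:

```python
# A minimal sketch (illustrative numbers assumed) of the state-dependent
# representation U(f) = sum_s sum_x f_s(x) u_s(x): each state s has its own
# elementary utility u_s, and the per-state vNM utilities U_s(f_s) are summed
# without any probability weights on states.

def U_s(lottery, u_state):
    """vNM expected utility of one objective lottery under state utility u_s."""
    return sum(p * u_state[x] for x, p in lottery.items())

def U(f, u):
    """State-dependent expected utility of an action f: plain sum over states."""
    return sum(U_s(f[s], u[s]) for s in f)

# Two states, three outcomes; state-dependent elementary utilities (assumed):
u = {1: {"x1": 1.0, "x2": 2.0, "x3": 3.0},
     2: {"x1": 0.0, "x2": 1.0, "x3": 4.0}}
f = {1: {"x1": 0.5, "x2": 0.5}, 2: {"x3": 1.0}}

# U_1(f_1) = 0.5*1 + 0.5*2 = 1.5;  U_2(f_2) = 4.0;  U(f) = 5.5
```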

Theorem: (State-Dependent Expected Utility) Let S = {s1, .., sn} and let Δ(X) be the set of simple probability distributions on X. Let ≥_h be a preference relation on the set F = {f | f: S → Δ(X)}. Then ≥_h fulfills axioms (A.1)-(A.4) if and only if there is a collection of functions {u_s: X → R}_{s∈S} such that for every f, g ∈ F:

f ≥_h g if and only if ∑_{s∈S} ∑_{x∈X} f_s(x)u_s(x) ≥ ∑_{s∈S} ∑_{x∈X} g_s(x)u_s(x).

Moreover, if {v_s: X → R}_{s∈S} is another collection of state-dependent utility functions which represents the same preferences, then there are β ≥ 0 and a_s such that v_s = βu_s + a_s.

Proof: This is an if and only if statement, thus we must prove both from axioms to representation and from representation to axioms. We omit the latter and concentrate on the former. Now, by the von Neumann-Morgenstern theorem, we know that if (A.1)-(A.4) are fulfilled over a mixture set F, then there exists a function U: F → R such that for every f, g ∈ F, f ≥_h g iff U(f) ≥ U(g), and U is affine, i.e. U(αf + (1-α)g) = αU(f) + (1-α)U(g). Now, let us fix f* ∈ F, writing f* = (f_1*, ..., f_n*). Consider now another action f and define f^s = (f_1*, ..., f_{s-1}*, f_s, f_{s+1}*, .., f_n*); thus f^s is identical to f* except in the sth position, where it takes the value f_s. Doing so for all s ∈ S, we obtain a collection of n actions, {f^s}_{s∈S}. Now, observe that:

∑_{s∈S} f^s = f + (n-1)f*

where f = (f_1, f_2, .., f_n). To see this heuristically, let n = 3. Then:

∑_{s∈S} f^s = (f_1, f_2*, f_3*) + (f_1*, f_2, f_3*) + (f_1*, f_2*, f_3)

or, rearranging:

∑_{s∈S} f^s = (f_1, f_2, f_3) + (2f_1*, 2f_2*, 2f_3*) = f + 2f*

Thus, in general, for any n, we see that ∑_{s∈S} f^s = f + (n-1)f*. Now, dividing through by n:

(1/n)∑_{s∈S} f^s = (1/n)f + ((n-1)/n)f*

Now, by the affinity of U: F → R:

(1/n)∑_{s∈S} U(f^s) = (1/n)U(f) + ((n-1)/n)U(f*)

or:

(1/n)U(f) = (1/n)∑_{s∈S} U(f^s) - ((n-1)/n)U(f*)
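The identity ∑_{s∈S} f^s = f + (n-1)f* used in this step can be checked numerically. A small sketch, with assumed lotteries over three outcomes:

```python
# Numerical check (illustrative values assumed) of the identity
# sum_s f^s = f + (n-1) f*, where f^s agrees with f* in every coordinate
# except the s-th, which is f_s.  Lotteries are probability vectors over
# three outcomes; addition and scaling are coordinate-wise.

n = 3
f      = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 0.5, 0.0)]   # f_1, f_2, f_3
f_star = [(0.2, 0.3, 0.5)] * n                                  # f* (same lottery each state)

def add(p, q):   return tuple(a + b for a, b in zip(p, q))
def scale(c, p): return tuple(c * a for a in p)

# f^s agrees with f* except in coordinate s, where it is f_s:
f_up = [[f[t] if t == s else f_star[t] for t in range(n)] for s in range(n)]

# Left-hand side: coordinate-wise sum over s of f^s
lhs = [(0.0, 0.0, 0.0)] * n
for s in range(n):
    lhs = [add(lhs[t], f_up[s][t]) for t in range(n)]

# Right-hand side: f + (n-1) f*
rhs = [add(f[t], scale(n - 1, f_star[t])) for t in range(n)]
# lhs equals rhs coordinate by coordinate
```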

Now, let us turn to the following. For any p ∈ Δ(X), define the state-dependent utility U_s(p) as:

U_s(p) = U(f_1*, .., f_{s-1}*, p, f_{s+1}*, .., f_n*) - ((n-1)/n)U(f*)

Letting p = f_s ∈ Δ(X), then obviously:

U_s(f_s) = U(f^s) - ((n-1)/n)U(f*)

by the definition of f^s. Thus, summing over s ∈ S and dividing through by n:

(1/n)∑_{s∈S} U_s(f_s) = (1/n)∑_{s∈S} U(f^s) - ((n-1)/n)U(f*)

But recall from before that the entire right-hand side is merely (1/n)U(f), thus:

(1/n)U(f) = (1/n)∑_{s∈S} U_s(f_s)

or simply:

U(f) = ∑_{s∈S} U_s(f_s)

thus we have a representation of the utility of the action f, U(f), expressed as the sum of state-dependent utilities over lotteries, U_s(f_s), as we had intimated before. Thus, as U represents preferences, we know that:

f ≥_h g ⇔ U(f) ≥ U(g) ⇔ ∑_{s∈S} U_s(f_s) ≥ ∑_{s∈S} U_s(g_s)

We are half-way there. Define u_s(x) = U_s(δ_x), where δ_x(y) = 1 if y = x and δ_x(y) = 0 otherwise. Now, recalling the definition U_s(p) = U(f_1*, .., f_{s-1}*, p, f_{s+1}*, .., f_n*) - ((n-1)/n)U(f*), then for any p, q ∈ Δ(X):

U_s(αp + (1-α)q) = U(f_1*, .., f_{s-1}*, αp + (1-α)q, f_{s+1}*, .., f_n*) - ((n-1)/n)U(f*)

= U(αf_1* + (1-α)f_1*, .., αp + (1-α)q, .., αf_n* + (1-α)f_n*) - ((n-1)/n)U(αf* + (1-α)f*)

or, as U is affine, we obtain:

U_s(αp + (1-α)q) = α[U(f_1*, .., p, .., f_n*) - ((n-1)/n)U(f*)] + (1-α)[U(f_1*, .., q, .., f_n*) - ((n-1)/n)U(f*)]

thus:

U_s(αp + (1-α)q) = αU_s(p) + (1-α)U_s(q)

so U_s is also affine.
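The affinity of U_s can also be verified numerically. Below, a concrete affine U on actions is assumed purely for illustration (the weights and lotteries are not from the text):

```python
# Sketch verifying numerically (with assumed weights) that
# U_s(p) = U(f*_1,..,p,..,f*_n) - ((n-1)/n) U(f*) is affine in p whenever
# U itself is affine.  Here U is one concrete affine function on actions.

n = 2
w = [(1.0, 3.0), (2.0, 0.0)]          # per-state, per-outcome weights (assumed)
f_star = [(0.5, 0.5), (0.4, 0.6)]     # a fixed reference action f*

def U(f):
    """An affine utility on actions: sum_s sum_x f_s(x) w_s(x)."""
    return sum(sum(p * wx for p, wx in zip(f[s], w[s])) for s in range(n))

def U_s(s, p):
    """State-dependent utility built from U, as in the proof."""
    f = list(f_star)
    f[s] = p
    return U(f) - ((n - 1) / n) * U(f_star)

p, q, alpha = (1.0, 0.0), (0.0, 1.0), 0.3
mixed = tuple(alpha * a + (1 - alpha) * b for a, b in zip(p, q))
# U_s(s, mixed) equals alpha*U_s(s, p) + (1-alpha)*U_s(s, q) for each s
```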

Now, by the corollary to the von Neumann-Morgenstern theorem, since Δ(X) is the set of simple lotteries and δ_x ∈ Δ(X), there is a function u_s: X → R such that:

U_s(f_s) = ∑_{x∈X} f_s(x)u_s(x)

As this is true for any s ∈ S, then:

U(f) = ∑_{s∈S} U_s(f_s) = ∑_{s∈S} ∑_{x∈X} f_s(x)u_s(x)

thus we conclude that for any f, g ∈ F:

f ≥_h g ⇔ U(f) ≥ U(g) ⇔ ∑_{s∈S} U_s(f_s) ≥ ∑_{s∈S} U_s(g_s)

⇔ ∑_{s∈S} ∑_{x∈X} f_s(x)u_s(x) ≥ ∑_{s∈S} ∑_{x∈X} g_s(x)u_s(x)

which is what we sought. Finally, we shall not prove the "moreover" remark, as it follows directly from the uniqueness of U. All we wish to note from that uniqueness statement, v_s = βu_s + a_s, is that β ≥ 0 is state-independent. §

Now, so far we have obtained an additive representation of U(f) with state-dependent elementary utility functions on outcomes, u_s: X → R. Our aim, however, is to derive an additive representation with a state-independent elementary utility on outcomes, u: X → R. This is the important task and requires some additional structure. Before we do this, let us provide a definition:

Null States: a state s ∈ S is a null state if (f_1, .., f_{s-1}, p, f_{s+1}, .., f_n) ~_h (f_1, .., f_{s-1}, q, f_{s+1}, .., f_n) for all p, q ∈ Δ(X).

Notice that the action on the left is the same as the action on the right except for the component at position s, where the left yields p and the right yields q. If one is nonetheless indifferent between the two acts, then effectively state s does not matter, i.e. it is equivalent to stating that the agent believes s will never happen. We do not want to rule this out, but we do want to prove that at least some states are non-null. To establish this, we need the following axiom:

(A.5) Non-degeneracy Axiom: there are f, g ∈ F such that f >_h g (i.e. >_h is non-empty).

We can see that non-degeneracy guarantees the existence of non-null states. To see this, suppose not, i.e. suppose all states are null. Then (f_1, f_2, .., f_n) ~_h (f_1′, f_2, .., f_n) ~_h (f_1′, f_2′, .., f_n) ~_h ... ~_h (f_1′, f_2′, .., f_n′). But (f_1′, f_2′, .., f_n′) can be any g ∈ F. Thus f ~_h g for all g ∈ F, so there is no g ∈ F such that f >_h g, and (A.5) is contradicted.

Let us now turn to a rather important axiom:

(A.6) State-Independence Axiom: let s ∈ S be a non-null state and p, q ∈ Δ(X). Then if:

(f_1, ..., f_{s-1}, p, f_{s+1}, .., f_n) >_h (f_1, ..., f_{s-1}, q, f_{s+1}, .., f_n)

then, for every non-null state t ∈ S:

(f_1, ..., f_{t-1}, p, f_{t+1}, .., f_n) >_h (f_1, ..., f_{t-1}, q, f_{t+1}, .., f_n)

The state-independence axiom is quite important, so let us be clear as to what it says. Effectively, it claims that if p >_h q at some non-null state s ∈ S, then p >_h q at any non-null state t ∈ S. Thus, the preference ranking between lotteries p and q is state-independent.

With these two axioms, we can now turn to the main theorem we seek from Anscombe and Aumann (1963) to derive the state-independent expected utility representation:

Theorem: (Anscombe-Aumann) Let S = {s1, .., sn} and let Δ(X) be the set of simple probability distributions on X. Let ≥_h be a preference relation on the set F = {f | f: S → Δ(X)}. Then ≥_h fulfills axioms (A.1)-(A.4), (A.5) and (A.6) if and only if there is a unique probability measure π on S and a non-constant function u: X → R such that for every f, g ∈ F:

f ≥_h g if and only if ∑_{s∈S} π(s) ∑_{x∈X} f_s(x)u(x) ≥ ∑_{s∈S} π(s) ∑_{x∈X} g_s(x)u(x).

Moreover, (π, u) is unique in the following sense: if π′ is another probability measure on S and v: X → R represents ≥_h as above, then there are b > 0 and a such that v = bu + a and π = π′.

Proof: We shall go from axioms to representation first. Notice that by the previous theorem, (A.1)-(A.4) yield a collection of functions {u_s: X → R}_{s∈S} such that for every f, g ∈ F:

f ≥_h g if and only if ∑_{s∈S} ∑_{x∈X} f_s(x)u_s(x) ≥ ∑_{s∈S} ∑_{x∈X} g_s(x)u_s(x).

Now, let s ∈ S be a non-null state (which we know exists by the non-degeneracy axiom (A.5)). Consider now two actions, f^s = (f_1, .., f_{s-1}, p, f_{s+1}, .., f_n) and g^s = (f_1, .., f_{s-1}, q, f_{s+1}, .., f_n), where p, q ∈ Δ(X). Then, by the above representation, f^s ≥_h g^s if and only if ∑_{t∈S} ∑_{x∈X} f^s_t(x)u_t(x) ≥ ∑_{t∈S} ∑_{x∈X} g^s_t(x)u_t(x), which, as the two actions agree in every position except the sth, reduces to f^s ≥_h g^s ⇔ ∑_{x∈X} p(x)u_s(x) ≥ ∑_{x∈X} q(x)u_s(x). But we know by the state-independence axiom (A.6) that if f^s ≥_h g^s for non-null s ∈ S, then f^t ≥_h g^t for all non-null t ∈ S. Thus it is also true that if t is non-null, then f^t ≥_h g^t ⇔ ∑_{x∈X} p(x)u_t(x) ≥ ∑_{x∈X} q(x)u_t(x). But recall from the von Neumann-Morgenstern representation that if u: X → R represents preferences over Δ(X) and any v: X → R also represents them, then there are b > 0 and a such that v = bu + a. Well, in our case, u_s and u_t represent the same preferences over Δ(X). Thus, there are b_s, b_t > 0 and a_s, a_t such that u_s = b_s u + a_s and u_t = b_t u + a_t, for some common u: X → R. This will be true for any non-null s, t ∈ S. If, however, s is null, then u_s is constant over Δ(X), and we may set b_s = 0. Thus, substituting into our earlier expression:

f ≥_h g ⇔ ∑_{s∈S} ∑_{x∈X} f_s(x)(b_s u(x) + a_s) ≥ ∑_{s∈S} ∑_{x∈X} g_s(x)(b_s u(x) + a_s)

or, noting that the a_s terms contribute ∑_{x∈X} f_s(x)a_s = a_s to each side and thus cancel:

f ≥_h g ⇔ ∑_{s∈S} b_s ∑_{x∈X} f_s(x)u(x) ≥ ∑_{s∈S} b_s ∑_{x∈X} g_s(x)u(x)

so, defining B = ∑_{s∈S} b_s > 0 (by non-degeneracy (A.5), there is at least one non-null state, hence at least one b_s > 0) and dividing through by B:

f ≥_h g ⇔ ∑_{s∈S} (b_s/B) ∑_{x∈X} f_s(x)u(x) ≥ ∑_{s∈S} (b_s/B) ∑_{x∈X} g_s(x)u(x)

so, finally, defining π(s) = b_s/B, and noting that ∑_{s∈S} b_s/B = ∑_{s∈S} π(s) = 1, then:

f ≥_h g ⇔ ∑_{s∈S} π(s) ∑_{x∈X} f_s(x)u(x) ≥ ∑_{s∈S} π(s) ∑_{x∈X} g_s(x)u(x)

and thus we have it. We leave the uniqueness and the converse proof undone.§
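The final normalization step in the proof is easy to sketch. The coefficients b_s below are assumed for illustration:

```python
# Sketch of the normalization step (values assumed): given state-dependent
# utilities u_s = b_s * u + a_s with b_s >= 0, the subjective probabilities
# are pi(s) = b_s / B, where B = sum_s b_s > 0.

b = {"s1": 2.0, "s2": 1.0, "s3": 0.0}   # b_s = 0 marks a null state
B = sum(b.values())
pi = {s: bs / B for s, bs in b.items()}
# pi sums to 1, and null states receive probability 0
```

Notice how a null state automatically receives subjective probability zero, exactly as the proof suggests.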

We have now obtained the state-independent utility function u: X → R and expressed preferences over actions via this expected utility decomposition. To see the expected utility form more clearly, write u(f_s) = ∑_{x∈X} f_s(x)u(x) for the von Neumann-Morgenstern expected utility of the objective lottery f_s. The representation then becomes:

f ≥_h g ⇔ ∑_{s∈S} π(s)u(f_s) ≥ ∑_{s∈S} π(s)u(g_s)

Thus, we have obtained an expected utility representation of preferences over actions f: S → Δ(X). A particular action f is preferred to another action g if the expected utility of action f is greater than the expected utility of action g. Note the terms we use: ∑_{s∈S} π(s)u(f_s) is the expected utility of action f because it sums up the utility of the consequences of this action, u(f_s), over states, weighted by the probability of each state happening, π(s). The crucial thing to recall here is that the probabilities π(s) were derived from preferences over actions and not imposed externally! Thus, these are subjective probabilities and, hopefully, they represent individual belief.
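Putting the representation to work, here is a minimal sketch comparing two actions; all probabilities and utilities are assumed for illustration:

```python
# Comparing actions under the Anscombe-Aumann representation (numbers assumed):
# f is preferred to g iff sum_s pi(s)*u(f_s) >= sum_s pi(s)*u(g_s), where
# u(f_s) is the vNM expected utility of the objective lottery f_s.

def vnm(lottery, u):
    """vNM expected utility of an objective lottery under utility u."""
    return sum(p * u[x] for x, p in lottery.items())

def subjective_eu(action, pi, u):
    """Subjective expected utility: probability-weighted sum over states."""
    return sum(pi[s] * vnm(action[s], u) for s in action)

u  = {"x1": 0.0, "x2": 1.0, "x3": 2.0}           # state-independent utility
pi = {1: 0.25, 2: 0.75}                           # subjective probabilities
f  = {1: {"x3": 1.0}, 2: {"x1": 0.5, "x2": 0.5}}
g  = {1: {"x2": 1.0}, 2: {"x1": 1.0}}

# EU(f) = 0.25*2 + 0.75*0.5 = 0.875;  EU(g) = 0.25*1 + 0.75*0 = 0.25
# so f is preferred to g under these beliefs.
```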

This last part is something of a leap, but the basic notion is that a rational agent would not choose an action f over an action g if that choice did not correspond to his beliefs about the probabilities of the occurrence of states. In horse-racing language, π(s) corresponds to the bettor's beliefs about the outcome of the horse race (the different states), because a bettor would not rationally prefer a betting strategy that contradicts his beliefs.

 

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

All rights reserved, Gonçalo L. Fonseca