Epistemic Interpretations of Probability

Two recent episodes (Fitelson, Ep. 31; Vasudevan, Ep. 45) have mentioned ‘epistemic interpretations’ of probability and Bayes’ Theorem. For Fitelson, Bayes’ Theorem provides a model for inductive reasoning, and he is concerned with deviations from this model (as in the ‘base rate fallacy’ and ‘Linda cases’). Vasudevan takes epistemic interpretations of probability as the historical response to the apparent tension between determinism and our intuitions about chance events like the flip of a coin—a response which he ultimately rejects. Bayes’ Theorem and the epistemic interpretation of probability are intimately related: one prominent view in the philosophy of probability, Bayesianism, uses Bayes’ Theorem to represent changes in our beliefs in light of new evidence, and doing so assumes an epistemic interpretation of probability. In this post, let’s take a look at some of the basic ideas and motivations behind the epistemic interpretation. In the next, we will look at some of the ‘mathematical machinery’ that the Bayesian, who assumes an epistemic interpretation of probability, employs.

The core idea behind an epistemic interpretation of probability (such as Bayesianism) is that we can represent our degree of confidence in some proposition or event as a probability. This isn’t to say that one’s degree of belief is identical to a probability (or that these degrees of belief can be precisely measured by probabilities), but rather that expressing one’s confidence in a belief as a probability can be a helpful way of representing it.

How do we link up our confidence in a belief with probabilities? Typically, this is done by examining an agent’s dispositions to place varying bets on her beliefs. (‘Agent’ here is basically just jargon for a person who holds a set of beliefs and makes decisions based on these beliefs—not to be confused with 007 or Ben Affleck in Argo!) For example, let’s imagine that our agent, Emily, believes that it will rain in Chicago on April 15th. We can imagine offering Emily a bet: we’ll ask her to bet $5 that it will in fact rain in Chicago on April 15th, and we’ll match her bet with $5 that it won’t. The total ($10) goes into a pot, and whoever wins the bet receives the total. Let’s say Emily takes this bet. We can then offer her another one: $6 that it will rain, $4 that it won’t rain. She might take this bet as well. (It’s important to note that we’re not asking if Emily would take the second bet instead of the first one—we’re not offering both at once—but rather whether she’d take the second bet at all.) We can keep offering Emily bets of this form ($7 to $3, $8 to $2, and so on) until she decides not to take the bet. Let’s say that the last bet Emily takes is at $7 (that it will rain in Chicago on April 15th) to $3 (that it won’t rain in Chicago on April 15th), so that she won’t take bets at any ‘worse’ ratio (e.g., $8 to $2, or $7.01 to $2.99, etc.).

We can calculate the ‘betting ratio’ for this last bet Emily takes, where the betting ratio is the amount Emily bets divided by the total amount in the pot. So, in our example, we have $7/$10, or 0.7. The Bayesian takes this value, the betting ratio, as a representation of Emily’s degree of belief that it will rain in Chicago on April 15th. We can extend this procedure to the rest of Emily’s beliefs to form a ‘ranking’ of Emily’s beliefs, from those she is most confident in (those with the highest betting ratio) to those she is least confident in (those with the lowest betting ratio).
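For the computationally inclined, here is a minimal sketch of the betting-ratio calculation just described (the function name and numbers are my own illustration, not anything from the episodes):

```python
def betting_ratio(agent_stake: float, opponent_stake: float) -> float:
    """The betting ratio is the agent's stake divided by the total pot."""
    total_pot = agent_stake + opponent_stake
    return agent_stake / total_pot

# Emily's last accepted bet: her $7 against our $3.
print(betting_ratio(7, 3))  # 0.7
```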

Now the trick is to link up these betting ratios with probabilities. ‘0.7’ sure looks like a probability, but we can make a stronger argument. In particular, we can argue that when an agent’s betting ratios are rational, they accord with the basic rules of probability. Let’s sketch out these rules and show how rational betting ratios reflect them. (Note: here I draw heavily from Hacking’s An Introduction to Probability and Inductive Logic. It is an excellent general introduction to issues of probability and inductive reasoning, and I highly recommend it to interested readers with minimal background in the subject.)

Normality: One basic rule of probability is that all probabilities are valued between 0 and 1. ‘Rational’ betting ratios reflect this rule: the minimum ratio occurs when an agent bets nothing—so that the betting ratio is 0 divided by the total amount in the pot, or 0—and the maximum reasonable ratio occurs when she bets the total amount in the pot—so that the amount bet divided by the total amount in the pot is 1. To bet more would be to throw money away: if the total pot I could win was $10, and I contributed $12 toward that pot, then I would be down at least $2 even if I won. An agent should never bet more than the total amount of the pot, because then she would lose money no matter the outcome.
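As a toy illustration of why such a bet is irrational (my own sketch, not anything from Hacking), consider the net payoff of staking more than the pot is worth:

```python
def net_payoff_if_won(agent_stake: float, total_pot: float) -> float:
    """Winning pays out the pot, but the agent has already paid in her stake."""
    return total_pot - agent_stake

# Staking $12 toward a $10 pot loses $2 even in the best case.
print(net_payoff_if_won(12, 10))  # -2
```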

Certainty: Another basic rule of probability is that the probability of a certainty is 1. This may seem to build an epistemic interpretation into the rule (as ‘certainty’ is typically taken to be an epistemic state), but we can avoid this by taking ‘certainty’ as ‘encompassing all the possibilities,’ or at least all the live possibilities. For example, we take the possibilities resulting from a flip of a coin to be heads or tails. Maybe, if we were dealing with an actual physical event of a coin-flip, we’d also account for the possibility of the coin landing on its side. But we would not consider the possibility that the coin vaporizes in mid-air as a live possibility. So, in the case of an actual coin flip, we would say that the probability that it lands either heads, tails, or on its side is 1.

A rational agent’s betting ratio on such a certainty would be 1, because otherwise she could be, as Fitelson puts it, ‘pumped for money.’ For example, if Emily offers a betting ratio of 0.9 on a coin-flip ending up heads, tails, or on its side, this means that she will offer a 0.1 betting ratio that none of these things will happen. We can take this second bet, betting, say, our $9 to her $1, and take her $1 every time—which would indicate a flaw in Emily’s inductive reasoning. So rational betting ratios also match up with the ‘certainty’ rule of probability.
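To make the money-pump concrete, here is a small sketch (again, my own illustration) of what happens to Emily’s $1 bet that none of the live possibilities occurs:

```python
import random

LIVE_POSSIBILITIES = {"heads", "tails", "side"}

def emilys_payoff(outcome: str) -> float:
    """Emily stakes $1 that the flip lands on none of the live possibilities;
    we stake $9 that it lands on one of them."""
    return 9.0 if outcome not in LIVE_POSSIBILITIES else -1.0

# Every actual flip lands on a live possibility, so Emily loses her $1 each time.
flips = [random.choice(sorted(LIVE_POSSIBILITIES)) for _ in range(5)]
print([emilys_payoff(flip) for flip in flips])  # [-1.0, -1.0, -1.0, -1.0, -1.0]
```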

Additivity: This last basic rule of probability is essentially an extension of the previous one: the probability of a disjunction (‘A or B’) is equal to the sum of the probabilities of its disjuncts (‘A’ and ‘B’), given that A and B are disjoint (mutually exclusive). For example, when rolling one six-sided die, the probability that it comes up with a 3 or a 4 is equal to the probability that it comes up a 3 (1/6) plus the probability that it comes up a 4 (1/6), or 1/3.
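In code, the die example is a trivial check (using exact fractions to avoid rounding):

```python
from fractions import Fraction

p_three = Fraction(1, 6)
p_four = Fraction(1, 6)

# 'Comes up 3' and 'comes up 4' are mutually exclusive, so the probability
# of the disjunction is the sum of the probabilities of the disjuncts.
p_three_or_four = p_three + p_four
print(p_three_or_four)  # 1/3
```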

The idea of rational betting behavior reflecting this rule is similar to the one described in the last paragraph: if an agent offers betting ratios on A, B, and A or B that do not follow the rule, we can set up a ‘sure-loss’ contract. For example, let’s say that Emily offers the following betting rates:

A: 0.3

B: 0.4

A or B: 0.5

Notice that Emily’s rate on A or B does not follow the rule of additivity—if it did, the rate would be 0.7 (0.3 + 0.4). We might then ask Emily to make the following three bets (with the total stakes for each bet being $1):

  1. Bet $0.30 on A being true. If A is true, Emily wins the total stakes minus her bet, or $0.70. If A is false, Emily loses $0.30.
  2. Bet $0.40 on B being true. If B is true, Emily wins $0.60. If B is false, Emily loses $0.40.
  3. Bet $0.50 on A or B being false. If neither A nor B is true (so that ‘A or B’ is false), Emily wins $0.50. If either A or B is true, Emily loses $0.50.

Given that these bets reflect Emily’s betting rates as we described above, she’ll take them. But if she does, she runs into a problem: she’ll lose money no matter what happens!

| Result | Payoff on (1) | Payoff on (2) | Payoff on (3) | Total Payoff |
|---|---|---|---|---|
| A true, B false | $0.70 | -$0.40 | -$0.50 | -$0.20 |
| A false, B true | -$0.30 | $0.60 | -$0.50 | -$0.20 |
| A false, B false | -$0.30 | -$0.40 | $0.50 | -$0.20 |

We’ve tricked poor Emily into a ‘sure-loss’ contract: no matter what the result is (remember, A and B both being true is not a possibility, as we’ve stipulated that they are mutually exclusive events), Emily loses money. But these are bets that she would have taken, as they reflect the betting rates she offered above. The problem Emily ran into was that her betting ratios did not reflect the probabilistic rule of additivity—we could go on to generalize this result to any case in which Emily’s betting ratios for A and B separately did not sum to her betting ratio for A or B, given that A and B are mutually exclusive events. So rational betting behavior, the kind that avoids ‘sure-loss’ contracts, must follow the rule of additivity.
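If you want to check the arithmetic in the table for yourself, here is a short sketch (the names and structure are my own) that recomputes Emily’s total payoff for each possible outcome:

```python
def payoff(betting_rate: float, total_stakes: float, bet_wins: bool) -> float:
    """Emily stakes betting_rate * total_stakes; she wins the rest of the
    pot if the bet comes off, and loses her stake otherwise."""
    stake = betting_rate * total_stakes
    return (total_stakes - stake) if bet_wins else -stake

TOTAL_STAKES = 1.0  # $1 at stake on each bet
rates = {"A": 0.3, "B": 0.4, "A or B": 0.5}

# A and B are mutually exclusive, so these are the only possible results.
for a_true, b_true in [(True, False), (False, True), (False, False)]:
    bet1 = payoff(rates["A"], TOTAL_STAKES, bet_wins=a_true)        # bet on A being true
    bet2 = payoff(rates["B"], TOTAL_STAKES, bet_wins=b_true)        # bet on B being true
    bet3 = payoff(rates["A or B"], TOTAL_STAKES,
                  bet_wins=not (a_true or b_true))                  # bet on 'A or B' being false
    total = round(bet1 + bet2 + bet3, 2)
    print(f"A {a_true}, B {b_true}: total payoff {total}")  # -0.2 in every case
```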

So, the proponent of an epistemic interpretation of probability points out, what we take to be rational betting behavior (i.e., the kind that won’t get you ‘pumped for money’) in fact reflects these three basic rules of probability. A more rigorous formulation of these points could be built up into a positive case for an epistemic interpretation of probability. Combined with the intuitive negative argument (that the alternative, an objective interpretation of probability, is incompatible with determinism, which we take to be true on separate grounds), this seems to present a strong case for the epistemic line of interpretation: probability models our betting behavior (and inductive reasoning more generally) pretty well, and it can only mesh with our deterministic view of the universe if we treat probabilities as reflections of our beliefs about the world, rather than as features of events in the world. Vasudevan attacks both conjuncts of this argument in his episode. First, even if probability models our inductive reasoning pretty well, so that we can justify our inductive beliefs by appealing to their conformity to the rules of probability, it is quite different to say that the propositions we are expressing are about our epistemic states (our degrees of belief) rather than about the world. Second, there is in fact no tension between an objective interpretation of probability and determinism, once we pay attention to the context in which propositions containing probabilities are asserted.

Still, the epistemic interpretation of probability has been popular historically, so it may be of interest to see how we might employ some of the mathematical machinery of probability if we were to assume the epistemic interpretation. In the next post, we’ll talk about one of the main bits of machinery that epistemic interpretations employ: Bayes’ Theorem.

Phil Yaure

