Ethical Theory Spring 2022

Should You Vote?


Suppose you believe that it is important that Daisy wins the election; society would be much better off with Daisy in charge than if her competitor, Donald, were to win. So you believe that it is important that enough people vote for Daisy to ensure that she wins.

Does it follow that you should vote for Daisy? It is hard to make the case. There are millions of voters, and your vote will make a difference only if the rest split exactly evenly between Daisy and Donald. That is so unlikely that the value of voting seems negligible; almost anything else you could do instead would have better consequences. It isn't very high-minded, but even taking a nap would be a better use of your time.

This yields a paradoxical result. It is very important that a lot of us vote for Daisy, but it is not important that any particular individual votes for her. And if all the individual voters think this way, they won't do the thing that they all agree is important, namely, electing Daisy.

Zach Barnett argues that it is individually rational for someone who cares about the social good to vote (Barnett 2020). His paper is a great example of how to use probabilistic reasoning when thinking about the consequences of your actions.

The argument in a nutshell

What is at issue is the expected value of voting. That expression refers to the probability that voting will bring about a good outcome multiplied by the value of that outcome.1 If I am offered the opportunity to bet that a coin will land on heads with a $5 payoff for being right, the expected value of the bet is $2.50: 50% of $5 or .5 × $5.
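The coin-bet arithmetic can be put in a couple of lines of Python (my illustration, not from Barnett's paper):

```python
def expected_value(p_win, payoff):
    """Probability of the good outcome times its value."""
    return p_win * payoff

# The bet from the text: a 50% chance of a $5 payoff.
print(expected_value(0.5, 5))  # 0.5 * $5 = $2.50
```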

Barnett’s opponents calculate the expected value of voting as being astonishingly low. He argues that their calculations rely on what is called a binomial model of elections. A binomial model begins with an estimate of how likely it is that a candidate will win an election and then assumes that each voter has an identical probability of voting for that candidate. If Daisy is thought to have a 52% chance of winning, then each voter is assumed to have a 52% chance of voting for her. However, Barnett argues, the binomial model is flawed: in particular, it treats the probability that Daisy will win as far more certain than it really is (see Barnett 2020, 439–41). To illustrate the point: the best estimates in the last two US presidential elections greatly underestimated support for Donald Trump, yet the binomial model, I gather, would have predicted that a Trump victory was extraordinarily unlikely.
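To see where the opponents' tiny numbers come from, here is a sketch of the binomial model's tie probability — the chance that the other voters split exactly evenly, so that one more ballot would decide the race. The code and the specific numbers are my illustration, not Barnett's:

```python
import math

def log10_tie_probability(n_other_voters, p):
    """Binomial model: log10 of the probability that n_other_voters
    (assumed even) split exactly evenly when each votes for the
    candidate with independent probability p."""
    k = n_other_voters // 2
    # log of C(n, k) * p**k * (1-p)**k, computed via lgamma to
    # avoid overflow/underflow for large n
    log_prob = (math.lgamma(n_other_voters + 1) - 2 * math.lgamma(k + 1)
                + k * (math.log(p) + math.log(1 - p)))
    return log_prob / math.log(10)

# With a million other voters, each voting Daisy with probability 0.52,
# a tie has probability somewhere around 10**-350 -- numbers like this
# drive the opponents' "astonishingly low" expected value of voting.
print(log10_tie_probability(1_000_000, 0.52))
```

Note how sensitive the result is to p: at p = 0.50 exactly, the same function gives a tie probability around one in a thousand, which is part of why Barnett thinks overconfidence about the winning probability matters so much.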

Barnett does not propose an alternative model of elections. Rather, he proposes two assumptions that, he says, are shared by all models, including the binomial one (Barnett 2020, 434–36). The upshot of these, very roughly, is that in a reasonably close election, the candidate who is projected to lose still has at least a 10% chance of winning, and there is at least a 5% chance that the loser’s vote share is between 45 and 50%. With these numbers in hand, Barnett maintains he can show that the probability d of an individual’s casting the decisive vote, and so making a difference in the election, satisfies what he calls the chances condition, d ≥ 1/N, where N is the total number of citizens (Barnett 2020, 436–38, 426–27).
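To see why meeting the chances condition matters, consider a toy calculation (the numbers here are hypothetical, mine rather than Barnett's): if the better candidate's victory is worth some benefit b to each of N citizens, then a decisiveness probability of at least 1/N guarantees an expected social value of at least b per vote, because the N's cancel.

```python
N = 10_000_000   # hypothetical number of citizens
b = 100.0        # assumed per-citizen benefit of the better outcome

total_benefit = b * N   # social value of the better candidate winning
d = 1 / N               # lower bound from the chances condition, d >= 1/N

# Expected social value of voting >= (1/N) * (b * N) = b
expected_social_value = d * total_benefit
print(expected_social_value)  # at least b, i.e. about $100
```

So even a one-in-ten-million chance of being decisive can make voting worthwhile, provided the stakes scale with the size of the electorate.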

That, he maintains, is enough to show that voting is rational for someone who cares about the social good and believes one candidate is significantly better than the other.

Should you use expected utility with extreme outcomes?

Expected utility calculations can get strange when the value (or disvalue) of an outcome is extremely large. When this is so, even a minuscule probability that an outcome will happen as a result of something you might do has to govern your decisions. You might wonder whether that makes sense.

I included an optional reading by Bostrom titled “Pascal’s Mugging” that makes the point in an amusing way. Barnett addresses the objection in section E of his essay (Barnett 2020, 438–39).


Barnett, Zach. 2020. “Why You Should Vote to Change the Outcome.” Philosophy & Public Affairs 48 (4): 422–46. doi:10.1111/papa.12177.

  1. Strictly speaking, we would also have to subtract from this the disvalue of the bad outcomes, each multiplied by the probability that it will happen.↩︎