Update, March 2018: A revised version of this post has recently been published under the title “Egalitarianism about expected utility” in Ethics. The most important difference is that the example given in Table 2, below, has been corrected in the published version. The version given here doesn't quite work, though the basic point being made still stands.
Voorhoeve and Fleurbaey’s Hybrid Egalitarianism
In “Priority or Equality for Possible People” (2016), Alexander Voorhoeve and Marc Fleurbaey develop a novel theory of distributive ethics for choices under risk. They avoid the difficulties of population ethics by focusing on cases in which the same number of people are guaranteed to exist, but they take on the challenge of including non-identity cases – i.e. cases in which your decision affects who will exist.
Their theory is a form of “hybrid egalitarianism”, which addresses concerns for fairness not only in the distribution of actual wellbeing in whatever outcome comes about, but also in the distribution of chances.
Below I raise some doubts about the viability of this project.
Setup
The key ethical idea that motivates hybrid egalitarianism (V&F 2012; 2016) is that, when evaluating possible distributions, it is necessary not merely to consider the final outcomes, but also each person’s expected utility.
Consider: are the following two lotteries equivalent, where \(b > a > 0\)?
Lottery A:
|          | \(\frac{1}{2}\) | \(\frac{1}{2}\) |
|----------|-----------------|-----------------|
| Person 1 | \(a\)           | \(b\)           |
| Person 2 | \(b\)           | \(a\)           |
Lottery B:
|          | \(\frac{1}{2}\) | \(\frac{1}{2}\) |
|----------|-----------------|-----------------|
| Person 1 | \(a\)           | \(a\)           |
| Person 2 | \(b\)           | \(b\)           |
Person 1 prefers \(A\) over \(B\), and Person 2 prefers \(B\) over \(A\). Notice that the total utility of each outcome is always \(a + b\), and the amount of inequality in each outcome is identical. The key difference is that the expected utilities of Persons 1 and 2 differ between the two lotteries. Voorhoeve and Fleurbaey assert that this is an important difference: even though in some sense the possible outcomes are equivalent, there is a reason to prefer Lottery \(A\).
The ethical idea here is something like this: if Person 1 complains about receiving outcome \(a\), we can better defend that outcome if it resulted from Lottery \(A\) than from Lottery \(B\). We can explain that, although the actual outcome was poor for them relative to others, they had a chance at a much better outcome. Indeed, their expected utility in \(A\) is perfectly equal to the other person’s.
To accommodate this idea, Voorhoeve and Fleurbaey recommend adjusting the “currency” of distributive concern. It is not simply wellbeing: it is a hybrid of outcome wellbeing and expected wellbeing. We can represent this hybrid by transforming each lottery as follows: add to the contents of each cell a term that is a function \(f\) of the expected utility. So the value of each outcome depends, in part, on what sort of chances it arose from. Below the lotteries \(A'\) and \(B'\) represent the effect of converting our original lotteries into this novel currency.[1]
\[A' = \begin{pmatrix} a + f(\frac{a+b}{2}) & b + f(\frac{a+b}{2})\\ b + f(\frac{a+b}{2})& a + f(\frac{a+b}{2})\\ \end{pmatrix}\]
\[B' = \begin{pmatrix} a + f(a) & a + f(a)\\ b + f(b) & b + f(b)\\ \end{pmatrix}\]
A further step is then performed on the lotteries to incorporate egalitarian concern regarding actual outcomes. For present purposes, however, we can ignore this step, because the examples that I will discuss all involve zero outcome inequality; indeed, they can be posed with single-person populations. Henceforth, I focus on single-person lotteries (prospects). So according to hybrid egalitarianism, the value of a prospect \(((p, x_1), (1-p, x_2))\) is:[2]
\[p\left(x_1 + f(p x_1 + (1-p) x_2)\right) + (1-p)\left(x_2 + f(p x_1 + (1-p) x_2)\right)\]
Voorhoeve and Fleurbaey argue that the function \(f\) should be, like a prioritarian weighting function, increasing and concave. By that property, \(f(\frac{a+b}{2}) > \frac{f(a) + f(b)}{2}\), and so it is straightforward to see why Lottery \(A\) will be more valuable than \(B\). If the math is a bit opaque to you, the key idea is that, in general, a lottery will be more valuable if expected utility is distributed more equally across persons.
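To make the comparison concrete, here is a minimal numeric sketch (my own illustration, not V&F’s). I arbitrarily take \(f(x) = \sqrt{x}\) as the increasing, strictly concave weighting function, and ignore the outcome-egalitarian step, which is idle here. The function and variable names are mine.

```python
import math

def f(x):
    # Hypothetical weighting function: increasing and strictly concave,
    # standing in for V&F's unspecified f.
    return math.sqrt(x)

def hybrid_value(lottery, probs):
    """Hybrid egalitarian value of a lottery (outcome-egalitarian step omitted).

    `lottery` maps each person to a list of outcome wellbeings, one per state;
    `probs` gives the state probabilities. Each cell is transformed to
    w + f(expected utility) before taking expectations and summing over persons.
    """
    total = 0.0
    for wellbeings in lottery.values():
        eu = sum(p * w for p, w in zip(probs, wellbeings))
        total += sum(p * (w + f(eu)) for p, w in zip(probs, wellbeings))
    return total

a, b = 1, 9
A = {"person1": [a, b], "person2": [b, a]}
B = {"person1": [a, a], "person2": [b, b]}
probs = [0.5, 0.5]

print(hybrid_value(A, probs))  # a + b + 2 f((a+b)/2) = 10 + 2*sqrt(5) ≈ 14.47
print(hybrid_value(B, probs))  # a + b + f(a) + f(b) = 10 + 1 + 3 = 14.0
assert hybrid_value(A, probs) > hybrid_value(B, probs)
```

Because expected utility is spread equally in \(A\), the concave \(f\) rewards it: \(2f(\frac{a+b}{2}) > f(a) + f(b)\).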
The key takeaway message is that the value of an outcome of a lottery is an increasing function of two factors: (i) the aggregate expected utility in that outcome (adjusted for egalitarian concerns), and (ii) the expected utility enjoyed by each person in that outcome.
The first condition, given its egalitarianism, implies that the value of a lottery is not separable with respect to prospects. The second condition entails nonseparability with respect to outcomes also, because the value of one outcome depends on what might happen in other outcomes.
What sort of expected utility?
One final subtlety to note: Voorhoeve and Fleurbaey say that for the purposes of the second condition, we should not use expected utility simpliciter, but expected utility conditional on existence. That means that, for a subject whose prospect is \(((0.2, a), (0.4, b), (0.4, \Omega))\), where \(\Omega\) marks the outcome in which the person does not exist, we set aside the third possible outcome and calculate the expected utility conditional on either the first or second outcome occurring. So that yields:
\[ CEU((0.2, a), (0.4, b), (0.4, \Omega)) = \frac{0.2\, a + 0.4\, b}{0.2 + 0.4} \]
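As a quick sanity check, here is a sketch of conditional expected utility for a prospect given as (probability, wellbeing) pairs. The encoding (with `None` standing in for the nonexistence outcome) and the function name are mine:

```python
def ceu(prospect):
    """Expected utility conditional on existence.

    `prospect` is a list of (probability, wellbeing) pairs; a wellbeing of
    None marks outcomes in which the person does not exist. Those outcomes
    are dropped and the remaining probabilities renormalised.
    """
    existing = [(p, w) for p, w in prospect if w is not None]
    p_exist = sum(p for p, _ in existing)
    return sum(p * w for p, w in existing) / p_exist

a, b = 5, 8
print(ceu([(0.2, a), (0.4, b), (0.4, None)]))
# = (0.2*5 + 0.4*8) / 0.6 = 7, up to float rounding
```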
The motivation here is that this is a way of respecting a person-affecting intuition, or something in that vicinity – if we did not condition on existence in this way, we would regard nonexistence as the same as existing at wellbeing zero. Someone should not be able to complain that their expected wellbeing is low merely because they had a large chance of not existing. To illustrate, consider the choice between the following distributions:
Table 1
|     | \(p\) | \(1-p\) |
|-----|-------|---------|
| Ann | 7     | –       |
| Bob | –     | 7       |
Suppose that we can choose \(p\). According to Voorhoeve and Fleurbaey, we may choose any level of \(p\), being indifferent between conferring a large chance of existence on one person or the other, given that their outcome wellbeing levels are equivalent. But if we calculate expected wellbeing without conditioning on existence (\(EU_{Ann} = 7p\); \(EU_{Bob} = 7(1-p)\)), we get the result that if \(p\) is greater than one half, Bob’s expected utility is lower than Ann’s, and vice versa if \(p\) is less than one half. So, absurdly, we maximize value by conferring equal chances of existence on both. But if we condition on existence, then both Ann’s and Bob’s expected utility is constant, so these quantities are insensitive to \(p\).
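The pressure toward \(p = \frac{1}{2}\) can be illustrated schematically. Without conditioning, the egalitarian component of value depends on the unconditional expected utilities \(7p\) and \(7(1-p)\), and any strictly concave weighting of them peaks when they are equal. This is a stylized stand-in for V&F’s full aggregation, not their actual value function, and \(f(x) = \sqrt{x}\) is again my arbitrary choice:

```python
import math

def f(x):
    # Arbitrary increasing, strictly concave weighting function.
    return math.sqrt(x)

def unconditional_score(p):
    # Concave weighting of the two unconditional expected utilities,
    # EU_Ann = 7p and EU_Bob = 7(1-p).
    return f(7 * p) + f(7 * (1 - p))

def conditional_score(p):
    # Conditional on existence, both expected utilities are simply 7,
    # whatever p is chosen.
    return f(7) + f(7)

ps = [0.1, 0.3, 0.5, 0.7, 0.9]
print([round(unconditional_score(p), 3) for p in ps])  # peaks at p = 0.5
print([round(conditional_score(p), 3) for p in ps])    # constant in p
```

By symmetry and concavity, the unconditional score is maximized at \(p = \frac{1}{2}\) (the “absurd” equal-chances recommendation), while the conditional score is flat, reflecting the indifference V&F want.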
Similarly, if we modify the example to make Ann’s wellbeing slightly greater than Bob’s, rather than maximizing value by setting \(p\) to one, we would be required to give Bob some nonzero chance of existing, to compensate for the inequality in expected wellbeing that will result.
Mark this rationale. We will return to it.
Wrong currency
Voorhoeve and Fleurbaey have erred in their suggestion that this hybrid currency is the appropriate object of distributive concern.[3] The first counterexample I present relies on cases in which individuals have negative wellbeing levels, where Voorhoeve and Fleurbaey specify that negative wellbeing is understood to be worse for an individual than not existing.
Here’s the key idea: compare these lotteries:
\[A = \begin{pmatrix} -10 & -2\\ \end{pmatrix}\]
\[B = \begin{pmatrix} -10 & \Omega\\ \end{pmatrix}\]
Here \(\Omega\) marks the outcome in which the person never exists, and the two outcomes are equiprobable. Intuitively, \(B\) is strictly better than \(A\), because it reduces the chance that someone exists at a miserable level of existence, where they would prefer never to have existed. To the extent that there is more negative utility in \(A\), V&F will support this conclusion. But note that the subject’s conditional on existence expected utility in \(A\) is \(\frac{-10 - 2}{2} = -6\), whereas in \(B\) it is simply \(-10\). As a result, there is a pro tanto reason for V&F to prefer \(A\). So we have good reason to expect there will be some cases of this type, where they rank a lottery such as \(A\) as strictly better.
This suspicion can be proven.
Claim: for any \(a < 0\), and any probability \(p\), there exists some \(b < 0\) such that the hybrid egalitarian value of the lottery \(A = ((p, a), ((1-p), b))\) is greater than that of the lottery \(B = ((p, a), ((1-p), \Omega))\), where \(\Omega\) is nonexistence, provided the weighted value of a conditional expected utility of zero is also zero, i.e. \(f(0) = 0\).[4]
So I claim this is a counterexample to the theory. For any lotteries analogous to \(A\) and \(B\) above, a satisfactory axiology should rank \(A\) as strictly worse.
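The claim can be checked numerically, using the value expressions from the proof in note [4]. The particular choice of weighting function is mine: \(f(x) = 1 - e^{-x}\) is increasing, strictly concave, defined for negative arguments, and satisfies \(f(0) = 0\). Even a mildly bad life \(b\) near zero makes \(A\) beat \(B\):

```python
import math

def f(x):
    # Hypothetical weighting function: increasing, strictly concave,
    # with f(0) = 0, defined for negative arguments as well.
    return 1 - math.exp(-x)

def V_A(p, a, b):
    # Value of A = ((p, a), (1-p, b)): the person exists in both outcomes,
    # so conditional expected utility is just p*a + (1-p)*b.
    ceu = p * a + (1 - p) * b
    return p * a + (1 - p) * b + f(ceu)

def V_B(p, a):
    # Value of B = ((p, a), (1-p, Ω)): the nonexistence outcome contributes
    # nothing, and conditional expected utility is simply a.
    return p * (a + f(a))

p, a, b = 0.5, -10.0, -0.01
print(V_A(p, a, b))  # ≈ -153.2
print(V_B(p, a))     # ≈ -11017.7
assert V_A(p, a, b) > V_B(p, a)  # adding the miserable extra life "improves" A
```

Because the conditional expected utility in \(A\) (\(-5.005\)) is far less bad than in \(B\) (\(-10\)), the concave weighting swamps the small loss of total wellbeing, exactly as the claim predicts.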
In trying to understand the nature of this problem, it is helpful to compare it to a similar problem for average utilitarianism. Because nonexistence does not count as zero for an average utilitarian, if everyone else is doing very poorly, then adding someone who exists at a negative wellbeing level can raise the average utility in an outcome. But making someone exist at a negative wellbeing level is surely making things worse. Here the impact does not arise from averaging across other people’s wellbeing levels, but from averaging across the other wellbeing levels that a single person obtains, in other outcomes. If their average is already negative, a less extreme negative outcome will raise their conditional expected utility, and be pro tanto good. This is implausible.
A bad example?
There are two reasons to think that this example is not in fact decisive. First, it involves negative wellbeing levels, and V&F explicitly said that they didn’t intend their theory to cover negative wellbeing levels. Second, it involves possible outcomes in which the total number of people existing varies. Hence it opens up the Pandora’s box of population ethics, which they also said they were not addressing.
Regarding the first concern, here is what V&F say:
We … stipulate that all lives that might come about have positive wellbeing. We do so in order to focus solely on the distribution of chances of benefits and goods, rather than having to balance them against risks of harms or evils, which a decision maker may have special reason to avoid. (V&F 2016: 933)
This suggests that they anticipate difficulty in reconciling both positive and negative welfare levels in a single decision problem. But the class of examples I rely upon involve only negative welfare levels, so no such difficulty arises.
If that reply is not satisfactory, however, an analogous problem can be demonstrated using positive welfare levels.
The converse problem with positive wellbeing
Consider the following lotteries, supposing \(a > b > 0\), and assuming that the two outcomes are equiprobable.
\[A = \begin{pmatrix} a & b\\ \end{pmatrix}\]
\[B = \begin{pmatrix} a & \Omega\\ \end{pmatrix}\]
Here \(\Omega\) marks nonexistence. Now if you share some degree of person-affecting intuition (nonexistent people cannot complain; people with positive wellbeing cannot complain about the fact of their existence), \(A\) should certainly be no worse than \(B\): you are increasing the chance that the person exists at a positive wellbeing level (\(\frac{a+b}{2} > \frac{a}{2}\)). But for the hybrid egalitarian, you are also reducing their conditional on existence expected wellbeing (\(a > \frac{a+b}{2}\)). By reasoning exactly analogous to the above, we can show that for any \(a > 0\) and any probability \(p\), there exists some positive \(b\) such that the lottery \(A\) is strictly worse than \(B\).
A bad example, again?
I still have not responded to the second reason to reject this example: the case in question involves variable population size. We already know that population ethics cannot satisfy all our intuitions about an adequate axiology. So we perhaps should not ask V&F’s theory to do better.
That might be right. But I have at least some grounds to suspect that the move to conditional expected utility in particular is ill-founded. Whatever sympathy you may have for egalitarianism about expected utility, V&F’s proposal that we use the form of expected utility that ignores nonexistence appears undermotivated.
Consider the following case, which is a more complicated variant on the Ann and Bob case introduced in Table 1 above.
Table 2
|     | \(p\)          | \(q\) | \(r\) | \(1-p-q-r\)    |
|-----|----------------|-------|-------|----------------|
| Ann | \(7+\epsilon\) | 7     | –     | –              |
| Bob | –              | –     | 7     | \(7-\epsilon\) |
In the original case, conditional on existence expected utility did not depend on the probabilities of either state, so it could be ignored as a factor, and the right response was to maximize utility by setting \(p\) to one.
In this case, because the outcomes are a little more complex, it is not possible to eliminate the probabilistic dependence of conditional expected utility on the probabilities \(p,q,r\). So we get the following expressions for Ann and Bob’s CEU:
\[CEU_{Ann} = (7+\epsilon)p + 7q\]
\[CEU_{Bob} = 7r + (7-\epsilon)(1-p-q-r)\]
Now, setting \(p = 1\) will lead to a highly unequal distribution of conditional expected utility. Consequently, because of the concavity of the function \(f\), you will maximize the currency of distributive concern by setting \(p<1\), and giving Bob some chance of existing. This is exactly the absurdity that V&F were trying to avoid by the move to conditional on existence expected utility, but with a slightly more sophisticated example, the problem returns.
So now there are three strikes against conditional expected utility. First, it leads to problems in variable population cases; the examples I gave above are handled fine if you use ordinary expected utility, treating nonexistence as zero. Second, it leads to Nebel’s problem cases in fixed population cases. Third, it doesn’t fix a slight variation on the original case that V&F used to motivate the idea, as shown above. All this is highly suggestive that conditional expected utility is only a fix for an artificially narrow range of examples, and is not an appropriate metric if we want a theory that reflects a concern for having roughly equal life chances.
Acknowledgments
Thanks to Frank Calegari for assistance with constructing the proof. Thanks to Marc Fleurbaey, Wlodek Rabinowicz, and participants at the ANU Workshop on Themes from the Work of Marc Fleurbaey for helpful comments and suggestions.
References
Nebel, J. M. (2017). Priority, Not Equality, for Possible People. Ethics.
Voorhoeve, A., & Fleurbaey, M. (2012). Egalitarianism and the Separateness of Persons. Utilitas, 24(3), 381–398.
Voorhoeve, A., & Fleurbaey, M. (2016). Priority or Equality for Possible People? Ethics.

Strictly, I should have some way of representing the probabilities of the outcomes also. Whenever I use this sort of notation, assume that the various states are equiprobable. So here, each outcome occurs with probability \(\frac{1}{2}\). ↩

A prospect \(((p, x_1), (1-p, x_2))\) means an alternative that brings the subject outcome \(x_1\) with probability \(p\) and \(x_2\) with probability \(1-p\). ↩

Jacob Nebel has developed an independent line of criticism of hybrid egalitarianism. Like the examples here, Nebel’s example relies on variable chances of existence. Unlike the examples here, his examples necessarily involve multiple persons who might exist. Moreover, Nebel manages to avoid the problem of variable populations. Both lines of critique appear successful to the present author. See Nebel for enlightening discussion of alternative views that might satisfy the intuitions which inspire V&F’s project. ↩

We aim to establish the following inequality holds, subject to the conditions above:
\[V(A) = pa + (1-p)b + f(pa + (1-p)b) > V(B) = p(a + f(a))\]
Note that if we hold \(p,a\) constant, the left hand side is an increasing function of \(b\), and the right hand side is constant. Our strategy is to show that the inequality holds for \(b = 0\), and then argue that because the function is continuous, there exist values of \(b < 0\) nearby, such that the inequality will continue to hold.
Suppose \(b = 0\), then the inequality \(V(A) > V(B)\) reduces to:
\[f(pa) > pf(a)\]
Recall that \(f\) is concave. Assume further that it is strictly concave. By definition, then, we know that:
\[f(pa + (1-p)b) > pf(a) + (1-p)f(b)\]
Given we are considering the case where \(b = 0\), and we are assuming that \(f(0) = 0\), then we obtain:
\[f(pa + (1-p)\cdot 0) > pf(a) + (1-p)\cdot 0\]
which reduces to \(f(pa) > pf(a)\).
For nonzero \(b\), the inequality will continue to hold provided the following expression is positive:
\[(1-p)b + f(pa + (1-p)b) - pf(a)\]
We know that this is positive when \(b = 0\). Because \(f\) is concave, it must be continuous; hence there must be a value of \(b\) in some neighbourhood of zero, such that the value of the above expression is still positive. \(\Box\)
(The assumption that \(f\) is strictly concave might be questioned, but the only functions that are concave without being strictly concave are linear, which would defeat V&F’s purpose in introducing concavity. So it seems safe in this context.) ↩