The first concerns the critique that EA is a “misery trap”. Michael argues that because EA is (at least potentially) a totalising philosophy - “do as much good as possible” - it can’t provide a framework for a satisfying life: you could always do more good by making yourself more miserable (Michael gives examples of EAs who denied themselves ice cream, or even having children, because other uses of those resources would do more good). I recognise the force of the argument, but it strikes me that if there are correct moral principles, we should expect them to be demanding and difficult to live by (otherwise the world would already be a better place!). Certainly that’s my impression of the world’s major religions and (some) ideologies. I think at most margins it’s a good sign if one’s moral framework makes one uncomfortable.
Second, Michael raises another compelling concern, one that applies to all of utilitarianism, not just EA:

“‘[G]ood’ isn’t fungible, and so any quantification is an oversimplification. Indeed, not just an oversimplification: it is sometimes downright wrong and badly misleading.”
I’m sympathetic to that. A lot of energy has been expended by a lot of brilliant minds wrestling with utilitarianism: so many problems, and yet it’s so hard to give up a commitment to at least “thin utilitarianism”. Reflecting on this did make me wonder, though, whether EA needs its “Rawlsian moment”. Rawls (see TiB 190) wrote A Theory of Justice as a way to address utilitarianism’s lack of respect for the “separateness of persons”. The result is an elaborate system of rights, principles and mechanisms that constrains utilitarianism’s totalising tendency. Perhaps EA - or some flavour of EA - needs a version of its own.
RELATED: Some effective altruists are offering cash prizes (up to $100K) for the best critiques of EA