Scope Insensitivity

  • Exponential increases in the amount of suffering alleviated generate only linear increases in willingness to pay
  • To be an effective altruist, you have to actually think through the numbers, rather than using your feelings as a guide
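
As a rough illustration of actually doing the numbers, here is a minimal Python sketch comparing what a fixed budget buys under two interventions; the costs are invented placeholders, not real charity figures.

```python
# Toy illustration of thinking through the numbers: compare what a fixed
# budget buys under two hypothetical interventions. The costs below are
# made-up placeholders, not real charity data.

interventions = {
    "deworming treatments": 1.0,      # hypothetical cost per treatment ($)
    "guide dog trainings": 40_000.0,  # hypothetical cost per trained dog ($)
}

budget = 10_000.0
for name, cost_per_unit in interventions.items():
    print(f"${budget:,.0f} buys {budget / cost_per_unit:,.2f} {name}")
```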

The Allais Paradox

  • Choose between the following two options
    • $24,000 with certainty
    • $27,000 with a 33/34 chance of winning
  • Now choose between the following two options
    • $24,000 with a 34% chance of winning
    • $27,000 with a 33% chance of winning
  • People choose the first option in the first scenario and the second option in the second
  • However, the second scenario is just the first with every winning probability multiplied by 0.34 (1 × 0.34 = 34% and 33/34 × 0.34 = 33%), so a consistent choice shouldn’t flip between the two (see the sketch after this list)
  • This preference reversal can be exploited to make a profit
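
A quick expected-value check makes the inconsistency concrete; this is a minimal sketch (not text from the essay) using Python's standard-library fractions module.

```python
from fractions import Fraction

# The four Allais gambles as (payout, probability of winning).
gambles = {
    "1A": (24_000, Fraction(1)),        # $24,000 with certainty
    "1B": (27_000, Fraction(33, 34)),   # $27,000 with probability 33/34
    "2A": (24_000, Fraction(34, 100)),  # $24,000 with a 34% chance
    "2B": (27_000, Fraction(33, 100)),  # $27,000 with a 33% chance
}

for name, (payout, p) in gambles.items():
    print(f"{name}: expected value = ${float(payout * p):,.2f}")

# Scenario 2 is scenario 1 with every winning probability scaled by 0.34,
# i.e. the same gambles gated behind a single 34% "you get to play" event:
scale = Fraction(34, 100)
assert gambles["1A"][1] * scale == gambles["2A"][1]  # 1 * 0.34 == 34%
assert gambles["1B"][1] * scale == gambles["2B"][1]  # 33/34 * 0.34 == 33%
```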

Zut Allais

  • Money pump - ask people to bet on two separate outcomes, knowing that a preference reversal (as described above) will occur
  • Then arbitrage the bets against each other to ensure that you win no matter what the actual outcome is
  • People will defend taking the losing side in money pump scenarios, even when the outcomes are pointed out to them
  • People also attach a huge premium to certainty - the amount they will pay to move from 99% to 100% is far larger than to move from 98% to 99%, even though both steps add the same expected value (see the check after this list)
  • However, one scenario’s certainty is another scenario’s probability, and you can exploit that to create a money pump
  • The warm fuzzy feeling of certainty is just a feeling, and you shouldn’t use it to decide which outcome to bet on
  • When something truly important is at stake, do the math and take your time
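
The certainty-premium point can be checked with one line of arithmetic: for a fixed payout, each percentage point of probability adds the same expected value wherever it falls. The $10,000 stake below is a hypothetical example.

```python
# For a fixed payout, moving from 98% to 99% adds exactly as much expected
# value as moving from 99% to 100% -- so paying a much larger premium for
# the second step is inconsistent. The $10,000 stake is hypothetical.

payout = 10_000

def expected_value(p):
    return p * payout

step_98_to_99 = expected_value(0.99) - expected_value(0.98)
step_99_to_100 = expected_value(1.00) - expected_value(0.99)

print(step_98_to_99)   # ~100.0
print(step_99_to_100)  # ~100.0
```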

The ‘Intuitions’ behind ‘Utilitarianism’

  • “Intuition” in this essay means “the basic building blocks of reasoning”
  • Even though these building blocks aren’t perfect, we can’t get rid of them
  • Many of the counterintuitive aspects of utilitarianism go away if you stop taking people’s moral intuitions as gospel truth and accept that people are often factually wrong about probabilistic reasoning
  • People prefer to follow unconditional social rules, even in a probabilistic world
  • Signalling unconditional, unlimited commitment is a great social strategy but a poor decision-making strategy
  • When lives are at stake, shut up and multiply
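
A minimal sketch of "shut up and multiply"; the specific numbers (400 lives saved for certain vs. a 90% chance of saving 500) are an illustrative hypothetical, not figures taken from the essay.

```python
# Compare expected outcomes instead of going with the option that feels
# safer. The numbers here are an illustrative hypothetical.

options = {
    "save 400 lives with certainty": 1.00 * 400,
    "90% chance to save 500 lives": 0.90 * 500,
}

for name, expected_lives in options.items():
    print(f"{name}: {expected_lives:.0f} expected lives saved")
# The gamble comes out ahead (450 vs. 400 expected lives),
# even though certainty feels better.
```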

Ends Don’t Justify Means (Among Humans)

  • What if you’re running on corrupted hardware?
  • Bayesian decision theory assumes you’re correctly calculating probabilities
  • You can dodge thought experiments that attempt to pit deontological rules against utilitarian outcomes by pointing out that the deontological rules exist to guard against miscalculations of utility (see the toy model after this list)
  • Worrying about AIs becoming corrupted by power is a misplaced fear
    • The effect of being corrupted by power is a result of specific elements of human neuroarchitecture
    • An AI wouldn’t have this problem insofar as its mind design would be alien to ours and free of those human temptations
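
Below is a toy Monte Carlo model of the corrupted-hardware argument; every parameter is invented for illustration. An agent whose utility estimates carry a self-serving upward bias does worse by acting on naive expected utility than by following a blanket refusal rule, which is the role the deontological guardrail plays.

```python
import random

# Toy model (all parameters invented): "do something shady for the greater
# good" acts are usually net-negative, but the agent's estimate of them is
# biased upward by self-interest.
random.seed(0)
N = 100_000
SELF_SERVING_BIAS = 2.0  # hypothetical upward bias in the agent's estimates
ESTIMATE_NOISE = 1.0     # hypothetical noise in the agent's estimates

naive_total = 0.0  # acts whenever the act *looks* net-positive
rule_total = 0.0   # flat "never do it" rule; always forgoes the act

for _ in range(N):
    true_utility = random.gauss(-1.0, 1.0)  # usually net-negative in truth
    estimate = true_utility + SELF_SERVING_BIAS + random.gauss(0.0, ESTIMATE_NOISE)
    if estimate > 0:  # the corrupted calculator says "go ahead"
        naive_total += true_utility
    # the rule-follower never acts, so rule_total stays 0

print(f"naive expected-utility agent: {naive_total / N:+.3f} avg utility per case")
print(f"rule-following agent:         {rule_total / N:+.3f} avg utility per case")
```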