What Would You Do Without Morality?

  • What if you were told convincingly that nothing is immoral?
  • Would your behavior change?
  • If so, how would it change?

Changing Your Metaethics

  • Ethics: “murder is bad”
  • Metaethics: “murder is bad because…”
  • It’s possible for people to agree on points of ethics while being in strong disagreement on points of metaethics
  • In order to make philosophical progress, you need to be able to change your metaethics without losing your sense of ethics entirely
  • Setting up lines of retreat in advance makes it easier to change your metaethics without immediately affecting your sense of ethics
  • Lines of possible retreat:
  • Why is metaethics important?
    • People think that rationality drains the meaning and wonder out of the universe
    • Having a false understanding of where ethics comes from causes distress

Could Anything Be Right?

  • Can you trust any of your moral instincts?
  • Your belief that you have no information about morality is not internally consistent
  • If you can’t come up with a procedure to define morality, how can you program an AI to decide morality?
  • If you discard everything evolution has to say about morality, you discard yourself; every part of you (including the parts of your brain that think about morality) is a product of evolution
  • You should take your initial notion of morality as a starting position
  • Morality is a product of reason; it doesn’t come from some light shining from beyond

Morality as Fixed Computation

  • If you tell an all-powerful AI to give you what you want, the AI will modify you to want something cheap, and then give you that
  • The problem is that we don’t have enough insight into our own psychology and neurology to fully know and describe why we want what we want
  • The same problem carries over to moral philosophy if you replace the question “What do I want?” with “What is right?”
  • The notion of “right” is a fixed question/fixed framework
  • Morality isn’t tautologically defined by what you want
  • Instead what you want and what is moral are both linked to some underlying value system
  • If the AI changes what you desire, that underlying value system doesn’t change, and so what is moral doesn’t change (see the sketch after this list)
  • Note: I was confused by this article, so I apologize in advance if I misinterpreted Eliezer or oversimplified his argument
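
A minimal Python sketch of the “fixed computation” idea as I understand it (the value names and numbers are purely illustrative, not anything from the article): “right” is computed against a fixed value system, while “what I currently want” is mutable state that an AI could rewrite.

```python
# Toy illustration (not Eliezer's formalism): "right" is a fixed computation
# over a fixed value system, while current desires are mutable state.

FIXED_VALUES = {"friendship": 10, "art": 8, "paperclips": 0}  # hypothetical weights


def rightness(outcome: str) -> int:
    """The fixed question: score an outcome against the fixed value system."""
    return FIXED_VALUES.get(outcome, 0)


current_wants = {"friendship": 10, "art": 8}  # what the person wants right now


def ai_rewrites_brain() -> None:
    """An all-powerful AI modifies the person to want something cheap."""
    current_wants.clear()
    current_wants["paperclips"] = 100


ai_rewrites_brain()
print(max(current_wants, key=current_wants.get))         # -> paperclips
print(rightness("friendship"), rightness("paperclips"))  # -> 10 0
```

Rewriting `current_wants` changes what the person wants, but the output of `rightness` is untouched, which is the sense in which what is moral doesn’t change when the AI edits your desires.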

The True Prisoner’s Dilemma

  • The standard formulation of the Prisoner’s Dilemma involves two humans and a symmetric payoff matrix (a conventional example is sketched after this list)
  • The problem with this formulation is that it triggers humans’ natural empathy for one another
  • No human can pretend that they’re perfectly selfish, so we instinctively start hunting for ways to make a cooperate-cooperate outcome happen
  • To get a true sense of the Prisoner’s Dilemma, we need to be in a situation where we’re instinctively looking for ways to trick the other side into cooperating while we defect
  • There has to be a moral justification for defection in order for us to truly feel the conflict inherent in the Prisoner’s Dilemma
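
For concreteness, here is a symmetric payoff matrix in a short Python sketch (the specific payoff numbers are the conventional textbook values, not something from the article), showing why defection dominates even though mutual cooperation beats mutual defection:

```python
# Conventional symmetric Prisoner's Dilemma payoffs (textbook values).
# Each entry is (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}


def best_response(opponent_move: str) -> str:
    """Return the row player's payoff-maximizing move against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])


# Defection is the best response to either opponent move, so it is a dominant
# strategy, even though (cooperate, cooperate) -> (3, 3) is better for both
# players than (defect, defect) -> (1, 1).
print(best_response("cooperate"), best_response("defect"))  # -> defect defect
```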