Math Is Subjunctively Objective

  • It’s easy to establish whether a mathematical statement is true or false, but it’s more difficult to assess what it means for a mathematical statement to be true or false
  • Does a statement like 2 + 3 = 5 exist even without human brains to process it?
  • If you have a broken calculator that outputs 2 + 3 = 6, that doesn’t mean that 2 + 3 = 6 is correct – it just means you have a broken calculator (see the short sketch after this list)
  • We can imagine counterfactual worlds in which we get different answers to mathematical questions, but that doesn’t mean that those worlds are as correct as ours
  • If something appears the same way to everyone, regardless of their background, then maybe that thing is as it appears
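
A minimal code sketch of the broken-calculator point above (the function name and the bug are illustrative assumptions, not anything from the original post): the buggy output is a fact about the device, not about arithmetic.

```python
def broken_add(a: int, b: int) -> int:
    """A deliberately buggy adder, standing in for the broken calculator."""
    return a + b + 1  # off-by-one bug: the *device* is wrong, not arithmetic

print(broken_add(2, 3))           # 6 – what the broken calculator outputs
print(2 + 3)                      # 5 – what 2 + 3 actually is
print(broken_add(2, 3) == 2 + 3)  # False: a fact about the calculator, not about 2 + 3
```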

Does Your Morality Care What You Think?

  • Is morality like math? Does it exist as a truth regardless of how we compute it?
  • People seem to get morality from their teachers, not a mysterious external referent
  • How are moral preferences different from other sorts of preferences? What makes moral judgments carry more weight than other preferential statements?
  • There is such a thing as morality – saying that murder is right is no more correct than saying that 2 + 3 = 6

Changing Your Metaethics

Setting Up Metaethics

  • There are two broad camps regarding morality
    • Morality as fixed given
      • Morality exists regardless of whether any human perceives it
      • Right and wrong don’t change depending on your opinion
    • Morality as preference – morality exists in people and nowhere else
  • Both have advantages and disadvantages
    • Fixed given
      • Seems more normal at the level of everyday conversations
      • Underlies our intuitions about moral error and moral progress
      • Cleanly separates “I want to do X” from “It is right to do X”
      • Has to contend with Moore’s Open Question – for any proposed definition of morality (“morality is X”), it remains an open question whether X really is moral; it’s not self-evident that morality is equivalent to anything else
    • Preference
      • More compatible with reductionism
      • Has an easier time answering questions like “What is morality?” and “Where does morality come from?”
      • Struggles with the is/ought problem – if morality is just a reflection of our preferences, why should it carry any more weight than any other preference?
  • Ideally we want a meta-ethic that:
    1. Results in “moral normality” – includes notions of “moral error”, “moral progress”, and things that you should do even though you don’t want to
    2. Fits in a naturalistic universe without exceptions to reductionism
    3. Does not oversimplify complicated moral arguments
    4. Answers (or dissolves) the standard metaethical paradoxes

The Meaning of Right

  • Instead of asking yourself, “What is right?” ask yourself, “Why does my brain label some things as right?”
    • The second question isn’t the same as the first, as it is empirical rather than ethical
    • However, knowing why we think some things are right can help answer whether they are right
  • However, to begin this process, we must first ask ourselves, “What is right?” and look at the answers
    • Moral justification seems to run backwards in time – if an outcome A is moral, then an action B such that B -> A is also moral
    • At some point, though, we have to justify whether an action is moral without saying that it leads to another action which is moral
      • If flipping a switch will pull a child out of the path of a runaway train, the switch-flip is justified by the rescue – but eventually you still have to justify why saving the child is good in itself
    • All of our moral arguments seem to end in “self-evident” truths
    • How do you define “right” without using words like “good”, “desirable”, “preferable”, “moral”, or “should”?
  • Now that we have some idea of what feels right and what doesn’t feel right, we can try to look at why we have those feelings
    • In a sense, our concept of “should” acts as an inverse to our concept of “could”
      • “Could” starts at a present point in time and runs forward
      • “Should” starts at a future point in time and runs backward
    • It seems like both concepts are the result of the same underlying mental structures
    • We reason about the moral properties of objects and actions in the same way that we reason about physical properties
      • This is because we already have highly developed cognitive structures for reasoning about the physical properties of objects
      • Reusing those same structures to reason about moral properties is evolutionarily efficient
    • So is our notion of morality a result of the mind-projection fallacy?
    • Not necessarily
      • We can talk about derived properties of objects as if they were fundamental properties, if the derived property is the result of a stable and well-defined computation
      • For example: we can talk about “red” or “green” apples even though our perception of color is a heavily mediated process and our perception of “red” and “green” is the result of significant computations on top of the raw sense-data entering our eyes
    • If we were designing a machine to save lives, it would reason backwards from future states in which lives were saved
    • If the backwards chain of reasoning led to an action that the machine could take right now, it would assign a positive label to that action (a minimal code sketch of such a machine appears at the end of this section)
  • When we look at actions and decide whether they’re “moral” or not, we’re behaving much like that little machine
    • When we look at an action and label it “moral” we’re saying that action will result in a more moral world
    • Only, in our case, we’re not just looking at saving lives but also:
      • Maximizing happiness
      • Maximizing people’s freedom and autonomy
      • Many other things
    • This is a very complex non-verbalizable function that we’re trying to maximize
  • However, just because this function is evaluated in our brain doesn’t mean that it’s an opinion
    • Saying “murder is right” is just as mistaken as saying 2 + 3 = 6
  • Where does this function come from?
    • Natural selection
    • The fact that this function comes from our evolutionary heritage is not sufficient grounds to reject it
    • Instead of rejecting our moral intuitions and attempting to build morality from an unachievable emptiness, we should take our moral intuitions as a starting point and seek to improve them
    • There is no external source of morality outside of humanity, nor should we want there to be one
  • When attempting to persuade others that an action is moral, we must assume the person we’re persuading is fundamentally human-like – there is no argument that can push morality into a rock (or a paperclip maximizer)
  • Our sense of “right” and our sense of “morality” are both pointing to the same enormously complex calculation that our brains are carrying out whenever we consider whether an action is moral
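
The following is a minimal, hypothetical sketch of the “little machine” described above – the world model, action names, and data are illustrative assumptions, not anything specified in the original posts. It chains backward from a desired future state (“should”-style reasoning, future to present) until it reaches an action available now, and assigns that action a positive label; “could”-style reasoning would run the same world model forward from the present instead.

```python
# Illustrative toy world model: each action or state maps to the state it brings about.
EFFECTS = {
    "flip_switch": "child_off_tracks",
    "child_off_tracks": "child_alive",
}

AVAILABLE_NOW = {"flip_switch", "wave_arms"}  # actions the machine could take right now

def label_actions(goal: str) -> set[str]:
    """Chain backward from the desired future state and return the currently
    available actions that lie on some chain leading to it."""
    labeled, seen = set(), set()
    frontier = {goal}
    while frontier:
        wanted = frontier.pop()
        if wanted in seen:
            continue
        seen.add(wanted)
        for cause, result in EFFECTS.items():
            if result == wanted:
                if cause in AVAILABLE_NOW:
                    labeled.add(cause)   # assign the positive ("should do") label
                else:
                    frontier.add(cause)  # keep chaining further back in time
    return labeled

print(label_actions("child_alive"))  # {'flip_switch'}
```

The real moral labeling differs from this toy in that the target is not a single goal state but the enormously complex, non-verbalizable function over outcomes described above.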