- Interpersonal morality should be constructed out of personal morality
- If five people come across a pie, and one of them wants the whole pie, then it is right for the four to prevent the fifth from taking the whole pie, even if they have to use force to do so
- Does the concept of individual rights have any meaning?
- When we talk about individual rights, we’re talking about violations where we expect a third party to intervene on the side of the person whose rights are being violated, and a fourth party to support the third party’s intervention
- However, this is not sufficient to construct a meta-ethic – if you fell back in time to when slavery was socially acceptable, it would still be morally right to help an escaped slave
- Rights only exist in relation to obligations to other people – there is no such thing as a right that does not impose an obligation on others
- Interpersonal morality is a special case of individual morality, not the other way around
- This is because only individuals are guaranteed to take unified action
- Groups can split, with one part doing one action and the other part doing some other action
- However, because we humans have been arguing about interpersonal morality for so long, we have specific adaptations for dealing with interpersonal morality
- One of these adaptations is universalizability
- We desire to frame moral arguments in a form that will allow them to jump between people
- Some of our moral arguments have become transpersonal – they transcend specific contexts and tribes
- Transpersonal moral arguments are moral arguments that reflect the psychological unity of humankind
- Even the most universalizable transpersonal moral argument won’t work on a rock or a paperclip maximizer – those are not things that you can have a moral conversation with
- It’s difficult to say how much moral agreement or disagreement there would be if people had better access to their moral intuitions
- Moral disagreements might be dissolved by considering different arguments or by looking at things from a different perspective
- Moral disagreements might be reinforced by the dispelling of illusions of moral agreement
- Suppose you try to build an AI that does what you want
- This AI could modify you to want something that’s easy to obtain, and then give you lots of that
- If you try to make it so that the AI can’t modify the programmer, then the AI can’t talk to the programmer, since to communicate is to modify
- There is a duality between this problem and the problem of moral philosophy
- In both cases merely wanting something doesn’t make it right
- We don’t have introspective access to our own morality
- We don’t ask ourselves, “What will I decide to do?” in an abstract sense; we ask ourselves that question in the context of having goals
- What we call “right” is a fixed framework that grows out of our shared human heritage
- The one sentence summation of Eliezer’s meta-ethics is: there is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life
- Whenever people think about goodness, it’s in relation to specific other things
- However, due to the way our brains work, it feels like goodness is a separate thing, a noun that can exist on its own, rather than an adjective which has to be applied to other nouns
- “Goodness” can’t be replaced with “utility function”, since you can construct a mind that embodies any given utility function
- The moment you start asking, “Which utility function should I use,” you’re back to thinking about what you value, and which utility function would best preserve those values
- Your values can change in response to arguments, but there is no form of argument that can work independently of human minds
- So is morality then just a quirk of human psychology that ought to be abandoned?
- No. If we value things because we are human, then that is justification enough
- Worrying about the lack of truly universal moral arguments just causes existential angst for little gain
- An allegorical story around aliens whose sense of morality is based upon whether heaps of pebbles contain a prime number of pebbles
- Like humans and human morality, these aliens don’t have an indenpendent justification of why prime numbers are “correct” and composite numbers are “incorrect”
- When they build an AI, they naively think that the AI will automatically learn that prime numbers are correct and will only build prime-heaps
- The essay is an illustration of the orthogonality thesis, which states that any morality can be combined with any level of intelligence
- So if all morality in humans has the same starting point, then why are there so many moral disagreements?
- Humans do not have complete access to the output of their own morality
- Disagreement has two prerequisites
- Possibility of agreement
- Possibility of error
- There are numerous ways in which a person could be mistaken about their own morality
- Mistaken beliefs about the world
- Mistaken meta-ethics
- Unquestioned cultural influences
- Assuming that fellow humans share an entirely different moral reference frame than you is an extreme position to take
- Saying, “There is nothing to argue about, we are merely different optimization processes,” is something we should say to paperclip maximizers, not fellow humans