- In the past, Eliezer thought that the Singularity was the best way to make everyone more intelligent
- Thought that superintelligence implied supermorality
- Initially, Eliezer was very reluctant to put mathematical definitions around intelligence
- Wanted to avoid building narrow AIs optimized for particular tasks
- Deliberately avoided “traditional” artificial intelligence research, believing it to be a dead end
- This forced him to study other fields, such as neuroanatomy and cognitive science
- Even though this was a mistake, studying those other fields had benefits
- In retrospect, however, the fact that Eliezer didn’t quantify intelligence as a mathematical function was a warning sign
- Eliezer pattern-matched any warnings about the dangers of technology to people attempting a false compromise between technological progress and stasis
- There is strong historical evidence that holding back technological progress causes clear and lasting harm
- However, doing good isn’t the same as doing the opposite of evil
- Studying nanotechnology alerted Eliezer to the problem of X-Risk (existential risk)
- For the first time, he saw the possibility of unchecked technological progress leading to human extinction
- However, this realization did not cause Eliezer to reconsider his support for AI
- Eliezer started seeing AI as a way to safeguard humanity from other X-Risks
- Eliezer was very good at finding holes in others’ arguments
- However, there is no rule in nature that says the best argument has to be correct
- People can be correct, even when they have inferior arguments
- Eliezer set out to prove that building a superintelligence was indisputably moral
- However, Eliezer still hadn’t come up with a mathematical description of morality
- Therefore, any attempts to reason around this uncertainty were flawed
- You can’t assume any properties about the thing you’re uncertain about, because the very fact of your uncertainty means you don’t know those properties
- Eliezer first started to take AI X-Risk seriously when he considered what the AI would do on the off chance that there was no overall meaning to human life
- What would a “backup” morality look like?
- This prompted Eliezer to start thinking of morality in more rigorous terms so that it could be inscribed into a computer program
- This rigor led Eliezer to begin to realize that superintelligences wouldn’t automatically be moral: there were some instances in which they would be immoral