- Initial Thread:
- Wei Dai
- Discussion is about what is preventing academia from recognizing certain steps in intellectual progress
- Why didn’t academics come up with Bitcoin or TDT
- Eliezer Yudkowsky
- Academic system doesn’t promote “real work”
- Trying to get productive work done in academia means ignoring all of the incentives in academia that point against productive work
- People who have trouble seeing the problems with academia are blinded by:
- Inadequate fluency with Meditations On Moloch
- Status blindness
- Assigning non-zero positive status to being in academia
- Can we get academics to take the non-academic work being done by MIRI et al. seriously?
- OpenPhil hasn’t been very good at getting research done on AI alignment
- The obvious strategy is to not subject yourself to academic incentives
- Abandon peer review (by academics)
- Mailing lists work better than journals
- Subthread 1:
- Wei Dai
- Academia has delivered deep and important results
- We need a theory that explains why academia has been able to do certain things but not others (or maybe why the situation has deteriorated)
- We should be worried that academia is not able to make progress on AI alignment
- Qiaochu Yuan
- Is it correct to speak of academia as a single entity?
- Wei Dai
- What distinguishes the parts of the academia that are productive from the parts that are not?
- Is the problem that academia is focusing on the wrong questions?
- How can we get academia to focus on higher priority topics?
- Subthread 2:
- Eliezer Yudkowsky
- Things have gotten worse in recent decades
- Maybe if we had the researchers from the ’40s, we’d do better
- OpenPhil is better than most funding sources, but they don’t “see past the pretend”
- Most human institutions don’t solve particularly hard mental problems
- Rob Bensinger
- It’s not actually clear that the researchers from the ’40s would do better given current knowledge
- Progress in quantum mechanics has proceeded similarly to progress in AI
- Academics were talking about AI alignment as a potential problem as far back as 1956
- Wei Dai
- Maybe human brains and the tools of the 20th century are just bad at philosophical issues
- We see a slowdown in all fields because we’re waiting on philosophical breakthroughs
- AI happens to be more affected by this slowdown than other fields
- Rob Bensinger
- Prior to 1880, human inquiry was good at exploring nonstandard narratives, but bad at rigorously demanding testing and precision
- Between 1880 and 1980 we solved the problem by requiring precision and testing, which allowed science to get a lot of low-hanging fruit really fast
- But the problem with requiring precision and testing is that it prevents you from exploring “weird” problems at the edge of your conceptual boundaries
- The process of synthesizing “explore weird nonstandard hypotheses” with “demand precision and rigor” is one that’s progressing in fits and starts
- Subthread 3:
- Vladimir Slepnev
- Why isn’t Nick Bostrom excited about TDT/UDT?
- Academia has a tendency to go off in wrong directions, but its direction can be influenced with understanding and effort
- Wei Dai
- What are some examples of academia going off in the wrong direction and getting corrected by outsiders?
- Vladimir Slepnev
- It’s easier to influence the direction that academia goes in from the inside
- Maxim Kesin
- The price of getting into academia at a level high enough to influence the direction of a field is very high
- Wei Dai
- Is there a subset of the steps in each field that needs to be done by outsiders or newcomers?
- Vladimir Slepnev
- Doesn’t understand the hate against academia
- Wei Dai
- People on LessWrong understood UDT just fine, so why can’t academics get it?
- Vladimir Slepnev
- Academia hasn’t accepted TDT/UDT because it hasn’t been framed correctly
- Subthread 4:
- Stuart Armstrong
- The problem is both specialization and lack of urgency
- People found the paper about “interruptible agents” interesting, but not necessarily significant
- The paper was helpful, however, to help him learn how to model and to present ideas
- MIRI doesn’t tell people why they should care about its topics or results
- Subthread 5:
- Eliezer Yudkowsky
- Most big organizations don’t do science
- Most big science organizations aren’t doing science, they’re performing rituals that look like science
- Achieving a research-level understanding of most topics is like climbing a mountain
- This climb isn’t progress, it’s debt
- The Debt
- Poor exposition - bad explanations of an idea
- Poor abstractions and notation
- Noise - no way to know which research papers you should be looking at
- Interpretive labor
- Tradeoff between the energy used to explain an idea and the energy required to understand it (see the illustrative cost model below)
- One-to-many communication multiplies the cost of understanding, because every reader has to do the interpretive work for themselves
- Specialization occurs because the cost of understanding grows as the number of people communicating increases
- Research debt is the accumulation of missing interpretive labor
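- Illustrative cost model (my own back-of-the-envelope sketch, not from the source; the symbols $E$, $U$, and $N$ are assumptions): suppose an author spends explanation effort $E$, and each of $N$ readers then pays an understanding cost $U(E)$ that shrinks as $E$ grows, so
  $$\text{total effort} \approx E + N \cdot U(E)$$
  Because the reader term scales with $N$, extra explanatory effort pays for itself once the audience is large; when authors skip it, the unpaid $N \cdot U(E)$ is what accumulates as research debt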
- Clear Thinking - need better abstractions and notations
- Research distillation
- Need to make ideas understandable to people outside of a given field
- Requires deep scientific understanding, empathy, and design
- As labor intensive as coming up with the original discoveries in the first place
- Where are the distillers?
- No incentives to support distillation
- Distillation work isn’t seen as “real research”
- An ecosystem for distillation
- Distill Journal - venue to give traditional recognition to nontraditional contributions
- Distill Prize - $10,000 prize to acknowledge outstanding explanations of machine learning
- Distill Infrastructure - tools for making beautiful interactive essays
- Thinking, Fast and Slow is great, but it isn’t perfect
- Studies haven’t held up in the replication crisis
- Hot hand effect appears to be real
- Organ donation rates
- It’s much more difficult to opt out of organ donation in countries that have organ donation by default
- Not a checkbox on the license form
- Prospect theory seems to be as unrealistic as perfect rationality and is much less mathematically tractable
- Voodoo death
- People dying after being cursed by witch doctors
- Even if magic isn’t real, people who believe in it may waste away out of fear
- The problem with voodoo death is that it seems plausible, but there isn’t any evidence for it
- A lot of psychological phenomena look pretty voodoo
- Placebo effect
- Voodoo effect in reverse
- Initially people were claiming all sorts of benefits from placebos
- New studies show that placebo effect is weak and largely limited to controlling pain
- Stereotype threat
- If people think that others think they’ll do badly on a test, they’ll actually do worse
- Doesn’t replicate well in large studies
- Self-esteem
- Popular in the mid-90s
- Failed to replicate in later studies
- Name preference effect
- Unconscious social priming
- Artificial surveillance cues don’t increase generosity
- Implicit association tests mostly don’t work
- Common thread in all examples: shift away from the power of the unconscious
- Maybe our conception of the unconscious is overly broad
- Most scientific “mavericks” were either doubted for a short period of time or were part of a moderate-sized dissenting movement
- After a few years (between 10 and 30) their contributions were recognized
- While scientific consensus may be flawed, it doesn’t ignore contrary evidence for long periods of time
- Replication crisis
- Scientists take the replication crisis pretty seriously
- Took about 10 years to go from something that only a few people were talking about to something that the mainstream consensus took seriously
- Rationalists were ahead of the curve, but not that far ahead
- Nutrition
- Most nutrition scientists don’t believe in the old paradigm of all calories being equal and fat being bad for you
- If the old paradigm continues to be popular, it’s because of inertia in the media and popular culture
- Social Justice
- There have been meta-analyses since 2009 showing that Implicit Association Tests aren’t a good test for bias
- Problems with stereotype threat have been covered in the mainstream media
- While there are authors arguing against gender differences, they’re not considered to be part of the scientific consensus anymore
- The psychological consensus includes genetic psychological differences between population groups
- Nurture assumption and blank-slatism
- It took about 10 years for people to realize that genetics confounds studies of developmental outcomes
- Intelligence explosion and AI risk
- Many AI researchers take the notion of AI risk seriously
- While the scientific consensus hasn’t shifted wholly in favor of taking X-Risk seriously, it’s no longer completely against X-Risk
- IQ
- Most expert psychologists and applied psychologists agree that IQ tests measure cognitive ability and are culture-neutral
- Even when experts disagree about IQ, their disagreements seem to be limited and well-reasoned
- The process of trying and discarding ideas is the process by which science progresses
- Scientific consensus has been accurate to an almost unreasonable degree
- That said, we shouldn’t overly respect scientific consensus
- The only reason that consensus ever changes is that people look for evidence against the consensus, and then present it
- It’s really easy to be misinformed about what the consensus is