- Predictive Processing is a proposed unifying framework for how the brain works
- Begins by asking, “How does raw sense data get turned into a unified picture of the world?”
- Brain is a multi-layer prediction machine
- 2 streams
- Bottom-up stream of sense-data that is progressively filtered and abstracted
- Top-down stream of predictions - starts with abstract models and generates predictions for lower layers
- Both streams are probabilistic
- 2 streams interact with each other at each layer of cognitive processing, using Bayes’ theorem to integrate the two sources of probabilistic evidence (a worked sketch follows this section)
- Possible outcomes of this interaction
- Prediction and sense data match: the models are good and the layer stays quiet
- Low-precision sense data mismatches high-precision predictions
- Prediction is probably more correct than sense data
- Reinterpret sense data to match prediction
- Unresolvable conflict between sense data and prediction
- Signals “surprisal” (information-theoretically, the negative log-probability of the observation)
- An alert that higher levels have to update their models
- Surprisal signals propagate upward, becoming the “low-level” data stream for the layers above
- Perception is the result of these interactions
- Top-down models explain why our brains can recognize objects in noisy images, read words whose letters are jumbled, and automatically skip over grammatical errors
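To make the Bayesian hand-off between the two streams concrete, here is a minimal sketch, assuming both streams are Gaussian; the function names and numbers are mine, not the source’s. The posterior is a precision-weighted average of prediction and sense data, and surprisal is the negative log-probability of the data under the prediction.

```python
# Minimal sketch of one layer combining a top-down prediction with
# bottom-up sense data, assuming both are Gaussian.
import math

def integrate(pred_mean, pred_sd, sense_mean, sense_sd):
    """Posterior mean/sd: precision-weighted average of the two streams."""
    pred_prec = 1 / pred_sd**2    # precision = inverse variance
    sense_prec = 1 / sense_sd**2
    post_prec = pred_prec + sense_prec
    post_mean = (pred_prec * pred_mean + sense_prec * sense_mean) / post_prec
    return post_mean, math.sqrt(1 / post_prec)

def surprisal(pred_mean, pred_sd, sense_mean, sense_sd):
    """-log N(sense_mean; pred_mean, pred_sd^2 + sense_sd^2), in nats."""
    var = pred_sd**2 + sense_sd**2
    return 0.5 * (math.log(2 * math.pi * var) + (sense_mean - pred_mean)**2 / var)

# High-precision prediction vs low-precision data: posterior hugs the prediction
print(integrate(pred_mean=0.0, pred_sd=0.1, sense_mean=1.0, sense_sd=1.0))
# A large, unresolvable mismatch yields high surprisal -> update higher models
print(surprisal(pred_mean=0.0, pred_sd=0.1, sense_mean=5.0, sense_sd=0.2))
```

With a high-precision prediction and low-precision data, the posterior hugs the prediction (the “reinterpret sense data” case above); a large mismatch produces high surprisal, the signal that higher-level models need updating.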
- How does Predictive Processing explain pretty much everything about our perception?
- Attention
- Attention corresponds to the precision (the “confidence interval”) the brain assigns to predictions versus incoming sense data (see the sketch after this list)
- High attention: perception is driven by the bottom-up stream of sense data
- Low attention: perception is driven by top-down stream of predictions from models
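A toy illustration of that weighting (the numbers are hypothetical, and attention-as-sensory-precision is my reading of the note): raising the precision assigned to the bottom-up stream pulls perception toward the data; lowering it leaves perception dominated by the prediction.

```python
# Toy sketch: attention modeled as the precision given to the sensory stream.
def posterior_mean(pred_mean, pred_prec, sense_mean, sense_prec):
    """Precision-weighted average of prediction and sense data."""
    return (pred_prec * pred_mean + sense_prec * sense_mean) / (pred_prec + sense_prec)

prediction, data = 0.0, 1.0
print(posterior_mean(prediction, 10.0, data, 100.0))  # high attention: ~0.91, near the data
print(posterior_mean(prediction, 10.0, data, 0.1))    # low attention:  ~0.01, near the prediction
```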
- Imagination, dreaming, etc
- The highest levels of our brain make predictions
- As these predictions filter down into lower levels, they’re turned into specific sensations that we ought to be perceiving (which are then compared to incoming sense data)
- Dreams and hallucinations are predictions that are either unconstrained or are weakly constrained by sense-data
- Priming
- Priming constrains the set of predictions, and thus makes it more likely that the brain will decode sensory data in a particular way
- Learning
- The brain is constantly creating models and generating sense-data predictions from them
- Models that predict well are retained, while models that predict poorly are discarded (see the sketch after this list)
- Hyperpriors - priors that one must have to form coherent models in the first place
- Synchrony of the senses - the different senses describe the same world
- Object permanence - things don’t go away when we stop looking at them
- While hyperpriors may be innate, infant behavior indicates that even hyperpriors may be learned
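One way to cash out “retain models that perform well,” as a minimal sketch rather than the source’s algorithm: score candidate models by their average surprisal on incoming data and keep the best. The model names and parameters below are hypothetical.

```python
# Sketch: model selection by predictive performance (average surprisal).
import math

def avg_surprisal(model_mean, model_sd, observations):
    """Average -log N(obs; model_mean, model_sd^2) over observations, in nats."""
    var = model_sd**2
    return sum(0.5 * (math.log(2 * math.pi * var) + (x - model_mean)**2 / var)
               for x in observations) / len(observations)

observations = [0.9, 1.1, 1.0, 0.95, 1.05]
candidates = {"model A": (0.0, 1.0), "model B": (1.0, 0.1)}  # (mean, sd), hypothetical
best = min(candidates, key=lambda m: avg_surprisal(*candidates[m], observations))
print(best)  # model B predicts the data better, so it is retained
```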
- Motor behavior
- Predictive processing isn’t something that informs our motor control; it is our motor control
- Sometimes it’s easier to update the world to fit the model rather than updating the model to fit the world
- The driver for us to move our bodies is a prediction that our bodies will be in a particular location; we move to make that prediction come true (see the sketch below)
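A toy closed-loop sketch of that idea (a simple proportional step of my own devising, not a claim about real motor circuitry): the prediction “my hand is at the target” is held fixed, and the prediction error is discharged by movement rather than by updating the model.

```python
# Sketch: acting to make a fixed prediction come true.
predicted_position = 1.0   # "my hand is at the target"
actual_position = 0.0
gain = 0.5                 # fraction of the error removed per step, hypothetical

for step in range(10):
    error = predicted_position - actual_position  # prediction error
    actual_position += gain * error               # move the world to match the model
print(round(actual_position, 4))  # ~1.0: the prediction has been made true
```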
- Placebo effect
- Placebos operate much like the priming described above
- Bias the brain’s models to interpret sense data in particular ways
- Neurochemistry
- Predictive processing provides a coherent framework for what various neurotransmitters actually do
- NMDA-glutamatergic system - top-down predictions
- AMPA-glutamatergic system - bottom-up sense data
- Dopamine - confidence intervals and other metadata
- Autism & Schizophrenia
- Both autism and schizophrenia can be interpreted as problems with updating models (contrasted in the sketch after this list)
- Autism: models are too precise
- Reality constantly deviates from models, generating surprisal signals
- Explains why autistic people get irritated by minor things, like tags on clothing
- Also explains why autistic people do well in certain fields that require high-precision models
- Schizophrenia: models are too broad & agree with sense data too readily
- More likely to assign significance to weak or noisy sense data
- Aren’t fooled by optical illusions (which take advantage of models’ ability to override sensory perceptions)
- Schizophrenic delusions are the result of the brain assigning significance to every bit of sense data, no matter how minor or noisy
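A toy contrast of the two profiles in the Gaussian terms used earlier (my framing of the note’s claim, not a clinical model): the same surprisal computation with the model’s precision set too high versus too low.

```python
# Sketch: one surprisal function, two precision settings.
import math

def surprisal(model_mean, model_sd, obs):
    """-log N(obs; model_mean, model_sd^2), in nats."""
    var = model_sd**2
    return 0.5 * (math.log(2 * math.pi * var) + (obs - model_mean)**2 / var)

minor_deviation = 0.3  # e.g. a clothing tag: small, ordinary sensory noise

# Overly precise model: even a tiny deviation generates large surprisal
print(surprisal(model_mean=0.0, model_sd=0.05, obs=minor_deviation))  # ~16 nats

# Overly broad model: the same deviation fits comfortably, so noisy data
# is accepted (and assigned significance) too readily
print(surprisal(model_mean=0.0, model_sd=2.0, obs=minor_deviation))   # ~1.6 nats
```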
- Predictive processing puts the rationalist project on a sounder scientific footing - it shows the mechanisms by which our brains update; we just need to figure out how to make this system update better and faster
- Predictive Processing is presaged by William Powers’ Perceptual Control Theory (PCT)
- PCT is a good theory but it’s better seen as an approximation of Predictive Processing
- If Predictive Processing does turn out to be the grand unified theory of the mind, PCT will be seen as an important precursor
- The one thing that Predictive Processing has trouble with is instincts
- If the brain is one big prediction/verification machine, where do “hard-wired” instincts fit into that?
- Maybe sometimes there’s enough of an evolutionary advantage to “hard-wiring” certain features (like seagulls recognizing beaks with red dots) that it happens
- Maybe instincts are the “default weights” assigned to models, and most people go along with the defaults