Predictive Processing outline

Architecture

The brain generates expectations about what will happen next and compares them to incoming signals; mismatches drive attention, learning, and behaviour.

The brain doesn't wait for the world to announce itself. It runs a generative model---a constantly updating forecast of what should happen next. Top-down predictions flow from higher areas towards sensory regions, meeting bottom-up signals coming in from the world. When predictions match the incoming data, processing is fast and fluent. When they don't match, you get a prediction error---a mismatch signal that spikes attention, charges the moment with affect, and triggers updating.

This architecture does three things at once. First, it makes perception efficient: if the model predicts accurately, the brain can fill in gaps and ignore redundant details, saving metabolic cost. Second, it makes learning targeted: prediction errors tell the system exactly where the model is wrong, so updates can be precise. Third, it explains why anticipation and outcome feel so similar---the same neural pathways activate whether you're expecting something or experiencing it. Anticipating reward recruits much of the same machinery as receiving reward, which is why threats to your needs trigger responses as strong as actual damage.

The key parameter is precision: how confident is the prediction, and how much should the error signal be weighted? When predictions are precise (high confidence), errors are treated as noise and the model stays stable. When predictions are imprecise (low confidence), errors drive rapid updating. Neuromodulators tune this precision dynamically, and bodily state shapes how prediction errors feel---urgent or interesting, threatening or exciting.
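The precision-weighting described above has a simple computational core: scale the prediction error by how much the incoming signal is trusted relative to the existing prediction. Here is a minimal sketch; the function name and the numbers are illustrative assumptions, not taken from any specific model in the literature.

```python
# Precision-weighted belief update: a prediction error is weighted by the
# relative confidence of the signal versus the prior. Illustrative only.

def update_belief(prior: float, observation: float,
                  prior_precision: float, signal_precision: float) -> float:
    """Return the posterior estimate after one observation.

    The prediction error (observation - prior) is scaled by how much the
    incoming signal is trusted relative to the existing prediction.
    """
    error = observation - prior
    weight = signal_precision / (signal_precision + prior_precision)
    return prior + weight * error

# A confident prediction barely moves; an uncertain one updates fast.
confident = update_belief(prior=10.0, observation=20.0,
                          prior_precision=9.0, signal_precision=1.0)   # -> 11.0
uncertain = update_belief(prior=10.0, observation=20.0,
                          prior_precision=1.0, signal_precision=9.0)   # -> 19.0
```

The same mismatch of 10 units moves the confident belief by 1 and the uncertain belief by 9, which is the whole trick: precision decides whether an error is treated as noise or as news.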

Practically, this means behaviour reflects whichever action minimises expected prediction error. You can change the world to match your predictions (make coffee because you predicted you'd have coffee). You can change your predictions to match the world (decide tea is fine after all). Or you can update the model so future predictions are better (learn the café is closed on Tuesdays). Most of the time, the system picks the cheapest option---and acting on a confident prediction is often cheaper than revising it, which is why habits persist even when you 'know better.'
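The three options above can be framed as a least-cost choice. This toy sketch assumes hypothetical cost values; the point is only that when acting on the old forecast is cheapest, the habit wins.

```python
# Minimising expected prediction error as a choice among three moves:
# act on the world, revise the current prediction, or retrain the model.
# The option names and costs are illustrative assumptions, not measured values.

def cheapest_response(costs: dict[str, float]) -> str:
    """Pick whichever option resolves the mismatch at least cost."""
    return min(costs, key=costs.get)

# Habit case: acting on the confident forecast is cheaper than revising it,
# so the habit runs even though you 'know better'.
costs = {
    "act_on_world": 1.0,       # make the coffee you predicted
    "revise_prediction": 3.0,  # decide tea is fine after all
    "update_model": 5.0,       # learn the cafe is closed on Tuesdays
}
choice = cheapest_response(costs)  # -> "act_on_world"
```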

Ways to think with this

Practical ways to use this neural mechanism to understand behaviour

WIP: Treat predictions as defaults
Heuristic: Prediction Engine. The brain predicts what should happen next---in the world and in the body. When predictions fail, you feel something, attention pivots, and behaviour updates.

The brain runs a generative forecast of what should happen next, and when predictions match incoming data, processing is fast and automatic. This means your expectations aren't decorative---they're the first draft of perception and action. The system will tend to confirm what it predicts unless the mismatch signal is strong enough to override it.

So what can you do? If you want a behaviour to run reliably, make it predictable. Design contexts so the right cue loads the right forecast before the moment arrives. And when you want to change behaviour, remember you're not just fighting the old response---you're fighting the prediction that generated it. Change what the system expects by changing the pattern it's learned to predict.

WIP: Filter by confidence
Heuristic: Bias vs Noise. Bias trades flexibility for precision; noise trades precision for flexibility. Brains tune this trade-off by context, stress, and uncertainty.

Predictive processing adjusts how much weight to give prediction errors based on precision---how confident is the model, and how trustworthy is the incoming signal? When predictions are precise, errors get treated as noise and the model stays stable. When predictions are imprecise, errors drive rapid updating. This is why the same mismatch can feel trivial in one context and urgent in another.

So what can you do? Manage the confidence of your predictions. If you're stuck in a rigid pattern, lower precision deliberately---introduce controlled variation so the system treats mismatches as signal rather than noise. If you're updating too readily and can't settle, increase precision by practising in stable conditions until the pattern consolidates. Confidence is tunable, and tuning it changes what updates and what persists.
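The rigidity/lability trade-off can be seen by running the same stream of mismatches through a high-precision and a low-precision belief. A toy simulation under assumed values; the update rule is the standard precision-weighted error step.

```python
# How tuning precision changes what persists: the same stream of mismatches
# barely moves a high-precision (rigid) belief but rapidly reshapes a
# low-precision (labile) one. All numbers are illustrative assumptions.

def run_updates(prior: float, observations: list[float],
                prior_precision: float, signal_precision: float) -> float:
    """Apply precision-weighted error updates over a sequence of observations."""
    belief = prior
    weight = signal_precision / (signal_precision + prior_precision)
    for obs in observations:
        belief += weight * (obs - belief)  # precision-weighted error step
    return belief

stream = [20.0] * 5  # the world keeps saying "20"
rigid = run_updates(10.0, stream, prior_precision=9.0, signal_precision=1.0)
labile = run_updates(10.0, stream, prior_precision=1.0, signal_precision=9.0)
# rigid stays near the old prediction; labile has nearly matched the world
```

Lowering prior precision (or raising signal precision) is the computational analogue of "introduce controlled variation": the same mismatches now count as signal and the belief moves.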

WIP: Let defaults do the work

Because the brain minimises expected prediction error, most behaviour reflects whichever action is cheapest: change the world to match predictions, change predictions to match the world, or update the model for next time. Acting on a confident prediction is usually cheaper than revising it, which is why habits persist even when you 'know better'---the forecast loads automatically and running it costs less than stopping it.

So what can you do? Don't try to override defaults in the moment. Instead, retrain them in advance so the cheap option is the one you want. Preload the right prediction by designing cues that reliably precede the desired action, and practise until the forecast runs without effort. You're not adding willpower; you're making the default correct.

WIP: Everything filters through purpose
Heuristic: Everything is Ideology. The brain maps perceptions to actions through frames that highlight some meanings and sacrifice others. We don't choose between truth and ideology---we choose between ideologies.

Predictive processing doesn't generate objective perceptions---it generates purposive ones. The model predicts what matters given your current goals, bodily state, and history. This means perception is always a distortion, always a frame, always an ideology in the sense that it transforms some chaos into meaning while sacrificing other meaning to chaos. There's no view from nowhere.

So what can you do? Recognise that changing your frame changes what you perceive and what actions become available. If a pattern isn't serving you, the issue isn't that you're 'seeing it wrong'---it's that the frame you're using makes certain features relevant and others invisible. Shift the goal, shift the context, shift the bodily state, and the predictions shift with them. You're not correcting bias; you're choosing a better distortion.

Sources

  • analects/making-meaning-in-the-brain.md
  • analects/predicting-human-behaviour.md
  • articles/interruption-theory-of-emotion-mandler.md
  • analects/addictive-work.md