"why you can always find a good reason for what you have already decided?"
The system has located a record matching this query. The following pattern has been catalogued extensively.
Rationalising Away
The brain constructs reasons after the fact, not before. You decide first, then find justifications---and the justifications feel like they were the reasons all along.
Have you ever wondered why you can always find a good reason for what you've already decided? You changed your mind about something and instantly found excellent reasons for the new position. You did something impulsive and quickly constructed a logical explanation for why it was actually the smart thing to do. You were caught doing something you shouldn't have, and before you'd even thought about it, an excuse was already forming.
This is rationalisation---the construction of reasons after the fact to justify decisions that were actually made on different grounds. The word sounds accusatory, as if rationalisation is something only dishonest people do. But it isn't. It's what all brains do, all the time. The reasoning machinery doesn't exist primarily to find truth; it exists to construct justifications that are good enough to maintain coherence and social standing. Truth is sometimes a byproduct, but it was never the primary objective.
The pattern is everywhere. Jonathan Haidt demonstrated what he called "dumbfounding"---situations where people feel strongly about something but can't articulate why, and when their reasons are refuted, they don't change their minds. They just find new reasons, or say "I don't know why, I just know." The judgement came first; the reasons came second. And the specific rationalisation scripts people reach for are remarkably consistent: euphemistic labelling ("restructuring" for layoffs), displacement of responsibility ("I was just told to"), advantageous comparison ("at least we didn't do what they did"), diffusion of responsibility ("everyone was doing it"), distortion of consequences ("it wasn't that bad"). These aren't creative excuses. They're well-worn neural pathways, fired so often they've become automatic.
The system has identified patterns that may explain this behaviour. Each represents a recurring tendency observed across multiple records.
01.
Rationalisation is the cheap option
The lazy controller avoids costly deliberation whenever it can. When a decision has already been made---by gut reaction, by habit, by social pressure---re-evaluating that decision costs resources. Rationalising it costs almost nothing: you just need a reason that's good enough, not one that's actually true. The controller is doing a cost-benefit calculation, and justification almost always wins over genuine re-evaluation.
There's a further wrinkle: whether the controller opts for genuine re-evaluation or rationalisation depends partly on self-efficacy---how confident you are in your ability to navigate the conflict. If you've practised questioning decisions and succeeded, re-evaluation feels less costly. If you haven't, rationalisation is the path of least resistance. Confidence in your own reasoning isn't vanity; it's a variable that biases the controller's cost-benefit calculation.
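The cost-benefit calculation above can be caricatured in a few lines of code. This is a toy sketch, not a cognitive model: the function name, the numbers, and the idea of a fixed "script discount" are all illustrative assumptions.

```python
# Toy sketch of the lazy controller's choice between re-evaluating a
# decision and rationalising it. All values are illustrative assumptions.

def controller_choice(reevaluation_cost: float,
                      self_efficacy: float,
                      script_available: bool) -> str:
    """Pick the cheaper option: re-evaluate or rationalise.

    self_efficacy (0..1) discounts the felt cost of re-evaluation:
    practised questioners find it cheaper. A pre-built justification
    script makes rationalisation nearly free.
    """
    felt_reevaluation_cost = reevaluation_cost * (1.0 - self_efficacy)
    rationalisation_cost = 0.1 if script_available else 1.0
    if felt_reevaluation_cost < rationalisation_cost:
        return "re-evaluate"
    return "rationalise"

# Low confidence plus a ready-made script: rationalising wins.
print(controller_choice(reevaluation_cost=2.0, self_efficacy=0.2,
                        script_available=True))   # rationalise
# High self-efficacy makes re-evaluation competitive.
print(controller_choice(reevaluation_cost=2.0, self_efficacy=0.99,
                        script_available=True))   # re-evaluate
```

The point of the caricature is the asymmetry: rationalisation starts cheap and gets cheaper with every rehearsed script, so re-evaluation only wins when something lowers its felt cost.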
This is why rationalisation scripts are so specific and so consistent across people. The brain doesn't invent fresh justifications each time---it reaches for the pre-built script that fits: "I was told to," "it's not my responsibility," "they deserved it." The scripts are chunked and rehearsed, which makes them fast to deploy and hard to resist.
So what can you do? Make re-evaluation cheaper than rationalisation. If you create an environment where questioning a decision is normal---where "wait, why did we actually decide that?" is a routine part of the process, not a challenge to authority---then re-evaluation loses its social cost and becomes competitive with rationalisation. Train people to recognise the specific scripts. Naming "displacement of responsibility" when it happens is like pointing out the card trick: it doesn't make the magician disappear, but it makes the trick much harder to perform unseen.
02.
The justification matches the prediction
The brain predicts what should happen next---in the world and in the body. When predictions fail, you feel something, attention pivots, and behaviour updates.
The prediction engine doesn't just predict what will happen---it predicts what should happen, and when reality matches the prediction, the system feels coherent. Rationalisation is the process of making the story coherent after the fact: you adjust the reasons until the prediction and the outcome align. This is why rationalisation feels like reasoning. From the inside, it genuinely seems like you're working out the logic. But the conclusion was fixed first, and the reasoning is reverse-engineered to reach it.
So what can you do? Introduce prediction failure deliberately. If you commit to writing down your reasoning before you know the outcome, you create a record that the prediction engine can't retroactively edit. Pre-mortems---asking "if this went wrong, why did it go wrong?"---force the system to generate reasons for an outcome it hasn't committed to, which interrupts the reverse-engineering process.
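The idea of a record the prediction engine can't retroactively edit can be made literal. Here is a minimal sketch of a pre-registration log that hashes each reasoning entry when it is written, so a later rewrite is detectable. The function names and entry format are assumptions for illustration, not a prescribed tool.

```python
# Minimal sketch of a tamper-evident pre-registration log.
# Names and format are illustrative assumptions.
import hashlib
import json
import time

log: list[dict] = []

def preregister(decision: str, reasoning: str) -> str:
    """Record the reasoning before the outcome is known; return its digest."""
    entry = {"decision": decision, "reasoning": reasoning, "t": time.time()}
    digest = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append({**entry, "digest": digest})
    return digest

def verify(i: int) -> bool:
    """Check that entry i still matches the digest taken at write time."""
    entry = dict(log[i])
    digest = entry.pop("digest")
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest() == digest

preregister("ship the feature", "users asked for it; risk looks low")
assert verify(0)
# A retroactive edit -- the story the brain would prefer -- is caught:
log[0]["reasoning"] = "it was obviously the smart thing to do"
assert not verify(0)
```

A plain dated notebook does the same job socially; the hash just makes the "no retroactive edits" rule mechanical rather than a matter of discipline.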
03.
Reasons come pre-packaged
Brains link features into meaningful chunks; attention binds chunks into goal‑directed episodes---fast to use, hard to see past.
Justifications aren't assembled from scratch; they're retrieved as chunks. "I was following orders" is a chunk. "Everyone else was doing it" is a chunk. "It's not as bad as it sounds" is a chunk. These are socially learned patterns---you've heard them used by others, seen them accepted, and stored them as ready-made justification scripts. When the controller needs a reason, it doesn't reason from first principles; it scans the available chunks and selects the one that fits the situation well enough.
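Retrieval-not-reasoning can be sketched directly: the controller scores each stored script against the situation's features and takes the best-enough match. The script store, the feature sets, and the overlap scorer below are all illustrative assumptions.

```python
# Toy sketch of chunk retrieval: pick the stored justification script
# whose features best overlap the situation. Data is illustrative.

SCRIPTS = {
    "displacement": "I was just told to.",
    "diffusion": "Everyone else was doing it.",
    "distortion": "It's not as bad as it sounds.",
}

SCRIPT_FEATURES = {
    "displacement": {"authority", "orders"},
    "diffusion": {"group", "peers"},
    "distortion": {"harm", "consequences"},
}

def retrieve_script(situation_features: set[str]) -> str:
    """No first-principles reasoning: just the best-fitting stored chunk."""
    def fit(name: str) -> int:
        return len(situation_features & SCRIPT_FEATURES[name])
    return max(SCRIPTS, key=fit)

# A situation involving peers retrieves the diffusion chunk:
print(SCRIPTS[retrieve_script({"group", "peers", "deadline"})])
# -> Everyone else was doing it.
```

Note what the sketch never does: generate a new justification. If the store only contains displacement, diffusion, and distortion, one of those three is what comes out---which is exactly the argument for populating the store with better chunks.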
So what can you do? Provide better chunks. If the only justification scripts available in your team's vocabulary are displacement, diffusion, and distortion, those are the ones that will get used. But if "we challenge each other" is a chunk, and "I wasn't sure so I checked" is a chunk, and "we don't do that here" is a chunk, those become available too. The brain selects from what's stored, so populate the store with scripts that lead to better outcomes.
04.
Your map decides what counts as a reason
The brain maps perceptions to actions through frames that highlight some meanings and sacrifice others. We don't choose between truth and ideology---we choose between ideologies.
Not all justifications feel equally convincing, and what counts as a "good reason" depends on your ideological map. In a culture that values individual autonomy, "it's my choice" shuts down debate. In a culture that values group harmony, "it's what everyone wanted" does the same job. The rationalisation scripts that work---that actually resolve the discomfort and let you move on---are the ones that align with the values your map already treats as legitimate.
So what can you do? Recognise that rationalisation is not just an individual failure---it's maintained by the social and institutional map that determines which reasons count. If the institution accepts "I was following orders" as a legitimate justification, it will continue to be used. Changing which justifications are accepted requires changing the institutional culture: making it clear that certain scripts are not acceptable, and modelling the alternative. The map updates through experience, not through being told.