Ethical Decision-Making
Why good people do bad things. Five mechanisms---gut reactions, rationalisation, environmental pressure, group conformity, and institutional culture---each nudge behaviour away from good intentions, and they compound.
Most ethical failures aren't committed by bad people. They're committed by good people in systems stacked against good intentions. Not monsters, not psychopaths---people who love their families, follow the rules, and would be horrified to think of themselves as capable of cruelty or corruption. And yet they cheat, they cover up, they follow orders they know are wrong, they look the other way when they shouldn't.
The explanation isn't character. It's machinery. Five mechanisms---each one well-understood through Neurotypica's heuristics and architecture---can each nudge behaviour in the wrong direction. Individually, each nudge is small. But they compound. And when they all push the same way, good intentions don't stand a chance.
This is different from the say-do gap, which is about the distance between your individual intentions and your actions---the biscuit you eat despite your diet, the phone you check despite your plan. Here, the problem is bigger than one person. It's about how multiple layers of the system---from the gut to the institution---conspire against the right choice.
The cascade
The five mechanisms run from micro to macro: from the fastest, most automatic processes inside a single nervous system, up through the social and institutional structures that shape what the nervous system learns in the first place. Each level can fail independently, but the real danger is when they fail together.
The alarm. Something isn't right---your nervous system detects pattern violations and tags them with a gut-level valence before you can explain why. In the ethical domain, this is your moral intuition: the tightening in your chest when someone describes a decision that doesn't sit right, the stomach-drop when you hear an order that feels wrong. The alarm is fast, cheap, and inarticulate. It doesn't tell you what is wrong---it tells you that your prediction model has flagged a violation. When it fires, it's data worth pausing for. When it doesn't fire, that absence is its own kind of danger: the situation may have been normalised, the slow creep of deviation may have recalibrated what counts as acceptable, and the alarm simply stays silent.
The intervention at this level is calibration. Exposure to different ethical frameworks, different perspectives on what counts as a violation, scenario rehearsal under realistic conditions---all of these update the prediction model so the alarm fires when it should. And because the alarm runs ahead of reasoning, it needs to be well-calibrated before the moment arrives. Implementation intentions---"if I feel this, I do that"---wire the right response to the right signal so the cheap system produces the right output even when deliberation can't afford to show up.
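The "if I feel this, I do that" wiring can be pictured as a precomputed lookup that runs before deliberation gets a chance. A minimal sketch, with all signal and response names invented for illustration:

```python
# Implementation intentions as a precomputed signal -> response table.
# The fast, cheap path consults the table; only unmapped signals fall
# through to slow deliberation. All names here are hypothetical.

IMPLEMENTATION_INTENTIONS = {
    "gut_alarm_fires": "pause_and_name_what_feels_wrong",
    "order_feels_wrong": "ask_for_the_order_in_writing",
    "pressure_to_skip_checks": "escalate_to_supervisor",
}

def respond(signal: str) -> str:
    # Fast path: a rehearsed if-then rule fires automatically.
    if signal in IMPLEMENTATION_INTENTIONS:
        return IMPLEMENTATION_INTENTIONS[signal]
    # Slow path: no rehearsed rule, so costly deliberation is needed,
    # which is exactly what may not be available under stress.
    return "deliberate"
```

The design point is that the table is built before the moment arrives: under pressure, the system can only dispatch on rules it has already rehearsed, which is why signals without a prepared response fall back to deliberation.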
The justification. Rationalising away---the brain constructs reasons after the fact to justify what the gut already decided. In the ethical domain, this becomes what Albert Bandura called moral disengagement: a specific set of rationalisation scripts that let people maintain a moral self-image while behaving unethically. The scripts are remarkably consistent: euphemistic labelling ("collateral damage", "enhanced interrogation"), displacement of responsibility ("I was following orders"), advantageous comparison ("at least we didn't do what they did"), diffusion of responsibility ("everyone was doing it"), distortion of consequences ("it wasn't that bad"). These aren't creative excuses. They're well-worn neural pathways, chunked and rehearsed, fired so often they've become automatic.
The intervention at this level has two parts. First, make the scripts visible: train people to recognise moral disengagement by name. "That's displacement of responsibility" is like pointing out the card trick---it doesn't stop the magician, but it makes the trick much harder to perform unseen. Second, build self-efficacy in moral reasoning. Whether the lazy controller opts for genuine engagement or disengagement depends partly on how confident you are in your ability to navigate moral conflict. Dilemma rehearsal---working through ethical scenarios where you have to articulate and defend a position---builds that confidence. The more fluent you are at engaging, the less costly the controller estimates it to be.
The environment. Situations shape behaviour---the situation you're in predicts your behaviour far better than the kind of person you are. In the ethical domain, this means that environmental factors---time pressure, fatigue, supervision, how options are presented---determine whether the ethical option is even available. When you're sleep-deprived, under-resourced, and facing ambiguity, the input-output machine loads the fastest, most practised response, not the most ethical one. The ethical option---pausing, consulting, escalating---requires effort, time, and institutional support that the situation may not provide.
The intervention at this level is design. Make the ethical option the easy option: accessible reporting channels, mandatory pauses at key decision points, supervision when stakes are high, adequate rest and staffing. And critically, "train as you fight": if you want ethical behaviour under operational conditions, practise it under operational conditions, not just in classrooms. Pathways learned in calm contexts don't fire under stress, because the contextual cues are absent.
But situations don't just shape behaviour directly---they amplify or suppress every other mechanism in the cascade. A high-threat environment heightens arousal, which amplifies the gut alarm. Time pressure suppresses deliberation, biasing the system toward rationalisation rather than genuine reasoning. The presence of peers activates group conformity. The situation isn't a separate influence---it's a multiplier that runs through every other level, which is what makes it so powerful, and why, once you're in the situation, it's often too late to intervene at this level alone.
The group. Thinking like the group---groups coordinate which patterns are salient through shared practices, and your brain maps those regularities automatically. In the ethical domain, this means that group norms determine what counts as acceptable behaviour, and dissent feels like betrayal. In tightly bonded groups, ethical beliefs are chunked together with group identity, social capital, and loyalty. Challenging a decision means risking your standing, your relationships, and your sense of belonging. The social map links ethical compliance with group membership so tightly that separating them---doing the right thing at the cost of your group---feels like tearing apart the structure you depend on.
The intervention at this level is structural. Build bridging capital alongside bonding capital: cross-unit rotations, external oversight, independent reporting chains. And make questioning part of the group identity, not a threat to it. If "we challenge each other" is a recognised value---not just stated but enacted---then challenging a bad decision earns social capital rather than costing it. Authority figures who model dissent from within the group show that questioning is part of "who we are."
The scaffolding. Rules on the wall---every institution has two sets of values: the ones it displays and the ones it practises. In the ethical domain, the institutional culture is where moral content enters the system in the first place. The gut alarm, the reasoning scripts, the group norms---these are all mechanisms that process moral content, but they don't generate it. The norms that define what counts as right and wrong, the standards that the alarm fires against, the frameworks that reasoning reaches for---all of these are supplied by the culture, the institution, the tradition. When the scaffolding is hypocritical, everything downstream runs on corrupted inputs.
The intervention at this level is consistency. Close the gap between espoused and enacted values---from the enacted side, not the espoused side. Adding more values training when the daily experience contradicts the values just reinforces the lesson that values are talk, not action. Instead, change what actually gets rewarded and punished. Promote people who embody the values. Correct violations visibly and consistently. Leaders must understand that their behaviour writes the institutional ideology that everyone else's brain maps as the real rules. The standard you walk past is the standard you accept.
How the levels compound
No single level causes ethical failure on its own. Each one contributes a small bias, and the biases compound. Consider a unit on a high-stress deployment. Sleep-deprived, under-resourced, facing an ambiguous threat. The rules of engagement are clear on paper, but the situation on the ground doesn't match the clean scenarios from training. A squad leader makes a call that, in hindsight, crosses a line. But at the time:
- His gut said "threat"---and his body was already mobilised before he thought about it.
- His reasoning defaulted to the most practised script, because his training hadn't prepared him for this exact ambiguity.
- The environment---darkness, fatigue, time pressure---made the aggressive option the easiest one.
- His group expected him to be decisive, and questioning the call would have cost him standing.
- The institution had been quietly signalling for months that results matter more than process, that the rules are aspirational rather than operational.
This is what makes ethical failure so difficult to prevent: it's not one thing going wrong, it's everything going slightly wrong at once. And it's what makes the scaffolding level the most powerful lever for change. The institution sets the baseline that all other levels operate from. Get the scaffolding right and the gut recalibrates, the reasoning scripts improve, the situation gets designed, and the group norms align. Get the scaffolding wrong and no amount of individual training can compensate.
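One way to see why "everything going slightly wrong at once" is so much worse than one thing going badly wrong is a toy odds model, where each mechanism multiplies the odds of the wrong call. A minimal sketch; the baseline rate, the multiplicative framing, and the per-level factor are all invented for illustration, not measured:

```python
# Toy model: each of the five mechanisms multiplies the odds of the
# wrong call by a small factor. Baseline rate and factor are assumptions.

def apply_nudges(p_base: float, factors: list[float]) -> float:
    """Return the probability of the wrong call after each nudge
    multiplies the odds p / (1 - p)."""
    odds = p_base / (1 - p_base)
    for f in factors:
        odds *= f
    return odds / (1 + odds)

p_base = 0.01   # hypothetical baseline rate of the wrong call
nudge = 3.0     # hypothetical odds multiplier per mechanism

one_level = apply_nudges(p_base, [nudge])       # ~0.03: barely noticeable
all_five = apply_nudges(p_base, [nudge] * 5)    # ~0.71: more likely than not
```

Under these made-up numbers, any single mechanism barely moves the needle, but all five together flip a rare failure into the most probable outcome---which is the compounding claim in miniature.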
Read
Start with the cascade above, then explore each phenomenon in depth:
- Something isn't right --- gut-level pattern-violation detection. How the alarm works, why it's fast, and why it's sometimes wrong.
- Rationalising away --- post-hoc justification. Why "thinking it through" often just rationalises what the gut already decided.
- Situations shape behaviour --- why the environment predicts your actions better than your character.
- Thinking like the group --- how group norms absorb you without deliberate choice.
- Rules on the wall --- why the values on display rarely match the behaviour in the corridor.
Experience
- Oil Pricing --- a multi-round prisoner's dilemma. Trust builds or shatters, representatives negotiate promises their teams won't honour, and the gap between stated strategy and actual behaviour becomes impossible to ignore.
- Promised Lands --- a three-stage moral ranking exercise. Watch your own reasoning shift when the incentives do. If you can generate a convincing argument for a position you don't hold, what does that tell you about arguments you do hold?
- Lines to Take --- consolidate messages, handle hostile questions, and understand why the redirect works on the listener's brain.
See also
- The say-do gap --- the individual version of the intention-action gap. Why knowing better doesn't mean doing better.
- Habits --- trained routines resist change. Normalisation of deviance is a habit problem.
- Different person in different places --- context-dependent behaviour. Why someone can be principled at work and not at home, or vice versa.
Limits
This context presents ethical failure as a systems problem, which it usually is. But systems explanations can become systems excuses. The fact that multiple levels contributed to a failure doesn't mean no individual is responsible---it means responsibility is distributed across levels, and interventions are needed at all of them. The levels interact in messy, non-linear ways that resist clean separation into neat stages. And while designing better systems is the most effective long-term strategy, it doesn't help in the moment---when you're the one facing the decision, the system is whatever it is, and you still have to choose.