TACKLING BEHAVIOURAL BIASES IN IT PORTFOLIO DELIVERY
Paul Mansell
The modern technology delivery portfolio comprises high-velocity, complex endeavours that must satisfy broad and often ambiguous requirements, an environment in which decision-making is rarely straightforward. From strategic planning to defect triage, behavioural tendencies can colour judgement at every level of management.
Studies have shown that up to 66% of technology projects end in partial or total failure (Standish Group, 2020), with a significant portion of these outcomes driven not by technical flaws but by human factors such as over-optimism, strategic distortion, and reluctance to abandon failing initiatives. Research by McKinsey and Oxford University found that 17% of large IT projects go so badly wrong that they threaten the existence of the organisation — often due to unrealistic expectations and flawed assumptions rather than delivery complexity (McKinsey & Oxford, 2012). While technical challenges play a role, a growing body of research points to the judgement of individuals as a primary source of failure — where predispositions shape key decisions long before delivery begins (Flyvbjerg et al., 2021).
While data and analytics promise objectivity, they, too, can become brittle when stripped of people’s intuition, insight, and lived context. This article explores the hidden behavioural forces shaping delivery outcomes and offers practical ways to restore the balance between structured analysis and realistic decision-making.
Biases that Most Undermine Quality in IT Projects
Among the many biases that affect decision-making, six of the ten behavioural biases identified by Flyvbjerg, Budzier, and Lunn (2021) stand out for their persistent and damaging influence on delivery quality:
1. Optimism Bias — “It’ll all work out fine”
We tend to believe things will go better than they realistically will — overestimating positive outcomes and underestimating risks. Testing timelines are compressed due to unrealistic schedules. Defect likelihood is underestimated, and negative test scenarios are skipped.
2. Planning Fallacy — “We’ll deliver faster and cheaper than anyone ever has”
The Planning Fallacy is the close cousin of Optimism Bias. It is the chronic underestimation of time, effort, and complexity despite evidence to the contrary (Kahneman & Tversky, 1979). Project teams under-allocate time for integration, system, user acceptance and regression testing. Quality gates are sacrificed when delays pile up.
3. Overconfidence Bias — “Trust us — we know what we’re doing”
Project teams overestimate their own understanding, experience, or control over the environment. Risk logs and QA recommendations are dismissed too early. Confidence trumps evidence in decision-making. Investigations into root causes are ignored until late-stage defects appear.
4. Anchoring — “But our first estimate said three months…”
Initial numbers — however arbitrary — become fixed reference points. Even new evidence struggles to dislodge them (Tversky & Kahneman, 1974). Test scope is reduced to “meet the plan” instead of reflecting actual system readiness. Budget or effort adjustments are resisted, even when justified by defect patterns or complexity.
5. Escalation of Commitment — “We’ve come too far to change course now”
Also known as the sunk cost fallacy, this decision trap causes teams to throw good resources after bad, even in the face of apparent failure (Staw, 1976). Flawed solutions are pushed toward ‘Go-live’ rather than being redesigned or cancelled. Known defects are deferred indefinitely to avoid backtracking, and honest retrospectives are often sidestepped to protect past decisions from scrutiny.
6. Strategic Misrepresentation — “We can’t say that — it won’t get funded”
This form of distortion can be deliberate or unconscious. In some cases, teams or sponsors intentionally soften metrics, downplay issues, or minimise risks to secure approvals — a political strategy often described as strategic misrepresentation (Flyvbjerg et al., 2021). In other instances, the distortion is more subtle. For example, biases such as motivated reasoning or groupthink lead stakeholders to frame the business case more favourably without the overt intent to deceive.
Either way, the result is the same. Defect reporting is sanitised. Quality debt accumulates under the radar, and the focus shifts from doing things right to saying things right.
Data as a Countermeasure: Analytics for Bias Mitigation
Many portfolio delivery organisations are building reliable data platforms and scaled analytics to overcome the cognitive traps of bias. These systems provide cold, hard facts, early warnings, and evidence-based insights that can counter behaviourally and emotionally driven inclinations at their source.
Examples of these are as follows (a brief calculation sketch appears after the list):
- Defect Leakage Rates surface hidden quality risks before go-live.
- Schedule & Cost Variance Analytics highlight unrealistic planning assumptions.
- Test Performance Indices flag areas where overconfidence or anchoring may be masking risk.
- Portfolio Benchmarks expose uniqueness and base rate neglect by comparing delivery outcomes across projects.
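To make the first two measures concrete, here is a minimal sketch of how defect leakage and earned-value schedule/cost variance could be calculated. The dataclass fields and the illustrative figures are assumptions about what a delivery data platform might expose, not a prescribed data model.

```python
from dataclasses import dataclass

@dataclass
class ProjectSnapshot:
    """Illustrative per-project figures; field names are hypothetical."""
    defects_found_pre_release: int
    defects_found_post_release: int
    planned_value: float  # budgeted cost of work scheduled to date
    earned_value: float   # budgeted cost of work actually completed
    actual_cost: float    # actual cost of the work completed

def defect_leakage_rate(s: ProjectSnapshot) -> float:
    """Share of all known defects that escaped into production."""
    total = s.defects_found_pre_release + s.defects_found_post_release
    return s.defects_found_post_release / total if total else 0.0

def schedule_variance_pct(s: ProjectSnapshot) -> float:
    """Earned-value schedule variance: positive = ahead of plan, negative = behind."""
    return (s.earned_value - s.planned_value) / s.planned_value * 100

def cost_variance_pct(s: ProjectSnapshot) -> float:
    """Earned-value cost variance: positive = under budget, negative = over."""
    return (s.earned_value - s.actual_cost) / s.actual_cost * 100

# Example: a project whose optimistic plan is quietly drifting
snap = ProjectSnapshot(
    defects_found_pre_release=180,
    defects_found_post_release=45,
    planned_value=500_000,
    earned_value=410_000,
    actual_cost=470_000,
)
print(f"Defect leakage:    {defect_leakage_rate(snap):.0%}")
print(f"Schedule variance: {schedule_variance_pct(snap):+.1f}%")
print(f"Cost variance:     {cost_variance_pct(snap):+.1f}%")
```

Numbers like these do not decide anything by themselves, but they give planning conversations a factual anchor that is harder for optimism or anchoring to override.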
However, there is a catch: data alone isn’t enough.
Tempering Hyper-Rationalisation: Restoring Decision Realism
While analytics help counter habitual trains of thought, an over-dependence on data models can lead to hyper-rationalisation — where decision-making becomes detached from human judgment, experience, and context. As Thaler (2015) notes, hyper-rational models often assume conditions that don’t exist in the real world: perfect information, logical consistency, and emotion-free agents.
To restore decision realism, we must balance rational structure with people-centred insight — by embracing three counterweights: informed heuristics, behavioural guardrails, and contextualised decision-making.
1. Informed Heuristics: Experience as a Rational Tool

Heuristics — well-informed, experience-based rules of thumb — allow teams to act decisively without exhaustive analysis. In uncertain conditions, they can be more reliable than hyper-rational models (Gigerenzer, 2001).
Flyvbjerg and Gardner (2023) offer examples like:
- “Think slow, act fast” — Mitigate risk early through prototypes or rehearsal.
- “Say no and walk away” — Avoid scope bloat and unproductive effort.
These heuristics reflect practical wisdom or phronesis — decisions grounded in what works, not what looks optimal on paper (Bondale, 2023).
2. Behavioural Guardrails: Structuring for Better Decisions
Behavioural guardrails are structured activities and principles designed to provide checks and balances in delivery environments. They serve to guide decision-making behaviour without resorting to micromanagement. Guardrails establish clear boundaries, escalation triggers, and embedded reviews, ensuring teams have room to act while staying aligned with delivery discipline and organisational intent.
Smart (2020) describes these as principles that scale across organisations, shaping millions of decisions by embedding expectations and values. Research by van der Meulen et al. (2024), published in MIT Sloan Management Review, found that organisations employing structured guardrails for areas like data use, purpose alignment, and policy enforcement significantly outperformed their peers in revenue growth, profitability, and adaptability.
Practical examples are as follows:
- Premortems to challenge assumptions and reduce optimism bias.
- Escalation thresholds to interrupt sunk cost patterns and escalation of commitment (see the sketch after this list).
- “Freedom and responsibility” cultures (e.g., Netflix) that empower teams to act autonomously within clearly understood limits (DeGrandis, 2016).
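Escalation thresholds work best when they are agreed and encoded before delivery pressure builds. The sketch below shows one simplified way such a guardrail might be expressed in code; the threshold values, field names, and escalation rule are illustrative assumptions rather than recommended limits.

```python
from dataclasses import dataclass

@dataclass
class DeliveryStatus:
    """Illustrative status figures for one initiative; names are hypothetical."""
    budget_overrun_pct: float       # actual vs approved budget, in percent
    schedule_slip_pct: float        # slippage against baseline, in percent
    deferred_critical_defects: int  # severe defects pushed past go-live

# Guardrail limits agreed up front, before emotional attachment builds
ESCALATION_RULES = {
    "budget_overrun_pct": 15.0,
    "schedule_slip_pct": 20.0,
    "deferred_critical_defects": 5,
}

def breached_guardrails(status: DeliveryStatus) -> list[str]:
    """Return the names of any guardrails this initiative has breached."""
    return [
        name for name, limit in ESCALATION_RULES.items()
        if getattr(status, name) > limit
    ]

status = DeliveryStatus(budget_overrun_pct=22.0,
                        schedule_slip_pct=12.0,
                        deferred_critical_defects=7)
breaches = breached_guardrails(status)
if breaches:
    # Escalation is triggered by the agreed rule, not by whoever argues loudest
    print("Escalate to portfolio board:", ", ".join(breaches))
```

The point of the mechanism is not the arithmetic but the pre-commitment: the decision to escalate was made when heads were cool, so sunk cost reasoning cannot quietly renegotiate it.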
By institutionalising reflection, escalation, and value alignment, behavioural guardrails help prevent failures rooted in unchecked pre-judgement or overconfident reasoning while still promoting agility and ownership.
3. Contextualised Decision-Making: Making Sense of the Situation
Good decisions aren’t just technically correct — they’re contextually appropriate. That means adapting strategy to the delivery environment, team capacity, risk profile, and organisational maturity.
The National Academies (2023) show how contextualised planning enables capital projects to serve long-term goals. In IT delivery, frameworks like Cynefin (Lillie et al., 2024) help teams match decisions to complexity — choosing whether to analyse, act, or experiment.
Contextualisation prevents rational models from misfiring in the wrong environment.
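To illustrate how a complexity framework can inform the choice between analysing, acting, and experimenting, the sketch below maps Cynefin-style domains to their commonly cited response patterns. The domain labels and wording follow the framework’s usual presentation; classifying a given initiative into a domain remains a judgement call for the team.

```python
from enum import Enum

class CynefinDomain(Enum):
    CLEAR = "clear"              # cause and effect are obvious
    COMPLICATED = "complicated"  # knowable with expert analysis
    COMPLEX = "complex"          # only understood in retrospect
    CHAOTIC = "chaotic"          # no discernible cause and effect

# Commonly cited response pattern for each domain
DECISION_APPROACH = {
    CynefinDomain.CLEAR: "Sense, categorise, respond: apply standard practice.",
    CynefinDomain.COMPLICATED: "Sense, analyse, respond: bring in expertise, then decide.",
    CynefinDomain.COMPLEX: "Probe, sense, respond: run safe-to-fail experiments.",
    CynefinDomain.CHAOTIC: "Act, sense, respond: stabilise first, analyse later.",
}

def recommend(domain: CynefinDomain) -> str:
    """Return the decision approach usually associated with a domain."""
    return DECISION_APPROACH[domain]

# Example: a novel integration with emergent behaviour sits in the complex domain
print(recommend(CynefinDomain.COMPLEX))
```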
So, Can We Be Honest About Project Outlook?
We can — but only if we recognise that bias is structural, not personal, and rationality is powerful but imperfect.
By combining:
- Evidence-based analytics,
- Guardrails that reflect lessons and values,
- Heuristics that honour experience, and
- Decisions grounded in context,
…we create delivery systems that are more than smart — they’re realistic.
Honesty in delivery doesn’t mean expecting perfection. It means designing for imperfection — and delivering with clarity, balance, and integrity.
The most successful delivery leaders won’t be those who unquestioningly trust models or rely solely on gut feel. They’ll be those who cultivate structured awareness: recognising when to trust the data, when to question it, and when to defer to wisdom, principle, or context. In doing so, they move beyond perfectionism and toward resilience — delivering with eyes open, tools in hand, and bias firmly in check.
References & Further Reading
Flyvbjerg, B., Budzier, A., & Lunn, D. (2021). Top Ten Behavioral Biases in Project Management. Project Management Journal, 52(6), 531–546.
Thaler, R. H. (2015). Misbehaving: The Making of Behavioral Economics. W. W. Norton & Company.
Kahneman, D., & Tversky, A. (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12, 313–327.
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131.
Staw, B. M. (1976). Knee-Deep in the Big Muddy: A Study of Escalating Commitment to a Chosen Course of Action. Organizational Behavior and Human Performance, 16(1), 27–44.
Gigerenzer, G. (2001). The Adaptive Toolbox. In G. Gigerenzer & R. Selten (Eds.), Bounded Rationality: The Adaptive Toolbox. MIT Press.
Flyvbjerg, B., & Gardner, D. (2023). How Big Things Get Done. Penguin Random House.
Bondale, K. (2023). Applying the heuristics of How Big Things Get Done. ProjectManagement.com.
Smart, J. (2020). Sooner Safer Happier: Antipatterns and Patterns for Business Agility. IT Revolution Press.
DeGrandis, D. (2016). Using Guardrails to Guide Decision Making. Planview Blog.
van der Meulen, N. (2024). The Four Guardrails That Enable Agility. MIT Sloan Management Review.
Lillie, T. et al. (2024). A conceptual framework for agility in sociotechnical contexts. South African Computer Journal, 36(1).
National Academies of Sciences, Engineering, and Medicine. (2023). Technical Assessment of the Capital Facility Needs of the National Institute of Standards and Technology.