
The Eight Scenarios: Mapping Positive and Negative Uncertainty in Data & AI Projects

  • Writer: Daniel Rolles
  • 4 days ago
  • 7 min read

The Eight Scenarios: Understanding Positive and Negative Uncertainty


Introduction


The uncertainty matrix contains four quadrants, but each quadrant splits into positive (opportunities) and negative (risks) scenarios. This gives us eight distinct situations that require different governance approaches.


Traditional PMO governance works well for four of these scenarios and fails catastrophically for the other four.




Known Knowns (We Know We Know)


Known Known - POSITIVE Example


What we know:

  • Our data warehouse contains 15 years of transaction history

  • We have clean, validated customer demographic data

  • Our data scientists have proven expertise in propensity modelling

The opportunity:

We can confidently build a next-best-action recommendation engine because we have all the necessary ingredients: historical data, customer context, and technical capability.


Why it matters:

This lets us execute with certainty. We can plan resources, set timelines, and deliver predictably. PMO governance works perfectly here - standard project execution, clear deliverables, predictable outcomes.


Business impact: Deliver a £500K recommendation engine on time and on budget.



Known Known - NEGATIVE Example


What we know:

  • Our current fraud detection system has a 15% false positive rate

  • This costs us £2M annually in manual reviews

  • Customer satisfaction drops 20 points when legitimate transactions are blocked

The risk:

We know exactly what's wrong and what it's costing us. This is a quantified problem requiring a solution.


Why it matters:

This is a documented issue that needs addressing. PMO governance works perfectly here - issue management, corrective action planning, measurable success criteria.


Business impact: Fix a £2M annual cost with measurable improvement targets.




Known Unknowns (We Know We Don't Know)


Known Unknown - POSITIVE Example


What we don't know:

Whether a particular customer segment (high-frequency, low-value transactions) would respond to targeted promotions. We've never marketed to this segment because we assumed they were price-sensitive bargain hunters.


The opportunity:

We've identified a potential new revenue stream but don't know if it will work. We can design experiments to find out: run A/B tests, measure conversion rates, calculate ROI.


Why it matters:

This is a recognised opportunity we can systematically explore. PMO governance handles this well - we create a proof of concept, set success criteria (e.g., "If conversion rate >8%, roll out fully"), run time-boxed experiments, measure results.


Business impact: Potential £2M additional annual revenue if hypothesis proves true - and we know exactly how to test it.
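The "conversion rate >8%, roll out fully" gate described above can be encoded directly as an experiment decision rule. The sketch below is illustrative: the 8% threshold comes from the example, but the one-sided z-test, the sample size, and the significance level are my assumptions, not figures from this article.

```python
from math import sqrt, erf

def conversion_gate(conversions, visitors, threshold=0.08, alpha=0.05):
    """Pre-agreed experiment gate: roll out only if the observed
    conversion rate beats the threshold and the excess is statistically
    distinguishable from chance (one-sided, one-sample z-test)."""
    p_hat = conversions / visitors
    se = sqrt(threshold * (1 - threshold) / visitors)
    z = (p_hat - threshold) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail probability
    return p_hat, p_value, (p_hat > threshold and p_value < alpha)

# Hypothetical time-boxed experiment: 10,000 visitors, 1,040 conversions
rate, p, roll_out = conversion_gate(conversions=1040, visitors=10000)
```

Because the success criterion is agreed before the experiment runs, the roll-out decision becomes mechanical rather than political, which is exactly why PMO governance handles this quadrant well.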



Known Unknown - NEGATIVE Example


What we don't know:

Whether our AI-powered loan approval system will inadvertently discriminate against protected demographic groups. We know bias is a risk but don't know if our specific model exhibits it.


The risk:

We've identified a potential compliance and reputational hazard. We don't know if the problem exists or how severe it might be.


Why it matters:

This is an identified risk we can systematically mitigate. PMO governance handles this well - we create fairness testing protocols, establish demographic parity metrics, set up ongoing monitoring, assign risk owners.


Business impact: Avoid potential £10M regulatory fine and reputational damage through proactive risk management.
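A demographic parity metric of the kind mentioned above can be sketched in a few lines. The implementation, the toy data, and the flagging threshold below are illustrative assumptions on my part, not a prescribed fairness protocol:

```python
def demographic_parity_gap(decisions, groups):
    """Approval-rate gap across demographic groups.
    decisions: parallel list of 0/1 approvals; groups: group labels.
    Returns (max gap between groups, per-group approval rates)."""
    counts = {}
    for d, g in zip(decisions, groups):
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + d)
    approval = {g: k / n for g, (n, k) in counts.items()}
    return max(approval.values()) - min(approval.values()), approval

# Hypothetical loan decisions for two groups; a gap this large would
# fail any plausible parity threshold (~0.1 is a common rule of thumb)
gap, by_group = demographic_parity_gap(
    decisions=[1, 1, 0, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "B", "B", "B", "A", "B"],
)
```

Wiring a check like this into ongoing monitoring, with a named risk owner, is what turns "bias might exist" from an anxiety into a managed known unknown.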




Unknown Knowns (We Don't Know We Know)


Unknown Known - POSITIVE Example


What we've forgotten:

In 2018, when building your customer segmentation model, the team made a deliberate decision: "Deep learning clustering is too computationally expensive and unexplainable for our use case. We'll use k-means instead."


This assumption was documented in a technical specification. The team lead who made this decision left in 2020. The document is buried in SharePoint. No current team member knows this trade-off was ever discussed.


The buried opportunity:

It's now 2025. The original assumption is completely invalid:

  • Cloud GPU costs have dropped 90%

  • Explainability tools for deep learning are mature and regulatory-compliant

  • Your competitors are discovering customer micro-segments you're completely missing

Why it matters:

Your organisation is sitting on a massive buried opportunity. Someone knew this was a trade-off once, but that knowledge has been "laundered" into "we use k-means for segmentation" without anyone remembering *why*.


PMO governance actively created this problem: The documentation that was meant to preserve knowledge actually buried it. The handoff processes that were meant to ensure continuity actually lost context. The time that was meant to build maturity actually eroded memory.


Business impact: Potential £5M revenue from precision micro-targeting - but completely invisible because no one knows to even look for this opportunity. Your PMO dashboard shows "customer segmentation: green status, stable, no action required."
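One hedged illustration of the alternative: had the 2018 trade-off been recorded as a living, machine-readable assumption with a review date, routine tooling could have flagged it as stale years ago. The structure below is my sketch, not an artefact from this article:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """A design trade-off kept as a living record: what was decided,
    why, who owns it, and when it must be re-examined."""
    decision: str
    rationale: str
    owner: str
    review_by: date

def overdue(register, today):
    """Assumptions whose review date has passed and may no longer hold."""
    return [a.decision for a in register if today > a.review_by]

register = [
    Assumption(decision="Use k-means, not deep clustering",
               rationale="GPU cost and explainability constraints (2018)",
               owner="segmentation-team",
               review_by=date(2020, 1, 1)),
]
stale = overdue(register, today=date(2025, 6, 1))
```

The point is not the data structure but the review date: an assumption without an expiry is a document waiting to be buried.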



Unknown Known - NEGATIVE Example


What we've forgotten:

In 2016, when building your fraud detection system, the data science team made critical assumptions about the training data:

  • "Our 2013-2015 transaction history captures normal customer behaviour"

  • "Our customer demographics are representative of our target market"

  • "Card-present transactions will remain 70% of our volume"

These assumptions were true at the time. They were documented in a model validation report signed off by the Chief Risk Officer. That CRO retired in 2019. The report is archived in a compliance repository. The current fraud team has never read it.


The buried time bomb:

It's now 2025. Every single assumption is catastrophically false:

  • Customer behaviour has fundamentally changed (mobile, contactless, e-commerce explosion)

  • Your customer base has shifted dramatically (younger, more international, different payment preferences)

  • Card-present transactions are now <20% of volume

Why it matters:

Your fraud detection model is systematically broken. It's missing entire categories of new fraud patterns because they didn't exist in 2013-2015 training data. It's falsely flagging legitimate new customer behaviours as suspicious because they look "abnormal" compared to 2013-2015 patterns.


But no one knows this because the original assumptions are buried. The model keeps running with high confidence scores. Your PMO dashboard shows "fraud detection: green status, 99.2% uptime, on track."
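This is exactly the kind of shift a routine drift check would surface. Below is a minimal Population Stability Index (PSI) monitor applied to the card-present share; the PSI thresholds are conventional rules of thumb, and the proportions reuse the 70% and under-20% figures from the example:

```python
from math import log

def population_stability_index(expected, actual):
    """PSI between a training-era distribution and a live one, both as
    bucket proportions summing to 1. Conventional rule of thumb:
    < 0.1 stable, 0.1-0.25 drifting, > 0.25 major shift."""
    return sum((a - e) * log(a / e) for e, a in zip(expected, actual))

# Card-present vs card-not-present share: ~70% in the 2013-2015
# training window, under 20% in 2025 production
training = [0.70, 0.30]
live = [0.20, 0.80]
psi = population_stability_index(training, live)  # far above 0.25
```

A check this cheap, run monthly against the assumptions the model was trained on, is the difference between a green dashboard and a true one.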


PMO governance actively created this disaster: The model validation process that was meant to ensure quality actually buried the assumptions. The documentation that was meant to provide audit trails actually obscured the context. The stability that was meant to demonstrate maturity actually prevented necessary evolution.


Business impact: You're heading toward either:

  • A major fraud incident (£20M+ loss when criminals exploit your blind spots), or

  • Regulatory action for discriminatory false positives (£15M+ fine plus reputational damage)

The explosion is inevitable. The only question is when. And it's completely invisible to traditional governance.




Unknown Unknowns (We Don't Know We Don't Know)


Unknown Unknown - POSITIVE Example


What we couldn't have known:

While building a customer churn prediction model, the data science team discovers something completely unexpected: a strong correlation between specific customer service interaction patterns and likelihood to become high-value advocates (not just retained customers, but active promoters).


This pattern was invisible in the original project scope. No one was looking for it. It emerged serendipitously during exploratory data analysis.


The unexpected opportunity:

You've stumbled onto a way to identify potential brand advocates early, enabling:

  • Targeted VIP programmes

  • Ambassador recruitment

  • Early warning for advocacy risk

Why it matters:

This is a black swan benefit - something you couldn't have planned for because you didn't know it existed.


PMO governance cannot budget for this: Traditional project planning can't allocate resources to discoveries you don't know are possible. You can't set KPIs for serendipity. You can't put "unexpected insights" in a risk/opportunity register.


Business impact: Potential £3M from improved advocacy programmes - but only if you have organisational slack and permission to explore unexpected findings. Many PMO frameworks would classify this as "scope creep" and shut it down.



Unknown Unknown - NEGATIVE Example


What we couldn't have known:

Your credit scoring model has been running successfully for 3 years. Unknown to you, it has developed a critical vulnerability: it inadvertently uses proxies for protected characteristics (zip codes correlating with ethnicity, purchasing patterns correlating with family status) in ways that create systematic bias.


This wasn't in your model design. It wasn't in your testing scenarios. It emerged from complex interactions between features in production data that were different from training data patterns.
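A simple proxy scan would not catch every emergent feature interaction, but it illustrates the check that was never in scope: correlate each model feature with an encoded protected attribute and flag strong proxies. The features, data, and threshold below are invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def proxy_scan(features, protected, threshold=0.5):
    """Flag features whose |correlation| with an encoded protected
    attribute exceeds the threshold (threshold is an assumption)."""
    return [name for name, vals in features.items()
            if abs(pearson(vals, protected)) > threshold]

# Hypothetical feature columns and protected-attribute encoding
flagged = proxy_scan(
    features={"postcode_band": [1, 1, 2, 2, 3, 3],
              "basket_size":   [5, 7, 6, 5, 7, 6]},
    protected=[0, 0, 0, 1, 1, 1],
)
```

Linear correlation misses the complex interactions described above, which is precisely the point: even the detectable proxies were never looked for.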


The unexpected disaster:

A regulator's audit discovers the pattern. Or a journalist investigates. Or a class action lawsuit is filed. You had no idea this vulnerability existed until it detonated.


Why it matters:

This is a black swan catastrophe - something you couldn't have planned for because you didn't know the risk existed.


PMO governance is systematically insufficient: Traditional contingency reserves assume symmetric, bounded risks. They allocate 10-20% buffers for "unexpected issues." But this isn't a 20% overrun - it's a potential existential threat (£50M+ in fines, settlements, and reputational damage).


Business impact: Your "5% contingency reserve" is meaningless against a £50M regulatory fine. Your risk register's "medium probability, medium impact" ratings completely missed the fat-tailed nature of AI model failures.
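The asymmetry can be made concrete with a toy simulation: a fixed 5% reserve against thin-tailed (Gaussian) overruns versus fat-tailed (Pareto) ones. All figures below are invented; the point is the shape of the distributions, not the numbers.

```python
import random

random.seed(42)  # reproducible illustration

BUDGET = 10_000_000          # hypothetical £10M programme budget
RESERVE = 0.05 * BUDGET      # the "5% contingency reserve"
N = 10_000                   # simulated overrun events

# Thin tails: bounded, bell-curved overruns around £100K
thin = [random.gauss(mu=100_000, sigma=50_000) for _ in range(N)]

# Fat tails: same typical overrun, but a Pareto tail (alpha = 1.1)
# occasionally produces an event that dwarfs the entire budget
fat = [100_000 * random.paretovariate(1.1) for _ in range(N)]

thin_blowouts = sum(loss > RESERVE for loss in thin)
fat_blowouts = sum(loss > RESERVE for loss in fat)
worst_fat = max(fat)
```

Under the thin-tailed model the reserve is essentially never breached; under the Pareto tail a substantial fraction of events exceed it, and the worst event typically exceeds the whole budget. A percentage buffer is an answer to the first distribution, not the second.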


This is Extremistan, not Mediocristan - in Nassim Taleb's terms, a domain where a single extreme event can dominate all outcomes, rather than one where variation stays bounded. Your Mediocristan governance framework left you completely exposed.



Summary: Where PMO Works and Where It Fails


PMO Works — Quadrants 1 & 2 (4 scenarios)


  • Known Known / Positive — Standard execution: clear deliverables, predictable outcomes

  • Known Known / Negative — Issue management: known problems with known solutions

  • Known Unknown / Positive — Opportunity capture via PoCs: identified opportunities tested systematically

  • Known Unknown / Negative — Risk mitigation: identified risks managed proactively


PMO Fails or Actively Harms — Quadrants 3 & 4 (4 scenarios)


  • Unknown Known / Positive — Cannot surface buried opportunities: PMO has no mechanism to find forgotten knowledge

  • Unknown Known / Negative — ACTIVELY CREATES buried time bombs: PMO's documentation and handoff processes bury assumptions rather than surfacing them

  • Unknown Unknown / Positive — Cannot budget for serendipity: PMO frameworks classify unexpected discoveries as scope creep

  • Unknown Unknown / Negative — Systematically underestimates catastrophic risk: PMO assumes Mediocristan distributions; data and AI operate in Extremistan


The Critical Insight


Traditional PMO governance doesn't just fail to help with Unknown Knowns and Unknown Unknowns - it actively makes things worse.


For Unknown Knowns: The very processes meant to preserve knowledge (documentation, handoffs, validation) actually bury it, creating false confidence.


For Unknown Unknowns: The very assumptions meant to enable planning (symmetric risk, bounded variation, predictable distributions) are catastrophically wrong in AI/data projects.


You're navigating Extremistan with a Mediocristan map. And your governance framework is drawing the map.



About the Author


Daniel Rolles is the CEO and Founder of BearingNode, where he leads the firm's mission to help organisations unlock the commercial value of their data whilst enhancing their risk management capabilities.


As CEO, Daniel drives BearingNode's strategic vision and thought leadership in data transformation, analytics strategy, and the evolving regulatory landscape. He regularly shares insights through industry publications and speaking engagements, focusing on practical approaches to data governance, AI implementation, and performance transformation in regulated environments. He is one of the key authors of BearingNode's Data and Information Observability Framework.


With over 30 years of experience in Data, Analytics and AI, Daniel has successfully built and led D&A teams across multiple industries including Financial Services (investment, commercial and retail banking, investment management and insurance), Healthcare, and Real Estate. His expertise spans consulting, commercial leadership, and delivery management, with a particular focus on data governance and regulatory compliance.


Daniel holds a Bachelor of Economics (University of Sydney), Masters of Science (Birkbeck College, University of London), and Executive MBA (London Business School).


Based in London, Daniel is passionate about financial inclusion and social impact. He serves as a Trustee for Crosslight Advice, a debt advisory and financial literacy charity based in West London that provides vital support to individuals facing financial vulnerability.


Connect with Daniel on [LinkedIn] or learn more about BearingNode's approach to data and analytics transformation at [BearingNode].
