
Anatomy of Uncertainty - Why Data & AI Projects Need a New Navigation Map

  • Writer: Daniel Rolles
  • Nov 9
  • 14 min read

Key Points


  1. Gen AI projects reveal the inadequacy of inherited frameworks more starkly than ever

  2. Pilot fatigue and "failure" in GenAI, Analytics, Data Governance and Data Observability can be good if they result in organisational learning

  3. The objective is business value: thinking like a VC and embracing uncertainty is critical


The Meeting Scene

The project steering committee settles in for the monthly review. The Chief Data Officer (CDO) opens her risk register on the screen—a familiar grid of reds, ambers, and greens. "Unstructured Data Quality is amber," she explains, "but we've got mitigation plans in place."


The Business Stakeholders nod. This feels like control.


But three months later, the Generative AI pilot collapses. Not because of any risk in the register: the project's risk register was maintained perfectly.


It fails because nobody questioned whether:


  • The unstructured data, with its quality challenges, was suitable for entity resolution

  • The outputs, while technically correct, would be trusted by business users in customer-facing contexts

  • GenAI output evaluation and observability had been factored into the solution design and plan


The risks were managed. The uncertainty wasn't even seen.


This scene plays out in organisations everywhere, especially now as leaders rush to deploy AI capabilities without the vocabulary to distinguish between what can be managed and what must be navigated. The problem isn't that we're bad at project management. It's that we're using the wrong frameworks entirely.


The Inherited Problem: Knightian Confusion


Over a century ago, in 1921, economist Frank Knight published "Risk, Uncertainty and Profit," making a distinction that would shape generations of thinking about business planning. He argued that "risk" involves known probabilities that can be calculated and priced, while "uncertainty" involves situations where you can't even enumerate the possibilities, much less assign them probabilities.


Knight's framework has proven remarkably durable. Gerd Gigerenzer's "Risk Savvy" builds on it to critique how we misunderstand probability. Niall Ferguson's recent "Doom: The Politics of Catastrophe" revisits it to explain pandemic response failures. The distinction has become so embedded in organisational and societal culture that we barely notice it anymore.


Knight was right that not everything is quantifiable. But his binary distinction (risk vs uncertainty) has made organisations worse at navigating uncertainty, not better. We don't need a special framework for 'genuine Knightian uncertainty'—we need a framework that recognises uncertainty exists EVERYWHERE and helps us navigate all its forms.


It has gotten to the point where scholars like Cass Sunstein continue to defend this binary in recent work, perpetuating the idea that 'genuine Knightian uncertainty' requires special treatment. But for data and AI projects, this framing actively misleads us.


The problem with Knight's dichotomy isn't that he was wrong about unquantifiability; not everything can be measured with probabilities, and pretending otherwise is dangerous. The problem is that his binary framing has led organisations to treat uncertainty as a special condition requiring special frameworks, rather than recognising that uncertainty exists everywhere, in all four quadrants of knowledge.


Some uncertainty is quantifiable, some isn't. But the question isn't "is this risk or uncertainty?" It's "what kind of uncertainty are we dealing with, where does it live in our map, and how should we navigate it?"


This matters because the Knightian framing has taught organisations to treat all uncertainty as simply unquantified risk, as if everything unknown would eventually become quantifiable with enough analysis, enough data, enough planning. "If you can't measure it, you can't manage it." The implication: uncertainty is just risk we haven't properly measured yet. Do more analysis, build better models, and you can transform uncertainty into manageable risk.


For certain types of projects—building infrastructure, executing established processes—traditional approaches work reasonably well. But for data, analytics, and AI initiatives, this inherited framework actively misleads us. It makes us confuse confidence with competence, mistake risk mitigation for uncertainty management, and miss the most dangerous unknowns entirely.


You'll never hear BearingNode say that measurement isn't valuable: things that should be measured absolutely should be measured. However, we also embrace Taleb's view from "The Black Swan": if you apply measurement frameworks where they don't belong—for instance, applying quantitative approaches designed for Mediocristan (normally distributed outcomes) to the Extremistan (fat-tailed, unpredictable outcomes) that most data and AI projects inhabit—you're using a map of New York to navigate London. False precision is at least as dangerous as no precision at all: it offers false comfort.


Data and AI projects live in Extremistan, which is why VCs understand them better than traditional PMOs. In a portfolio approach, you expect 95% to fail or underperform, because you're looking for the 5% that generate 10x returns and the 1% that are truly transformational. One successful model, one breakthrough insight, one game-changing application can create more value than 99 failed experiments. Traditional project management—designed for Mediocristan where averages matter—completely misses this dynamic.
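The portfolio arithmetic is worth making concrete. In the sketch below, the probabilities and payoff multiples are illustrative assumptions, not data; the point is structural. The median project fails outright, yet the expected return is positive, and most of the value sits in the 1% tail:

```python
# Illustrative (assumed) outcome distribution for a portfolio of AI pilots:
# probability of outcome -> payoff multiple on the amount invested.
outcomes = {
    0.95: 0.0,    # pilot fails or never reaches production
    0.04: 10.0,   # breakthrough: a 10x return
    0.01: 100.0,  # truly transformational: a 100x return
}

# Expected value per one-unit bet across the portfolio.
expected = sum(p * payoff for p, payoff in outcomes.items())

# How much of the total expected value comes from the top 1% of projects.
tail_share = (0.01 * 100.0) / expected

print(f"Expected return per project: {expected:.2f}x")   # 1.40x
print("Median project outcome:      0.00x (it fails)")
print(f"Share of value from top 1%:  {tail_share:.0%}")  # 71%
```

Under these assumed odds, a steering committee that kills anything with a 95% chance of failure never gets to hold the one transformational bet that pays for all the rest, which is exactly the Mediocristan reflex the article describes.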


But the Extremistan dynamics extend beyond individual projects to competitive strategy: we've seen this pattern in the internet age, where winner-takes-most markets emerged through network effects—Google in search, Amazon in retail, Netflix in streaming. If GenAI follows similar dynamics, being in the 5% of successful implementations isn't just about ROI—it's about competitive survival.


The companies that figure out how to navigate GenAI uncertainty (surfacing their unknowns regardless of which quadrant they sit in, testing assumptions rapidly, and embracing portfolio thinking) will capture disproportionate market share, while the 95% that fail to move beyond pilots risk irrelevance.


The result? We build elaborate risk registers while the actual sources of project failure lurk in assumptions we never thought to articulate or validate (Unknown Knowns) and the true uncertainties which can't be quantified (Unknown Unknowns).


The Core Distinction: Uncertainty ≠ Unquantified Risk


Let's be clear about what we mean by uncertainty in the context of data, analytics and AI projects. Uncertainty isn't simply "risk we haven't quantified yet." It's a fundamentally different state that requires different thinking and different management approaches.


Risk implies a bounded solution space. You know what might go wrong. You might not know the exact probability or impact, but you can enumerate the possibilities: the integration might take longer than expected, the data quality might be worse than hoped, key resources might become unavailable. These go in your risk register. You build contingencies. You monitor and mitigate.


Uncertainty means the problem and solution space itself is unknown or unstable. You're not sure what "done" looks like. You don't know if the approach will work. You can't enumerate the failure modes because you don't fully understand the problem yet. The goalposts might move—not because of poor requirements management, but because discovery changes what's possible or desirable.


This distinction becomes critical when we consider three characteristics that make data and AI projects genuinely uncertain:


First, emergent behaviour. When you build a process automation (without using Gen AI), you know what you're building. When you deploy a Foundational LLM into a complex business process, you're creating a system whose behaviour emerges from the interaction of the model, the data, the users, and the existing process. You literally cannot predict all the outcomes because they emerge from the interaction itself. We move from a deterministic to a probabilistic world.


Second, discovery-driven work. Many analytics initiatives are fundamentally about discovery: can we predict customer churn? Is there signal in this unstructured data? Will this AI capability create value in our process? These questions involve both Unknown Unknowns to surface and Unknown Knowns to validate. The answer isn't in the risk register—it emerges through experimentation, not planning. The project succeeds not by hitting the original target but by learning fast enough to pivot toward where actual value lies, which might be adjacent to or entirely different from what you originally envisioned.


Third, socio-technical complexity. Your AI project doesn't fail because of a technical risk. It fails because the marketing team's incentives don't align with data sharing. Because the executive sponsor got promoted and their replacement has different priorities. Because "customer value" means something different to every stakeholder and nobody noticed until after you built the thing. These aren't risks to mitigate — they're uncertainties to navigate.


The critical insight: treating uncertainty as if it's merely unquantified risk leads to systematic management failures. You overplan. You create false confidence. You measure success against predictions that were never achievable. Most dangerously, you miss the actual sources of project failure and adjacent opportunity because they were never on your radar to begin with.


We need a different framework—one that helps us see uncertainty clearly so we can manage it appropriately.


The Framework: The Four Quadrants


The framework we use at BearingNode maps two dimensions of knowledge: what we know versus what we don't know, and our awareness of that knowledge state. This creates four distinct quadrants, each requiring fundamentally different management approaches:


As the framework shows, these aren't just categories—they're states that uncertainty moves between as projects progress. Let's explore each quadrant in detail.




Known Knowns: The Domain of Confidence


These are the things you know with genuine confidence. For a data platform project, this might include: infrastructure costs, team composition, regulatory requirements you must meet, existing system constraints you must work within.


Known Knowns are comfortable. They belong in your project plan as fixed constraints or confident assumptions. They form the foundation on which you build. The management approach is straightforward: plan, execute, verify.


The danger isn't the Known Knowns themselves—it's miscategorising something as a Known Known when it's actually in another quadrant. When a project leader says "we know the business requirements," they might mean "we have a requirements document" (which is a Known Known), but that's different from "we understand what will create business value" (which is often an Unknown Known—an assumption masquerading as fact).




Known Unknowns: The Risk Register


This is the familiar territory of traditional project risk management. You know you don't know exactly how long data integration will take, so you put it in the risk register with a range and a mitigation plan. You know data quality will be an issue; you just don't know how bad, so you allocate time for data cleansing. You know you might lose a key team member, so you plan for knowledge transfer.


Known Unknowns are manageable because they're visible. You can assign owners, create mitigation plans, monitor progress. Most project management methodologies are optimised for this quadrant. Stage-gate processes, contingency buffers, risk review meetings—all designed to manage Known Unknowns.


The limitation: most organisations treat the Known Unknowns quadrant as if it encompasses all uncertainty. If it's not in the risk register, it's not on the radar. This blindness to the other quadrants is where projects actually fail.




Unknown Knowns: The Danger Zone


This is the most treacherous quadrant, and the one most ignored by traditional project management. Unknown Knowns are the things you act on as if they're true but haven't validated. They're assumptions that have hardened into implicit facts. They're organisational truths that nobody questions.


In Data, Analytics and AI projects, Unknown Knowns show up everywhere:


  • Semantic assumptions: "Customer" means the same thing across all our systems. (It doesn't.)

  • Organisational assumptions: The business users will adopt this tool once we build it. (They won't, not without significant change management.)

  • Political realities: We have permission to access this data. (You have formal permission, but the data custodian will slow-roll every request because of ancient political grudges.)

  • Value assumptions: Improving prediction accuracy will drive business value. (Maybe, or maybe the business process can't actually act on more granular predictions.)


Unknown Knowns are dangerous precisely because they're invisible. They don't make it into risk registers because nobody realises they're assumptions rather than facts. They're organisational culture, political history, semantic confusion—things everyone assumes everyone else understands the same way.


From a project management perspective, Unknown Knowns are where errors live before they become Known Knowns (i.e. project issues). That data quality issue? It was always there; you just assumed the data meant what you thought it meant. That stakeholder resistance? It was always latent; you just assumed alignment. The project didn't encounter new risks—it surfaced assumptions that were wrong from day one.


The management challenge: how do you surface assumptions you don't know you're making? How do you question what everyone treats as obvious?




Unknown Unknowns: True Black Swans


These are the genuine surprises—things nobody could have reasonably foreseen.


Unknown Unknowns can't be planned for, by definition. But they can be prepared for through organizational resilience, modular architecture, and adaptive governance. The management approach isn't prevention—it's rapid response and adaptation.


The distinction matters: most things that get labelled "Unknown Unknowns" after a project failure were actually Unknown Knowns that nobody surfaced. When a post-mortem concludes "nobody could have seen this coming," the honest question is: was it truly unforeseeable, or was it an assumption we didn't realise we were making?


Movement Between Quadrants: The Real Work


The power of this framework isn't just categorization—it's understanding how things move between quadrants, because that movement is where project management actually happens.


Discovery is the process of moving things from Unknown Unknowns or Unknown Knowns into Known Unknowns or Known Knowns. When you run a proof of concept, you're deliberately surfacing unknowns: will this approach work? Is there signal in this data? Can we achieve acceptable performance?


Learning happens when you move something from Unknown Knowns to Known Unknowns—when you surface an assumption and realize it needs validation. Consider the 'shadow AI economy' documented in recent research: while only 40% of companies purchased official LLM subscriptions, over 90% of employees were using personal AI tools for work.


The companies treating 'official tool adoption' as a Known Known ('if we buy it, they'll use it') missed the Unknown Known that employees already had better tools. The learning happened when organisations discovered this gap—moving from invisible assumption to visible reality that required response. This often feels like a project setback ("we just discovered the business users don't actually understand what they're asking for"), but it's actually progress. You've moved from invisible risk to visible, manageable risk.


Error is the painful movement from Unknown Knowns to Known Knowns—when you discover your assumption was wrong, often after you've built something based on it. This is expensive, which is why surfacing Unknown Knowns early is so valuable.


The framework gives you a diagnostic tool: for any significant project assumption or plan element, which quadrant is this really in? And what work do we need to do to move it to a more manageable quadrant?
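The diagnostic can be sketched as a tiny decision rule. The two booleans and the enum labels below are our own simplification of the framework's dimensions (knowledge versus awareness of the knowledge state), not a formal BearingNode method:

```python
from enum import Enum

class Quadrant(Enum):
    """Each quadrant maps to a different management approach."""
    KNOWN_KNOWN = "plan, execute, verify"
    KNOWN_UNKNOWN = "mitigate via the risk register"
    UNKNOWN_KNOWN = "surface the assumption and validate it"
    UNKNOWN_UNKNOWN = "build resilience; respond and adapt"

def classify(treated_as_fact: bool, explicitly_examined: bool) -> Quadrant:
    """Place a plan element on the framework's two dimensions:
    treated_as_fact     -- does the team act as though this is known?
    explicitly_examined -- is it visibly tracked in the plan or risk register?
    """
    if explicitly_examined:
        return Quadrant.KNOWN_KNOWN if treated_as_fact else Quadrant.KNOWN_UNKNOWN
    # Not examined: either an assumption masquerading as fact, or a blind spot.
    return Quadrant.UNKNOWN_KNOWN if treated_as_fact else Quadrant.UNKNOWN_UNKNOWN

# The semantic assumption from earlier: acted on as fact, never examined.
assumption = "'Customer' means the same thing across all our systems"
q = classify(treated_as_fact=True, explicitly_examined=False)
print(f"{assumption} -> {q.name}: {q.value}")
```

Walking a plan through a rule like this makes the movement between quadrants explicit: examining an unvalidated assumption flips `explicitly_examined` to true, moving it from an Unknown Known into the visible, manageable half of the map.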


Why This Framework Matters


This isn't academic taxonomy. The way you frame uncertainty shapes how you manage projects, allocate resources, measure success, and—critically—how you communicate with stakeholders.


For Proof of Concept (PoC) and Proof of Value (PoV) work, the entire purpose is discovery—yet recent research shows 95% of GenAI pilots fail to reach production, not due to technical risks, but due to unvalidated assumptions about adoption, workflow fit, and business value. A PoC doesn't fail because it doesn't produce the originally envisioned outcome; it fails if it doesn't reduce uncertainty enough to make an informed decision. Yet organisations constantly judge PoCs by delivery against initial plans rather than by learning velocity.


For GenAI initiatives, which nearly every organisation is now exploring, the uncertainty is massive and multi-layered. Will the technology work for our use case? Will it produce acceptable outputs? Will users trust it? Will it create value? Can we govern it? These aren't risk register items—they're genuine uncertainties that require experimentation, rapid iteration, and honest acknowledgment that we're navigating rather than executing.


For data platform initiatives, the technical risks might be manageable, but the Unknown Knowns are everywhere: assumptions about how data will be used, what "good enough" quality means, whether business users will actually adopt self-service capabilities, whether the operating model can sustain what you're building. These projects don't fail technically—they fail organisationally, usually because of assumptions that were never surfaced.


For Data Governance and Data Observability, the irony is particularly acute. These efforts are explicitly about making data risks visible and manageable—reducing uncertainty about data quality, lineage, and usage. Yet they're often managed with enormous Unknown Knowns of their own. The business case promises "releasing engineering time" through better data discoverability and "reducing risk" through better governance, but these ROI assumptions are rarely validated.


Why four quadrants instead of a simple risk register? Because the management approach for each quadrant is fundamentally different. Known Knowns need execution. Known Unknowns need mitigation. Unknown Knowns need *surfacing*—you can't mitigate an assumption you don't know you're making. And Unknown Unknowns need resilience, not planning.


The framework changes how we think about project success and failure. A project that delivers exactly what was originally planned might have failed to learn anything important, while a project that pivots dramatically might be succeeding brilliantly at uncertainty reduction. Traditional project management treats deviation from plan as failure. Uncertainty management treats learning as success.


Most importantly, this framework gives you a vocabulary for honest conversations with stakeholders. Instead of presenting a confident plan for an inherently uncertain initiative, you can say: "These elements are Known Knowns we can commit to. These are Known Unknowns we're actively managing. These are assumptions we need to validate—our Unknown Knowns. And we're building in the resilience to handle genuine surprises." That's not lack of confidence—it's appropriate confidence calibrated to reality.


The Rumsfeld Moment: Why All the Quadrants Matter


The phrase 'known knowns, known unknowns, unknown unknowns' entered popular consciousness through Donald Rumsfeld's 2002 press conference about Iraq War intelligence. But the most revealing moment came years later in Errol Morris's documentary *The Unknown Known*, when Morris pressed Rumsfeld to acknowledge the missing quadrant—to reread the memo and confront the Unknown Knowns. Rumsfeld's visible discomfort wasn't accidental.


The Iraq War's foundational failures weren't Unknown Unknowns—they were Unknown Knowns. Assumptions about WMDs, post-invasion stability, political alignment—all held with such confidence they were never properly validated.



This is the Anatomy of Uncertainty: it doesn't just live in the quadrant we fear most (the Unknown Unknowns). It hides in all four quadrants, but most dangerously in the Unknown Knowns—the assumptions we hold so confidently we never think to question.


Your GenAI pilot doesn't fail because of a black swan event. It fails because nobody validated the assumption that business users would trust AI-generated outputs, or that the workflow could actually accommodate the technology, or that 'customer value' meant the same thing to every stakeholder.


Understanding uncertainty means acknowledging it exists everywhere—in what we know, what we know we don't know, and critically, in what we think we know but haven't validated. The four-quadrant framework makes this visible. And once visible, it becomes navigable.


Starting to Think Differently


The anatomy of uncertainty reveals something uncomfortable: much of what we've treated as project management is actually risk management, and we've been applying risk management tools to uncertainty problems. The result is false confidence, surprise failures, and waste.


The four-quadrant framework doesn't eliminate uncertainty—nothing can. But it transforms invisible risks into navigable challenges. It helps you distinguish what you can plan from what you must discover, surfaces the assumptions that sink projects, and creates the vocabulary for honest stakeholder conversations.


The framework alone won't solve your project problems, but it enables better problem-solving by revealing which type of uncertainty you're facing and which management approaches fit. The actual work—surfacing Unknown Knowns, validating assumptions, navigating genuine uncertainty—requires specific practices and capabilities we'll explore in the next posts.


Because once you can see uncertainty clearly, you can navigate it deliberately. And in an era of rapid AI advancement and accelerating change, that capability increasingly separates the organisations that thrive from those that merely survive.


At BearingNode, we are risk and uncertainty aware, not averse. Understanding the anatomy of uncertainty is at the heart of how we work with clients. We've been complimented many times on how our approach to proposals differs: we help clients understand scoping and explicitly articulate the assumptions we're making before we commence the engagement. This isn't caution—it's clarity that enables confident navigation of genuine uncertainty.


In the next post, we'll move from diagnosis to treatment: how do you actually manage projects when you've correctly identified what kind of uncertainty you're dealing with? How do you map these quadrants to governance approaches? What does it mean to manage for discovery rather than just delivery?



References


Challapally, A., Pease, C., Raskar, R., & Chari, P. *The GenAI Divide: State of AI in Business 2025*. MIT NANDA Project, 2025.


Ferguson, Niall. *Doom: The Politics of Catastrophe*. Penguin Press, 2021.


Gigerenzer, Gerd. *Risk Savvy: How to Make Good Decisions*. Viking, 2014.


Knight, Frank H. *Risk, Uncertainty and Profit*. Houghton Mifflin Company, 1921.


Morris, Errol, dir. *The Unknown Known*. Radius-TWC, 2013.


Taleb, Nassim Nicholas. *The Black Swan: The Impact of the Highly Improbable*. Random House, 2007.



About the Author


Daniel (Dan) Rolles is the CEO and Founder of BearingNode, where he leads the firm's mission to help organisations unlock the commercial value of their data whilst enhancing their risk management capabilities.


As CEO, Daniel drives BearingNode's strategic vision and thought leadership in data transformation, analytics strategy, and the evolving regulatory landscape. He regularly shares insights through industry publications and speaking engagements, focusing on practical approaches to data governance, AI implementation, and performance transformation in regulated environments. He is one of the key authors of BearingNode's Data and Information Observability Framework.


With over 30 years of experience in Data, Analytics and AI, Daniel has successfully built and led D&A teams across multiple industries including Financial Services (investment, commercial and retail banking, investment management and insurance), Healthcare, and Real Estate. His expertise spans consulting, commercial leadership, and delivery management, with a particular focus on data governance and regulatory compliance.


Daniel holds a Bachelor of Economics (University of Sydney), Masters of Science (Birkbeck College, University of London), and Executive MBA (London Business School).


Based in London, Daniel is passionate about financial inclusion and social impact. He serves as a Trustee for Crosslight Advice, a debt advisory and financial literacy charity based in West London that provides vital support to individuals facing financial vulnerability.


