
Anatomy of Uncertainty Part 3: Why AI Projects Need Different Governance

  • Writer: Daniel Rolles
  • Mar 11
  • 13 min read

Key Points


  1. Traditional project governance isn't broken—it's designed for a different world

  2. Registering uncertainty creates the illusion of managing it

  3. AI projects migrate between uncertainty quadrants during execution, outpacing governance frameworks built for stable problem spaces

  4. Organisations need portfolio thinking (like VCs), not project thinking (like PMOs)


The Project in Crisis

The project review meeting had gone badly. The AI initiative was weeks into delivery, and it was clear something was fundamentally wrong. Not catastrophically wrong—no spectacular failure, no obvious crisis—just that nagging sense that the project was drifting, that assumptions weren't holding, that the confidence everyone had at kickoff was quietly evaporating.


A senior member of the client team stood up and grabbed the whiteboard marker. She was experienced, confident, from one of the major consulting firms. "Let's get back to basics," she announced. "Project Management 101."


She began drawing the familiar shapes of control: scope triangle, timeline, resources, dependencies, stage gates. Each line on the whiteboard was a promise of predictability. Each box, a container for uncertainty. "We just need to apply proper methodology," she explained. "Clear requirements. Defined milestones. Accountability at each stage."


Around the table, heads nodded. The whiteboard looked like control. It looked like getting back on track.


But drawing boxes around uncertainty doesn't make it go away—it just makes it invisible.


The back-to-basics plan was never going to work. Not because the methodology was bad (it wasn't), and not because the experienced stakeholder was incompetent (she wasn't), but because they were trying to apply Mediocristan governance to an Extremistan problem. They were unconsciously incompetent: they didn't know what they didn't know (Quadrant 4), they thought they knew things that weren't true (Quadrant 3), and they were trying to manage it all with tools designed for Quadrant 1.


This wasn't a junior PM making rookie mistakes. This was an experienced professional at a major organisation, doing exactly what she'd been trained to do. And that's precisely the problem.


The Theatre of Control


In Post 1, we established that uncertainty exists in four quadrants, each requiring different approaches. In Post 2, we showed how those quadrants contain both opportunities and risks—eight distinct scenarios, only four of which traditional PMO governance can handle.


But there's a subtler problem than simply having the wrong tools. It's the belief that putting something in a register means you're managing it.


Think about what happens in practice. A risk gets identified. It goes into the risk register. An owner is assigned. A mitigation plan is written. The status is reviewed monthly. Everyone feels better. But has the uncertainty actually been reduced? Or have we just created a bureaucratic container for it?


What we're witnessing is a theatre of control—a collective performance that creates the convincing appearance of managing uncertainty without actually reducing it.¹ The CDO presents the register. The board scrutinises. Actions are assigned. Statuses are updated. RAG colours shift from red to amber. Everyone has a role. The performance is convincing. But the uncertainty hasn't changed—it's just been dressed in the language of control: probability ratings, impact scores, mitigation actions.


This is how "if you can't measure it, you can't manage it" has quietly morphed into something more dangerous: "if you register it, you ARE managing it." The act of documentation has become a substitute for the act of understanding. The ritual of registration creates the subjective feeling of control—just as choosing your own lottery ticket creates the feeling of influencing the draw.


Our stakeholder at the whiteboard was doing exactly this—trying to register and control uncertainty through stage gates, milestones, and dependencies. Tools that assume the world is predictable enough for planning and control to work. Tools designed for what Taleb calls Mediocristan.


But as we established in Post 1, most data and AI projects don't live in Mediocristan. They live in Extremistan—where fat-tailed distributions mean averages are meaningless, planning is provisional, and one outcome in twenty determines all the value. You cannot register your way through Extremistan.
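
To make the two worlds concrete, here is a minimal simulation sketch. Everything in it is an illustrative assumption (the distributions, the parameters, the scale); the point is the shape of the outcomes, not the specific numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # simulated outcomes in each world

# Mediocristan: thin-tailed, outcomes cluster around the average.
mediocristan = rng.normal(loc=100, scale=15, size=n)

# Extremistan: fat-tailed Pareto outcomes (tail index below 2, so the
# theoretical variance is infinite and sample averages are unstable).
extremistan = (rng.pareto(a=1.5, size=n) + 1) * 10

for name, outcomes in (("Mediocristan", mediocristan),
                       ("Extremistan ", extremistan)):
    share_of_total = outcomes.max() / outcomes.sum()
    print(f"{name}: mean = {outcomes.mean():7.1f}, "
          f"largest single outcome = {share_of_total:.2%} of the total")
```

Re-run it with different seeds. The Mediocristan numbers barely move, while in Extremistan a single outcome can swing from a rounding error to most of the total. That instability is precisely what a register's single probability-times-impact score cannot express.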


¹ The psychological basis for this is well-established. Ellen Langer's research on the "illusion of control" (1975) demonstrated that people systematically overestimate their influence over outcomes when given familiar cues of agency—choice, involvement, ritual—even when those outcomes are entirely determined by chance. Risk registers and stage-gate reviews provide exactly these cues at organisational scale.


The Governance Map: Two Worlds, One Framework



[Figure: The Governance Map. Left: the four quadrants of uncertainty. Right: project controls, split into Budgeted/Planned and Unbudgeted/Unplanned.]

The visual above reveals the structural problem. On the left: the four quadrants of uncertainty from Post 1. On the right: the project controls that organisations actually use.


The upper box—Budgeted/Planned—represents Mediocristan governance: Proof of Concept, Stage-Gate processes, Contingency. These assume uncertainty can be managed through planning and progressive de-risking. For Quadrants 1 and 2, this assumption holds reasonably well.


The lower box—Unbudgeted/Unplanned—represents what happens when uncertainty from Quadrants 3 and 4 materialises. There's no budget allocated, no plan in place, because the risks were never identified. Unknown Known assumptions weren't in the register because nobody knew they were assumptions. Unknown Unknown events weren't in the contingency because nobody could have enumerated them.


Notice what happens at the boundaries. When a Known Unknown (Q2) materialises within expected parameters, it draws from contingency—exactly as designed. Mediocristan governance handles this well. But when an Unknown Known (Q3) is invalidated—when an assumption you didn't know you were making turns out to be wrong—it lands directly in Unbudgeted/Unplanned territory. No contingency was allocated for a risk that was never identified. The project is suddenly in crisis, and the governance framework has no response except to escalate.


This is the mismatch: most AI project uncertainty lives in Q3 and Q4, but most governance is designed for Q1 and Q2.


Where Each Quadrant Meets Governance


Q1 (Known Knowns) is where traditional project management genuinely applies. You know what you're building, you know how to build it, execution excellence matters. Stage-gate governance makes sense here.


But this quadrant is rarer than organisations believe for AI and data projects. Much of what looks like Q1 is actually Q3—false confidence masquerading as genuine knowledge. "We know our data quality is good enough" might be a Known Known if you've recently validated it. More often, it's an Unknown Known—an assumption that hardened into fact without anyone noticing.


Q2 (Known Unknowns) is where PoC and contingency approaches make sense—provided the unknowns behave like Mediocristan. "Will this database handle the load?" is a Q2 question you can test and resolve. Run the benchmark, get the answer, proceed.


But "Will this AI create business value?" is also a Q2 question—and it lives in Extremistan. This is where the critical distinction between a Proof of Concept and a Proof of Value becomes visible. A PoC asks: "Can we make this work technically?" That's a Mediocristan question with a testable answer. A Proof of Value asks something far harder: "Will this create genuine business value in context—given our people, our processes, our data, our culture?"


You can't PoC your way to a PoV answer. The PoC might validate technical feasibility while completely missing that adoption will be binary (complete success or complete failure), that the workflow can't accommodate the technology, that the market will shift before production, or that you're building the right thing for the wrong process. Most organisations run PoCs when they need PoVs—and then wonder why technically successful pilots fail to reach production. We'll explore what a proper Proof of Value actually looks like in the next post.


Q3 (Unknown Knowns) is where false confidence meets Extremistan reality, and where our stakeholder at the whiteboard lived. The pattern is always the same: an assumption held so confidently it was never questioned, operating in a domain where being wrong doesn't mean slightly wrong—it means catastrophically wrong.


"We know users will adopt this" (Mediocristan assumption). Reality: adoption is binary—no meaningful average between success and failure (Extremistan). "We've done this before, we know the patterns" (Mediocristan assumption). Reality: AI technology shifted three times since you last did this (Extremistan).


Mediocristan governance actively prevents you from discovering you're in Extremistan. Stage gates ask "are we on track?" not "are our fundamental assumptions about how value is created still valid?" Risk registers track variance from plan, not whether the plan makes sense.


Q4 (Unknown Unknowns) is pure Extremistan, and traditional governance becomes theatre. Technology shifts mid-project. Novel failure modes emerge only at scale. Competitive disruption arrives from directions you never imagined. You cannot stage-gate your way through a landscape that's constantly reshaping itself.


The Rate of Change Multiplier


Here's what makes this mismatch increasingly urgent: AI technology evolution is accelerating the speed at which projects migrate between quadrants.


In stable technology environments, your 18-month project plan holds. The landscape in month 18 resembles month 1 closely enough that your original assumptions remain valid. This is Mediocristan—and traditional governance was built for it.


In the current AI landscape, 18 months contains three to four major capability shifts: new model architectures that invalidate your approach, order-of-magnitude cost reductions that change the economics, new modalities that enable what was impossible, competitive moves that redefine what "good" looks like.


Each shift can push your project from one quadrant to another. What was a Known Unknown in January ("Will this LLM approach work?") becomes an Unknown Unknown by June when a new model architecture emerges that changes the entire solution space. What was a Known Known ("We'll use RAG for our knowledge base") becomes an Unknown Known when new capabilities make your architectural assumptions obsolete—but nobody on the project has noticed yet because they're heads-down executing the plan.


Your governance framework says "execute the plan." But the plan was written for a world that no longer exists.


This is why the mismatch is accelerating. It's not just that organisations are applying Mediocristan governance to Extremistan problems—it's that the rate of change is compressing the time between "this plan makes sense" and "this plan is obsolete" from years to months. Governance frameworks that were merely suboptimal in slower-moving domains become actively dangerous in AI.


Three Patterns of Governance Failure


The mismatch between Mediocristan governance and Extremistan reality shows up in three recurring patterns. If you've worked on AI initiatives, you'll recognise at least one.


Pattern 1: The Confident Disaster


This is what happened in our opening scene. The team believed they were in a Q1/Q2 situation—known problem, manageable unknowns, stage-gate governance appropriate. They were actually in Q3—buried assumptions about data quality, organisational readiness, and technology choices that nobody knew to question.


The governance framework they applied didn't just fail to help—it prevented discovery. Stage gates asked "are we hitting milestones?" when they should have been asking "are we still building the right thing?" The team executed rigorously against a plan built on false foundations. By the time the assumptions surfaced as project issues, significant time and budget had been consumed.


The senior stakeholder drawing PM 101 on the whiteboard wasn't incompetent. She was unconsciously incompetent about which world she was in. She didn't know to ask: "Is this Mediocristan or Extremistan?" Her framework didn't have that question.


Pattern 2: The False PoC


Another common pattern: "We'll run a PoC to de-risk this." The implicit assumption is that you can learn your way to predictability—test, validate, then scale with confidence.


This works in Mediocristan. Database performance? Test it, know the answer, proceed. Integration complexity? Build a prototype, validate the approach, plan confidently.


It fails in Extremistan. The PoC succeeds technically, but production fails catastrophically because edge cases have fat tails you didn't test. Or the PoC fails, but a different approach would have succeeded spectacularly—except you killed it based on average-case outcomes. Or the PoC validates the wrong things entirely because you didn't know what actually mattered.


The trap is subtle: PoCs assume the uncertainty is learnable—that with enough testing, you can convert Extremistan to Mediocristan. But when one approach in twenty will be the breakthrough and you can't predict which one, a single PoC isn't de-risking. It's a single draw from a power law distribution dressed up as scientific validation.
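
The arithmetic behind that trap is worth making explicit. A minimal sketch, assuming (purely for illustration) a 1-in-20 breakthrough rate and independent attempts:

```python
# Stylised model: each candidate approach independently has a 1-in-20
# chance of being the breakthrough (an assumption for illustration).
p = 1 / 20

for n_probes in (1, 5, 10, 20):
    p_found = 1 - (1 - p) ** n_probes
    print(f"{n_probes:>2} probe(s): {p_found:.0%} chance of surfacing the breakthrough")
```

One probe finds the breakthrough 5% of the time; twenty cheap probes raise that to roughly two in three. A single PoC leaves a 95% chance of a negative result even when the breakthrough genuinely exists in your option set. That is why it is a draw, not a validation.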


Pattern 3: Portfolio Blindness


The most insidious pattern: treating each AI project as a standalone investment decision.


Traditional governance requires each project to justify its own ROI, hit its own milestones, and demonstrate success independently. This makes perfect sense in Mediocristan, where outcomes cluster around averages and each project's success is largely independent.


But this "standalone investment" fallacy isn't new. We've known since the 1990s that technology investments don't create value in isolation. Brynjolfsson, Hitt, and Yang's landmark research on ERP implementations² showed that for every dollar spent on IT hardware and software, organisations needed to invest roughly ten times as much in complementary intangible assets—process redesign, training, organisational restructuring, change management. Less than 20% of a typical SAP implementation cost was the technology itself. The rest was the organisational co-investment required to make the technology productive.


The lesson from the ERP era was clear: technology value comes from the system of complementary investments, not the technology alone. And yet, a quarter of a century later, organisations are repeating exactly the same mistake with AI—evaluating each AI project as a standalone technology investment, divorced from the organisational transformation required to realise its value.
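
Applied to a budget line, the ratio is sobering. A minimal sketch of the arithmetic, using a hypothetical $1M technology spend and the approximate 10-to-1 multiplier from the research cited above:

```python
# Illustrative business-case arithmetic. The 10:1 multiplier is the
# approximate ratio reported by Brynjolfsson, Hitt & Yang (2002);
# the $1M technology figure is hypothetical.
technology_spend = 1_000_000        # licences, infrastructure, build
complementary_ratio = 10            # process redesign, training, change mgmt
organisational_investment = technology_spend * complementary_ratio
total_investment = technology_spend + organisational_investment

print(f"Technology spend:          ${technology_spend:>12,}")
print(f"Complementary investment:  ${organisational_investment:>12,}")
print(f"Technology share of total: {technology_spend / total_investment:.0%}")
```

If your AI business case prices only the first line, it is understating the real investment by an order of magnitude, exactly the pattern the ERP research documented.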


In Extremistan, this compounds into something worse. You need twenty projects to find the one breakthrough. Nineteen will fail or underperform—this is the system working correctly. But individual project governance will kill the winners too early (because they look like failures initially) or continue the losers too long (because of sunk cost fallacy and commitment to the plan).


This is why venture capitalists understand AI project dynamics better than most PMOs. VCs don't try to predict which startup will succeed—they know they can't. Instead, they construct portfolios expecting massive failure rates, focus governance on rapid learning and fast kills, and accept that power law dynamics mean most investments will return nothing while one pays for everything. Their governance isn't about detailed planning, variance tracking, or stage gates. It's about portfolio construction, maintaining optionality, and creating conditions where breakthrough can emerge while quickly killing what won't work.
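
A stylised Monte Carlo makes the contrast concrete. All the payoff numbers here are assumptions for illustration (a 1-in-20 breakthrough rate, a 30x payoff, sub-0.5x returns for the rest); the point is the shape of the distribution, not the specific values:

```python
import numpy as np

rng = np.random.default_rng(7)
trials = 10_000  # simulated portfolios per strategy

# Stylised payoff model: each funded project independently has a 1-in-20
# chance of returning 30x its budget; the rest return between 0 and 0.5x.
def portfolio_return(n_funded):
    hit = rng.random((trials, n_funded)) < 0.05
    payoff = np.where(hit, 30.0, rng.uniform(0.0, 0.5, (trials, n_funded)))
    return payoff.sum(axis=1) / n_funded  # multiple on capital deployed

concentrated = portfolio_return(3)   # "project thinking": back 3 strong cases
spread = portfolio_return(20)        # "portfolio thinking": seed all 20 bets

for name, r in (("3 projects ", concentrated), ("20 projects", spread)):
    print(f"{name}: mean return {r.mean():.2f}x, "
          f"P(portfolio loses money) = {(r < 1.0).mean():.0%}")
```

Both strategies have the same expected return. What changes is the probability of holding the one bet that pays for everything: concentrating the budget in three "strong business cases" loses money in most simulated runs, while seeding all twenty loses in roughly a third.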


Most PMOs optimise for average success rate across projects. In Extremistan, that's the wrong optimisation function entirely. You're not trying to make every project succeed—you're trying to find the one that transforms the portfolio while failing fast on the rest.


² Brynjolfsson, E., Hitt, L. M., & Yang, S. (2002). "Intangible Assets: Computers and Organizational Capital." *Brookings Papers on Economic Activity*. This established the roughly 10-to-1 ratio of organisational co-investment required to realise returns from enterprise technology.


The Uncomfortable Truth


Look at your PMO methodology. Look at your business case templates. Look at your stage-gate processes.


They all assume Mediocristan: defined requirements that stay stable, quantified benefits based on normal distributions, planned timelines that assume stable technology, risk registers that treat uncertainty as variance from plan.


AI projects are Extremistan: requirements emerge through discovery, benefits follow power law distributions, the technology landscape shifts mid-project, and uncertainty isn't variance—it's fundamental unknowability.


This isn't about "better" project management. You can't fix this with more rigorous planning. Can't solve it with tighter controls. Can't stage-gate your way through fat-tailed distributions. The map isn't wrong—it's just for a different city. PM 101 is excellent for building bridges. It's the wrong framework for venture bets.


And at the current rate of AI change, organisations that can't make this distinction won't survive. Not because they execute poorly—but because they're executing beautifully on the wrong framework entirely.


Conclusion: Know Your World


In Post 1, we established that uncertainty isn't one thing—it exists in four distinct quadrants. In Post 2, we mapped those quadrants across both risks and opportunities. Now we've added a critical second dimension: which world are you in?


In Mediocristan, traditional governance works. Planning makes sense. Execution excellence matters. Average outcomes are meaningful.


In Extremistan, traditional governance is theatre. Planning is provisional. Discovery matters more than execution. Power law dynamics mean one outcome dominates all others.


Most AI projects sit at the intersection of high uncertainty (Q2/Q3/Q4) and Extremistan dynamics (fat-tailed outcomes)—yet they're being governed as if they were Q1 Mediocristan problems. Plan and control. Stage gates and milestones. Risk registers and RAG statuses.


This is why 95% of AI pilots fail, the figure reported in MIT's 2025 GenAI Divide study. Not because of poor execution. Not because of inadequate planning. But because the fundamental governance framework assumes a world that doesn't exist for these projects.


The most dangerous combination is Unknown Knowns (Q3) in Extremistan. Not because you're in the dark—that's Q4, and at least Q4 comes with humility. Q3 is worse: you're confident you know things that aren't true, and you're operating in a domain where being wrong doesn't mean slightly wrong—it means catastrophically wrong.


And it's not just one buried assumption. It's a web of them, held by different people across the organisation, each believing their version is the obvious, shared reality. The data team assumes "good enough quality" means one thing. The business sponsor assumes it means something else. The delivery lead assumes the requirements are settled. The business users assume everyone knows they'll evolve. Nobody articulates any of this because each person thinks it's obvious.


The result is organisational friction that masquerades as something else entirely—interpersonal conflict, political resistance, "lack of alignment," "communication issues." But the real cause is epistemological: people aren't disagreeing about what to do. They're operating in different realities and don't know it. Your governance framework reinforces rather than challenges this false confidence—stage gates ask "are we on track?" not "do we all mean the same thing when we say 'on track'?"


When this web of unstated, incompatible assumptions finally tears—and in Extremistan, it doesn't fray gradually, it snaps—the post-mortem will say "stakeholder misalignment" when the real diagnosis is Unknown Knowns colliding. You planned confidently. Executed rigorously. Failed spectacularly. And blamed the wrong cause entirely.


What Comes Next: From Diagnosis to Treatment


We've diagnosed the fundamental problem: organisations apply Mediocristan governance to Extremistan problems, creating expensive theatre while real uncertainty goes unmanaged.


But diagnosis without treatment is just complaining. Understanding the mismatch doesn't help unless you know what to do about it. The questions you're probably asking:


"OK, so stage gates don't work the way we're using them. So how *should* we use them?"


"If PoCs aren't enough, what does a proper Proof of Value actually look like?"


"You've told me Unknown Knowns are the most dangerous quadrant. How do I actually surface assumptions I don't know I'm making?"


"How do I have this conversation with my steering committee without sounding like I don't know what I'm doing?"


These are the right questions. And they require practical, operational answers—not more theory.


In the next post, we'll move from framework to practice: repurposing stage gates as kill gates, designing PoVs that test real value (not just technology), surfacing Unknown Knowns systematically, and having the conversation with stakeholders that maintains confidence while acknowledging uncertainty.


Because right now, you understand why traditional governance fails. But you still need to deliver projects, satisfy stakeholders, and navigate organisational politics. The bridge from diagnosis to treatment is where we're going next.




References


Brynjolfsson, E., Hitt, L. M. & Yang, S. (2002). Intangible Assets: Computers and Organizational Capital. *Brookings Papers on Economic Activity*, 2002(1), 137–198.


Brynjolfsson, E. & Hitt, L. M. (2000). Beyond Computation: Information Technology, Organizational Transformation and Business Performance. *Journal of Economic Perspectives*, 14(4), 23–48.


Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025). The GenAI Divide: State of AI in Business 2025. MIT NANDA Project.


Ferguson, N. (2021). *Doom: The Politics of Catastrophe*. Penguin Press.


Langer, E. J. (1975). The Illusion of Control. *Journal of Personality and Social Psychology*, 32(2), 311–328.


Taleb, N. N. (2007). *The Black Swan: The Impact of the Highly Improbable*. Random House.


About the Author


Daniel (Dan) Rolles is the CEO and Founder of BearingNode, where he leads the firm's mission to help organisations unlock the commercial value of their data whilst enhancing their risk management capabilities.


As CEO, Daniel drives BearingNode's strategic vision and thought leadership in data transformation, analytics strategy, and the evolving regulatory landscape. He regularly shares insights through industry publications and speaking engagements, focusing on practical approaches to data governance, AI implementation, and performance transformation in regulated environments. He is one of the key authors of BearingNode's Data and Information Observability Framework.


With over 30 years of experience in Data, Analytics and AI, Daniel has successfully built and led D&A teams across multiple industries including Financial Services (investment, commercial and retail banking, investment management and insurance), Healthcare, and Real Estate. His expertise spans consulting, commercial leadership, and delivery management, with a particular focus on data governance and regulatory compliance.


Daniel holds a Bachelor of Economics (University of Sydney), a Master of Science (Birkbeck College, University of London), and an Executive MBA (London Business School).


Based in London, Daniel is passionate about financial inclusion and social impact. He serves as a Trustee for Crosslight Advice, a debt advisory and financial literacy charity based in West London that provides vital support to individuals facing financial vulnerability.


Connect with Daniel on [LinkedIn] or learn more about BearingNode's approach to data and analytics transformation at [BearingNode].


About BearingNode: We help organisations navigate uncertainty in data and AI initiatives. Our approach combines deep technical expertise with governance frameworks designed for Extremistan, not Mediocristan.
