
PART 1: Whole-Body Thinking - Good Decisions Have Never Been Purely Data-Driven

Barry Green · Mar 26 · 8 min read

By Barry Green, Advisory Board Member, BearingNode & Author of 'Data Means Business'


About This Series, *The Whole-Body Revolution*: Human intelligence reimagined for the AI era


We are operating in an era defined by an apparent paradox: data has never been more abundant, and AI has never been more capable. So shouldn't decision-making be easier?


However, decisions are being made in an increasingly complex ecosystem — political, geopolitical, economic, regulatory — at increasing pace. That is why the challenge is not whether AI can think for us. It is whether we are becoming too willing to let it.


This two-part series addresses that paradox directly. It argues that the answer is not more technology, more dashboards, or faster models. It is better thinking and the kind of leadership that makes better thinking organisationally possible.


Part 1, *Whole-Body Thinking: Good Decisions Have Never Been Purely Data-Driven*, explores the nature of sound decision making itself. Drawing on the concept of whole-body thinking, it makes the case that good decisions have never been purely rational or purely data-driven. They depend on the integration of logic, experience, intuition, ethical judgement, and human context — and on the organisational conditions that allow diverse perspectives to be heard, tested, and used.


Part 2, *The Whole-Body Leader*, builds on that foundation. Individual judgement, however well developed, is not enough on its own. It must be carried forward by leaders with the clarity, conviction, and courage to translate it into collective action — and to build cultures where good decision making becomes habit rather than exception.


Together, the two parts offer a practical framework for leaders navigating the real challenge of the AI era: not how to adopt the technology, but how to think and lead well enough to use it wisely.


Whole-Body Thinking: Good Decisions Have Never Been Purely Data-Driven


We are living through a period in which data is abundant, AI is increasingly accessible, and decision-making is under more pressure than ever. In that environment, it is tempting to believe that better decisions will naturally follow from more dashboards, more models, and faster answers.


They do not.


Good decision making has never been purely rational, purely data driven, or purely technological. It depends on how well we interpret information, how clearly we understand its limitations, and how consciously we balance evidence, experience, ethics, and context. That is why whole-body thinking matters.


Whole-body thinking offers a more complete lens for understanding how people make decisions. It looks beyond the brain alone and recognises that judgement is shaped not only by logic, but also by experience, intuition, emotion, ethics, and lived context. In simple terms: the head helps us reason, the gut helps us recognise patterns from experience, and the heart helps us weigh values, consequences, and human impact.


This matters because the current conversation around AI often overstates what machines can do — and understates what responsible decision making actually requires.


AI can support thinking — but it cannot replace judgement


The warning bears repeating: we need to stop outsourcing our thinking to AI. This is a point made in the second edition of *Data Means Business*, co-authored by Barry Green and Jason Foster. Not because AI lacks power — it doesn't — but because accountability, context, and ethical judgement cannot be delegated to a model. AI is only as good as the information behind it, and it can and does hallucinate. The risk is not that AI is unintelligent. The risk is that we stop being critical consumers of its outputs.


Modern AI is increasingly sophisticated. Techniques such as self-supervised learning and hierarchical reinforcement learning are designed to simulate aspects of human cognitive capability. They can identify patterns, generate plausible responses, accelerate analysis, and support productivity at a scale that would have seemed extraordinary only a few years ago.


But we should be precise about what this means.


AI can emulate parts of cognition. It cannot replicate accountability. It cannot carry responsibility. It cannot understand consequences in the way people do. It does not possess ethics, lived experience, or organisational memory. It does not know when a conclusion is technically plausible but contextually wrong.


That is why the challenge is not whether AI can think for us. It is whether we are becoming too willing to let it.


If anything, the rise of AI increases the demand for better human thinking — not less.


Better decisions require more than data


For years, organisations have spoken about making "data-driven decisions" as though the presence of data automatically improves the quality of the outcome. In practice, that framing has always been too simplistic — and short-sighted.


Data is only useful when people understand what it represents, where it came from, how complete it is, how current it is, what assumptions sit beneath it, and whether it is appropriate for the decision at hand. In other words, good decisions depend not just on data, but on observable and governed information.


Whole-body thinking is most effective when the information underpinning a decision is itself trustworthy — discoverable, well governed, understood in context, and used with clear accountability. Without that foundation, even the sharpest human judgement is working with one hand tied behind its back.


Whole-body thinking should not be seen as an alternative to good data practice. It is strongest when paired with it.


A resilient organisation does not choose between intuition and evidence. It creates the conditions in which evidence can be trusted, assumptions can be challenged, and human judgement can be applied well. Governed information creates better conditions for judgement. If we want people to think well, we must give them information they can trust.


That trust does not emerge by accident. It comes from governance: clear ownership, shared definitions, metadata, classification, quality controls, lineage, stewardship, and accountability. These are not bureaucratic overheads. They are the operating conditions that make good decision making possible.


Too often, organisations focus on analytical outputs while neglecting the foundations beneath them. A dashboard may look precise while hiding unresolved issues in source quality. A model may appear intelligent while relying on poorly classified, weakly governed, or contextually incomplete information. A generative AI tool may sound authoritative while drawing on material that is outdated, biased, or simply wrong.


In each case, the problem is not the tool. The problem is the absence of observability and governance around the information the tool depends on.


Whole-body thinking, applied properly, means asking better questions before acting:


  • Do we understand the data behind this conclusion?

  • Do we trust its quality and provenance?

  • Are we interpreting it in context?

  • What are we missing?

  • Who owns this information and who is accountable for it?

  • What assumptions are we making — and are they visible?

  • What are the ethical implications of acting on it?

  • Where should human judgement override automation?

  • What underlying bias is evident, and is it material to the outcome?


These are governance questions as much as thinking questions.


Organisational whole-body thinking


Whole-body thinking is not just an individual capability. It is also an organisational one.


Experienced people often understand the limits of their own knowledge. They recognise uncertainty. They know when the facts are incomplete. They understand that confidence and correctness are not the same thing. That humility is a strength, not a weakness.


The best decisions rarely come from a single expert, a single dataset, or a single model. They come from the interaction between different forms of knowledge: commercial, operational, technical, regulatory, human, and ethical. They come from combining diverse perspectives rather than flattening them.


This is why organisational whole-body thinking matters.


It is about building decision environments in which different perspectives can be surfaced, tested, and used constructively. It is about leveraging neurodiversity, domain expertise, and healthy disagreement to improve outcomes. Managing people who disagree and hold different perspectives takes effort — but it is precisely that tension, combined with good data and thoughtful use of AI, that leads to stronger, more resilient decisions.


Building that kind of organisational capability is not accidental. It requires intentional design across people, process, data, technology, and governance. The organisations that make the best decisions are not necessarily the ones with the most data or the most advanced AI. They are the ones that have created the conditions for good judgement to operate — with clear ownership, visible assumptions, and the psychological safety to challenge.


The goal is not consensus at any cost. The goal is stronger, more resilient decisions.


From ecosystem thinking to operating model thinking


Our businesses and society form an ecosystem. Decisions do not happen in isolation. They affect customers, colleagues, regulators, partners, systems, and outcomes over time. A local optimisation in one part of the business can create hidden risk somewhere else.


To act effectively in that ecosystem, organisations need more than good intent. They need an operating model that supports joined-up decision making across people, process, technology, data, and governance.


That means:


  • People who are encouraged to think critically and challenge constructively

  • Processes that make assumptions, approvals, and exceptions visible

  • Data that is discoverable, classified, trustworthy, and usable

  • Technology that supports transparency rather than obscuring complexity

  • Governance that enables accountability without becoming performative

  • AI that augments human capability within clear, understood controls


This is where observability becomes practical rather than abstract.


If an organisation cannot see its information clearly, it cannot use it responsibly. If it cannot track where decisions are drawing from, it cannot govern them properly. If it cannot explain why a recommendation was made, it will struggle to defend it when challenged.


In regulated and high-consequence environments especially, that is no longer acceptable.


Responsible AI needs observable information


The value of AI is inseparable from the quality and governance of the information it relies on.


If inputs are poor, outputs will be unreliable. If metadata is absent, context will be lost. If ownership is unclear, accountability will drift. If controls are weak, risks will accumulate quietly — until they become visible in the worst possible way.


So the right question is not simply: *"How do we use AI?"*


It is: *"How do we use AI in a way that is observable, governed, and aligned to human accountability?"*


That requires a practical mindset:


  • Use AI to accelerate synthesis — not to bypass scrutiny

  • Use AI to support exploration — not to replace expertise

  • Use AI to widen options — not to abdicate ownership

  • Use AI where the quality, lineage, and limits of the underlying information are understood


In that sense, responsible AI is not a separate agenda from data governance or observability. It depends on them.


A better way forward


We should move beyond the false choice between instinct and data, or between people and AI.


The future belongs to organisations that can combine:


  • Trustworthy information

  • Strong governance

  • Observable data and decision pathways

  • Diverse human perspectives

  • Disciplined use of AI


Whole-body thinking gives us a useful way to describe the human side of that equation. Data and Information Observability gives us the informational foundation. Together, they offer something more robust than intuition, analytics, or automation can provide on their own.


In an era of constant complexity, the real competitive advantage is not faster answers. It is better judgement — and the organisational conditions that make better judgement possible.




About the author: Barry Green is an Advisory Board Member at BearingNode and co-author of Data Means Business. With a career spent helping organisations connect strategic ambition with practical execution, Barry brings a rare combination of data strategy expertise and human-centred leadership thinking — themes that sit at the heart of this article and BearingNode's mission.


Published by BearingNode — helping organisations navigate complexity through data, analytics, and AI.
