
The Future of AI is Here—But Who's Actually Flying the Plane?

  • Writer: BearingNode Marketing Team
  • Sep 18, 2025
  • 8 min read

As AI systems move from experimental projects to business-critical applications, a fundamental question emerges: who's actually governing your AI? This isn't theoretical anymore—organisations are deploying AI that influences everything from credit decisions to healthcare recommendations, often without adequate oversight or understanding of the decisions being made.


At Big Data London 2025, this transition is playing out in real-time. Eddie and I will be there, exploring how organisations are evolving from basic AI implementation to comprehensive AI observability and governance frameworks that actually work in practice.


The Sessions That Matter: From Theory to Reality


The conference agenda reflects where the industry has landed—organisations need practical frameworks, not more theoretical discussions. We've identified four critical sessions that showcase how leading companies are solving the AI governance challenge:


The Core Challenge: Who's Really in Control?


AI Governance: Who's Flying Your AI?



Nick Jewell, Senior Solution Engineer, Dataiku

AI, Data Science & MLOps Theatre | Thursday, 25th Sep | 12:40 - 13:10


This session cuts straight to the heart of the problem. Traditional data governance approaches simply don't work for AI systems that amplify risks from bias to black-box decision-making. Nick presents a practical capability framework for full-lifecycle AI governance—the kind that manages model behaviour, builds stakeholder trust, and ensures AI systems actually perform as intended over time.


The Technical Foundation: Context at Scale


FastMCP: Model Context Pragmatism



Adam Azzam, VP Product, Prefect

AI, Data Science & MLOps Theatre | Wednesday, 24th Sep | 10:40 - 11:10


The Model Context Protocol (MCP) is poised to be the connective tissue between large‑language models and the data they need. Yet many projects still stall at the prototype stage. This session distills what actually works in production, what still breaks, and why. Adam will explore the real environments where MCP is thriving, the hurdles teams hit when they move to productise it, and share a view of the road ahead as context becomes a first‑class product surface. Essential knowledge for organisations building robust AI governance frameworks.


The Evolution: Beyond Human-Only Governance


The Evolution of Data Governance: From Human-Led to AI-Autonomous Systems



Swaroop Jagadish, CEO & Co-Founder, DataHub

Andrew Mohammed, Director of Data & AI, OVO Energy

Data & AI Governance Theatre | Thursday, 25th Sep | 12:40 - 13:10


As AI reshapes every aspect of data management, organisations worldwide are witnessing a fundamental transformation in how data governance operates. This panel discussion brings together forward-thinking customers to explore the revolutionary journey from traditional governance models to AI-autonomous systems. The expert panelists will share real-world experiences navigating the four critical stages of this evolution: AI-assisted governance, AI-driven governance, AI-run governance, and ultimately AI-autonomous governance. The session addresses key questions around trust, accountability, and the changing role of data professionals in an increasingly automated governance landscape.


Context Engineering: The Universal Data Bridge


From Metadata to AI Mastery: DataHub's MCP-Powered Context Engine



Swaroop Jagadish, CEO & Co-Founder, DataHub

Data & AI Strategy Theatre | Wednesday, 24th Sep | 15:20 - 15:50


AI agents need seamless access to enterprise data to deliver real value, but this creates significant governance challenges. DataHub's new MCP server demonstrates how organisations can create universal bridges that connect AI agents to entire data infrastructures through a single, governable interface. This session shows how forward-thinking data leaders are breaking down data silos while maintaining control and observability over how AI agents discover and interact with data across Snowflake, Databricks, BigQuery, and other platforms.


Why AI Observability and Governance Must Work Together


The evidence is clear—organisations can no longer treat AI governance as a separate concern from AI observability. These are distinct but complementary capabilities that must work in tandem:


Observability Without Governance is Blind: Technical monitoring of AI systems provides data, but without governance frameworks to interpret that data and define appropriate responses, observability becomes just expensive noise. You can see what's happening, but you don't know what to do about it.


Governance Without Observability is Powerless: Governance policies and frameworks are meaningless without the observability systems to monitor compliance, detect violations, and measure effectiveness. You can define what should happen, but you have no way to verify it's actually working.


Together, They Enable Control: When observability and governance work together, organisations gain true control over their AI systems—they can see what's happening, understand whether it aligns with policies and expectations, and take informed action when intervention is needed.


The Fundamental Truth: governance ≠ observability


We've spent years building data observability capabilities, but AI observability feels like a whole new frontier. The governance frameworks that worked for traditional data don't quite fit when you're dealing with models that learn and adapt. However, the challenge isn't scaling existing approaches—it's rethinking the fundamental relationship between governance and observability.


governance ≠ observability


This isn't just a technical distinction—it's the foundation for effective AI management. Recent conversations with innovative startups like Calvin Risk and Quantly have reinforced this reality. They're building AI governance frameworks that go way beyond "policy and hope" approaches, recognising that governance and observability must work as integrated but distinct capabilities.


The observability piece is particularly fascinating: how do you actually see what your AI is doing in production? Companies like Serene are using AI to identify vulnerable customers in real-time, requiring observability systems that can monitor AI behaviour while governance frameworks ensure ethical application and regulatory compliance.


For CDOs and CDAOs, this represents a fundamental shift. Traditional data governance assumed relatively static data flows and predictable processing patterns. AI systems that learn, adapt, and make autonomous decisions require observability capabilities that can track dynamic behaviour alongside governance frameworks that can respond to changing circumstances.


The sessions at Big Data London demonstrate this critical intersection:


Model Performance Monitoring: AI models degrade over time as data distributions shift. Without continuous monitoring, models that worked well in testing can fail silently in production, making decisions with outdated or biased understanding.
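Drift of this kind is commonly quantified with the Population Stability Index (PSI), which compares the score distribution a model saw at training time against what it sees in production. A minimal sketch in Python (the synthetic distributions and the 0.25 alert threshold are illustrative conventions, not taken from any specific tool):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between two samples.
    Conventionally, PSI < 0.1 is read as stable and > 0.25 as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.5, 0.1, 10_000)    # distribution at training time
production_scores = rng.normal(0.6, 0.1, 10_000)  # production distribution has shifted
psi = population_stability_index(training_scores, production_scores)
if psi > 0.25:
    print(f"ALERT: significant drift detected (PSI={psi:.2f})")
```

A check like this, run on a schedule against live scoring data, is what turns "models degrade silently" from a risk statement into an alert a team can act on.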


Bias Detection and Real-World Impact: AI bias isn't just a theoretical concern—it affects real people's lives through hiring decisions, loan approvals, and healthcare recommendations. Effective observability systems must detect and alert on bias patterns before they cause harm.
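One of the simplest observability checks in this space is demographic parity: comparing approval rates across groups and alerting when the gap exceeds a policy threshold. A toy sketch (the group labels, sample data, and 20% threshold are all illustrative):

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups.
    `decisions` is an iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Synthetic decision log: group A approved 80%, group B approved 55%
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 55 + [("B", False)] * 45)
gap, rates = demographic_parity_gap(decisions)
if gap > 0.2:  # the threshold itself is a governance decision, not a technical one
    print(f"ALERT: approval-rate gap of {gap:.0%} across groups {rates}")
```

Real bias monitoring goes well beyond a single metric, but the shape is the same: observability computes the measurement continuously, and governance decides what gap is acceptable and what happens when it is breached.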


Regulatory Compliance: AI regulations are coming fast across multiple jurisdictions. The EU AI Act is already in force, with similar frameworks emerging globally. Organisations need observability systems that can demonstrate compliance, not just claim it.


Business Risk Management: AI systems that operate without proper observability represent significant business risks—from regulatory fines to reputational damage to operational failures that directly impact revenue.


The BearingNode Approach: Governance + Observability Working Together


Our Connected Operating Model philosophy recognises that AI governance and observability are distinct capabilities that must work together seamlessly. Neither can succeed in isolation:


Governance Defines the "What" and "Why": Governance frameworks establish policies, define acceptable behaviour, set risk thresholds, and create accountability structures. They answer questions like "What should our AI systems do?" and "Why are these the right decisions?"


Observability Provides the "How" and "When": Observability systems monitor actual AI behaviour, detect deviations from expected performance, measure compliance with governance policies, and trigger interventions. They answer questions like "How are our AI systems actually performing?" and "When do we need to take action?"


Integration Creates the "So What": When governance and observability work together, they enable organisations to move from reactive problem-solving to proactive AI management. The combination answers the critical question: "So what do we do now?"
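As a toy sketch of this division of labour (the metric names and threshold values are purely illustrative): governance declares the policy, observability supplies the measurements, and their integration decides when to intervene.

```python
# Governance: the "what" and "why" - declared policy thresholds
POLICY = {"max_error_rate": 0.05, "max_latency_ms": 500}

# Observability: the "how" and "when" - measured behaviour from monitoring
def collect_metrics():
    # Stand-in for real telemetry from the monitoring stack
    return {"max_error_rate": 0.08, "max_latency_ms": 320}

# Integration: the "so what" - compare measurements against policy
def evaluate(policy, metrics):
    return [name for name, limit in policy.items() if metrics[name] > limit]

violations = evaluate(POLICY, collect_metrics())
if violations:
    print("Intervention needed:", violations)  # e.g. page the owning team, roll back
```

Either half alone is inert: the policy dictionary does nothing without measurements, and the metrics mean nothing without thresholds to judge them against.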


This integration must happen across four critical dimensions:


Governance Integration: AI governance policies must align with existing data governance, risk management, and regulatory compliance frameworks. Isolated AI governance creates gaps and conflicts that undermine overall effectiveness.


Operational Excellence: AI observability must integrate seamlessly into existing DevOps and DataOps workflows. Teams need unified dashboards, automated alerting, and clear escalation procedures that work within their current operational frameworks.


Technology Architecture: AI observability systems must connect with existing monitoring, logging, and analytics infrastructure. Creating separate AI observability silos defeats the purpose—everything must work together to provide comprehensive visibility.


Value Connection: Every AI observability metric must connect clearly to business value and risk. Technical metrics are meaningless unless they translate into actionable insights about business performance, regulatory compliance, or risk exposure.


From Conference Insights to Business Impact


The sessions at Big Data London address challenges our clients face every day. When Nick Jewell asks "Who's Flying Your AI?", he's highlighting the governance gaps we see in financial services, where AI models influence everything from trading decisions to customer service without adequate oversight.


The evolution toward AI-autonomous governance systems isn't theoretical—it's happening now across industries:


  • Financial Services: AI models are making lending decisions faster than humans can review them, requiring new approaches to fair lending compliance and risk management. The disintermediation that AI agents could create in financial services is a conversation every established institution needs to be having, and one that connects directly to observability and governance.

  • Healthcare: AI diagnostic tools are becoming standard practice, demanding observability systems that ensure patient safety while maintaining clinical efficiency.

  • Retail and Manufacturing: AI optimisation systems are controlling supply chains and inventory decisions, requiring transparency into automated decision-making that affects business operations.

  • Social Impact: Innovative applications like using AI to identify vulnerable customers in real-time demonstrate the potential for proactive rather than reactive support. As a trustee at Crosslight Advice—where we support people facing financial vulnerability every day—this intersection of AI capability with governance frameworks for ethical application could be transformational for those who need it most.

Meet the BearingNode Team at Big Data London


Daniel Rolles, CEO & Founder leads BearingNode's mission to simplify complex decision-making through comprehensive data and AI observability. With over 30 years of experience spanning Financial Services, Healthcare, and Real Estate, Daniel has seen firsthand how organisations struggle with AI governance in regulated environments. He's a key architect of BearingNode's Connected Operating Model and D/I O11y framework, focusing on practical solutions that connect technical AI capabilities with business value and regulatory compliance.


Connect with Daniel on LinkedIn.


Eddie Short, Advisory Board Member brings exceptional expertise in principled, ethical, and risk-aware decision-making frameworks. Eddie's approach to treating data as a valuable economic asset aligns perfectly with BearingNode's mission to help organisations unlock commercial value from their AI investments while maintaining proper oversight and control. His insights have been instrumental in developing our framework's approach to connecting AI governance theory with measurable business outcomes.


Connect with Eddie on LinkedIn.


Together, we represent BearingNode's comprehensive approach to AI observability and governance—combining practical implementation experience with strategic advisory expertise that helps organisations navigate the complex intersection of AI innovation and business accountability.


The Choice is Clear: Connect Governance and Observability


The sessions at Big Data London demonstrate a crucial inflection point—organisations can no longer treat AI governance and observability as separate concerns. The technology has advanced too quickly, the business stakes have become too high, and regulatory frameworks are evolving too rapidly for disconnected approaches to work.


The question isn't whether you need AI governance OR AI observability—it's whether you'll connect them effectively to create comprehensive AI control, or continue to struggle with incomplete solutions that fail when they're needed most.


Governance without observability leaves you blind to what's actually happening. Observability without governance leaves you paralysed when action is needed. Together, they provide the foundation for AI systems that are both powerful and trustworthy.


If you're ready to move beyond the false choice between governance and observability, let's connect at Big Data London. Eddie and I will be there to discuss how the D/I O11y framework helps organisations integrate these critical capabilities—with practical, implementable solutions that connect governance policies to observability systems and both to measurable business value.


The future of AI is observable, governable, and trustworthy. Let's build it together.
