Data for AI, AI for Data: Two Worlds Colliding in London
- Daniel Rolles

- Oct 18
- 4 min read
A Day Across London's Data and AI Ecosystem
This Thursday, London delivered! The city was at its global-thinking best.
Morning found me at Collibra's Data Citizens event with Chief Data Officers and governance leaders.
By afternoon, I was in Shoreditch with AWS and NVIDIA's team, hands-on with Python notebooks evaluating RAG systems alongside start-ups and AI practitioners.
Two events. Two communities.
One unavoidable conclusion: these worlds are hurtling together faster than most people realise.
Morning: Enterprise Data Governance
Collibra's Data Citizens event showcased their 2025-2026 roadmap with five strategic themes: unified platform experience, governance automation, data products and marketplace, AI governance, and enterprise-grade connectivity.
The detail that caught my attention: Collibra is committing to OpenLineage as a core standard. When a platform vendor of this scale chooses open standards over proprietary approaches, it signals something significant about where the industry is heading.
Their Unstructured AI capability—smart discovery, automated semantic layers, high-accuracy enterprise search—demonstrates recognition that governance for AI requires fundamentally different approaches than traditional structured data management.
The conversations throughout the day reinforced a pattern I've observed: traditional policy-driven governance struggles with AI's velocity and complexity.
Afternoon: AI Engineering Observability / EVALS
By mid-afternoon, I'd shifted from governance discussions to hands-on technical work at the AWS GenAI Loft. NVIDIA's team walked us through evaluation frameworks for Retrieval-Augmented Generation (RAG) systems.
Working through Python notebooks with the RAGAs open-source evaluation framework, we examined metrics for domain-specific language handling, temporal context preservation, retrieval accuracy, and independent assessment of generation steps.
This was implementation—building actual evaluation and validation systems for AI in production.
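The Ragas library wraps these checks in LLM-based judges, but stripped of that machinery the core of a retrieval-accuracy metric fits in a few lines of plain Python. A minimal sketch (the class and function names here are illustrative, not Ragas APIs, and it assumes you have hand-labelled ground-truth relevant documents):

```python
from dataclasses import dataclass

@dataclass
class RetrievalResult:
    """One query's retrieved documents plus its ground-truth relevant set."""
    query: str
    retrieved: list[str]  # document IDs returned by the retriever, in rank order
    relevant: set[str]    # hand-labelled relevant document IDs (an assumption here)

def retrieval_precision(result: RetrievalResult) -> float:
    """Fraction of retrieved documents that are actually relevant."""
    if not result.retrieved:
        return 0.0
    hits = sum(1 for doc_id in result.retrieved if doc_id in result.relevant)
    return hits / len(result.retrieved)

def retrieval_recall(result: RetrievalResult) -> float:
    """Fraction of the relevant documents that the retriever found."""
    if not result.relevant:
        return 0.0
    hits = sum(1 for doc_id in result.retrieved if doc_id in result.relevant)
    return hits / len(result.relevant)
```

Run against a labelled evaluation set, a pair of numbers like this becomes exactly the kind of independent assessment of the retrieval step the workshop focused on, separate from any judgement about the generation step.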
The Pattern: Observability as Common Language
Here's what struck me moving between these two worlds: they're solving mirror-image problems.
The governance community asks: "How do we govern AI systems? How do we trace lineage through AI transformations? How do we demonstrate compliance when unstructured data feeds LLMs?"
The AI engineering community asks: "How do we validate RAG systems? How do we ensure retrieval accuracy? How do we prove our models use appropriate data?"
Both are describing observability challenges.
The governance practitioners need visibility into what's happening in AI systems. The AI engineers need visibility into the data feeding their models. Traditional monitoring tells you a system is running. Observability tells you why it's behaving the way it is.
This distinction matters. When a RAG system returns unexpected results, you need to trace: Which documents were retrieved? Why those documents? What transformations occurred? Who accessed what data? For what purpose?
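Answering those questions means capturing a trace at retrieval time, not reconstructing one afterwards. A minimal sketch of what such a record might contain (the field names are illustrative, not any particular tool's schema):

```python
import json
import uuid
from datetime import datetime, timezone

def trace_retrieval(query: str, user: str, purpose: str,
                    retrieved: list[dict]) -> dict:
    """Build a structured trace record for one RAG retrieval step.

    `retrieved` is a list of entries like
    {"doc_id": ..., "score": ..., "transformations": [...]}
    describing each document the retriever returned.
    """
    return {
        "trace_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "accessed_by": user,     # who accessed what data
        "purpose": purpose,      # for what purpose
        "documents": retrieved,  # which documents, why (score), what transformations
    }

record = trace_retrieval(
    query="Q3 credit risk exposure",
    user="risk-analyst-42",
    purpose="BCBS 239 risk report",
    retrieved=[{"doc_id": "rpt-2025-q3", "score": 0.91,
                "transformations": ["chunked", "embedded"]}],
)
print(json.dumps(record, indent=2))
```

The same record answers both communities: for the engineer it explains why those documents were retrieved; for the governance team it is the access-and-purpose audit trail.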
When a governance team needs to demonstrate BCBS 239 compliance for AI-driven risk reporting, they need the same visibility: complete lineage from source data through model inference to business decision.
The technical infrastructure required is identical. The business questions converge.
Why Open Standards Matter
Collibra's OpenLineage commitment deserves recognition—not just for data lineage, but for what it signals about compliance infrastructure.
In regulated industries, vendor lock-in is itself a risk. When your compliance depends on proprietary lineage tools, you've created a single point of failure. When auditors ask "prove your AI system used the right data," you need vendor-neutral audit trails.
Open standards like OpenLineage enable:
Multi-vendor observability strategies
Cross-cloud lineage tracking
Independent verification of compliance claims
Community-driven patterns for emerging challenges
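To make the standard concrete: an OpenLineage run event is just structured JSON describing a job run and the datasets it read and wrote. Here is roughly what one looks like, built with nothing but the Python standard library (the namespaces and producer URI are placeholders; in practice you would emit this via the openlineage-python client to a lineage backend):

```python
import json
import uuid
from datetime import datetime, timezone

def lineage_run_event(event_type: str, job_name: str,
                      inputs: list[str], outputs: list[str]) -> dict:
    """Build a minimal OpenLineage-style run event as a plain dict.

    Dataset namespaces and the producer URI below are placeholder values,
    not part of any real deployment.
    """
    return {
        "eventType": event_type,  # e.g. START, COMPLETE, FAIL
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "producer": "https://example.com/rag-pipeline",  # placeholder
        "run": {"runId": str(uuid.uuid4())},
        "job": {"namespace": "rag-demo", "name": job_name},
        "inputs": [{"namespace": "warehouse", "name": n} for n in inputs],
        "outputs": [{"namespace": "warehouse", "name": n} for n in outputs],
    }

event = lineage_run_event("COMPLETE", "embed_documents",
                          inputs=["raw.policies"],
                          outputs=["vec.policy_chunks"])
print(json.dumps(event, indent=2))
```

Because the event format is an open specification rather than a vendor schema, any compliant backend can consume it, which is precisely what makes multi-vendor observability and independent verification possible.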
What Changed Today
Today wasn't about new concepts. It was about watching convergence accelerate.
Morning evidence: Major governance vendor commits to open lineage standards, recognising proprietary approaches won't scale in AI-augmented environments.
Afternoon evidence: AI evaluation frameworks require comprehensive data quality, lineage, and governance infrastructure to function.
The implication: Organisations treating data governance and AI engineering as separate domains are creating integration debt they'll pay later.
The evaluation metrics we built in Python notebooks this afternoon? They're governance controls. The lineage systems governance teams deploy? They're AI validation infrastructure.
Same technical foundation. Different business language.
For Data and Analytics Leaders
If you're responsible for Data, Analytics and AI strategy, today's pattern suggests three considerations:
Observability Over Policy: Visibility into data flows—from source through transformation to consumption—provides more reliable governance than policy enforcement alone. When you can observe what's actually happening, you can respond to what is rather than what should be.
Standards as Strategic Infrastructure: Multi-cloud, multi-vendor environments are the reality. Proprietary lineage and governance tools create dependencies that constrain strategic flexibility. Open standards provide portability and vendor negotiating position.
Convergence as Opportunity: The gap between data governance teams and AI engineering teams represents integration opportunity. Organisations that bridge this gap early—through shared observability infrastructure—gain structural advantages.
Looking Forward
The shifts we're tracking aren't speculative. They're already underway:
Increased vendor adoption of open standards
Governance platforms integrating AI evaluation capabilities
Regulatory frameworks requiring AI traceability
Recognition that observability is prerequisite for both data governance and AI validation
London demonstrated why this matters. Two communities, mirror-image problems, converging solutions.
Watch this space!
About the Author

Daniel (Dan) Rolles is the CEO and Founder of BearingNode, where he leads the firm's mission to help organisations unlock the commercial value of their data whilst enhancing their risk management capabilities.
As CEO, Daniel drives BearingNode's strategic vision and thought leadership in data transformation, analytics strategy, and the evolving regulatory landscape. He regularly shares insights through industry publications and speaking engagements, focusing on practical approaches to data governance, AI implementation, and performance transformation in regulated environments. He is one of the key authors of BearingNode's Data and Information Observability Framework.
With over 30 years of experience in Data, Analytics and AI, Daniel has successfully built and led D&A teams across multiple industries including Financial Services (investment, commercial and retail banking, investment management and insurance), Healthcare, and Real Estate. His expertise spans consulting, commercial leadership, and delivery management, with a particular focus on data governance and regulatory compliance.
Daniel holds a Bachelor of Economics (University of Sydney), Masters of Science (Birkbeck College, University of London), and Executive MBA (London Business School).
Based in London, Daniel is passionate about financial inclusion and social impact. He serves as a Trustee for Crosslight Advice, a debt advisory and financial literacy charity based in West London that provides vital support to individuals facing financial vulnerability.
Connect with Daniel on LinkedIn or learn more about BearingNode's approach to data and analytics transformation at BearingNode.