Research
Pieces on how visionary teams connect raw data to decisions: agents, lineage, coherence, and what actually holds up in production.
Pieces
Demand forecasting for brands: integrated data before the forecasting model
How the historical data a forecast draws on limits its quality, why integrated data for brand-led companies does not require one central server, and what to fix before relying on a model.
The semantic layer: what it is and how teams adopt it
What a semantic layer is in practice, how Iceberg and a catalog connect, how DataHub and OpenLineage record lineage in one place, and a phased adoption sequence for platform teams.
AI agents, memory, and databases: the shared-knowledge gap and why the semantic layer matters
Why chat memory is not organizational memory, how fragmented databases lead agents to wrong answers, why a semantic layer matters, why it is hard to build, and what teams do about it in practice.
A document-centric knowledge base: authority, metadata, and when to skip RAG
What a knowledge base is for, when organizations need one, and how versioned docs with structured metadata can support programs and agents without vector retrieval as the main dependency.
The enterprise AI agent problem is data, not the model
Why enterprise AI programs are usually limited by data rather than model size: integration and clear definitions matter more than a larger model.
Analytics agents in the enterprise: patterns that hold under audit
Why an open chat over raw tables is not enough, how to route agents through approved metrics and definitions, and the controls mature programs use.
Why I write here: aligned definitions, conflict, and the data stack
A charter for Wade's research: why inconsistent definitions matter more than missing dashboards, and what tracing facts to their sources means when many teams own the numbers.