Enterprise Context Is the AI Moat Nobody Built: Knowledge Graphs, Taxonomies, and Why Models Aren't Enough

Juan Sequeda from data.world (now ServiceNow) explains why the most expensive AI failures trace back to one missing layer — and why the 'context wars' are just beginning.

Brittany Hobbs · 8 min read
Overview
  • Enterprise AI products fail primarily because models lack business context — they don't know what 'order,' 'customer,' or 'net revenue' means in your organization.
  • Knowledge graphs increase LLM accuracy dramatically by providing structured, deterministic context that skills files and prompt engineering cannot replicate at scale.
  • The 'context wars' are underway: ServiceNow acquired data.world, and every major platform is competing to own and manage enterprise business context.
  • A 'knowledge first' approach — defining business concepts before choosing tools — produces AI systems where the data lineage is boring, and boring is good.

Why do enterprise AI products fail?

The most common answer is hallucinations. The second most common is governance. Juan Sequeda, principal scientist at ServiceNow (formerly co-founder of data.world), argues on Episode 3 of the Product Impact Podcast that both miss the root cause.

The root cause is context — or rather, its absence.

"These models need to understand your business, to understand the context of your organization. That's been the big realization. If you don't have that context right now — what do you mean by customer? What do you mean by net revenue? — it's not a simple answer. It can be very nuanced."

Juan Sequeda, Product Impact Podcast S02E03

When an enterprise deploys an LLM-powered product without business context, the model doesn't know that "order" means different things to the sales team (checkout click), the finance team (payment received), and the fulfillment team (shipment delivered). It doesn't know that "today" means Pacific time for the US team and AEST for the Australian office. It doesn't know that the data in six different databases uses different schemas, different labels, and different assumptions.
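The "order" ambiguity is easy to make concrete. In this sketch (event names and records are hypothetical, invented for illustration), the same three purchases produce three different "order counts" depending on which team's implicit definition is applied:

```python
# Hypothetical event log: the same purchases seen at three lifecycle stages.
events = [
    {"id": 1, "checkout": True, "paid": True,  "shipped": True},
    {"id": 2, "checkout": True, "paid": True,  "shipped": False},
    {"id": 3, "checkout": True, "paid": False, "shipped": False},
]

# Each team's implicit definition of "order" yields a different count.
orders_sales       = sum(e["checkout"] for e in events)  # checkout click
orders_finance     = sum(e["paid"] for e in events)      # payment received
orders_fulfillment = sum(e["shipped"] for e in events)   # shipment delivered

print(orders_sales, orders_finance, orders_fulfillment)  # 3 2 1
```

Three teams, three honest answers to "how many orders?" — and an LLM with no shared definition will pick one silently.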

The model is powerful. The model is also ignorant of everything that makes the business specific. That ignorance is where enterprise AI fails silently — producing outputs that look correct but are wrong.

What is enterprise context?

Enterprise context is the layer of knowledge that sits between raw data and meaningful business decisions. It includes:

Taxonomies — the agreed-upon categories and hierarchies that classify business concepts. What are the product categories? How are customers segmented? What are the stages of the sales pipeline? Taxonomies are the shared vocabulary that makes data comparable across teams.

Knowledge graphs — structured representations of business entities and their relationships. A knowledge graph knows that Customer A is connected to Order B, which contains Product C, which belongs to Category D. When an LLM can query a knowledge graph, it doesn't have to guess what "customer" means — the graph defines it precisely.
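The Customer A → Order B → Product C → Category D chain can be sketched as a minimal triple store (a toy stand-in for a real graph database; the entity names come from the example above, the code is illustrative only):

```python
# A toy knowledge graph as subject-predicate-object triples.
triples = {
    ("Customer A", "placed",     "Order B"),
    ("Order B",    "contains",   "Product C"),
    ("Product C",  "belongs_to", "Category D"),
}

def related(subject, predicate):
    """Return all objects linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Traverse the graph: which category does Customer A buy from?
order    = related("Customer A", "placed").pop()
product  = related(order, "contains").pop()
category = related(product, "belongs_to").pop()
print(category)  # Category D
```

Because the relationships are explicit, the answer is the same on every run — there is nothing for the model to guess.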

Semantic layers — the translation layer between technical data structures and business meaning. A semantic layer knows that the "rev_net_q4" column in the finance database and the "quarterly_revenue" field in the CRM refer to the same concept, calculated the same way. Without it, the LLM treats them as different things and produces inconsistent answers.
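A semantic layer can be reduced, at its simplest, to a mapping from physical field names to one canonical business concept. This sketch reuses the two field names from the example above (the canonical name and lookup function are hypothetical):

```python
# Minimal semantic-layer sketch: physical columns from different systems
# resolve to a single canonical business concept.
CANONICAL = {
    "rev_net_q4":        "net_revenue_quarterly",  # finance database column
    "quarterly_revenue": "net_revenue_quarterly",  # CRM field
}

def canonical_concept(field: str) -> str:
    """Map a physical field name to its business concept (or itself if unmapped)."""
    return CANONICAL.get(field, field)

# Both physical names resolve to the same concept, so answers stay consistent.
assert canonical_concept("rev_net_q4") == canonical_concept("quarterly_revenue")
```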

Business rules — the deterministic logic that governs how decisions should be made. What is the refund policy? What triggers an escalation? What qualifies a lead? These rules need to be explicit and accessible, not buried in process documentation that the model can't read.
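"Explicit and accessible" can mean rules expressed as testable predicates rather than prose in a policy document. A minimal sketch (the thresholds and rule names are hypothetical, not from the episode):

```python
# Business rules as explicit, deterministic predicates that both humans
# and tooling can read and test. Thresholds are illustrative.
REFUND_WINDOW_DAYS = 30

def refund_allowed(days_since_purchase: int, item_opened: bool) -> bool:
    """Refund policy: within the window and unopened."""
    return days_since_purchase <= REFUND_WINDOW_DAYS and not item_opened

def needs_escalation(ticket_age_hours: int, severity: str) -> bool:
    """Escalation trigger: critical severity, or any ticket older than 48 hours."""
    return severity == "critical" or ticket_age_hours > 48
```

Rules in this form can be versioned, audited, and handed to an AI system as ground truth instead of being rediscovered from documentation.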

Sequeda's research at ServiceNow demonstrated the difference quantitatively: LLM question-answering accuracy increases dramatically when a knowledge graph provides structured context, compared to the same LLM operating without one.

What are the context wars?

Sequeda predicted what he calls the "context wars" — a competition among platforms to own and manage enterprise business context.

"Different platforms will say, no, no, no — context is so important, this is why I should manage your context. No, no, I should manage your context. This is why you can see what's happening with all the acquisitions."

Juan Sequeda, Product Impact Podcast S02E03

ServiceNow's acquisition of data.world is one example. Microsoft's embedding of Copilot across the Microsoft 365 suite is another — Copilot's value proposition is that it has context from your email, calendar, documents, and meetings. Google's strategy of embedding specialized Gemini tools within each product is a third approach — context comes from the specific application, not a general-purpose model.

The stakes are structural: whoever manages an enterprise's context layer controls the quality of every AI output that depends on it. A model can be swapped. Context cannot be easily ported.

Skills files vs. knowledge graphs: what works?

A new fork in the road has emerged. Claude's skills files, the Model Context Protocol (MCP), and similar approaches let teams write their context as natural-language documents that the LLM reads before responding. This is faster to set up than a formal knowledge graph, and it works for many use cases.

Sequeda sees this as a pendulum swing — and warns against going all the way to one side:

"Right now people are saying, all you need to do is manage that context just as skills, as steps. That is literally taking this pendulum and swinging it to the completely one side and saying the LLM will be more powerful, we just need to give more context. People are testing that stuff and it's working for some stuff. But that's putting all the eggs in one basket."

The distinction matters by use case:

Where skills files work: Exploratory analysis, drafting, research synthesis, internal tools where approximate answers are acceptable. The cost of a wrong answer is low. The speed of setup justifies the tradeoff.

Where knowledge graphs are required: Regulatory reporting, financial calculations, customer-facing decisions, audit trails. The cost of a wrong answer is high. You need deterministic context that produces the same answer every time, with explainable lineage showing exactly where a number came from.

Sequeda's prediction: an 80/20 split will emerge, varying by industry. Some percentage of enterprise context needs to be managed deterministically (knowledge graphs, semantic layers). The rest can be managed probabilistically (skills files, prompt context). But if any context needs to be deterministic, you'll build the deterministic layer anyway — and then it becomes the foundation for everything else.
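The split can be sketched as a simple router: answer from the deterministic layer when a definition exists, fall back to probabilistic context otherwise. Everything here (the metric registry, formula string, and fallback message) is a hypothetical illustration of the pattern, not Sequeda's implementation:

```python
# Sketch of the deterministic/probabilistic split: route a question to a
# governed definition when one exists; otherwise delegate to the LLM with
# skills-file context. All names and formulas are illustrative.
DETERMINISTIC_METRICS = {
    "net_revenue": "SUM(payments) - SUM(refunds)",  # governed, explainable lineage
}

def answer(metric: str) -> str:
    if metric in DETERMINISTIC_METRICS:
        # Same formula, same answer, every time.
        return f"computed via: {DETERMINISTIC_METRICS[metric]}"
    # Approximate but fast to set up; acceptable when wrong answers are cheap.
    return "delegated to LLM with skills-file context"
```

Note the asymmetry Sequeda points to: once the deterministic registry exists for the 20% that needs it, routing everything else through the same front door costs almost nothing.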

What is a knowledge-first approach?

Sequeda advocates for what he calls a "knowledge first" mentality — defining your business concepts, relationships, and rules before choosing your AI tools:

"If you have a knowledge-first mentality, your data lineage is going to be boring. And boring is good. Because I can very quickly explain to you — that number is calculated through this formula, it runs from this system, we capture our data here. That's the type of stuff we need to get to."

The alternative — a "data first" approach — starts from existing systems and data structures and tries to make AI work on top of them. This produces the integration complexity, inconsistency, and silent failures that plague most enterprise AI deployments.

Knowledge first means:

  1. Define what your business concepts mean before connecting data sources. What is a customer? What is an order? What does "today" mean across time zones?
  2. Map the relationships between concepts. Which customers are connected to which orders, products, accounts, and support tickets?
  3. Establish deterministic business rules for the contexts that require them. What triggers an escalation? How is net revenue calculated?
  4. Then choose your implementation — knowledge graph, semantic layer, skills files, or a combination. The tools serve the knowledge, not the other way around.
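The first three steps can be captured in code before any tool is chosen — which is the point. In this sketch, concepts, relationships, and one rule are plain Python; the definitions themselves are hypothetical examples, not an agreed enterprise vocabulary:

```python
from dataclasses import dataclass

# Step 1: define concepts explicitly (definitions are illustrative).
@dataclass(frozen=True)
class Concept:
    name: str
    definition: str

ORDER    = Concept("order", "a purchase with payment received (finance definition)")
CUSTOMER = Concept("customer", "an account with at least one paid order")

# Step 2: map relationships between concepts.
RELATIONS = {("customer", "places", "order")}

# Step 3: a deterministic rule where the business requires one.
def is_customer(paid_orders: int) -> bool:
    return paid_orders >= 1
```

Only at step 4 does this get serialized into a knowledge graph, semantic layer, or skills file — the knowledge outlives whichever tool carries it.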

How should enterprises evaluate AI context infrastructure?

Sequeda's guidance is specific: don't boil the ocean.

"Start with the projects that are delivering an outcome tied to one of the top-level business objectives of your company for the year. Not only what are your KPIs, but what are your OKRs. Your company should have five of them that are reported by the CEO."

The context layer should be built in service of measurable business outcomes, not as an abstract infrastructure project. An AI system that can accurately answer "how many orders did we have this quarter?" — where every stakeholder agrees on what "order" and "this quarter" mean — is more valuable than a general-purpose copilot that produces plausible but unverifiable answers.

This is one of the signals I'm tracking in my ongoing research into AI value in enterprise deployments. The organizations getting real value from enterprise AI are almost always the ones that invested in context infrastructure before investing in model capability. The ones struggling are the ones that deployed powerful models into organizations where nobody agreed on what the data means. PH1 Research works with product teams measuring this exact gap. AI Value Acceleration diagnoses where enterprise AI stalls at the context layer — the missing infrastructure between the model and the business.


Listen: Product Impact Podcast S02E03 — Juan Sequeda on Enterprise Context and Knowledge Graphs

Related:
- SEO Had 25 Years of Certainty. HubSpot Shipped Their Vision for AEO. — HubSpot's context advantage (CRM data powering AEO)
- Microsoft's Copilot Problem Isn't Adoption. It's Coerced Adoption. — Google's context-per-product strategy vs Microsoft's universal assistant
- Juan Sequeda — Person page

Sources:
- Product Impact Podcast S02E03 — primary source for all Sequeda quotes
- ServiceNow — acquired data.world (Sequeda's company)
