TL;DR
AI transformation stands on three pillars: Schema (the company’s blueprint), Memory (structured knowledge), and Runtime (agents and their orchestration). Building only one pillar - typically Runtime, because agents are the visible part - gets you isolated use cases that don’t scale. Building all three creates a foundation that remains stable even as technology evolves.
Contents
- Why do so many AI projects fail?
- What are the three pillars?
- Pillar 1: Schema - Rules and Architecture
- Pillar 2: Memory - Data and Knowledge
- Pillar 3: Runtime - Agents and Orchestration
- How do the three pillars connect?
- What does this mean for midmarket companies?
- FAQ
- Conclusion
Why do so many AI projects fail?
In short: Because they start with the solution instead of the foundation.
Most companies start AI with a concrete use case: a chatbot for customer service, an agent for proposal review, an automation in recruiting. That’s understandable - you want to see results quickly.
The problem isn’t the use case. The problem is that it gets built in isolation. The chatbot gets its own data layer, the proposal agent gets its own, the recruiting automation gets its own. Every use case is an island. And islands don’t scale.
When the third or fourth agent arrives, the pattern becomes clear: data is redundant, answers are inconsistent, and every new use case costs almost as much as the first. What’s missing isn’t a better tool - what’s missing is an architecture.
What are the three pillars?
In short: Schema (rules), Memory (data), and Runtime (agents) - three layers that together form the operating system for AI in business.
Imagine you’re not building a single agent, but a system in which any number of agents can operate. For that, you need three things:
- Schema - What is the company? What rules exist? What can agents do, what can’t they? What data types exist? What is the blueprint?
- Memory - How do agents know what’s happening in the company? Which customers exist, which orders, which products, how does it all connect?
- Runtime - Which agents exist, what tools do they have, how do they interact, what data do they work with?
Without any one of these pillars, the whole thing doesn’t work. You can build a chatbot that works without all three - but you can’t build an AI transformation on top of it.
Pillar 1: Schema - Rules and Architecture
In short: The Schema is the company’s blueprint for humans and machines - it defines what exists, what’s allowed, and how everything connects.
Schema is the most abstract pillar, and that’s why it gets skipped most often. It answers questions like: What is this company? What processes exist? What roles? What data types? Which agents exist and what are they allowed to do?
Think of the Schema as the operating manual - but one that’s readable not just by humans, but also by machines. It’s the description of the enterprise in a form that agents can use to understand their own boundaries.
Specifically, the Schema includes:
- Company structure - departments, teams, responsibilities
- Process definitions - which workflows exist, what steps they contain
- Agent registry - which agents exist, what their scope is, what tools they use
- Permissions - what can an agent do, what can’t it? Can the sales agent access financial data? Can the support agent grant discounts?
- Data types and schemas - what entities exist, what fields do they have, what validation rules apply
Without Schema, agents operate in a vacuum. They know what they’re supposed to do (Runtime), they have access to data (Memory), but they don’t know the rules of engagement. This leads to agents that work technically but make decisions that don’t match business reality.
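To make "machine-readable rules" concrete, here is a minimal Python sketch of what an agent registry with permission checks could look like. All names (`AgentSpec`, the specific agents, tools, and entity types) are hypothetical illustrations, not a prescribed format:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a machine-readable agent registry.
# Agent names, tools, and permissions are illustrative only.

@dataclass
class AgentSpec:
    name: str
    tools: list                                  # capabilities the agent may invoke
    can_read: set                                # entity types the agent may read
    can_write: set = field(default_factory=set)  # entity types it may modify

REGISTRY = {
    "sales_agent": AgentSpec(
        name="sales_agent",
        tools=["create_proposal", "query_inventory"],
        can_read={"customer", "product", "order"},
        can_write={"proposal"},
    ),
    "support_agent": AgentSpec(
        name="support_agent",
        tools=["answer_ticket"],
        can_read={"customer", "order"},          # no access to financial data
    ),
}

def may_access(agent: str, entity_type: str, write: bool = False) -> bool:
    """Check an agent's permission against the schema before it acts."""
    spec = REGISTRY[agent]
    allowed = spec.can_write if write else spec.can_read
    return entity_type in allowed
```

The point of the sketch: the question "Can the support agent see invoices?" becomes a lookup against declared rules rather than an implicit assumption buried in a prompt.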
Pillar 2: Memory - Data and Knowledge
In short: The Knowledge Graph is the company’s structured memory - not just documents, but entities and their relationships.
Memory is the pillar that gets underestimated the most. It’s not about loading documents into a vector database. It’s about building a structured representation of the business: Which customers exist? Which orders? Which products? Who manages whom? What connects to what?
This is the Knowledge Graph - a graph database that maps entities (customers, orders, employees) and the relationships between them. The underlying structure is called the ontology: a formal model that defines what types of things exist in the business and how they relate.
When it comes to data management, there are clear rules: some data stays exclusively in the ERP, some lives only in the Knowledge Graph, some is synchronized. An order is still created and managed in the ERP. But the relationship “Customer A has three open orders, is managed by Employee B, who is currently working on Project C” - that’s graph knowledge. And that’s exactly what an agent needs to work with context.
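The "Customer A" example above can be sketched as a tiny in-memory graph: entities as nodes, typed relationships as edges. A production system would use a graph database; the entity names and relation types here are purely illustrative:

```python
# Minimal in-memory Knowledge Graph sketch. Entities are nodes with a
# type and attributes; relationships are typed, directed edges.

nodes = {
    "customer_a": {"type": "Customer", "name": "Customer A"},
    "order_1":    {"type": "Order", "status": "open"},
    "order_2":    {"type": "Order", "status": "open"},
    "order_3":    {"type": "Order", "status": "open"},
    "employee_b": {"type": "Employee", "name": "Employee B"},
    "project_c":  {"type": "Project", "name": "Project C"},
}

edges = [
    ("customer_a", "HAS_ORDER", "order_1"),
    ("customer_a", "HAS_ORDER", "order_2"),
    ("customer_a", "HAS_ORDER", "order_3"),
    ("employee_b", "MANAGES",   "customer_a"),
    ("employee_b", "WORKS_ON",  "project_c"),
]

def related(source: str, relation: str) -> list:
    """Traverse outgoing edges of one relation type."""
    return [dst for src, rel, dst in edges if src == source and rel == relation]

def open_orders(customer: str) -> int:
    """Count a customer's orders whose status is 'open'."""
    return sum(1 for o in related(customer, "HAS_ORDER")
               if nodes[o]["status"] == "open")
```

An agent that can traverse these edges answers "Who manages Customer A, and what are they working on?" in two hops, something a pile of documents in a vector store cannot do reliably.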
For a deeper dive into this pillar, see “Agent Memory: Why AI Needs Structured Memory to Work in Business”, where we cover Knowledge Graphs, ontologies, and practical implementation in detail.
Pillar 3: Runtime - Agents and Orchestration
In short: Runtime is the execution layer - which agents exist, what tools they use, and how they collaborate.
Runtime is the pillar everyone talks about. This is where the visible work happens: one agent reviews proposals, another answers customer inquiries, a third writes reports. This is the exciting part of AI transformation.
But Runtime without Memory is like an employee who knows their tasks but knows nothing about the company. They can execute tasks, but they don’t understand context. They don’t know that the customer they’re writing a proposal for filed a complaint last week. They don’t know that the product they’re recommending has a long lead time.
Runtime encompasses:
- Agents - specialized AI units with clear tasks and responsibilities
- Tools - the capabilities agents can access (API calls, database queries, calculations)
- Orchestration - how agents communicate with each other, who calls whom and when, how results get passed along
- Data access - which parts of Memory each agent can access
Orchestration is the most demanding part. A single agent is quick to build. But a system where a sales agent automatically asks the inventory agent about availability during proposal creation and the finance agent checks the customer’s creditworthiness - that’s orchestration. And it requires clear rules.
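The proposal scenario above can be sketched as follows. Agents are plain Python callables here, and the stock and credit data are made up for illustration; a real system would route these calls through an orchestration framework, constrained by the rules defined in the Schema:

```python
# Hypothetical orchestration sketch: during proposal creation, the sales
# agent consults the inventory agent (availability) and the finance
# agent (creditworthiness) before committing to anything.

def inventory_agent(product: str) -> bool:
    """Is the product in stock? (Illustrative data.)"""
    stock = {"widget": 12, "gadget": 0}
    return stock.get(product, 0) > 0

def finance_agent(customer: str) -> bool:
    """Does the customer pass the credit check? (Illustrative data.)"""
    credit_ok = {"customer_a": True, "customer_b": False}
    return credit_ok.get(customer, False)

def sales_agent(customer: str, product: str) -> str:
    """Create a proposal only if both downstream agents approve."""
    if not inventory_agent(product):
        return "declined: product not available"
    if not finance_agent(customer):
        return "declined: credit check failed"
    return f"proposal created for {customer}: {product}"
```

Even in this toy form, the orchestration rule is explicit: the sales agent never commits without both checks. That rule belongs in the Schema, not in the head of whoever wrote the agent.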
How do the three pillars connect?
In short: They’re interdependent - each pillar needs the other two to deliver its full value.
A practical example: a manufacturing company wants to use AI in order processing.
- Without Schema: Agents are working, data is available, but there’s no framework. The sales agent grants discounts that aren’t sanctioned. The support agent makes commitments that production can’t fulfill. The agents do things that seem sensible individually but create chaos collectively.
- Without Memory: The agent doesn’t know the customer. It doesn’t know that this customer has special pricing, that there are open complaints, or that the last project was delayed by three months. It processes the order by standard rules - technically correct, commercially wrong.
- Without Runtime: The data is there, the rules are defined, but nothing happens. The knowledge sits in the graph, but no agent uses it. That’s like a perfectly documented process description that nobody reads.
The three pillars don’t need to be perfect simultaneously. In practice, you start with Schema (what is the company, which processes do we want to map), then build Memory (what data do we need, how do we structure it), and only then comes Runtime (which agents do we deploy on this foundation).
What does this mean for midmarket companies?
In short: The foundation is manageable - and once built, it doesn’t change, even as technology evolves.
The best comparison: it’s like a website project. You define what you want, how it should look, and what data gets displayed. The difference is that AI infrastructure runs in the background and is more abstract - but the process is similarly structured.
The decisive point for midmarket companies: the company’s ontology doesn’t change. A manufacturing business has customers, orders, products, employees, locations, machines. This fundamental structure is stable. What changes is the technology - which models you use, which orchestration frameworks, which tools. But the foundation stays.
This means: the effort for Memory and Schema is a one-time investment. Not in the sense of “build once and never touch again” - of course the ontology and ruleset grow with the business. But the core structure stands, and every new use case can build on it instead of starting from zero.
And that’s exactly the difference between “we have a chatbot” and “we have an AI infrastructure.” The chatbot is a project. The infrastructure is an asset.
FAQ
Can’t we just start with a single agent?
Yes, and you should. The point isn’t that you need to build everything at once. The point is that you build that single agent in a way that fits the larger architecture. Define the ontology for the relevant scope, structure the data in the Knowledge Graph, set the rules. The first agent benefits from it, and the second can build directly on top.
How long does it take to build the foundation?
The ontology for a midmarket company’s core domain can be developed in a few workshops. The initial population of the Knowledge Graph - depending on data quality and system landscape - is a project of several weeks. The Schema grows iteratively. Realistically, the first productive agents on the new foundation are possible within two to three months.
What happens to our existing systems?
Nothing. ERP, CRM, project management tools stay as they are. The Knowledge Graph is a layer above that brings data from these systems together. Some data gets synchronized, some stays exclusively in the source system. This isn’t about replacing existing IT - it’s about creating a semantic layer that agents can work on.
Isn’t this overkill for a midmarket company?
Quite the opposite. In midmarket companies, complexity is often manageable enough to build the foundation quickly. A corporation with 200 IT systems faces a different challenge than a company with an ERP, a CRM, and three specialized systems. The ontology is smaller, data flows are clearer, decision paths are shorter.
Which pillar is the most important?
No single one - that’s the point. But if you have to prioritize: Schema and Memory before Runtime. The data and the rules need to be in place before agents can work meaningfully. Runtime is the result, not the starting point.
Conclusion
AI transformation isn’t a tool problem - it’s an architecture problem. The three pillars Schema, Memory, and Runtime form the foundation on which scalable AI in business stands. Building this foundation once creates an asset that endures regardless of technological evolution. The company’s ontology doesn’t change. What changes are the models, the tools, the orchestration frameworks - but those can be swapped without touching the foundation.
The first step isn’t building an agent. The first step is answering the question: What is our company - in a language that machines can understand too?
If you’d like to know what the three pillars could look like in your organization, let’s talk.
Based on an episode of the SNKI podcast “KI im B2B” with Manuel Zorzi and Michael Kirchberger: Watch now →