TL;DR
AI agents are becoming more autonomous — they research, make decisions, take actions. But without clear rules, they optimize for the wrong goal. Intent Engineering means documenting company values, processes, and decision rules so agents can understand and follow them. This is the most sustainable investment companies can make right now.
Contents
- What’s the difference between Prompt, Context, and Intent?
- Why isn’t Context Engineering enough anymore?
- What can go wrong without Intent?
- How does Intent Engineering work in practice?
- What should companies do now?
- FAQ
What’s the difference between Prompt, Context, and Intent?
In short: Three stages of how we control AI — from simple instructions to company philosophy.
Prompt Engineering was the beginning: I give a task, I get an answer. Back and forth. The AI works only with what’s in its training data.
Context Engineering came next: I provide not just the task, but also relevant information — company data, current documents, customer data. The AI now knows more than just its training data.
Intent Engineering is the next stage: I provide not just task and information, but also the goal — what does the company actually want to achieve? Which values apply? What rules exist?
Why isn’t Context Engineering enough anymore?
In short: Because autonomous agents don’t just answer — they act. And without a clear goal, they act wrong.
Previously, we asked the AI and it answered. Done. We could check the result before anything happened.
Now there are agents like Claude Code, OpenAI Codex, or specialized enterprise agents. They get a task and run with it:
- They research independently
- They create task lists
- They execute actions
- They come back hours later with a result
These agents gather context themselves. They decide which information they need. But they don’t automatically know what the company actually wants.
What can go wrong without Intent?
In short: The agent achieves the goal — but in a way that harms the company.
An example from customer support:
You tell the agent: “Close support tickets as quickly as possible.”
The agent does exactly that. Tickets are closed one after another. The metrics look fantastic.
But: Customers are frustrated. The agent gave no option to escalate. It didn’t distinguish between important and unimportant customers. It showed no goodwill where it would have been appropriate.
The goal was achieved. The intent was missed.
What was missing:
- Customer satisfaction as an overarching goal
- Rules for when escalation is allowed
- Differentiation by customer value
- Discretionary room for special cases
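The missing pieces above can be made concrete as an explicit policy the agent consults before closing a ticket. This is a minimal sketch, not a real system: the `Ticket` fields, thresholds, and action names are all illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical ticket-closing policy encoding the missing intent.
# All field names, thresholds, and actions are illustrative assumptions.

@dataclass
class Ticket:
    customer_value: str   # "standard" or "key_account"
    sentiment: str        # "neutral" or "frustrated"
    reopened_count: int   # how often the customer has come back

def decide(ticket: Ticket) -> str:
    """Return the action the agent should take, not just 'close'."""
    # Escalation path: a human decides for key accounts and repeat issues.
    if ticket.customer_value == "key_account" or ticket.reopened_count >= 2:
        return "escalate_to_human"
    # Discretionary room: offer goodwill before closing a frustrated customer.
    if ticket.sentiment == "frustrated":
        return "offer_goodwill_then_close"
    return "close"
```

With such a policy in place, "close tickets quickly" is subordinate to customer satisfaction: a key account or a twice-reopened ticket is escalated rather than silently closed.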
How does Intent Engineering work in practice?
In short: Rules in machine-readable form — SOPs, policies, decision guidelines that the agent can retrieve dynamically.
We do the same with humans: We give work instructions, explain company values, define escalation paths. Not everything is hard-coded — much of it is guidelines, discretionary room.
For agents, we need the same:
- Process documentation: How do our workflows actually run?
- Decision rules: Who can do what when?
- Value hierarchy: What’s more important than what?
- Escalation paths: When must a human decide?
This information can’t all fit in a prompt. It must be dynamically retrievable — the agent asks when it’s unsure.
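Dynamic retrieval can be sketched in a few lines: a rule store the agent queries by situation, so only the relevant rules enter its working context. The tags and rule texts here are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of dynamic rule retrieval: the agent asks for the
# rules matching its current situation instead of carrying the whole
# rulebook in its prompt. Tags and rule texts are assumptions.

RULEBOOK = [
    {"tags": {"refund"}, "rule": "Refunds over 100 EUR require human approval."},
    {"tags": {"escalation"}, "rule": "Escalate when the customer asks for a manager."},
    {"tags": {"refund", "key_account"}, "rule": "Key accounts get goodwill up to 250 EUR."},
]

def retrieve_rules(situation_tags: set[str]) -> list[str]:
    """Return only the rules whose tags overlap the current situation."""
    return [r["rule"] for r in RULEBOOK if r["tags"] & situation_tags]
```

For a refund situation, `retrieve_rules({"refund"})` returns the two refund-related rules and nothing else; the prompt stays small while the rulebook can grow.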
One option: MCP servers that provide rules and processes to the agent. The agent can get the relevant information when it needs it. How this works in practice is shown in our case study on sales automation at schoene neue kinder.
What should companies do now?
In short: Document processes and rules — in a way machines can understand. This is the most sustainable investment.
Technology will change. Whether Claude, GPT, Gemini, or the next player — that will evolve.
But a company’s rules change more slowly:
- How does our service delivery work?
- What are our values?
- Who can decide what when?
- Where do we need human oversight?
Documenting these things now — in a form that agents can understand — is the most important preparation. Why this groundwork matters even more than picking the right tool is explored in Why AI projects are never done.
Getting started:
- Choose one process — not everything at once
- Human-in-the-loop — agent works, human checks
- Refine rules — learn from mistakes, sharpen guidelines
- Gradually let go — once it works, grant more autonomy
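The human-in-the-loop step above can be sketched as a simple gate: the agent proposes, a human approves or rejects, and rejections are logged as input for refining the rules. Function and field names are illustrative assumptions.

```python
# Sketch of a human-in-the-loop gate for the rollout steps above.
# The agent proposes an action; a human check decides whether it runs;
# rejections feed back into rule refinement. Names are assumptions.

def run_with_oversight(proposed_action: str, approve) -> dict:
    """Execute an agent's proposal only after a human check."""
    verdict = approve(proposed_action)  # e.g. a review UI or CLI prompt
    return {
        "action": proposed_action,
        "executed": verdict,
        # Rejections are the raw material for sharpening the guidelines.
        "needs_rule_review": not verdict,
    }
```

In practice `approve` would block on a real reviewer; as autonomy grows, it can be narrowed to only the risky action classes while routine actions run through.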
FAQ
Is Intent Engineering production-ready?
No, we’re at the beginning. The models can do it — Claude Opus, GPT-5, Gemini 3 are smart enough. But the infrastructure for providing rules isn’t standardized yet. It works, but it’s “bleeding edge,” not “cutting edge.”
Can’t I just put everything in the prompt?
For simple tasks, yes. But complex decision rules that are situation-dependent don’t fit in a prompt. They need to be retrieved dynamically — the relevant rule for the situation.
Do I need technical know-how for this?
For implementation, yes — MCP servers, agent configuration, etc. But the more important part is non-technical: Actually defining the processes and rules. That’s organizational and process work.
How is this different from classic process documentation?
Classic process documentation is written for humans. Intent Engineering means structuring this documentation so machines can interpret it — more precise, more structured, machine-readable.
When should I start?
Now. Not because the technology is ready, but because the groundwork — understanding processes, defining rules — takes time. When agents are truly ready, you want to be prepared.
Conclusion
Intent Engineering is the next stage of how we work with AI. From simple instructions to real collaboration with autonomous agents.
The models are ready. The question is: Are our companies?
The most important task now: Document processes. Define rules. Establish boundaries. So we can show them to the machines when they’re ready.
Because that doesn’t change — no matter what technology comes next.
This article is based on a conversation between Manuel Zorzi and Michael Kirchberger about the evolution from Prompt Engineering to Intent Engineering. Watch the full podcast episode on YouTube →