Your AI assistant doesn’t know you. That’s not a prompting problem.

Mark Goodchild · 4 min read
Production AI

The reason your AI assistant keeps disappointing you on anything business-specific isn't the model, and it isn't your prompts. It's several missing knowledge layers. Until you understand the difference between the four types of knowledge that make an AI assistant genuinely useful, you'll keep hitting the same ceiling.

Most organisations are well-served on one type and almost entirely unserved on the other three. Building those layers is the difference between AI that makes you generically faster and AI that makes you specifically better.

Four types of knowledge

The first is general knowledge: facts, concepts, frameworks, how things work. AI assistants are extraordinary at this. It's also the least valuable to you specifically. Anyone can access it, it doesn't differentiate your work, and it's why AI tools feel simultaneously impressive and slightly beside the point.

The second is organisational knowledge: how your business operates, what you charge and why, how you approach a client problem, what your brand voice means in practice, the standards you hold work to. None of it exists anywhere an AI can access. Every time you want it in a conversation, you have to put it there yourself. Organisational knowledge has collective ownership. A business can update its pricing, revise its standards, evolve its positioning, but it needs governance. Someone has to decide what's current, what's canonical, and who has the right to change it.

The third is personal knowledge: how you think, how you write, the communication instincts you've developed over years of senior work. Not your business's voice, yours. The reasoning patterns behind your decisions, the things you always do and never do, the judgement that took a career to develop. This is fundamentally different from organisational knowledge: the ownership is unambiguous. Nobody else has the right to change it. The challenge is custodianship. Is it actually encoded anywhere? And if it is, does it still sound like you, or a version of you that's drifted?

The fourth is current context: the specific client, document, or situation in front of you right now. You can address this by pasting it in or describing the scenario. It works for one conversation. Then it's gone.

Most people handle types one and four reasonably well. The gap is almost entirely in types two and three, and that's where the compounding value sits.

Why the workarounds don't work

You've probably tried most of them. Pasting context into every prompt: you're doing it from scratch each time, inconsistently, and whatever you include one day won't match what you include the next. Custom GPTs pre-load context, but the knowledge is locked to one platform and gone the moment you switch models. Prompt libraries in Notion or similar look sensible until nobody maintains them and nobody finds the right one when they need it.

All of these reduce friction. None of them gives your knowledge a permanent home, and none of them makes a meaningful distinction between what belongs to the business and what belongs to you.

The problem is architectural

When your assistant gives you a generic answer to a question about your own business, it isn't failing because it isn't clever enough. It genuinely doesn't have the information. Nobody built the layer that connects your knowledge to its capabilities.

That layer does not currently exist off the shelf in any form worth using. It's a design decision: a structured way of encoding both organisational and personal knowledge so that any AI tool can draw on it reliably, regardless of which model you're using or which platform you're on. It also requires thinking carefully about ownership, because the rules for who can update your pricing are not the same as the rules for who can update how you write.
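To make that design decision concrete, here is a minimal sketch of what such a layer could look like. Everything in it is illustrative, not a real product or API: the entry names, the ownership categories, and the plain-text rendering are assumptions about one reasonable shape the idea could take. The point it demonstrates is the one above: ownership rules live on each piece of knowledge, and the encoding is plain text, so any model on any platform can consume it.

```python
from dataclasses import dataclass
from enum import Enum


class Ownership(Enum):
    COLLECTIVE = "collective"   # organisational: governed, updated by whoever is authorised
    INDIVIDUAL = "individual"   # personal: only its owner may change it


@dataclass
class KnowledgeEntry:
    key: str                    # e.g. "pricing", "writing-voice"
    body: str                   # the knowledge itself, as plain prose
    ownership: Ownership
    editors: frozenset[str]     # who has the right to update this entry

    def can_edit(self, user: str) -> bool:
        # Update rights differ per entry, not per system: pricing and
        # personal voice carry different editor lists.
        return user in self.editors


def render_context(entries: list[KnowledgeEntry]) -> str:
    """Flatten entries into a plain-text block any model or platform accepts."""
    return "\n\n".join(f"## {e.key}\n{e.body}" for e in entries)


# Hypothetical entries: one organisational, one personal.
pricing = KnowledgeEntry(
    "pricing", "Placeholder pricing policy.",
    Ownership.COLLECTIVE, frozenset({"ops", "founder"}),
)
voice = KnowledgeEntry(
    "writing-voice", "Placeholder description of how I write.",
    Ownership.INDIVIDUAL, frozenset({"mark"}),
)
```

The governance question from the paragraph above becomes an explicit check: `pricing.can_edit("founder")` is true, `pricing.can_edit("mark")` is false, and only `mark` can touch `writing-voice`. Nothing here depends on which assistant consumes the rendered text.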

The organisations and individuals getting the most from AI are the ones who've asked that question seriously. It sits well upstream of which model you choose.