Prompt engineering is real. It also misses the point.
What's actually happening when you "engineer" a prompt
You sit down with a hard problem. You write a careful prompt. You add the context, structure the question, specify the output format, anticipate the failure modes. The AI produces something good. You celebrate.
Then you close the tab. And tomorrow, when the same problem comes back, you start over. You build the prompt again. Maybe you're 20% faster because you remember some of it. Mostly you're not.
This is the trap: prompt engineering optimizes the wrong layer. The prompt is ephemeral. You're solving the same problem over and over because your work product evaporates between sessions.
The leverage isn't in the prompt. It's in what the prompt is reaching into.
Micromanaging the genius
Imagine you hired a brilliant specialist. Genuinely smart, deeply capable in their domain.
Would you walk in every morning and give them a 500-word brief explaining who you are, what the company does, and what you want them to focus on today? Of course not. You'd build them an environment. Show them the project, the team, the standards, the running state. Define what success looks like. And then trust them to work.
Prompt engineering is the equivalent of writing a fresh brief every time the specialist walks in.
The right move is upstream. Build the environment. Define the success criteria. Let the AI operate against persistent context.
What you should be building instead
A Versioned Context — the project's source of truth. What it is, why it exists, what good looks like. The AI reads it once and operates inside it.
A Living Journey — the running record of decisions made. The AI maintains this. You don't write it; you read it when you need to know what's happened.
Sessions, not transactions — every session reads the context, does work, updates the journey, and closes. Next session picks up where the last one left off. Nothing is lost.
When this is in place, you stop optimizing prompts. The prompt becomes "what's next?" because the AI already knows everything else.
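The whole pattern fits in a few lines. Here's a minimal sketch of the session cycle, assuming a plain-file layout (the file names `CONTEXT.md` and `JOURNEY.md` are illustrative, not prescriptive):

```python
from datetime import date
from pathlib import Path

# Hypothetical layout. CONTEXT.md is the versioned source of truth you
# edit; JOURNEY.md is the running decision log the AI maintains.
CONTEXT = Path("CONTEXT.md")
JOURNEY = Path("JOURNEY.md")

def start_session(prompt: str) -> str:
    """Assemble what the model sees: persistent context, then the
    journey so far, then the (now very short) ask for this session."""
    context = CONTEXT.read_text() if CONTEXT.exists() else ""
    journey = JOURNEY.read_text() if JOURNEY.exists() else ""
    return f"{context}\n\n## Journey so far\n{journey}\n\n## This session\n{prompt}"

def close_session(summary: str) -> None:
    """Append this session's decisions so the next session starts warm."""
    with JOURNEY.open("a") as f:
        f.write(f"\n### {date.today().isoformat()}\n{summary}\n")
```

Each session becomes `start_session("what's next?")`, work, `close_session(...)`. The prompt stays tiny because everything else rides along from the files.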
What prompt engineering still has
There's still craft in how you ask a question. Specificity helps. Constraints help. Examples help. Pretending otherwise is silly.
But that craft compounds when there's a context for it to compound into. A great prompt against a stateless system is a one-shot. A good-enough prompt against a well-structured environment is the start of something that gets better every session.
The leverage is the environment, not the prompt.