Agentic Coding Mastery • for Software Engineers

You approved the AI’s output. The architectural decisions in your code are not all yours.

You are reading back a function the AI generated. Somewhere in the middle of it your eye stops moving forward. Not because there is an error. Because you cannot locate which architectural decisions were yours.

You scan back. You look for the reasoning. And for a second — not long, but long enough — you cannot find it. You approved the output and kept going, and the path back to your own thinking is not there.

That is not a rare edge case. That is Tuesday.

Here is what you just recognized: you are good enough to catch the bad output. That is why you are missing the real problem.

You have enough skill to steer the model when it drifts. That is exactly what hides the second-order problem: you improvise that steering differently every session, and the output is good enough that nothing forces a reckoning until it does.

That pattern has a name. The Fluency Trap: the more fluent your workaround, the less visible the structural gap underneath it. Your skill is not the solution to the inconsistency. Your skill is what makes the inconsistency survivable — and therefore invisible — until it is not.

And the Fluency Trap produces something specific. Something that accumulates in your codebase session by session: The Borrowed Architecture — architectural decisions that are not yours. The agent inferred them. You accepted them. They live in your committed code, borrowed from a model that will infer something different tomorrow.

The Borrowed Architecture takes three forms:

Convention drift

The agent borrows a convention from its training data instead of loading yours. Same prompt, different session, different convention each time. The code is valid. The convention is not yours.
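To make that concrete, here is a hypothetical pair of outputs for the same invented prompt ("write a config loader") across two sessions. Both function names and both conventions are fabricated for illustration; neither came from your codebase.

```python
import json
import os

# Session 1: the agent infers exception-based error handling and
# snake_case naming. Valid code; it never asked what this codebase uses.
def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)  # raises on a missing file or bad JSON

# Session 2: same prompt, and the agent infers sentinel-based error
# handling and camelCase naming instead.
def loadConfig(path):
    if not os.path.exists(path):
        return None  # every caller must now check for None
    with open(path) as f:
        return json.load(f)
```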

Assumption propagation

The agent makes an architectural assumption you did not specify. It passes tests because the assumption is reasonable in isolation. It fails in production because the assumption contradicts a decision you made six months ago.
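A minimal sketch of how that plays out, with hypothetical names and a hypothetical project decision (UTC epoch milliseconds) standing in for yours:

```python
from datetime import datetime, timezone

# The project decision, made six months ago: timestamps are UTC epoch
# milliseconds (int), so they sort and compare across every service.
def project_timestamp() -> int:
    return int(datetime.now(timezone.utc).timestamp() * 1000)

# The agent's helper: a naive local datetime. Reasonable in isolation,
# and a unit test that only asserts "recent enough" passes cleanly.
def agent_timestamp() -> datetime:
    return datetime.now()

# The contradiction surfaces at the seam, long after the review that
# approved it: the two values cannot even be compared.
# agent_timestamp() < project_timestamp()  # TypeError in production
```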

Gate erosion

You started reviewing everything. The agent kept producing clean output. The review got shorter. What used to take an hour now takes minutes — not because the output got simpler, but because your threshold moved. The boundary between “I checked this” and “I approved this” migrated without a decision. The Borrowed Architecture grew in the space your review used to occupy.

The certainty you have about your CI pipeline — green means green, you do not re-run it hoping for a different result — took years of infrastructure to build. Your agent does not have that infrastructure yet. Three pieces are missing: persistent context that survives between sessions, explicit gates that prevent trust from drifting, and an observability layer that makes agent decisions visible before they surface as failures. Those three absences are what let the Borrowed Architecture accumulate.
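None of those three pieces is exotic. As a sketch only, assuming a Python codebase and using hypothetical file names and a hypothetical banned pattern, the smallest versions might look like this:

```python
import json
import time
from pathlib import Path

# Persistent context: one conventions file, committed to the repo and
# prepended to every session, so the agent loads your decisions instead
# of inferring its own. (CONVENTIONS.md is a hypothetical name.)
def session_preamble() -> str:
    return Path("CONVENTIONS.md").read_text()

# Explicit gate: decisions encoded as checks. A check cannot quietly get
# shorter the way a tired human review can; it fails or it passes.
BANNED_PATTERNS = ["datetime.now()"]  # hypothetical project decision

def gate(diff: str) -> None:
    for pattern in BANNED_PATTERNS:
        if pattern in diff:
            raise SystemExit(f"gate: output uses banned pattern {pattern!r}")

# Observability: append every accepted agent decision to a log that the
# next session, and the next reviewer, can actually read.
def record_decision(decision: str, rationale: str) -> None:
    entry = {"t": time.time(), "decision": decision, "rationale": rationale}
    with open("agent_decisions.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
```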

The tools will improve. They always do. But the infrastructure gap is not a tool limitation — it is an architectural absence. Better models produce better output faster, which accelerates convention drift, deepens assumption propagation, and widens gate erosion. The gap grows with the tool, not despite it.

How much of your outcome depends on the prompt, and how much depends on everything the prompt cannot control? The infrastructure determines most of it. That infrastructure has a name.

Agentic Coding Mastery

The engineering discipline that sits between “I am using AI tools” and “I have a repeatable system I would stake my reputation on.” I was looking at the same diff for the third time (three sessions, same codebase, three architecturally incompatible covariance estimators) when I stopped asking how to prompt better and started building what was actually missing. Forty-four arXiv papers later, it turned out the gap was never the prompt.