Founding pricing — first 50 seats or through August 1, 2026, whichever comes first. After that, the price steps up to $797.
Agentic Coding Mastery • Foundation Founding Edition · First 50 Seats or Through August 1, 2026

The Infrastructure Your AI Coding Tools Assume You Already Have

22 modules. 14 frameworks. 11 operational artifacts configured for your real codebase. The layer that converts "it works when I pay attention" into the same quiet certainty you already have about your CI pipeline.

You want your work to become reliable — and you know the moment that told you it was not. Your eye stopping in the middle of a function you could not trace back to your own reasoning. The Fluency Trap producing the three symptoms you already recognize: convention drift, assumption propagation, gate erosion. The Borrowed Architecture forming in your committed code.

That certainty you have about your CI pipeline — green means green, you do not re-run it hoping for a different result — took years of infrastructure to build. Test coverage thresholds, lint gates, deployment checks, observability hooks. But the certainty itself is now automatic. You do not think about it.

You have never had that with your AI coding tool. Not because you have not been trying. Because the infrastructure does not exist yet.

Here is what that shape looks like in a real codebase — and what I discovered when I stopped treating each session as isolated.

I was staring at the same diff for the third time. Three sessions on the same codebase — a quantitative trading framework I have been building — and three architecturally incompatible solutions to the same problem. I had asked an agent to implement a covariance estimator. First session: sample covariance with N-1 divisor. Second session: EWMA with configurable lambda. Third session: Ledoit-Wolf shrinkage toward scaled identity. All three mathematically valid. All three produce different portfolio weights. The codebase uses Ledoit-Wolf — but the agent did not know that without persistent context, so it borrowed a different estimator each time.

Three sessions, one class
# Monday session:
class CovarianceEstimator:        # Sample covariance, N-1 divisor

# Tuesday session (same codebase):
class CovarianceEstimator:        # EWMA, configurable lambda

# Wednesday session (same codebase):
class CovarianceEstimator:        # Ledoit-Wolf shrinkage

Three sessions, three borrowed estimators, one codebase. Convention drift in plain sight.
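
Here is that drift made concrete: a minimal sketch, assuming numpy, scikit-learn, and synthetic returns. The code is mine, for illustration; it is not an artifact from the course.

# Minimal sketch of the three borrows, side by side (illustrative only).
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(250, 5))   # ~1 trading year, 5 assets

# Monday's borrow: sample covariance, N-1 divisor (numpy's default)
sample_cov = np.cov(returns, rowvar=False)

# Tuesday's borrow: EWMA with configurable lambda (RiskMetrics-style recursion)
lam = 0.94
ewma_cov = np.zeros((5, 5))
for r in returns:
    ewma_cov = lam * ewma_cov + (1 - lam) * np.outer(r, r)

# Wednesday's borrow: Ledoit-Wolf shrinkage toward scaled identity
lw_cov = LedoitWolf().fit(returns).covariance_

# All three are mathematically valid. None of them agree, and the
# portfolio weights computed from each will not agree either.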

I had been using AI coding tools for seventeen months and had never once had the quiet certainty I feel about my test suite.

Three weeks before that three-estimator session, I had a session I did not run twice. I reviewed the output once, it looked correct, I committed it. Nine days later, a colleague was tracing an allocation failure in a backtest with crisis-era data on a Friday afternoon. Four hours of debugging before we found it: the agent had borrowed one covariance estimator where the rest of the pipeline expected a different one — correct in isolation, wrong in context. The mismatch did not surface until the backtest hit market stress. Not obviously wrong numbers. Plausible allocations that no risk manager would approve. Assumption propagation — the agent’s reasonable-in-isolation choice contradicting a decision I had made six months earlier.

No production incident. This time.

Here is what that moment clarified: I had 36 years of production engineering experience. I had spent 24 of them at TD Bank leading teams across regulated systems. I had written 10+ technical books on software architecture. I had built infrastructure for every other dependency in my stack — CI/CD, observability, container orchestration. If anyone should have had the operational trust I feel about my pipeline, it was me. That I did not have it was not a prompting failure. It was an infrastructure problem: no one had built the layer that makes trust possible. The Borrowed Architecture was growing with every session. And my skill was what kept me from seeing it.

That was the moment I stopped asking how to prompt better and started building what was actually missing.

01 The Mechanism

What Has Been Preventing the Trust

After seventeen months of systematic research — my own production systems and 44 cited arXiv papers — every friction point traces back to one of three structural absences. Not failures. Absences. The infrastructure that makes green mean green does not exist in the default setup.

Persistent context does not exist — convention drift is the default

Your CI pipeline does not re-infer your test thresholds every morning. It loads them. Your agent re-infers everything — conventions, architectural decisions, which estimator your project uses — from whatever files land in the context window. Every session starts from zero. That is why I got three architecturally incompatible estimators: no persistent knowledge of which one my project used. Three weeks earlier, the session I did not run twice had hit the same absence — except I committed before I noticed. Each session adds another layer of convention drift to the Borrowed Architecture.

You cannot have certainty about output when the system that produced the output does not know what you have already decided.
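
To make loaded-not-inferred concrete, here is one hypothetical shape a session preamble could take. This is a sketch of the idea, not the course's Context Architecture Map format; every name in it is illustrative.

# Hypothetical sketch: decisions loaded at session start, not re-inferred.
PROJECT_DECISIONS = {
    "covariance_estimator": "ledoit_wolf",       # decided six months ago
    "optimizer": "mean_variance_long_only",
    "fill_model": "volume_weighted",
}

def session_preamble(decisions: dict) -> str:
    """Render already-made decisions into the agent's context at session
    start, so nothing is re-inferred from training-data defaults."""
    lines = [f"- {key}: {value} (decided; do not substitute)"
             for key, value in decisions.items()]
    return "Project conventions (load, do not infer):\n" + "\n".join(lines)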

Explicit gates do not exist — gate erosion is inevitable

You started with small tasks. The agent handled them. So you gave it bigger ones. The boundary between “full autonomy” and “human review required” migrated on its own, one successful commit at a time. Today you approve in minutes multi-file changes you would spend an hour reviewing if they came from a human colleague.

Your CI pipeline has explicit gates — coverage thresholds, security scans, deployment checks. Without equivalent gates for agent output, gate erosion is the only possible outcome. The boundary moved by accumulation. You did not decide it. It happened. The Borrowed Architecture does not just contain inferred decisions. It contains unreviewed decisions. That is worse.
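
The shape of an explicit boundary, as a hypothetical sketch (not the course's Autonomy Calibration Ladder): a per-task map that cannot move without a deliberate edit.

# Hypothetical sketch: an autonomy boundary that only moves by explicit edit.
from enum import IntEnum

class Autonomy(IntEnum):
    SUGGEST_ONLY = 1          # agent proposes, human writes
    EDIT_WITH_REVIEW = 2      # agent edits, human reviews every diff
    FULL_WITH_DIFF_GATE = 3   # agent works freely, human gates the merge

AUTONOMY_BY_TASK = {
    "boilerplate": Autonomy.FULL_WITH_DIFF_GATE,
    "refactor": Autonomy.EDIT_WITH_REVIEW,
    "schema_change": Autonomy.SUGGEST_ONLY,
    "architecture": Autonomy.SUGGEST_ONLY,
}

def gate(task_type: str) -> Autonomy:
    # Unknown task types default to the most restrictive level; the
    # boundary widens only when someone edits this map on purpose.
    return AUTONOMY_BY_TASK.get(task_type, Autonomy.SUGGEST_ONLY)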

The observability layer does not exist — assumption propagation is invisible

Your test suite catches human failure modes. Agent-generated code fails differently — at the assumption layer. My covariance estimator session passed every test. The failure was in the assumptions about which estimator the pipeline expected, not in the mathematics. Assumption propagation — invisible until the backtest hit market stress.

Your pipeline has an observability stack. You know what it saw, what it decided, what failed and why. You do not have that for your agent. When something goes wrong, you start from symptoms four hours into a Friday trace, not four minutes into a morning check. The Borrowed Architecture is invisible precisely because nothing is watching it form.
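
One hypothetical shape for a single record in such a layer, illustrative only rather than the course's four-layer stack: what the agent saw, what it decided, what it produced, and where it deviated.

# Hypothetical sketch: one observability record for one agent session.
from dataclasses import dataclass, field

@dataclass
class AgentTrace:
    files_loaded: list          # what the agent saw
    decisions: dict             # what it decided (e.g. which estimator)
    diff_summary: str           # what it produced
    deviations: list = field(default_factory=list)  # departures from loaded conventions

trace = AgentTrace(
    files_loaded=["portfolio/risk.py"],
    decisions={"covariance_estimator": "sample_n_minus_1"},
    diff_summary="rewrote CovarianceEstimator",
    deviations=["covariance_estimator != project convention 'ledoit_wolf'"],
)
# A morning check is a scan of trace.deviations, not a Friday trace from symptoms.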

They compound. The first absence makes the second invisible — you cannot calibrate trust in a system with no memory. The second expands the third — more scope means more unreviewed assumptions. The third makes the first seem harmless — because the failures have not surfaced yet.

That compounding is the Fluency Trap. Each absence is survivable on its own — your skill patches over it. Together, they create the Borrowed Architecture: a structural gap that widens with every session. The gap between the certainty you have about your pipeline and the certainty you do not have about your agent is not a prompting problem. It is an infrastructure problem. And the infrastructure, once built, does the opposite of erode — each convention you specify seals the next drift before it forms, each gate you set contains the next assumption before it propagates, each observation you log surfaces the next deviation before it compounds. The system gets more reliable the longer it runs.

02 A Monday With the Infrastructure

What Monday Morning Feels Like When the Infrastructure Exists

6:51 AM

Terminal open. Your agent loads the Context Architecture Map you built in Unit 1. Conventions, architectural constraints, which estimator the project uses, which optimization algorithm, which fill model — loaded by design, not borrowed from training data. The low-grade vigilance you did not know you were carrying is not there. The infrastructure is watching the thing you used to watch manually. Convention drift sealed. Every decision is specified.

7:08 AM

A feature request from Friday. You check your Autonomy Calibration Ladder. Level 3 — full agent autonomy with human review of the diff. The agent produces one solution — because the correct estimator, the correct conventions, and the correct validation chain are in the Context Architecture Map, not in the model’s training data. You approve the diff in ninety seconds. There is no version of this that could have produced my three-estimator session. Gate erosion contained — the boundary is explicit. Reliable.

7:23 AM

The agent proposes a schema change. You recognize the pattern in four seconds: assumption propagation — textbook default instead of project convention. Your Failure Classification Map named this pattern before the agent produced it. You catch it in ninety seconds instead of four hours on a Friday. The observability log shows what the agent inferred, what it should have loaded, and where the deviation occurred. No archaeology. No Borrowed Architecture to trace. Assumption propagation surfaced the moment it tried to form.

7:41 AM

Standup. Someone suggests the agent for a cross-service migration. You walk the team through two assumption-layer risks nobody else identified — and halfway through your explanation, you notice you are not translating. You are thinking in the vocabulary. Convention drift. Assumption propagation. Gate erosion. The words arrived without effort, the way “coverage threshold” and “deployment gate” arrive when you talk about your CI pipeline. The framework is not something you learned. It is something you use. The Borrowed Architecture has a name, and the name changed how you see every agent-generated diff.

8:07 AM

You check the observability log. Four minutes. That quiet certainty — the kind you feel about your CI pipeline, your test suite, your deployment process — now extends to the one tool in your stack that used to run on hope. Your eye moves forward. The Fluency Trap is closed. Borrowed became owned. Reliable. And the infrastructure is stronger today than it was yesterday — because every session that runs through it seals another convention, contains another gate boundary, surfaces another deviation before it compounds. It cures.

That is the workflow you build, module by module, across a 22-module curriculum backed by 44 cited arXiv research papers.

03 Pattern Recognition

You Have Built This Before

You have done this exact thing. The CI pipeline started as a script you ran manually and hoped would pass. You did not trust it. Then you added coverage thresholds, and you trusted the coverage. Then you added lint gates, and you trusted the style. Then you added deployment checks, and you trusted the release. Each layer of infrastructure converted hope into certainty. You built it because getting by was not good enough.

The deployment process was the same story. It started as a checklist on a wiki page you followed by hand. Then it became a script. Then the script became a pipeline with rollback gates and health checks and canary stages. Each layer reduced the surface area of hope. Today you deploy with the same certainty you run tests — because you built the infrastructure that makes certainty possible.

The agent is not a different category. It is the next tool in the sequence. And you already know that the distance between “it works when I pay attention” and “I trust it” is exactly the infrastructure you are about to build.

You built the CI certainty. You built the deployment certainty. What you want now — reliable agent output, every session — is the same pattern applied to the one tool in your stack that still runs on borrowed decisions.

04 The Artifacts

The 11 Artifacts That Build the Infrastructure

You finish Foundation with eleven operational artifacts configured for your actual codebase — not notes from a course, not templates from a demo project. Each one closes a specific gap between your current setup and the certainty you already know from the rest of your stack.

Persistent Context · Sealing Drift

1

Context Architecture Map

The artifact that would have prevented the session I committed without a second look. You map which context flows into your agent by design instead of by inference — which estimator the project uses, which optimization algorithm, which fill model. When the agent loads this context, it produces one solution — not three. (Most engineers discover their context is 40% noise on the first pass.)

2

Intelligence Loop Design

Your agent makes the same wrong assumption on Tuesday that you corrected on Monday. This artifact makes corrections persist across sessions automatically. (The twenty-minute Monday re-explaining ritual drops to under two minutes within a week.)

Explicit Gates · Containing Erosion

3

Human Gate Protocol

The absence of this single artifact is how I approved an estimator decision in ninety seconds that needed an hour of review. It defines where human judgment is required, what triggers escalation, and what the review criteria are at each gate.

4

Autonomy Calibration Ladder

Per-task autonomy levels with explicit engineering criteria. Boilerplate gets full autonomy. Architecture decisions get human gates. The boundary never drifts on its own.

5

Reversible Execution Setup

Before the agent introduces an assumption you did not catch, rollback is available. (Cost without this artifact: cancel your afternoon. Cost with it: 10-second reset.)

Observability · Surfacing Propagation

6

Agent Observability Stack

The four-layer diagnostic trail behind your 8:07 AM log check. What the agent saw, what it decided, what it produced, and where it deviated. (Four minutes instead of four hours.)

7

Failure Classification Map

Agent output fails in predictable, categorizable ways. My estimator mismatch was a known pattern: assumption propagation. (Typically surfaces 4–6 patterns that account for 80% of agent-generated defects in a given codebase.)

Measurement · The Foundation Under All Three

8

Reliability Surface Assessment

You stop estimating whether your agent “works most of the time” and start measuring against engineering tolerances. The same way you measure test coverage, not good intentions.

9

Codebase Readiness Index

Which parts of your codebase produce reliable agent output, scored across four dimensions. (Most codebases have 2–3 zones where agent output is structurally unreliable.)

10

Token Budget Profile

Where your context window budget actually goes, stage by stage. (Most engineers are running 30–50% overhead re-explaining architectural constraints their agent forgot.)

11

Agent Security Surface Review

Your agent has read/write access to your codebase. This artifact maps the attack surface the same way you would for any other tool with those permissions. (Engineers who audit this first discover 2–3 implicit trust assumptions they did not know they had granted.)

05 The Curriculum

What You'll Build, Module by Module

22 Modules
14 Frameworks
44 Research Citations
Foundation curriculum: four units covering the individual practitioner system

Unit · Focus · What You Build
0: Orientation · Where do I start? · Self-placement diagnostic, Claude Code configuration
1: The Prompt Engineer · What does your agent actually know? · Context Architecture Map, Intelligence Loop Design
2: The Context Architect · How does your agent learn from mistakes? · Reversible Execution Setup, Intelligence Loop integration
3: The Production Engineer · Is this production-ready? · Reliability Surface, Codebase Readiness, Observability, Autonomy Ladder, Human Gates, Failure Classification, Token Budget, Security Surface

Self-paced. Text-based. Five to nine hours. Every framework backed by peer-reviewed research — 44 arXiv citations linked to the actual papers. Foundation covers Units 0–3: the complete individual practitioner system. 14 of the 21 total frameworks. 11 operational artifacts built for your codebase.

Units 4–5, the capstone audit, and multi-agent orchestration are Mastery scope — and your $497 applies as 100% credit toward Mastery within 60 days.

06 Audience

Who This Is For

Engineers with 5–20 years of experience

You use AI coding tools daily and want results that are reliable, not lucky. Self-paced, self-directed. No cohort pacing, no live facilitation.

“I already know how to prompt.”

Good. Prompting is three modules out of twenty-two. The other nineteen build the infrastructure that makes output reliable — including the observability layer that would have flagged my estimator mismatch before I committed it.

“I have been getting by fine.”

You built infrastructure for every other tool in your stack because you wanted certainty, not fine. Getting by is not how you built those tools. It is not the relationship you have with your CI pipeline, your deployment process, your test suite. You wanted reliable. That is why you clicked.

“Won’t the tools just get better?”

They will. Better models produce better output faster — which accelerates convention drift, deepens assumption propagation, and widens gate erosion. The infrastructure gap grows with the tool, not despite it. Your CI pipeline did not become unnecessary when compilers improved. The infrastructure is what makes the improvement usable.

Not for beginners. Not for prompt collectors. If you need multi-agent orchestration now, that is Mastery.

07 The Offer

The Founding Terms

Foundation

The Individual Practitioner System

Self-paced · Text-based · 5–9 hours
Founding price: $497 (regular $797)
  • 22 modules across Units 0–3 — the complete individual practitioner system
  • 14 frameworks backed by peer-reviewed research
  • 11 operational artifacts configured for your real codebase
  • 44 arXiv citations linked to the actual papers
  • Full-refund guarantee if framework depth doesn't match what you'd expect from 36 years of production engineering
  • 100% credit toward Mastery within 60 days — upgrade and pay only the difference
Get Foundation — $497 Founding Price

Lifetime access · No cohort, no live calls · Refund anytime in the first 30 days

The Founding Window

Founding pricing holds for the first 50 seats or through August 1, 2026, whichever comes first. After that, the price steps up to $797. A cap, a date, a price, a window.

Foundation vs. Mastery comparison

Modules: Foundation 22 · Mastery 34
Frameworks: Foundation 14 · Mastery 21
Operational artifacts: Foundation 11 · Mastery 21
Multi-agent + unattended execution: Mastery only
Human-factors layer: Mastery only
Capstone System Audit: Mastery only
Founding price: Foundation $497 · Mastery $997

No testimonials. No case studies. This is a founding edition — you are evaluating the engineering: 14 frameworks, 11 artifacts, 44 research citations. The price reflects that you are early. The depth does not.

From Pierre

The Decision

You have had the certainty I am describing. You have it right now, about your test suite. About your deployment pipeline. About your CI gates.

Think about the session you did not run twice. The commit that passed your test suite. The diff that looked clean in review. You already know the Borrowed Architecture is in those commits — convention drift, assumption propagation, gate erosion, compounding quietly.

The three architecturally incompatible estimators were never a prompting problem. They were the Fluency Trap — three structural absences masked by seventeen months of skill. The infrastructure that closes them does the opposite of erode.

Each convention specified seals the next drift before it forms. Each gate set contains the next assumption before it propagates. Each observation logged surfaces the next deviation before it compounds. It cures.

Imagine opening the next architecture review knowing every AI-generated decision in the codebase was specified, gated, and observable. That quiet certainty extending to the one tool that used to run on hope. The CI pipeline started as a script. The deployment started as a checklist. Each became infrastructure because you built it into infrastructure. Getting by is not the relationship you built with those tools. You wanted reliable. Build the infrastructure. The Borrowed Architecture becomes what your pipeline already is — yours.

Each convention specified.

Each gate set.

Each observation logged.

The certainty you have about your CI pipeline, extending to the one tool that used to run on hope.

Excelsior,
Pierre Boutquin
Founder, Curio Chat Academy

About Pierre

36 years building production software. 24 at TD Bank, leading teams across regulated systems. 10+ technical books. Built a production-grade quantitative trading framework with Claude Code — 70,500 lines, 14 source projects, published on NuGet — then discovered that the infrastructure separating a weekend prototype from a reliable daily workflow did not exist yet. I built it. This is how I teach it.