The Loss Function That Writes Back
Coding agents, harness design, and a daily practice for getting less wrong.
The first time an agent saved me an hour, I didn’t trust it.
The second time it saved me an hour, I got cocky.
The third time it hallucinated a function name, edited the wrong file, and I spent the rest of the night chasing a bug that never existed.
That pattern is why this blog exists.
A coding agent is not magic. It is a system that works under certain constraints and fails in specific, repeatable ways. When it fails, it leaves a trail: missing context, leaky tool boundaries, bad defaults, weak verification, unclear intent. That trail is the loss function that writes back.
So I started keeping notes.
The delta
Every interaction with an agent produces a delta:
- What I asked for
- What it did
- What it should have done
- Why it missed
If I can explain the miss, I can reduce it next time. Sometimes the fix is boring: better context, tighter tool interfaces, smaller steps, explicit checkpoints. Sometimes it is deeper: orchestration, routing, evals, and failure containment.
Either way, the signal is there.
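The four-part delta is easy to keep as structured notes. Here is a minimal sketch in Python; the class and field names are my own invention for illustration, not any agent tool's API:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record for one agent interaction. The fields mirror
# the four bullets above: ask, action, expectation, explanation.
@dataclass
class Delta:
    asked: str            # what I asked for
    did: str              # what it did
    should_have: str      # what it should have done
    why_missed: str = ""  # why it missed (empty until I can explain it)
    logged: date = field(default_factory=date.today)

    def explained(self) -> bool:
        # A miss I can explain is a miss I can reduce next time.
        return bool(self.why_missed.strip())

log = [
    Delta(
        asked="rename the helper across the package",
        did="edited the wrong file",
        should_have="searched for the symbol before editing",
        why_missed="no repo map in context",
    ),
]

# Unexplained misses are the ones still worth digging into.
unexplained = [d for d in log if not d.explained()]
```

The point of the structure is the empty `why_missed` field: it turns a vague frustration into a queue of open questions.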
What I write about
- Short postmortems of agent failures and what fixed them
- Patterns for making agents feel reliable instead of random
- Small system upgrades that compound: prompts, tools, checks, evals
- Illustrations and diagrams when they matter, code when it helps
I care less about the current surface area of these systems than about the recurring ways they break. Today’s failures tend to become tomorrow’s design requirements.
So I’ll write down the misses, the fixes, and the system patterns that seem to matter as these tools improve.
Welcome to the gradient descent.