The Loss Function That Writes Back

A blog about tearing apart coding agents, understanding LLMs from the application layer, and getting less wrong every day.

In machine learning, the loss function measures how far off your prediction is from reality. You compute the gradient, update the weights, try again. The gap shrinks.
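
That loop is small enough to write out. Here's a minimal sketch of it in plain Python, with no libraries: one weight fitting y = 3x from a few points, a mean-squared-error loss, and a hand-derived gradient. All the names (`w`, `lr`, `loss`) are illustrative, not from any framework.

```python
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (x, y) pairs; the true slope is 3
w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate

def loss(w):
    # Mean squared error: how far the predictions are from reality
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

for step in range(200):
    # Gradient of MSE with respect to w: d/dw (wx - y)^2 = 2x(wx - y)
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad  # update the weight; the gap shrinks

print(round(w, 3))  # converges toward 3.0
```

Each pass measures the delta, follows it downhill, and tries again with the adjusted weight.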

Learning works exactly the same way.

Every day I sit down with Claude Code, Codex, or whatever open source agent just dropped, and I try to build something. Sometimes it’s beautiful. Sometimes the agent hallucinates an API that doesn’t exist and I lose an hour. That delta between what I expected and what actually happened? That’s my loss. And if I’m paying attention, it compounds into understanding.

That’s what this blog is about.

What lives here

I’m an application-layer person. I don’t pretend to train foundation models. What I do is take the best coding harnesses available, crack them open, and figure out what makes them tick.

How does Claude Code decide when to search vs. when to edit? Why does Codex nail a refactor in one repo and fumble it in another? What system prompts, tool designs, and orchestration patterns separate a good agent from a frustrating one?

These are the questions I chase. Expect posts on:

  • Agent internals dissected from the outside in
  • Workflow patterns that actually help me ship code faster
  • LLM behavior explored through the lens of building real things
  • Open source tools torn apart so you don’t have to

Why “lossfn”?

Because the loss function isn’t a number you minimize once. It’s a practice. The gap between what you know today and what you’ll need tomorrow never closes; it just moves. The discipline is showing up, measuring the delta, and adjusting.

Welcome to the gradient descent.