Prompts, Tokens, and Plot Twists
Notes from the application layer, where LLMs meet reality: latency budgets, token bills, tool failures, and users who do not care that your agent was "almost right".
I tear apart coding agents and their harnesses, then write down what actually makes them reliable.
Latest posts