Anthropomorphizing
When the model does something wrong, it’s tempting to treat it like a person. “It’s not thinking clearly.” “It got confused.” “It’s being stupid.” “It’s pissing me off.”
This is the wrong mental model.
The Piggy Bank
Think of the model as a piggy bank that needs to be shaken in a very specific, unusual way to get the coins out. The coins are there. You just need to find the right angle.
You’re not having a conversation with something that understands you. You’re setting up context such that the model can copy a pattern. The output you want already exists in the latent space—you’re just arranging the input to make it fall out.
The Wrong Approach
Going back and forth in a chat interface, trying to get the model to “understand” what you want, adding more explanation, rephrasing the same request: this is the wrong way to use it.
If the model isn’t producing what you need, the answer isn’t to explain harder. The answer is to restructure the context. Provide examples. Show the pattern. Give it something to copy.
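To make the contrast concrete, here is a minimal sketch in Python. Everything in it is illustrative: the slugification task, the prompt wording, and the `complete()` function are assumptions standing in for whatever task and model client you actually use.

```python
# A sketch of "restructure the context" versus "explain harder".
# complete(prompt) is a hypothetical model call; substitute your own client.

# The "explain harder" approach: more words, no pattern to copy.
explaining_prompt = (
    "Please convert product names to URL slugs. A slug should be lowercase, "
    "use hyphens instead of spaces, and drop punctuation. Make sure you "
    "really understand what I mean by a slug before answering.\n\n"
    "Product: Deluxe Coffee Grinder (2nd Gen)"
)

# The pattern approach: show input/output pairs and let the model continue.
few_shot_prompt = (
    "Product: Stainless Steel Water Bottle, 32 oz\n"
    "Slug: stainless-steel-water-bottle-32-oz\n\n"
    "Product: Kids' Rain Jacket (Yellow)\n"
    "Slug: kids-rain-jacket-yellow\n\n"
    "Product: Deluxe Coffee Grinder (2nd Gen)\n"
    "Slug:"
)

# result = complete(few_shot_prompt)  # hypothetical model call
```

The second prompt is shorter than the first, but it gives the model a shape to continue rather than instructions to interpret.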
Why This Matters
Anthropomorphizing leads to frustration. You expect human-like understanding, and when you don’t get it, you blame the model’s “intelligence.” But intelligence isn’t the issue. The issue is that you haven’t found the right way to shake the piggy bank.
Models are pattern matchers. They excel at continuing patterns, following examples, and completing structures. When you provide the right setup, they produce remarkable output. When you don’t, no amount of explaining will fix it.
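The same idea applies to completing structures. A rough sketch, again with a hypothetical `complete()` call and made-up table contents: hand the model a partially filled table and the most likely continuation is the next cell and more rows in the same format, because that is the pattern the context establishes.

```python
# A sketch of "completing structures": the context is a half-finished table,
# so the natural continuation is more of the same table.
# complete(prompt) is a hypothetical model call.

structured_prompt = (
    "| HTTP status | Meaning           |\n"
    "|-------------|-------------------|\n"
    "| 200         | OK                |\n"
    "| 301         | Moved Permanently |\n"
    "| 404         | "
)

# result = complete(structured_prompt)  # hypothetical model call
```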
The Pattern
Stop treating the model like a confused colleague. Start treating it like a machine that needs specific input to produce specific output. Find the angle that makes the coins fall out.