The following article originally appeared on Medium and is being republished here with the author’s permission.
Early on, I caught myself saying “you” to my AI tools—“Can you add retries?” “Great idea!”—like I was talking to a junior dev. And then I’d get mad when it didn’t “understand” me.
That’s on me. These models aren’t people. An AI model doesn’t understand. It generates, and it follows patterns. But the keyword here is “it.”
The Illusion of Understanding
It feels like there’s a mind on the other side because the output is fluent and polite. It says things like “Great idea!” and “I recommend…” as if it weighed options and judged your plan. It didn’t. The model doesn’t have opinions. It recognized patterns from training data and your prompt, then synthesized the next token.
That doesn’t make the tool useless. It means you are the one doing the understanding. The model is clever, fast, and often correct, but it can also be wildly wrong in ways that will confound you. And when that happens, it’s your fault: you didn’t give it enough context.
Here’s an example of naive pattern following:
A friend asked his model to scaffold a project. It spit out a block comment that literally said “This is authored by <Random Name>.” He Googled the name. It was someone’s public snippet that the model had basically learned as a pattern—including the “authored by” comment—and parroted back into a new file. Not malicious. Just mechanical. It didn’t “know” that adding a fake author attribution was absurd.
Build Trust Before Code
The first mistake most folks make is overtrust. The second is lazy prompting. The fix for both is the same: be precise about inputs, and validate the assumptions you are throwing at the model.
Spell out context, constraints, directory boundaries, and success criteria.
Require diffs. Run tests. Ask it to second-guess your assumptions.
Make it restate your problem, and require it to ask for confirmation.
Before you throw a $500/hour problem at a set of parallel model executions, do your own homework to make sure that you’ve communicated all of your assumptions and that the model has understood what your criteria are for success.
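To make that concrete, here is a minimal sketch of what “spelling it out” can look like. Every project detail below—the paths, the retry task, the test command—is a made-up placeholder; the point is the structure, not the specifics, and how you actually send the prompt depends on your tool.

```python
# A prompt skeleton that states context, constraints, scope, and success
# criteria up front, and forces the model to restate the task before coding.
# All project details are illustrative placeholders -- substitute your own.

PROMPT = """\
Context:
- Python 3.12 service in src/billing/; tests live in tests/billing/.
- Migrations are managed by hand. Do NOT touch the database schema.

Task:
- Add retry logic (3 attempts, exponential backoff) to the charge_card()
  call in src/billing/charges.py.

Constraints:
- Only modify files under src/billing/ and tests/billing/.
- Return your changes as a unified diff; do not rewrite whole files.
- Do not add new dependencies.

Success criteria:
- All existing tests still pass (pytest tests/billing/).
- A new test proves the retry path is exercised on a transient failure.

Before writing any code: restate the task in your own words, list the
assumptions you are making, and wait for my confirmation.
"""

print(PROMPT)  # paste into your tool of choice, or send it via its API
```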
Failure? Look Within
I continue to fall into this trap when I ask this tool to take on too much complexity without giving it enough context. And when it fails, I’ll type things like, “You’ve got to be kidding me? Why did you…”
Just remember, there is no “you” here other than yourself.
It doesn’t share your assumptions. If you didn’t tell it not to touch the database and it wrote an idiotic migration, you did that by never saying the database was off-limits.
It didn’t read your mind about the scope. If you don’t lock it to a folder, it will “helpfully” refactor the world. If it tries to remove your home directory to be helpful? That’s on you.
It wasn’t trained on only “good” code. A lot of code on the internet… is not great. Your job is to specify constraints and success criteria.
The Mental Model I Use
Treat the model like a compiler for instructions. Garbage in, garbage out. Assume it’s smart about patterns, not about your domain. Make it prove correctness with tests, invariants, and constraints.
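One way to “make it prove correctness” is to write the test yourself and hand it over as part of the success criteria: the model’s diff is only accepted if the test passes. A minimal sketch, continuing the hypothetical retry example above (`charge_with_retries`, `charge_card`, and `TransientError` are placeholder names, not a real API):

```python
# test_retries.py -- a test you write *before* asking the model for code.
# The generated diff is accepted only if this passes. Names are illustrative.
from billing.charges import TransientError, charge_with_retries


def test_retries_then_succeeds(monkeypatch):
    calls = {"n": 0}

    def flaky_charge(amount_cents):
        # Fake gateway call that fails twice, then succeeds.
        calls["n"] += 1
        if calls["n"] < 3:
            raise TransientError("gateway timeout")
        return "ok"

    # Replace the real charge call with the flaky fake.
    monkeypatch.setattr("billing.charges.charge_card", flaky_charge)

    assert charge_with_retries(4200) == "ok"
    assert calls["n"] == 3  # two failures, one success
```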
It’s not a person. That’s not an insult. It’s your advantage. If you stop expecting human-level judgment and start supplying machine-level clarity, your results jump. But don’t let sycophantic agreement lull you into thinking you have a pair programmer next to you.