
a greasy AI feeling

It's the thing where someone has tried to communicate with you, and they have used AI to do so, and as a result the layer of meaning is slightly obscured, and now you need to try to peer past the AI details to find the human intention behind it.

And maybe they have communicated directly with you or maybe they have just written stuff as general notes. Either way it's the same - a feeling that you're never standing on solid ground, that everything you're dealing with is an echo of the original.

And this matters to different degrees for different tasks. Some things can be done diffusely, imprecisely - you just need to push something in the right general direction and it'll do. But most things, at least the way I think about it, start by looking for solid ground. Maybe that ground isn't directly perceivable, but it's what I'm trying to discover.

When I'm debugging, I often think in terms of a paradigm where I'm collecting lots of pieces of evidence, and I am aware that what those pieces of evidence say is not what they seem to say. The log says the event was emitted here - well, I believe that something caused this log line to be printed, but I merely have high confidence that the event was actually emitted. You go through all of the things you believe, you find the thing you have the lowest confidence in, and then you try to find evidence which increases that confidence. Or maybe, if you're lucky, you find - ah, no, here's the error! The event emission had been commented out, but the logging statement was left in place; that's why the events were never being received on the other side. You could call it challenging your assumptions, but the whole point of this mindset is to try not to have assumptions. Just stuff that, on the basis of the evidence, seems likely.
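To make the shape of that loop concrete, here's a toy sketch in Python. Every name, claim and confidence number in it is invented for illustration - it's not a real tool, just the bookkeeping the paragraph above describes: beliefs with an attached confidence, and you go after the shakiest one first.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str        # what I currently think is true
    evidence: str     # what I actually observed that makes me think it
    confidence: float # 0.0 = pure guess, 1.0 = I watched it happen

# Hypothetical beliefs from the event-emission example above.
beliefs = [
    Belief("the event is emitted here", "log line 'emitting event' printed", 0.7),
    Belief("that log line comes from this code path", "grepped the string, one match", 0.9),
    Belief("the consumer is subscribed", "config file lists the topic", 0.6),
    Belief("nothing arrives on the other side", "consumer-side counter stays at zero", 0.95),
]

# The step the prose describes: find the claim I'm least sure of,
# and go gather evidence for (or against) that one before anything else.
weakest = min(beliefs, key=lambda b: b.confidence)
print(f"Investigate next: {weakest.claim!r} (confidence {weakest.confidence})")
```

The numbers are fake, of course - the point is just that the "assumptions" become explicit things you can rank and attack, rather than things you silently rely on.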

But if you're trying to apply this process when there's an AI in the loop - well, now the link between what the system says and what the system does is weaker. It says this but it might be doing something else. Everything becomes less precise. The AI says that the event is being emitted, but is it?

And, y'know, maybe what I'm saying here is that the intuitions I've developed for debugging code the old fashioned way are missing important parts for reasoning about AI models. That I need to learn to decipher their signals, form accurate estimates of trustworthiness, use them instead to investigate multiple possibilities in parallel or to iterate through things faster. Learn to guide them through this same process of going from evidence to likelihoods.

But this feeling also persists outside of programming. If I see a brief and there's a specific detail mentioned in passing - it looks kind of innocuous, but on reflection turns out to be a major constraint to design around... well, if it's human-written then I'll use a load of context clues to get a sense of whether it really is a hard constraint from their side or whether it just passed by without too much thought. But if I think it's AI, then it's just so much harder to pick up on whether a detail is essential or incidental. Greasy. You could say - well, if it's an important detail, it's worth asking about. And I would! But I'd also want to go back with a list of details to query, some early thoughts to check against, etc.

(In the real nightmare scenario, you go back to check on the query and receive yet another AI response, back and forth never quite touching their intent until you've got a full thing specced out that is almost but not quite entirely unlike the thing they were looking for)

Or, a doc outlining a system design. Even if it's wrong when I actually get into the details, if it's human-written then I get a lot of useful information from the fact that it's wrong. Maybe the human thought this was how the system worked. Maybe this was a doc describing the intended outcome, before it changed during implementation. Maybe it changed after it was already running? All of these imply other things about the system. But if it's AI, then all it tells me is that the described behaviour is a plausible, common alternative to the way the system actually works. I probably know that already!

We're social animals; we're very good at picking up on subtle social cues. A full stop at the end of a text tells us that someone is indeed mad. Unless they're a person who just uses full stops there. A single character carrying a mountain of context and a mountain of meaning.

It's a greasy feeling. It makes everything feel slippery. Unreliable. It's a feeling that comes not from your own use of AI - when you look at the things it's generated for you, you see lots of stuff that looks right, and your mind fills in all the gaps with the context you already know. It comes from trying to deal with the AI of others. Or even the suspected use of it - if you think AI was involved, the doubts are raised just the same.


Honestly of all the potential misuses of generative AI in games, "writing design docs with copilot" is probably the worst one? Bad writing or bad art are one thing, bad internal docs are sending your coworkers into a spiral of time waste

Bruno Dias (@brunodias.bsky.social) 2025-12-21T16:24:31.704Z

deciphering programmer art like you're in a movie montage where a guy does science in a dimly lit lab for 4 days and 4 nights is an integral part of the process. you gotta learn to find the soul of it. its right there. behind the word "BIGGER" they underlined so hard it ripped through the paper

(https://bsky.app/profile/hrpixelart.com/post/3maggxnytec2u)