How to Tell if Something is AI-Written

One rule: look for formulations like “it’s not just X, but also Y” or “rather than A, we should focus on B.” This structure is a form of computational hedging. Because an LLM knows only the relationships between words, not between words and the world, it wants to avoid falsifiable claims. (I’m saying “want” as a joke, but it helps to see LLMs as wafflers.) By being all balance-y, it can sound comprehensive without committing to anything.
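
The pattern is regular enough that you can catch a lot of it mechanically. Here is a minimal sketch in Python; the regex patterns and the sample sentence are my own illustrations, not a validated detector.

import re

# Illustrative patterns for "not just X, but also Y" and
# "rather than A, we should B." A real screen would need many more
# variants, plus calibration against prose you know to be human.
HEDGE_PATTERNS = [
    r"\bnot (?:just|only|merely)\b[^.]{0,60}?\bbut(?: also)?\b",
    r"\brather than\b[^.]{0,60}?\b(?:we should|let us|consider|focus on)\b",
    r"\bit[’']?s not about\b[^.]{0,60}?\bit[’']?s about\b",
]

def hedging_hits(text):
    """Return every substring that matches a hedging formulation."""
    hits = []
    for pattern in HEDGE_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE))
    return hits

sample = ("It’s not just about efficiency, but also about equity. "
          "Rather than banning the tools, we should focus on judgment.")
print(hedging_hits(sample))

A couple of hits in a short passage proves nothing; a steady drumbeat of them is the tell.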

An LLM navigates what researchers call “conceptual space,” the network of relationships between ideas as they appeared in training text. This is different from the network of relationships between ideas and reality that we humans struggle with. LLMs live entirely within the realm of signifiers, tracking statistical relationships between word-forms without access to the signifieds that anchor language to human experience.

LLMs play it safe. At the same time, they want to sound sophisticated and comprehensive.

Look for unnaturally perfect balance, where every point has a counterpoint and every advantage comes with a corresponding challenge. That’s AI.
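
If you want a number, the balance can be scored crudely. A sketch, assuming a hand-picked marker list (my guesses, not a standard lexicon):

import re

# Counterpoint markers. Ambiguous words like "while" and "yet" are
# left out on purpose; they match too much ordinary prose.
COUNTERPOINT_MARKERS = [
    "however", "on the other hand", "that said", "at the same time",
    "conversely", "nonetheless", "by contrast",
]

def balance_score(text):
    """Counterpoint markers per sentence. Human prose usually commits;
    a score creeping toward 1.0 means every point gets a counterpoint."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if not sentences:
        return 0.0
    lowered = text.lower()
    hits = sum(lowered.count(marker) for marker in COUNTERPOINT_MARKERS)
    return hits / len(sentences)

Run it on a known-human op-ed first, so you know what normal looks like before you judge a suspect passage.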

Look for the absence of concrete details. AI-generated text about education might discuss “preparing students for the future workforce” and “developing critical thinking skills,” but it rarely mentions specific students, particular classroom moments, or concrete pedagogical challenges (like no pencils). The language floats at the level of institutional abstractions because that’s all the model has.
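
Absence is harder to measure than presence, but you can proxy it by counting how often a text grounds itself in numbers, names, or quoted speech. Everything in this sketch, the regexes included, is a rough stand-in for real entity detection:

import re

def concreteness_signals(text):
    """Crude proxies for concrete detail. Low counts across the board
    are a hint that the prose is floating in abstraction, not a verdict."""
    return {
        # Numerals: dates, counts, room numbers.
        "numbers": len(re.findall(r"\b\d[\d.,]*\b", text)),
        # Capitalized words not at a sentence start: cheap proper-noun proxy.
        "proper_nouns": len(re.findall(r"(?<![.!?]\s)(?<!^)\b[A-Z][a-z]+", text)),
        # Opening curly quotes, plus pairs of straight quotes: quoted speech.
        "quoted_speech": text.count("“") + text.count('"') // 2,
    }

Near-zero counts across a long passage are the floating-abstraction signature the paragraph above describes.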

LLMs can, when asked, produce what feels like a personal narrative until you poke at it. A human personal narrative is full of irrelevant details and random memories. There is mess and uncertainty and sometimes cringe. LLM anecdotes serve the argument too neatly.

— Read on hollisrobbinsanecdotal.substack.com/p/how-to-tell-if-something-is-ai-written

