Is everything written by AI these days? Is this article?
The proliferation of large language models has prompted a new, wary literacy: people can now read a paragraph and wonder who—or what—wrote it. That anxiety exists for good reason.
Recent studies continue to show that the flood of machine-generated prose differs from human writing in measurable, often not-so-subtle ways, from characteristic word choices to easily identifiable structural tics. These patterns matter because they reach far beyond school essays and research theses: they shape corporate communications, journalism, and everyday email in ways that can erode trust and blur authenticity.
Researchers surveying stylometric detection techniques have found consistent, measurable patterns in lexical variety, clause structure, and function-word distributions: a statistical fingerprint that persists across tasks and prompts. These tells are shrinking with every model generation (OpenAI just fixed its overreliance on em dashes, for instance), but the difference between AI slop and human-written prose is still large enough to inform how readers and editors approach suspiciously polished text.
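To make "statistical fingerprint" concrete, here is a minimal sketch of two of the signals mentioned above: lexical variety (type-token ratio) and function-word frequency. The tiny function-word list is invented for the example; real stylometric studies track hundreds of function words and feed these rates into a classifier rather than reading them off directly.

```python
from collections import Counter
import re

# Tiny, illustrative function-word list (a real study would use hundreds).
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "but", "it", "is"}

def stylometric_profile(text: str) -> dict:
    """Compute two simple stylometric signals for a passage:
    - type_token_ratio: unique words / total words (lexical variety)
    - function_word_rate: share of tokens that are function words
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {"type_token_ratio": 0.0, "function_word_rate": 0.0}
    counts = Counter(tokens)
    return {
        "type_token_ratio": len(counts) / len(tokens),
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / len(tokens),
    }
```

On its own, neither number proves anything; the research compares distributions of such features across large samples of known-human and known-machine text.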
A recent Washington Post analysis of 328,744 ChatGPT messages reinforces this point with real-world data. It found that the model leans heavily on emojis, a narrow palette of favorite words, and everyone's favorite tell, negative parallelism: "It's not X, it's Y" or "It's less about X and more about Y."
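A pattern as formulaic as negative parallelism can even be caught with a regular expression. The sketch below is a rough illustration, not the Post's methodology: the pattern and its tolerated wordings are invented for the example, and a real detector would be far more lenient about phrasing and punctuation.

```python
import re

# Rough pattern for negative-parallelism constructions such as
# "It's not X, it's Y" and "less about X and more about Y".
NEG_PARALLELISM = re.compile(
    r"\b(it'?s|this is|that'?s)\s+not\s+[^.,;]+[,;]\s*(it'?s|this is|that'?s)\b"
    r"|\bless about\s+[^.,;]+\s+and more about\b",
    re.IGNORECASE,
)

def has_negative_parallelism(sentence: str) -> bool:
    """Return True if the sentence matches one of the stock
    negative-parallelism templates above."""
    return bool(NEG_PARALLELISM.search(sentence))
```

Humans use this construction too, of course, which is exactly why a single hit raises the probability of machine authorship without proving it.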
The Post also warned against overconfidence: no single trait proves AI authorship; each only raises the probability. Still, when a piece of writing exhibits several of them, the signal gets harder to ignore.
Here are the five strongest signals that a text may have been machine-generated, each anchored in current research.
By the way, most of this article was written by AI.
© 2025 DeFi.io