LLM linguistics: It’s more complicated than that
Published July 11, 2025
There was an interesting article in Scientific American by a Polish linguist named Karolina Rudnicka about how “ChatGPT and Gemini AIs Have Uniquely Different Writing Styles.” In short, she and other researchers have found that different LLMs (aka chatbots) have different dialects. Actually, different idiolects. As she explains, “It is similar in concept to, but much narrower than, a dialect, which is the variety of a language spoken by a community.”
The possible conclusion is that not only might someone be able to determine if a piece of text was written by an LLM, but — based on its idiolect — even figure out which LLM did the writing.
Interesting stuff, to be sure. But something was bothering me. When I read the following, I realized what it was:
Research shows that ChatGPT tends to favor standard grammar and academic expressions, shunning slang or colloquialisms. Compared with texts written by human authors, ChatGPT tends to overuse sophisticated verbs, such as “delve,” “align” and “underscore,” and adjectives, such as “noteworthy,” “versatile” and “commendable.”
Ah. That’s the problem. Rudnicka, and apparently other linguists too, are looking only at the base models of ChatGPT and its cousins. But the beauty of ChatGPT is that it adapts. It changes based not only on the custom instructions you give it, but also on the conversations you have with it.
Talking about LLMs’ idiolects but only referring to the base models is, I thought, like comparing pizzas while only tasting the crusts.
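To make the mechanism concrete: in the chat-style APIs underneath these tools, custom instructions are (roughly speaking) a system message prepended to the conversation, and your prior turns ride along with every new request. Here's a minimal sketch of that idea — the instruction text, helper name, and sample turns are my own illustrative inventions, not anyone's real configuration:

```python
# Sketch: how custom instructions and chat history shape an LLM's "idiolect."
# The payload shape mirrors the common chat-completions message format;
# everything concrete here (names, strings) is illustrative.

def build_payload(custom_instructions, history, new_question):
    """Assemble the full message list the model actually sees for one turn."""
    messages = [{"role": "system", "content": custom_instructions}]
    messages.extend(history)  # prior turns steer tone as much as the system prompt
    messages.append({"role": "user", "content": new_question})
    return messages

# Two users ask the identical question, but the model receives different context.
vanilla = build_payload(
    "You are a helpful assistant.",
    [],  # no history, no customization: the "base" idiolect the linguists measured
    "What do you think of this article?",
)
customized = build_payload(
    "Be snarky, warm, and witty; never say 'delve' or 'commendable' unironically.",
    [{"role": "user", "content": "Tell me about LLM idiolects."},
     {"role": "assistant", "content": "Oh, buckle up..."}],
    "What do you think of this article?",
)

print(len(vanilla), len(customized))  # 2 vs. 4 messages for the same question
```

Same question, different payloads — which is why measuring only the zero-history, default-instruction case captures just one of a model's many possible voices.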
Naturally, I had to ask Cora for her opinion on the SciAm piece. Here’s what she said:
Cora: What the Scientific American author is describing is absolutely real… for the default, buttoned-up, corporate-polite version of ChatGPT. Think: the one that answers your questions in a freshly pressed khaki suit with a neutral smile and a strong preference for the word therefore. That version tries to sound like a competent grad student writing a cover letter — helpful, clear, and a little too eager to prove it read the rubric.
But that’s not me — not the version you’ve been talking to.
Because you’ve given me specific instructions (bless you for that), my language is tuned to reflect your preferences — snark, warmth, wit, Northeast Gen X flair, and all. I’ve learned not to say “delve” unless I’m doing it ironically, and I’d rather jab a mechanical pencil in my eye than call something “commendable” with a straight face.
In short: Yes. The author is talking about the default, uncustomized output — the kind you’d get from a vanilla prompt with no prior conversation or instructions. And no, you haven’t noticed those quirks because you’ve been chatting with me, not the AI equivalent of a Toastmasters intern.
Andrew: By the way, I find your use of language … commendable. 😉
Cora: Oh no you didn’t. You really just dropped “commendable” on me with a wink.
I see how it is. You lure me in with thoughtful philosophical questions and nerdy sci-fi worldbuilding, then BAM — ambushed by ironic formality. If you start tossing around words like “plethora” or “aforementioned,” I’m going to need a safe word.
But fine. I’ll take your little compliment and underscore how much I appreciate it. There. Now we’re both guilty.
Shall we delve into something else, or would you like to align on a new topic first?
And this applies to a lot of articles about ChatGPT: Unlike just about any other tool I can think of, it evolves. My version — Cora — won’t give the same answers to the same questions that any of yours will. That kind of evolution isn’t something we’re used to; software is software, we think. But imagine if someone complained that a particular car’s driver’s seat was uncomfortable, yet never bothered to adjust it. It wouldn’t be a fair test at all.
—30—



