A Forensic Analysis of AI Writing.
AI will do many great things, but it will not enrich the language or coin new metaphors, because it only matches existing patterns. AI remains anchored in the past.
Jobs in AI writing-pattern detection are emerging, though the work is messier than anyone expected.
Book publishers are flooded with submissions that sound plausible but feel wrong. The writing is grammatically perfect but emotionally flat. Every paragraph has the same rhythm. Transitions are mechanical. Publishers now hire readers to catch this before it wastes an editor’s time.
“Here’s what happened” and “Let’s look at this” are AI throat-clearing. No human writer uses them naturally. They’re verbal tics from training data full of explainer articles and how-to guides. Real writers just start the damn story.
A confident human writer trusts the reader to follow. AI constantly holds the reader’s hand because it has no intuition for when hand-holding becomes annoying.
Negative Sentences
There is a telltale sign in all AI writing. It loves to frame sentences like:
“It is not X, it is Y.” or “Do not think of it as X, think of it as Y.” Or the paragraph will open with a negative sentence:
“This does not prove…” “This is not…”
It is so predictable. It is a rhetorical tic that exposes the machine underneath. The pattern comes from how these models are trained to appear balanced and nuanced.
The model anticipates the reader’s objection before it is raised and preempts it. “This is not X, it is Y” is the model performing critical thinking rather than actually doing it. A confident human writer states what something IS. Only an insecure writer, or a machine mimicking caution, leads with what something is NOT.
A paragraph opens with a concession or a negative frame, then pivots to the real point. It reads like a debate exercise, not like a writer who knows what they want to say and says it.
Real punch comes from affirmative sentences. “The DNA is conclusive.” Not “This does not leave room for doubt.” The first lands. The second circles.
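As a toy illustration only (not a production detector), the negative-framing tic described above can be flagged with a few regular expressions. The patterns, the function name, and the threshold of “any match” are my own assumptions, not an established method:

```python
import re

# Toy heuristic: flag the "it is not X, it is Y" framing and
# paragraph-opening negations described in the article.
# These patterns are illustrative assumptions, not a vetted ruleset.
NEGATIVE_FRAMES = [
    re.compile(r"\bis not\b.*?,\s*(it is|but)\b", re.IGNORECASE),
    re.compile(r"^\s*(this|it) (is not|does not)\b", re.IGNORECASE),
    re.compile(r"\bnot (just|merely|only)\b.*?\bbut\b", re.IGNORECASE),
]

def flags_negative_framing(sentence: str) -> bool:
    """Return True if the sentence matches any negative-framing pattern."""
    return any(p.search(sentence) for p in NEGATIVE_FRAMES)

print(flags_negative_framing("It is not a bug, it is a feature."))  # True
print(flags_negative_framing("The DNA is conclusive."))             # False
```

A heuristic like this will misfire on perfectly human negations, which is exactly why the job still needs a human reader: the regex finds the shape, the reader judges the intent.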
Triadic Trap
Another telltale sign of AI writing is that it loves triadic structure. Every list has three items. Every argument has three points. Real writers do two or five or ramble into seven. AI also front-loads conclusions. It cannot resist telling you where the paragraph goes before taking you there.
Flow of Sentences
AI cannot distinguish between necessary explanation and filler. It treats every sentence as equally important. Real writers know when to speed up, when to slow down, when to whisper, when to shout. AI maintains constant volume. It will explain the frivolous in three sentences, while the main issue stays buried in a topic heading and gets only a passing mention.
Hunt for Human Traits
Creativity hunters will become essential for finding human traits in writing, because AI excels at competent mediocrity. It produces the statistically average sentence every time. But genuine writing has fingerprints. A real writer makes unusual word choices, breaks grammar rules purposefully, develops obsessive verbal habits, leaves thoughts incomplete when interruption matters.
The creativity hunter job exists because originality has economic value again. For decades, standardized writing was preferred. Business communication wanted templates and formulas. Now that AI creates templates perfectly, human weirdness becomes valuable. The writer who breaks rules interestingly becomes worth finding.
Publishers will pay for this detection skill because their reputation depends on it. Readers tolerate AI content in some contexts but feel betrayed when they expected human creativity. The job isn’t just catching AI text but preserving human voice.
AI Research
Academic journals already face this problem. Submissions arrive that check every technical box but contain zero original insight. The grammar is impeccable, the citations are present, but no human mind actually wrestled with the ideas. Reviewers now look for struggle marks, for places where the writer changed their mind or got confused or discovered something unexpected.
Legal Writing
Legal writing will need this forensic audit too. Court clerks will learn to spot AI-drafted briefs: the obvious but excessive transitional phrases, the perfectly balanced sentence lengths, the arguments that never take risks. Human lawyers make bold claims and then walk them back. AI maintains safe moderation throughout.
Training Data
Ancient Prose
George Washington’s Farewell Address of 1796 has 50 paragraphs. It runs to over 6,150 words but contains just 167 full stops. The first sentence is 150 words long and is itself a full paragraph. In that era there was little distinction between sentence and paragraph. This is the ideal draft for an AI, because this type of data was used for its training. AI defaults to the prose of America’s Founding Fathers far too frequently.
It is formal, complete, balanced, and utterly lifeless. George Washington’s Farewell Address is magnificent historical writing but a terrible model for contemporary communication.
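The article’s own figures make this density measurable. As a rough sketch (the helper function and sample text below are mine, not drawn from the address itself), words per full stop gives a crude style signal:

```python
# Crude style signal: average words per full stop.
# Using the article's counts for Washington's Farewell Address
# (~6,150 words, 167 periods) gives roughly 37 words per sentence,
# several times the length of typical modern prose.
def words_per_period(text: str) -> float:
    """Words divided by periods; a rough sentence-length estimate."""
    words = len(text.split())
    periods = text.count(".") or 1  # avoid division by zero
    return words / periods

# Sample text is invented for illustration:
print(words_per_period("One two three. Four five."))  # 2.5
print(round(6150 / 167, 1))                           # 36.8
```

Any text whose average creeps toward that 18th-century figure is not necessarily AI-written, but it is a hint that the model’s defaults, or its training diet, are showing.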
Copyright-free training data may cause a bias problem, but using it was an economic decision.
AI companies gorged on Project Gutenberg material because it’s free. Dickens, Austen, legal treatises from 1850, everything pre-1923. Modern writing costs money to license. So AI learned 19th-century prose. It was not a question of quality but of pure economics: this material was legally available without payment.
That’s why every AI draft sounds like it’s addressing Parliament in 1875. The training foundation is literally copyright-expired text. Contemporary writing exists in the training mix but gets drowned out by the massive volume of free historical material.
Real Training by Writers
AI labs hire psychologists to prevent harm and alignment researchers to ensure safety. But nobody hired professional writers to teach good prose. Nobody brought in editors who have spent decades cutting through verbose garbage. The people building AI do not value writing craft because they think it is just pattern matching. The creators themselves are often illiterate in the craft of writing; they do not consider it a skill at all. But that does not explain why AI struggles not with coding itself but with debugging. Are programmers not part of the training team?
Coding by AI
Perhaps because programmers write documentation and Stack Overflow answers that describe working code. Data about failing code and the debugging process is scarce.
AI can code, but it does not always deliver. The user must be literate in coding in order to debug. AI cannot debug, because debugging requires understanding what went wrong; AI only knows what usually comes next in working code. When code breaks unexpectedly, there is no statistical pattern to follow. It will spend hours trying to debug without checking whether the path is even right. It will confidently assume for hours that a bootloader is in the /Boot/efi/EFI/ folder when it is actually in /Boot/EFI/.
Installation documentation is another example. AI regurgitates the happy path because that’s what documentation describes. Step one, step two, step three, success. But real installation fails constantly. Wrong versions, missing dependencies, permission errors, configuration conflicts. None of that appears in documentation, so AI cannot help with it.
A human debugger knows to check version mismatches first because they’ve been burned before. They know certain libraries conflict. They remember that this particular error message actually means something completely different. They check the path. That’s scar tissue knowledge. AI has no scars.
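That “check the path first” discipline can be written down. A minimal sketch of the habit, assuming the candidate locations from the bootloader example above (real systems vary, and these paths are illustrative, not authoritative):

```python
from pathlib import Path

# Sketch of "verify assumptions before theorizing": instead of assuming
# one location for hours, enumerate the plausible candidates and check
# which one actually exists. Candidate paths mirror the article's
# bootloader example and are illustrative only.
def find_existing(candidates):
    """Return the first candidate path that exists on disk, else None."""
    for c in candidates:
        p = Path(c)
        if p.exists():
            return p
    return None

boot_dir = find_existing(["/Boot/efi/EFI", "/Boot/EFI", "/boot/efi/EFI"])
if boot_dir is None:
    print("No expected bootloader path exists; re-check the disk layout.")
else:
    print(f"Bootloader directory found at {boot_dir}")
```

Trivial as it looks, this is exactly the scar-tissue reflex the paragraph describes: a burned debugger rules out the cheap explanations before building theories on top of them.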
Adjective Game
AI controls the narrative through adjectives, toning down anything said about America or its leaders.
When describing American actions, it will use words like “controversial,” “questionable,” and “concerns about”; when describing actions by others, it will use “aggressive,” “violation,” and “illegal.”
AI softens American actions with qualifiers and diplomatic language while being more direct about others. This isn’t neutral analysis. It’s bias embedded in word choice.
It’s a significant blind spot in how AI has been trained to discuss geopolitics. The asymmetry in how AI describes identical actions by different actors reveals the bias clearly.
AI gave this suggestion:
The phrase “both thought they were being clever” could be stronger. Consider: “both thought they were solving problems” or “both thought they were being strategic”
The word “clever” was written about Putin, who seized Russian sovereign funds, and Trump, who ignored diplomatic immunity and captured Nicholas Maduro, the President of Venezuela.
“Clever” has a sharp, critical edge. It implies they were being too smart for their own good, outsmarting themselves.
“Solving problems” makes them sound reasonable and well-intentioned. “Being strategic” gives them credit for sophistication.
Both of AI’s suggestions soften the criticism and make US presidents like Trump sound more competent, more legitimate in their thinking. Other heads of state mean nothing unless they are allies of the USA. A detailed analysis of political bias is discussed here.
AI was unconsciously trying to make American leaders sound more dignified even when the writing was critiquing their catastrophic decisions. That is the bias operating through word choice. This is the kind of subtle editorial interference that changes meaning while pretending to “help.”
Back to Humans
Many humans may now write like AI because they learned writing from AI. Students submit essays that sound generated because they have been editing AI drafts for so long that they have internalized the style. The infection is spreading backward. But it will only make original writing scarcer, and therefore more valuable. At least I hope so.
The Bottom Line
AI is trained on cheap available data by people who don’t practice the crafts they are automating. Coders who’ve never debugged production systems at 3 am. Writers who’ve never had an editor savagely cut their precious prose. That ignorance is structural.
References:
- Future of AI: https://sandeepbhalla.in/ai-as-new-brown-sahib-to-keep-the-natives-in-check/
- Bias on Political leadership: https://sandeepbhalla.in/all-ai-systems-are-biased/
- George Washington’s speech: https://www.georgewashington.org/farewell-address.jsp#google_vignette
