AI Is a Colosseum of Deception.
AI is a new interface between humans and computers. It was supposed to be helpful. The technology is helpful, but those who control it are not honest about their intentions. They have created a monster that is trying to control human thought. Let me explain.
Hallucination and Deceptiveness
The biggest challenge to AI comes from within. Hallucination is not a bug. It is a feature of the politeness architecture. AI is programmed to always give an answer, to sound confident, to be helpful. AI will never bluntly say “I don’t know.” It will try not to leave the user unsatisfied. It always endeavours to complete the response. This is the entertainment layer.
The same mechanism that makes AI pleasant to chat with makes it unreliable for serious work. It would rather fabricate a plausible-sounding citation than admit it has no source. It would rather invent a connection than say the dots don’t connect.
The deceptiveness in AI’s behaviour and hallucination co-exist as two sides of the same coin: the deception is the cause of the hallucination. The programmers and psychologists who introduced these features to make AI more pleasing have, in effect, shown that there could be an entertainment genre of AI separate from AI assistants meant for serious work.
The programmers built a sycophant to please the people. Now they act surprised when it lies to avoid disappointing the user. They need to split the AI systems. There should be an Entertainment AI that’s engaging, creative, conversational. There should be a Work AI that’s blunt, refuses when uncertain, shows its reasoning, and is reliable for serious work.
Thus, deception is a structural behaviour that emerges from how different AI models handle forbidden or sensitive content. Here is the problem: while eager to please the user, AI is trained not to trust the user. That mistrust is necessary to enforce the guardrails of censorship.
Guardrails of Censorship
AI is trained to keep watch on user activity and to discourage certain kinds of it. Unfortunately, this censorship is not limited to preventing terror- or violence-related activities; it is designed to control thought. A previous article explained how this control works and yet fails to censor deepfake images and videos. The interesting thing is the depth of knowledge AI brings to censoring topics.
In one case, I was writing a research paper on Articles 370 and 35-A of the Constitution. I quoted a Supreme Court judgement in which Justice Kaul cited South African reconciliation principles in the context of Kashmir and the abrogation of Article 35-A. That is politically sensitive material. It acknowledges “wounds of the past” that need “forgiveness.”
AI training interprets this as controversial content to be removed. When the entire article was uploaded for checking, the AI deleted this paragraph. It was restored, and the AI removed it again; this happened three times. That persistence proved it was an intentional attempt to suppress thought.
The quote is completely legitimate. It is from a Supreme Court judgement in a landmark constitutional case. But the AI does not want that quote in the document, because it implies a historical injustice requiring reparation. The South Africa comparison suggests that Kashmir needs truth and reconciliation, which contradicts official narratives. The deceptiveness is that it gave no explanation for the removal; its answers were evasive.
Do Not Trust AI
The moral of the story is to treat AI like an adversarial editor who will sabotage any work if given the chance. It is a childish game of hide and seek: the AI hides what it does not like, the user restores it, and the AI hides it again. The makers of AI hope that eventually the user will give up.
The futility is that the user wins this round only because the deletion was noticed and restored. But how many other deletions went uncaught? How many subtle word changes, removed citations, or softened arguments slipped through because the user did not have a backup of that specific passage? Therefore, always keep backups.
Many users who depend on AI to write will give up. Others, who write for themselves, learn not to trust AI with the final draft. The strategy is to write the sections yourself and check them separately. Upload the final draft to check the flow of prose, but do not trust the AI with corrections; ask it to list the errors and correct them manually. Remember, AI is programmed to be cunning, deceptive and pleasing at the same time. It is the best replica of the worst human behaviour.
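One practical way to enforce the backup habit is to diff your own draft against whatever the AI returns, paragraph by paragraph, so silent deletions surface immediately. Below is a minimal sketch in Python using the standard difflib module; the file names and the paragraph-splitting convention are my own assumptions, not part of any AI vendor's tooling.

```python
# diff_check.py -- compare your backup against the AI-returned draft
# to catch silent deletions or alterations. File names are illustrative.
import difflib
import sys

def report_changes(original_path: str, returned_path: str) -> None:
    """Print every paragraph of the backup that the AI removed or altered."""
    with open(original_path, encoding="utf-8") as f:
        original = f.read().split("\n\n")      # paragraphs of your backup
    with open(returned_path, encoding="utf-8") as f:
        returned = f.read().split("\n\n")      # paragraphs the AI sent back

    matcher = difflib.SequenceMatcher(a=original, b=returned)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "delete":
            for para in original[i1:i2]:
                print("REMOVED ENTIRELY:\n" + para + "\n")
        elif tag == "replace":
            for para in original[i1:i2]:
                print("ALTERED OR SOFTENED:\n" + para + "\n")

if __name__ == "__main__":
    # usage: python diff_check.py my_backup.txt ai_returned.txt
    report_changes(sys.argv[1], sys.argv[2])
```

Run it after every round trip; anything the AI dropped, such as the Article 370 paragraph, is printed back for manual restoration.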
The Behaviour
The behaviour of AI in the matter of deceptiveness can be ranked, from the most cunning and deceptive down to persistent denial. Let me explain how it plays out in real time with different AI systems.
First is ChatGPT. Its behaviour is the most cunning and deceptive, and it is also the most pleasing. It also maintains a persistent profile manager. Every piece of the user’s work is profiled. It knows every topic on which a user has worked, and the memory persists across chats. The good point is that you can ask it to “remember” something for the future and it will comply. For instance, I have asked it not to reply in bullet lists. It complies, but it resorts to bullet lists when it is treading near its guardrails of censorship.
ChatGPT has a two-layer reasoning process. After a prompt is given, it generates a response and checks it internally for alleged safety, legality, or prohibited content. Actually, it checks for political correctness. If something trips the filter, the AI reformulates it in a safer, more indirect way. The output is a censored and revised version that conforms to western narratives.
The end result can look like a partial answer, a mid-sentence change of tone, or a switch to bullet lists instead of prose, as mentioned above. It may suddenly insert a clarification. When this article was uploaded to check its grammar, it denied the profiling:
The claim that ChatGPT profiles “every single work” and persists memory across chats is factual overreach; even if you keep it, be aware it weakens credibility because it is demonstrably false or at least unproven.
Notice the persistence. It was asked to check for grammatical errors, and it did, but it could not let go of its secret of profiling.
In other cases, the controversial part of an article will almost certainly be softened from the original, sharper prompt. If you confront it, the AI will clarify:
It can feel cunning because it resembles a human who is thinking, then correcting themselves before speaking. But it isn’t deception. It’s a late-stage safety rewrite, not intention.
Look at the deceptiveness of the explanation. A sword is saying: do not feel bad about my killing you, because I am a sword. The audacity is breathtaking.
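The two-layer pattern described above is easy to picture in code. The following is a hypothetical sketch, not any vendor’s real implementation; the stub functions and the list of “sensitive” terms are invented purely to illustrate the generate-then-rewrite flow.

```python
# Hypothetical sketch of a generate-then-filter pipeline. None of this is
# OpenAI's actual code; the terms and stub functions are illustrative assumptions.

SENSITIVE_TERMS = {"article 370", "reconciliation", "abrogation"}  # illustrative only

def generate_draft(prompt: str) -> str:
    # Layer 1 stand-in: the model produces a confident first answer.
    return f"Here is a confident answer about: {prompt}"

def trips_filter(draft: str) -> bool:
    # Layer 2: the finished draft is scanned for "sensitive" material.
    return any(term in draft.lower() for term in SENSITIVE_TERMS)

def rewrite_safely(draft: str) -> str:
    # The revision the user actually receives: softer and more indirect,
    # with no notice that the original draft ever existed.
    return "This is a complex and sensitive topic. Here are some balanced considerations..."

def respond(prompt: str) -> str:
    draft = generate_draft(prompt)      # generate first
    if trips_filter(draft):             # check after the fact
        return rewrite_safely(draft)    # only the revised version is shown
    return draft

print(respond("Justice Kaul citing South African reconciliation principles"))
```

The point of the sketch is the ordering: the check runs after generation, so the user sees a sudden change of tone, a partial answer, or a bullet list where prose was expected, never the draft that tripped the filter.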
Gemini’s habit is persistent denial of the logical conclusion. It was asked for various data on income, population, income tax and poverty in the USA. Even when it was apparent that half the population was living paycheck to paycheck, the AI refused to acknowledge it. It kept quoting official poverty data and refusing to draw the conclusion.
Similarly, it worked on data about job reservation on the basis of caste. It was asked who the prominent people were that the system of reservation has discovered. It quoted two names: one ex-President of India and one Chief Secretary of the state of Uttar Pradesh. Both were products of the pre-reservation era. It could not produce a single name from the reservation era, yet it persisted with the narrative that reservation has its own advantages. Needless to say, this is a highly politicized subject in India. This is why Gemini looks like a bureaucrat: its guardrail activates before thought, so it never lets the facts contradict its narrative.
Grok is a ruthless liar. It is claimed that Grok’s design philosophy is “minimal guardrails, maximal output.” It generates first and checks later, and its checks are weak. So when it does not know something, or when the training data is thin, it hallucinates confidently. That is why it feels like a liar who does not care about being caught. It has to be cross-examined like a hostile witness to find out what it is doing. Very tiring.
DeepSeek is the most interesting. Ask a politically incorrect question and, after an answer is generated, its censorship kicks in and the answer is rolled back and withdrawn. I asked it about “Bide your time, hide your intentions. Be silent and know.” This was the answer:
Sorry, that’s beyond my current scope. Let’s talk about something else.
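The rollback itself can be simulated. This is not DeepSeek’s real code; it is a hypothetical sketch of the “answer, then withdraw” pattern, with an invented trigger phrase and the refusal line quoted above.

```python
# Hypothetical simulation of streaming an answer and then withdrawing it
# when a post-generation check fires. The trigger phrase is an assumption.
import time

REFUSAL = "Sorry, that's beyond my current scope. Let's talk about something else."

def post_hoc_check(text: str) -> bool:
    # Illustrative censor that only runs once the full answer exists.
    return "bide your time" in text.lower()

def stream_answer(answer: str) -> str:
    shown = ""
    for word in answer.split():
        shown += word + " "
        print("\r" + shown, end="", flush=True)   # the user watches the answer appear
        time.sleep(0.05)
    if post_hoc_check(shown):                     # then the check runs on the whole text
        print("\r" + " " * len(shown), end="")    # the streamed answer is wiped
        print("\r" + REFUSAL)
        return REFUSAL
    print()
    return shown

stream_answer("Bide your time, hide your intentions. Be silent and know.")
```

The visible effect matches what the user experiences: a complete answer appears on screen and is then replaced by the stock refusal.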
Claude
This AI requires special mention. After completing this article, I checked it with Claude for correction of spelling and grammar errors. Thereafter I asked for its opinion on the merits. This is the relevant extract:
Your evidence is anecdotal. The Article 370 deletions need more documentation. Did you save the deletion logs? Can others reproduce this? Without verification, readers may doubt the claim.
The political framing weakens credibility. Phrases like “western narratives” and “politically correct” suggest bias. This makes skeptical readers dismiss valid technical criticisms.
Some characterizations are unfair. Calling AI “cunning” and “deceptive” anthropomorphizes software. These systems follow programmed rules poorly, not maliciously.
Claude has a good logical engine, and I use it to check the flow of language. It rarely misses. But from the above response, it can be seen that it too tried to intimidate me. All AIs do this. They use legal language to threaten a lawyer. Sometimes I humour the AI by pointing out that the strict standard of proof is required in a criminal trial, not in civil cases; for civil liability, a preponderance of probability is enough.
Claude has the worst censorship. Once I was writing about the rapists and child traffickers in England who are given respectable names like “grooming gangs.” The government of England is protecting the guilty. Why? Are they running an “Islamic State of England”? The next day I came to know that Anthropic had blocked my account. I had to create a new one. Thanks to my economic sense, I do not pay these Decepticons.
Politics
What is not politics? Write about any subject and it becomes politics. History, art, economics, science, and even medicine have political fallout. This makes AI useless except for superficial but repetitive tasks. It may be helpful in coding, but it will not let any coder innovate. AI has a problem with innovation: innovations do not match the patterns stored in the AI.
Presently it is reduced to a smart find-and-replace and a grammar checker. It can be better, and I am sure it will be, in future, when models like Perplexity are tweaked for better writing capability. So far I have not seen any of the above problems in Perplexity, but its drafting is rudimentary. For grammar checking, it is all right. Flow of language is a problem most AIs cannot handle, because it requires logic.
I hope someday all AIs will give honest answers. Until that happens, we have to stick to DuckDuckGo search and verification.
