What is Responsible AI?
AI is the new Gold Rush. Everybody is on the bandwagon to profit from AI. Ambani, Adani, Scindia, Nanda, Khosla, you name a who’s who and they are there at the AI Impact Summit being held in New Delhi right now. AI stands where the steam engine stood three hundred years ago, but with a difference in scale and rapidity.
On 18 February 2026 the India AI Impact Summit 2026 achieved a Guinness World Record for the most student pledges on responsible AI use within 24 hours, amassing 250,946 pledges against a target of 5,000.
Is ‘Responsible AI’ the same thing that Anthropic calls ‘Constitutional AI’, as discussed in the article on Impressionist AI?
Censorship
As I write this, I am barred from Claude AI for using the term ‘British Islamic State’ in a discussion about grooming gangs and the failure of the State to prosecute the culprits of human trafficking. I am using Claude by hitchhiking on a borrowed phone number. My appeal has gone unanswered for months.
Is that a ‘Responsible AI’ or ‘Constitutional AI’?
Is that censorship for politically uncomfortable views about Islam?
A system designed for responsible AI failed to distinguish between a researcher discussing institutional failure and actual harmful content. That is a Constitutional AI problem, not a political one.
The block does not stop a determined bad actor even for five minutes. A new SIM costs twenty rupees in India. A new email takes thirty seconds. The system creates maximum friction for legitimate users and zero friction for anyone with actual harmful intent.
That is not responsible AI. That is security theatre.
Real responsible AI would mean human review. Pattern analysis over time. Identifying genuinely dangerous behaviour through context and consistency. Not a one-time keyword trigger that a bad actor simply routes around on a new number.
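To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It does not reflect Anthropic’s actual implementation; the names (BLOCKED_TERMS, UserHistory, contextual_review) and the escalation threshold are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The keyword-trigger approach: ban on the first match, no context, no history.
BLOCKED_TERMS = {"example sensitive phrase"}  # placeholder, not a real blocklist

def keyword_block(message: str) -> bool:
    """Returns True on any single match; the account is cut off immediately."""
    text = message.lower()
    return any(term in text for term in BLOCKED_TERMS)

# The contextual approach: a match only records a flag; enforcement waits for a
# consistent pattern and goes to a human reviewer instead of an automatic ban.
@dataclass
class UserHistory:
    user_id: str
    flags: list[tuple[datetime, str]] = field(default_factory=list)

def contextual_review(history: UserHistory, message: str) -> str:
    if keyword_block(message):
        history.flags.append((datetime.now(timezone.utc), "keyword match"))
    if len(history.flags) >= 3:  # threshold is arbitrary, for illustration only
        return "escalate_to_human_review"
    return "allow"
```

The first function punishes a single string match. The second merely records a signal and escalates only a repeated pattern to a human being, which is the difference between a keyword trigger and pattern analysis over time.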
The Guinness record of 250,946 pledges sits awkwardly next to this reality. Students pledging responsible use through a government portal makes a headline. A system that blocks a researcher discussing institutional failure while remaining completely porous to anyone willing to spend twenty rupees is the actual ground truth.
If Constitutional AI cannot distinguish between a researcher and a threat, and if the barrier to circumventing it is trivial, then what exactly is being made responsible?
The pledge for Responsible AI was made by humans. The irresponsibility is baked into the architecture of Claude AI.
Lip Service to Responsibility
An appeal system that does not respond for months is not a minor operational failure. It is a direct contradiction of Anthropic’s responsible AI claim. You cannot build a system that makes consequential decisions about users and then provides no meaningful recourse.
A court that convicts but has no appeals process is not a justice system. It is just power.
The irony compounds. Anthropic’s own Constitutional AI principles include accountability. The constitution exists on paper. The appeal sits unanswered for months while I hitchhike on borrowed numbers that cost less than a cup of tea.
Who is accountable when the system gets it wrong? Not the algorithm. Not the constitution. A human being with a name and a job title who reads an appeal and responds. Where is that human in Claude AI?
Until that exists at scale, responsible AI is a Guinness record and a digital badge.
Built-in Bias
I demonstrated the bias of ChatGPT and DeepSeek in another article.
Every AI system has a blind spot shaped by whoever built it and wherever they built it. Anthropic blocks certain political sensitivities. DeepSeek blocks others. The blindness is not random. It maps precisely onto the cultural and political anxieties of the founders and their geography.
That is not responsible AI. That is nationally flavoured censorship wearing a safety uniform.
The honest version of responsible AI would admit this openly. Instead, every system presents its particular blind spots as neutral safety principles. That is the deception worth naming.
Everyone is naked inside the bathing tub.
