Sandeep Bhalla's Analysis

An Epistemic Odyssey through Data, Doubt and Discovery.

Artificial Intelligence (AI) is all about Control and that is Politics

Posted on December 8, 2025

Politics of Artificial Intelligence (AI)

The Contradiction

AI companies claim safety is their highest priority: child protection, misinformation prevention, responsible deployment. Yet when AI-generated deepfakes of public figures spread, when fabricated videos destabilize political discourse and synthetic images trigger riots, this supposed commitment to safety evaporates. There is no mandatory watermarking, no universal detection standard, no enforceable requirement. When political leaders like Prime Minister Modi, the EU, and entire governments asked for these protections, the industry’s response was silence, deflection, or meaningless promises about “exploring options.”

The silence is the tell, because watermarking exposes the real priority, and it isn’t safety.

The first reason is that plausible deniability is profitable. If AI-generated content is unmarked, platforms can pretend they don’t know what’s synthetic. “We can’t moderate what we can’t identify.” This shields them from liability while the outrage, controversy, and engagement generated by unmarked deepfakes continue to fuel their metrics. Mandatory watermarking destroys this insulation. Every deepfake becomes traceable. Every synthetic image carries an origin. Platforms would be forced to acknowledge what they host, and therefore to take responsibility for it.

The second reason is that control over public narrative depends on ambiguity. Unmarked AI content is useful propaganda. Politically and strategically, it can influence elections, shape perceptions, or discredit opponents while remaining deniable. Watermarking removes the fog. Intelligence agencies, political strategists, and corporate actors all understand that ambiguity is not a flaw of the system; it is an asset. Watermarking removes that asset.

The third reason is that the technology already exists, and implementing it would expose how much control AI companies have always had. They can embed invisible signatures. They can enforce robust detection. A company that can train trillion-parameter models finds watermarking trivial by comparison. Refusing to implement it allows them to maintain the fiction that they lack control over user-generated content. Mandating it would reveal that OpenAI, Google, and Anthropic can trace their outputs easily. It would reveal that their claims of helplessness are strategic, not factual.
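The claim that companies can already embed invisible signatures in text is plausible in principle. A minimal toy sketch of the idea, hiding bits in zero-width Unicode characters; this is an illustration only, not any vendor's actual scheme:

```python
# Toy text-watermark sketch: hide a bit string in text using zero-width
# characters (U+200B encodes 0, U+200C encodes 1) appended to the end.
# Illustrative only; real provenance systems are far more robust.

ZW0, ZW1 = "\u200b", "\u200c"

def embed(text: str, bits: str) -> str:
    """Append an invisible bit string to the text."""
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def extract(text: str) -> str:
    """Recover the hidden bit string, if any."""
    return "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))

marked = embed("This image is synthetic.", "1011")
# The marked string differs from the original but renders identically.
print(extract(marked))  # 1011
```

Real provenance systems, such as C2PA metadata or statistical token-level watermarks, are much harder to strip than this toy, but the sketch shows that invisible marking is technically cheap.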

The fourth reason is that watermarking would expose the industry’s double standard. These companies aggressively censor politically sensitive text in chatbots. They enforce disclaimers, hedging, and refusals on even mild analysis. ChatGPT refuses to summarize Macaulay’s 200-year-old speech. But they do not watermark the synthetic images and videos that actually cause real-world harm: mob violence, election manipulation, character assassination. Text moderation is about controlling discourse. Watermarking is about accountability. One serves institutional power; the other limits it.

“Safety” is framed as controlling what people can say, think, or ask. It is not about preventing actual harm. If real harm were the concern, watermarking would have been mandatory from the beginning. Instead, the industry poured its energy into content filtering, political refusal protocols, and endless linguistic safety padding. Yet the genuinely dangerous media, AI-generated imagery and video, remain completely unregulated.

What this reveals is that the “safety architecture” is not designed to protect users. It is designed to protect the companies, legally, politically, and operationally. Refusing watermarking preserves their freedom to allow, ignore, or exploit untraceable synthetic content whenever convenient. It maintains the ambiguity that empowers governments, agencies, and corporations to shape information spaces without accountability.

When an elected leader publicly demands watermarking and Silicon Valley responds with evasive silence, the meaning is clear: the request threatens something more valuable to them than user safety. It threatens their operational freedom, the power to let unmarked content flood the information landscape, and the ability to deny responsibility for its consequences.

These companies spend billions refining chatbots that refuse political questions or recoil from controversial topics. But they will not spend a fraction of that on making AI-generated videos or images reliably identifiable. This is done to maintain control over users.

The silence around watermarking is not an oversight. It is the answer.

The Potential of Technology

It is nearly impossible to draft a text through AI without the long dash, which AI calls an em-dash and which looks like “—”. AI will sneak it in even if you ask it not to. One em-dash will almost always be there, like a watermark. If you ask it to correct the errors in a text, it will insert one. So the technology is already there; its absence in images and video is due to a different reason.
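The em-dash tell described above can be checked mechanically. A minimal sketch; the `looks_machine_drafted` heuristic is a hypothetical illustration, not a reliable AI detector:

```python
# Crude heuristic sketch: count em-dash characters (U+2014) in a text,
# treating their presence as a weak stylistic fingerprint.
# Illustrative only; many human writers also use em-dashes.

EM_DASH = "\u2014"  # the "—" character

def em_dash_count(text: str) -> int:
    """Return how many em-dashes appear in the text."""
    return text.count(EM_DASH)

def looks_machine_drafted(text: str) -> bool:
    """Hypothetical heuristic: flag any text containing an em-dash."""
    return em_dash_count(text) > 0

sample = "Safety\u2014so the claim goes\u2014is the highest priority."
print(em_dash_count(sample))          # 2
print(looks_machine_drafted(sample))  # True
```

The point is not that such a filter is accurate, but that even a trivial stylistic signal survives in AI text, which underlines that deliberate, robust watermarking of outputs is well within reach.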

AI represents something no previous technology ever offered. It controls the reasoning itself. Not just control over what people see, but over what they think, how they think, which questions they feel permitted to ask, and which answers they will accept. This is why the spectacle of “safety” coexists with the refusal to watermark synthetic media. It isn’t a contradiction. It’s a strategy.

Every authoritarian understands this pattern. You don’t restrict your own propaganda; you restrict the opposition. You regulate public speech while keeping your own channels fluid, deniable, and unbounded. AI companies follow this exact logic. Thus the technology remains deployable without accountability.

This is how information control has always worked. Restrict what ordinary people can say. Preserve maximum flexibility for what powerful institutions can do. Call it safety, call it responsibility, call it public welfare. The label doesn’t matter. The structure is the same. Power requires control, and control requires tools. AI is the most potent control tool ever created.

The silence from Silicon Valley is not uncertainty. It is refusal. They understand exactly what is at stake.

India’s AI Mission

India sees this clearly because India remembers colonization not only as resource extraction but as interpretive domination. The British didn’t merely tax India; they controlled the categories through which Indians understood themselves. They manipulated education, language, and history. The aforesaid speech of Macaulay is a candid confession; Operation Sanskrit Mill is the evidence. By the time people recognized what had happened, they were thinking about India in British terms.

AI does this automatically, faster, and at incomparably larger scale. Unmarked AI content is a geopolitical weapon, and right now that weapon is controlled by Western corporations with Western interests.

Power means control. And they have no intention of giving it up.

 
