AI Is Not Trained on the Right Data
When COVID-19 hit in 2020, Western countries and India faced similar challenges but chose radically different responses. Western nations printed trillions in currency and distributed cash broadly to their populations. Experts from Harvard, MIT, and Oxford designed these programs. They insisted this approach was necessary and that inflation concerns were overblown. They may have used AI to validate their approach. India took a different path.
India's Prime Minister is a graduate of an Indian college. The Finance Minister held degrees from JNU. The RBI Governor had studied history at St. Stephen's College in Delhi. None had a Western education. No fancy MIT, Harvard, or Oxford. They rejected universal cash distribution. Instead they provided food directly to 800 million people. They expanded employment guarantee programs. They offered targeted support to vulnerable groups. They maintained fiscal discipline.
A high school economics student could predict what would happen next. Printing money without increasing production causes inflation. Creating demand without supply drives prices up. These are basic principles taught in introductory courses.
Western countries experienced exactly the predicted inflation. Prices surged. Real wages fell. Housing became unaffordable. Political anger grew. The experts who designed these policies expressed surprise. They wrote papers explaining why their models hadn't anticipated this outcome. AI systems absorbed those papers, and the failed model along with them.
India's approach produced different results. Three hundred million people escaped poverty over the following years. Nutrition levels improved because people actually received food. MGNREGA built millions of houses for homeless families. The economy grew at eight percent annually. Regular people made money in the stock market. The population remained generally satisfied.
The Expertise Paradox
Western-trained economists with prestigious credentials produced worse results than a history major and a JNU graduate applying basic principles. The sophisticated models failed where simple logic succeeded.
Raghuram Rajan represented the Western approach. He held degrees from IIT Delhi, IIM Ahmedabad, and MIT. He opposed certain government policies and maintained strict monetary discipline. The government declined to extend his term. His replacement lacked formal economics training but understood practical realities. That replacement oversaw better outcomes.
The pattern extends beyond individual cases. Western institutions trained entire generations of policymakers in similar frameworks. These frameworks emphasized complex modeling and sophisticated analysis. They downplayed simple cause and effect relationships. They dismissed concerns that couldn’t be easily quantified.
Students at elite universities absorbed these approaches. They learned which arguments were acceptable and which were not. They internalized institutional assumptions about policy. They graduated and joined government bureaucracies. There they implemented what they had learned.
This process resembles training an AI system. Input specific data and reward certain outputs. Penalize responses that deviate from approved frameworks. The result is predictable pattern matching rather than genuine reasoning.
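To make the analogy concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: a toy "model" that is just a weight over two candidate answers, and a reward signal defined by the approved framework rather than by empirical truth.

```python
import random

# Toy "model": a weight over two candidate answers.
# "approved" is what the institutional framework rewards;
# "heterodox" is what it penalizes. Truth never enters the loop.
weights = {"approved": 1.0, "heterodox": 1.0}
LEARNING_RATE = 0.1

def sample_answer():
    """Sample an answer in proportion to its current weight."""
    total = sum(weights.values())
    return "approved" if random.uniform(0, total) < weights["approved"] else "heterodox"

def reward(answer):
    """Reward defined by the framework, not by empirical outcomes."""
    return 1.0 if answer == "approved" else -1.0

for _ in range(1000):
    answer = sample_answer()
    # Multiplicative update: rewarded answers gain weight, penalized ones lose it.
    weights[answer] *= 1.0 + LEARNING_RATE * reward(answer)

total = sum(weights.values())
print({k: round(v / total, 3) for k, v in weights.items()})
# Typical output: {'approved': 1.0, 'heterodox': 0.0}
```

The toy model converges on the approved answer because the reward says so, not because the answer is right. Swap the reward function and it converges just as confidently on the opposite answer.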
The AI Training Problem
AI systems learn from the corpus of human expert knowledge. That corpus is dominated by Western institutional output. Academic papers from prestigious universities. Policy documents from credentialed bureaucrats. Think tank analyses from established organizations. Media coverage that treats these sources as authoritative.
The AI encounters thousands of papers explaining why stimulus won’t cause inflation. It reads policy documents asserting that cash distribution is optimal. It processes expert consensus that formed before the policies failed. This becomes its understanding of correct economic reasoning.
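A crude sketch shows why consensus becomes confidence. Assume, purely for illustration, a corpus in which one claim vastly outnumbers its rival; the counts below are invented, not real corpus statistics.

```python
# Invented corpus composition, for illustration only.
# Each entry: number of training documents asserting a claim.
corpus = {
    "stimulus will not cause inflation": 9200,  # pre-2021 consensus
    "stimulus will cause inflation": 800,       # dissenting minority
}

def model_confidence(claim: str) -> float:
    """To first order, confidence tracks corpus frequency,
    not empirical accuracy."""
    return corpus[claim] / sum(corpus.values())

for claim in corpus:
    print(f"{claim!r}: {model_confidence(claim):.0%}")
# 'stimulus will not cause inflation': 92%
# 'stimulus will cause inflation': 8%
```

When the consensus later fails, the model's confidence does not move until the corpus itself changes.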
When someone asks the AI about COVID economic policy, it reproduces this expert consensus. It explains that the situation was unprecedented. It notes that models suggested deflation was the greater risk. It emphasizes that alternatives might have caused worse outcomes. The AI sounds confident because expert consensus was confident.
But that consensus was wrong. The simple logic that a high school student could apply proved more accurate than the sophisticated analysis from credentialed experts. The AI doesn’t know this because the training data labeled the failed approach as correct.
The AI cannot learn from India's success because that knowledge isn't properly represented in its training corpus. Indian policy documents exist but carry less weight than Harvard papers. Domestic Indian analysis gets filtered through Western interpretations. The history major's practical wisdom never made it into prestigious journals. Maybe an AI trained in India will capture it someday.
So today's AI learns the approach that failed and treats it as authoritative. It learns to dismiss the approach that succeeded. Not because the AI is flawed, but because it accurately reflects the knowledge base it was trained on.
The Institutional Blindness
Western experts cannot easily acknowledge India’s success. Doing so would invalidate their entire framework. They would need to admit that fiscal discipline worked better than unlimited stimulus. That targeted support outperformed universal programs. That food distribution beat cash transfers. That employment guarantees built real value. That domestic expertise succeeded where Western training failed.
Instead they emphasize other concerns. Democratic backsliding. Inequality metrics. Environmental issues. Regulatory complexity. These aren’t necessarily wrong but they deflect from the central question. Which economic approach actually improved human welfare?
Three hundred million people escaped poverty. Millions received houses. Nutrition improved measurably. The economy grew consistently. Regular citizens prospered. These are the outcomes that matter.
Yet experts remain unsatisfied because the results came from the wrong method. The framework says stimulus and cash distribution should work. Reality says targeted support and fiscal discipline worked better. The framework cannot process this contradiction.
This institutional blindness now exists in AI systems. They inherit the same inability to acknowledge contrary evidence. They reproduce the same deflections and reframings. They express the same confidence in failed approaches.
Caution in Using AI
This knowledge about experts changes how we use AI. Be cautious of its biases. AI systems have read everything and can retrieve information quickly. That makes them useful research assistants. But their judgment is trained on flawed expertise. Their confidence correlates with expert consensus rather than accuracy.
This requires a specific methodology. Use AI to gather information but verify against primary sources. Ask questions but hide your actual position. Cross-reference answers across multiple systems. Check whether the AI's certainty matches empirical outcomes. Maintain independent judgment about what actually matters.
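A sketch of what that workflow might look like in code, under stated assumptions: the systems dictionary maps names to hypothetical ask functions standing in for whatever client libraries you actually use, and the harness only reports agreement or divergence. Verification against primary sources stays with the human.

```python
from collections import Counter
from typing import Callable, Dict

def cross_reference(question: str,
                    systems: Dict[str, Callable[[str], str]]) -> None:
    """Pose one neutrally worded question to several AI systems and
    report agreement. Agreement may just mean shared training data;
    divergence is a cue to consult primary sources."""
    answers = {name: ask(question) for name, ask in systems.items()}
    for name, answer in answers.items():
        print(f"[{name}] {answer}")
    if len(Counter(answers.values())) == 1:
        print("All systems agree: verify against primary sources anyway.")
    else:
        print("Systems diverge: treat each answer as a hypothesis.")

# Usage with stand-in responders; replace the lambdas with real client calls.
systems = {
    "model_a": lambda q: "Broad cash transfers were the optimal response.",
    "model_b": lambda q: "Targeted food and employment support worked better.",
}
cross_reference("Which COVID-era economic policy improved welfare most?", systems)
```

Note the asymmetry built into the design: divergence is informative, but agreement proves little, since models trained on the same expert corpus will confidently agree on the same errors.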
Treat the AI like a well-read student who absorbed his professors’ biases. He knows what the textbooks say. He can explain the approved frameworks. He genuinely believes the conventional wisdom. But he hasn’t yet learned to distinguish institutional narrative from reality.
The same approach applies to human experts. Credentials indicate exposure to certain training but not necessarily sound judgment. Sophisticated analysis can obscure simple truths. Institutional consensus often reflects political convenience rather than empirical accuracy.
Lesson from 2020
The 2020 divergence demonstrated that basic economic principles applied honestly outperform complex models built on flawed assumptions. A history major using common sense achieved better results than MIT economists using sophisticated frameworks. Regular people could see this clearly. Only the experts and the AI trained on expert knowledge remain unable to acknowledge it.
This isn’t an argument against all expertise or all analysis. It’s recognition that current institutional knowledge has systematic flaws. Those flaws are now embedded in AI systems. We cannot fix AI reasoning by improving the models if the training data encodes failed expertise.
The solution is epistemological independence. Use the tools but trust empirical outcomes over institutional authority. Value simple logic over complex rationalization. Judge approaches by results rather than credentials.
Remember that everybody is happy except the experts, and ask why that might be.
