The Right to be Wrong
Imagine standing on a random street corner in an Indian city like New Delhi or Bengaluru, where the air carries dust, exhaust fumes, and the smell of peanuts being roasted by the roadside. Scooters graze past pedestrians, vendors shout over the honking of buses, a stray cow loiters beside a billboard advertising the latest smartphone. If, in this lively, pulsing scene, you were to stop and ask random passers-by an equally random question — say, “Why are sea turtles dying?” — the answers would be many and mostly wrong. This, of course, is precisely why such an exercise would be worthwhile.
One person would assert, with the complacency that only ignorance can afford, that “It’s climate change, of course! Everyone knows that.” Another, less confident, would mutter that they have no idea. A third, seeking refuge in levity, might suggest that “sea turtles are given to fits of suicidal insanity and like to fling themselves in the way of sharks.” A cynic would declare that “people die every day, so what’s the fuss?” And there would be, inevitably, the usual contrarian who suspects the question itself is a trick.
Such a collection of answers — half-informed, contradictory, occasionally ridiculous — is more instructive about human nature than the most sophisticated sociological survey. It tells us that mankind, when confronted with mystery, seldom resists the urge to speak; and that what we lack in knowledge we make up for in improvisation.
Artificial intelligence, we are told, will one day think as we do. I doubt it. What it may do, perhaps already does, is calculate as we cannot. But thinking, in the human sense, involves more than the production of correct answers. It involves the capacity to be wrong in interesting ways. The machine gives answers, while the man only gives himself away.
On such an Indian street, a French architect by the name of Le Corbusier once stood and was appalled. The chaos of vendors, bicycles, dogs, cows, and pedestrians offended his sense of beauty. To him, the street was an error: a failure of planning that needed to be corrected by geometry. In Chandigarh, he sought to replace that ungovernable sprawl with wide avenues and clean lines, a well-meaning ambition that was later accused of imposing a European modernist aesthetic on Indian urban life.
His dream was not born of malice but of faith: faith in the idea that order equals progress. The straight line, to Le Corbusier, was evidence of civilisation. A dream that was the architectural equivalent of artificial intelligence: an attempt to impose the logic of the drawing board upon the untidy exuberance of life. Like many modernists, he mistook complexity for confusion and believed that order, once imposed, would ennoble the human spirit.
But of course, the spirit refuses to be ennobled by design. The street, even when rebuked by the architect, insists on its own vitality. It lives by improvisation, not instruction. In this disorder there is a wisdom that no planner can reproduce, just as there is in human thought a “motleyness” that no algorithm can simulate.
It is precisely this “motleyness” of human thought that eludes AI algorithms. Admittedly, AI has achieved humaneness, but not humanness. It seeks correctness as its highest virtue — whether factual or political. But people often give wrong answers. Comical answers. Even outrageous answers. AI does not concern itself with that aspect of humanity. That’s why it fails to impersonate humans.
It is perhaps telling that the age that invented artificial intelligence is also the age most frightened of error. We apologise for our opinions, demand data for emotions, and fear contradiction as if it were a moral defect. The human mind, however, is rarely coherent. We hold two opposing thoughts and still feel both to be true. We contradict ourselves and still mean what we say. This elasticity is not a flaw of thought but its essence. The moral consequence of abandoning that elasticity is grave. When a culture begins to prize being right above being real, it starts to distrust ambiguity, humour, and doubt — the very things that keep it alive. Our fear of being wrong hardens into cultural sterility: discourse becomes cautious, politics becomes polarised, and moral life turns brittle. The inability to err becomes an inability to forgive. Needless to say, a civilisation that seeks the perfection of machines soon inherits their coldness.
When a human being says, “Maybe the turtles are suicidal,” it is not merely a factually wrong answer; it is a rebellion against the human condition. It reveals a discomfort with helplessness, a need to mock tragedy in order to feel in control. Such errors are expressive rather than merely defective. AI would reject this as noise. But it is precisely this noise, this subtext, that gives purposeful speech its depth.
Philosopher Paul Ricoeur once observed that understanding is born not from clarity but from conflict between meanings and interpretations. AI, by contrast, seeks a frictionless world, where every question yields a clean answer. In doing so, it forgets that intelligence is not the elimination of ambiguity but its endurance.
Walk, once more, down that Indian street. Walk, because you can. The smell of the street, the press of bodies, the urgency of time, all inflect how we think. Maurice Merleau-Ponty called this the lived body: the idea that perception is not mental computation but a dance between the body and the world. AI, bodiless and unfeeling, cannot dance. It processes, but it does not participate. It observes the street, but it cannot walk in it. This detachment from embodiment explains why AI produces responses that sound intelligent but lack resonance. It speaks, but it does not mean. To mean something requires a body, a context, a consequence, a vulnerability.
The chaos of the street offends the tidy mind, but it also reveals a genius. There are no traffic lights, yet somehow the traffic moves. There is no grand design, yet commerce thrives. What appears as disorder is, in fact, the result of millions of small accommodations.
Human thought is much the same. It works because it is adaptive. When we err, we adjust. When we misunderstand, we learn. The machine cannot do this. It does not learn in the human sense; it corrects itself mechanically, without remorse or memory. In other words, our wrongness is our conscience. It is how we know we have lived.
Just as Le Corbusier wanted to save the street from itself, AI wants to save us from our stupidity. Both fail for the same reason: they misunderstand what needs saving. The vitality of the street lies precisely in its disorder, just as the dignity of the human mind lies in its imperfection.
AI may out-reason us, out-write us, even out-predict us—but it cannot out-doubt us. And doubt, as Socrates and the Upanishads alike remind us, is the beginning of wisdom. And so, we may yet comfort ourselves with this thought: AI can answer, but it cannot ask. In the embodied experience there will always flicker something unprogrammable: a mind that means what it says, even when it says it badly.
Image source: Cliff Dwellers by George Bellows.
