Broken clocks during the pandemic

Proponents of conspiracy theories during the pandemic, at least in India, appear to be like broken clocks: they are right by coincidence, without the right body of evidence to back their claims. Two of the most-read articles published by The Wire Science in the last 15 months have been fact-checks of Luc Montagnier’s comments on the two occasions he spoke up in the French press. On the first, he said the novel coronavirus couldn’t have evolved naturally; on the second, he insisted mass vaccination was a big mistake. The context in which Montagnier made his remarks evolved considerably between the two events, and that evolution tells an important story.

When Montagnier said in April 2020 that the virus was lab-made, the virus’s spread was just beginning to accelerate in India, Europe and the US, and the proponents of the lab-leak hypothesis to explain the virus’s origins had few listeners and were consigned firmly to the margins of popular discourse on the subject. In this environment, Montagnier’s comments stuck out like a sore thumb, and were easily dismissed.

But when Montagnier said in May 2021 that mass vaccination was a mistake, the context was quite different: in the intervening period, Nicholas Wade had published his article on why we couldn’t dismiss the lab-leak hypothesis so quickly; the WHO’s missteps were more widely known; China’s COVID-19 outbreak had come completely under control (actually or for all appearances); many vaccine-manufacturers’ immoral and/or unethical business practices had come to light; more people were familiar with the concept and properties of viral strains; the WHO had filed its controversial report on the possible circumstances of the virus’s origins in China; etc. As a result, speaking now, Montagnier wasn’t so quickly dismissed. Instead, to many observers, he was the man who had got it right the first time, who had been brave enough to stick his neck out in support of an unpopular idea, and who was speaking up yet again.

The problem here is that Luc Montagnier is a broken clock – right in the way even broken clocks are right twice a day: not because they actually tell the time but because the time coincidentally matches what the clock face is stuck at. On both occasions, the conclusions of Montagnier’s comments coincided with what conspiracists have been going on about since the pandemic’s start, but on both occasions, his reasoning was wrong. The same has been true of many other claims made during the pandemic: people have said things that turned out to be true, yet they themselves were still wrong, because their particular reasons for those things being true were wrong.

That is, unless you can say why you’re right, you’re not right. Unless you can explain why the time is what it is, you’re not a clock!

Montagnier’s case also illuminates a problem with soothsaying: if you wish to be a prophet, it is in your best interests to make as many predictions as possible – to increase the odds of reality coinciding with at least one prediction in time. And when such a coincidence does happen, it doesn’t mean the prophet was right; it means they weren’t wrong. There is a big difference between these positions, a difference that becomes pronounced when the conspiratorially-minded start incorporating every article published anywhere, from The Wire Science to The Daily Guardian, into their narratives of choice.

As the lab-leak hypothesis moved from the fringes of society to the centre, and as its proponents came mistakenly to conflate possibility with likelihood (i.e. zoonotic spillover and a lab leak are both valid hypotheses for the virus’s origins, but they aren’t equally likely to be true), the conspiratorial proponents of the hypothesis (the ones given to claiming Chinese scientists engineered the pathogen as a weapon, etc.) have steadily woven imaginary threads between the hypothesis and Indian scientists who opposed Covaxin’s approval, the Congress leaders who “mooted” vaccine hesitancy in their constituencies, scientists whose predictions came to be wrong, even vaccines that were later found to have rare side-effects restricted to certain demographic groups.

The passage of time is notable here. I think adherents of lab-leak conspiracies are motivated by an overarching theory born entirely of speculation, not evidence, and then pick and choose from events to build the case that the theory is true. I say ‘overarching’ because, to the adherents, the theory is already fully formed and true, and pieces of it become visible to observers as and when the corresponding events play out. This could explain why time is immaterial to them. You and I know that Shahid Jameel and Gagandeep Kang cast doubt on Covaxin’s approval (and not Covaxin itself) after we became aware, in December, that Covaxin’s phase 3 clinical trials were only just getting started, and before Covishield’s side-effects in Europe and the US came to light (with the attendant misreporting). We know that when Luc Montagnier said last year that the novel coronavirus was made in a lab, we didn’t know nearly enough about the structural biology underlying the virus’s behaviour; we do now.

The order of events matters: we went from ignorance to knowledge, from knowing to knowing more, from thinking one thing to – in the face of new information – thinking another. But the conspiracy-theorists and their ideas lie outside of time: the order of events doesn’t matter; instead, to these people, 2021, 2022, 2023, etc. are preordained. They seem to be simply waiting for the coincidences to roll around.

An awareness of the time dimension (so to speak), or more accurately of the arrow of time, leads straightforwardly to the proper practice of science in our day-to-day affairs as well. As I said, unless you can say why you’re right, you’re not right. This is why effects lie in the future of causes, and why theories lie in the causal future of evidence. What we can say to be true at this moment depends entirely on what we know at this moment. If we presume what we can say at this moment to be true will always be true, we become guilty of dragging our theory into the causal history of the evidence – simply because we are saying that the theory will come true given enough time in which evidence can accrue.

This protocol (of sorts) to verify the truth of claims isn’t restricted to the philosophy of science, even if it finds powerful articulation there: a scientific theory isn’t true if it isn’t falsifiable outside its domain of application. It is equally legitimate and necessary in the daily practice of science and its methods, on Twitter and Facebook, in WhatsApp groups, every time your father, your cousin or your grand-uncle begins a question with “If the lab-leak hypothesis isn’t true…”.

Good luck with your Maggi

You know when you’re cooking a packet of Maggi noodles in a saucepan, and you haven’t used enough water or don’t move the stuff soon enough from the pan to a plate once it’s done cooking, and you’re basically left with a hot lump of maida stuck to the bottom? That’s 2020.

When you cook Maggi right, right up to mixing in a stick of butter at the end, you get a flavourful, well-lubricated, springy mass of strings that’s a pleasure to eat at the end of a long day. Once in a while you stick a fork into the plate and pull up a particularly long noodle, and you relish sucking it into your mouth from start to finish, with the masala dripping off at the end. That was probably many other years – years when you had a strong sense of time moving from one event to the next, a sense of progression that helps you recall chronologies even long after you’ve forgotten what happened in March and what in September. For example, 2015 in my mind is cleanly divided into two parts – before May 11 and after May 11 – and memories of little personal accomplishments from that time are backgrounded by whether The Wire existed at the time. If it did, then I know the accomplishment happened after May 11. The Wire’s birth effectively became an inflection in time that cut a little notch in the great noodle of 2015, a reference mark that created a before and an after.

2020 had none of this. It forsook all arrows of time; it wasn’t linear in any sense, not even non-linear in the sense of being exponential or logarithmic. It was practically anti-linear. Causality became a joke as the pandemic and its attendant restrictions on society fucked with the mind’s ability to tell one day apart from the next. So many of us beheld the world from our windows or balconies, although it wasn’t as if the world itself moved on without us. We weren’t there to world the world. Or maybe we were, but our collective grief at being imprisoned, literally and otherwise, seemed to reshape our neighbourhoods, our surroundings, even our shared cosmologies, and infused the fabric of our everyday with a cynical dye that we know won’t come off easily.

Many of our lived experiences carried an awful symmetry, like the circular one of a bangle or a CD. How do you orient it? How do you say which way is up, or left, just by looking at it? You can’t. In the parlance of topology, 2020 was just as non-orientable. There was no before and after. Even our universe isn’t as bad: despite the maddening nature of the flatness problem, and the even more maddening fact of Earth’s asymptotically infinite loneliness, the universe is nearly flat. You’d have to travel trillions upon trillions of light-years in any direction before you have any chance of venturing into your past, and even then only because our instruments and our sciences aren’t accurate enough to assert, with complete certainty, that the universe is entirely flat and that your past will always lie in the causal history of your future.

2020, however, was a singularity – an entrapment of reality within a glass bubble in which time flowed in an orbit around the centre, in perpetual free-fall yet getting nowhere. You can forget teasing out individual noodles from the hot lump on your plate because it’s really a black hole – probably something worse, for shunning the mysteries that surround the microscopic structure of black holes in favour of maida, that great agent of constipation. As you stare at it, you could wait for its effects to evaporate; you could throw more crap into it in the hope of destabilising it, like pushing yourself to the brink of the nihilism that Thucydides noticed among the epidemic-stricken people of Athens more than two millennia ago; or you could figure out ingenious ways à la Penrose to get something good out of it.
If you figure this out, please let the rest of us know. And until then, good luck with your Maggi.

An Upanishadic lesson for modern science?

Do the Bhagavad Gita and the Upanishads lack the “baggage of biography” – to borrow Amit Chaudhuri’s words – because we don’t know who the authors, outside of the mythology, are or – as Chaudhuri writes in a new essay – do these texts carry more weight than their authors themselves because Eastern Philosophy privileged the work over its authorship? Selected excerpts:

One might recall that the New Critical turn against biography is related to a privileging, in the twentieth century, of the impersonality, rather than the emotional sincerity or conscious intention, of the creative act. This development is not unrelated … to the impact that certain Indian texts had on modernity after they were translated into European languages and put into circulation from the late eighteenth century onwards. …

By the time the Gita’s Krishna was first heard in Europe, all judgements were deemed, by the Enlightenment, to be either subjective or objective. What kind of judgement escapes this binary by being at once passionate and detached, made in earnest without mindfulness of outcome? Immanuel Kant addresses this in a shift in his own thinking, in his writings on aesthetics in 1790 … Five years separate the Gita’s appearance in English, and three years its translation into French, from Kant’s intervention in aesthetics. It’s unlikely he’d have been unaware of the work, or made his sui generis departure without it. The second time such “disinterestedness” appears as a concept, when Matthew Arnold redefines what criticism is, the link to the Gita is clear, and doesn’t require speculation. …

The Gita’s practice of “impersonality” points to T. S. Eliot’s attack, in “Tradition and the Individual Talent” in 1919, on the idea that poetry is an “expression of the personality” or of “emotion”. It’s no accident that the final line of The Waste Land is the Upanishadic refrain, “shantih shantih shantih”, the Sanskrit word for spiritual peace or even-mindedness …

It’s uncertain in what way these conceptual departures would have existed in modernity if these texts hadn’t been put into circulation when they were. Yet a great part of this history of ideas remains unwritten.

Chaudhuri also sets out the relative position of the Upanishads in modernity, particularly their being in opposition to one of the fundamental tenets of modern philosophy: causality. Per Chaudhuri, the Upanishads “dismantle” the causal relationship between the creator and the creation and “interrogate consciousness” through a series of arguments that attempt to locate the ‘Brahman’ in human and natural logic.

He concludes this portion of his text by speculating that the Upanishads might in fact have been penned by “anomalous Brahmins” because in the Bhagavad Gita, which is contemporaneous with some of the Upanishads and followed the rest after more than a century, Krishna asserts, “Neither Vedas, nor sacrifices, nor studies, nor benefactions, nor rituals, nor fearful austerities can give the vision of my Form Supreme” – whereas just these rituals, and their privation, concern the typical orthodox Brahmin today.

While the essay provides much to think about, the separation of creator and creation – in terms of the Upanishads being disinterested (in the specific sense of Chaudhuri’s definition, meaning an ‘evenness of the mind’ akin to unfixation rather than uninterestedness) in godlike figures and rituals alike, as well as in making room for biographical details in their verses – is incredibly interesting, especially in relation to modern science.

As Chaudhuri writes,

… the field of knowledge called “the history of Western philosophy” could just as easily be called “the history of Western philosophers”, inasmuch as Western philosophers are the sum total of their lives and works, and we often defer to both biography and thought when we interact with the philosophy. Each body of work has a personality, but so does its author; in almost every case, we can, literally, put a “face” to the work, whether that’s a photograph of Bertrand Russell or a fourth-century BC bust of Plato.

Prof Gita Chadha alluded to the same trait in the context of science pedagogy – in The Life of Science‘s promised postscript to their July 10 webinar about ‘geniuses’ in science. In response to a question from Mrinal Shah about how teachers and educators could disprivilege the idea of a ‘scientific genius’ at the primary school level, Chadha said (excerpt):

There is an interesting problem here … In trying to make science interesting and accessible to children, we have to use relatable language. This relatable language organically comes from our social contexts but also comes with the burden of social meanings. So then, what do we do? It’s a tricky one! Also, in trying to make role models for children, we magnify the individual and replay what goes on in the world of science. We teach relativity as Einstein’s theory, we teach laws of motion as Newtonian laws of motion. The pedagogic need to lend a face to an idea becomes counterproductive.

‘Geniuses’ are necessarily individuals – there are no ‘genius communities’. A genius’s status as such denotes at once a centralisation of power and authority, and thus influence; a maturation of intellect (and intellect alone) presented as a role-model to others; and, in continuation, a pinnacle of achievement that those who profit from the extraction of scientific work, such as universities and research funders, valorise.

This said, I can’t tell if – though I suspect that – the modern history of ‘Western science’ is largely the modern history of ‘Western scientists’, especially of the ‘geniuses’ among them. The creator causes the creation, so by contemplating the science, you contemplate the scientist himself – or, as the ‘genius’ would have it, by contemplating the science you necessarily contemplate the creator and his specific choices. And since the modern scientific enterprise was largely harmonised to the West’s methods in the post-colonial period, this is our contemporary history as well.

Chadha had previously noted, in response to a question from yours truly, that she struggles to argue for the non-separation of science and scientist in the context of the #MeToo movement. That is, the liberty to separate important scientific work from the (extra-scientific) actions of an errant scientist may not be so easily exercised, at least by one who intends, to the extent possible, not to participate in the accumulation of power. Instead, she said, we must consider them together, and call out “unethical or non-inclusive practices” – and by extension “you will also call out the culture to which they belong, which will help you to restore the balance of justice, if I may say so.”

This resolves to some extent my issue with Lawrence M. Krauss (although not fully because while Krauss’s culture has been dismantled at his previous university, however temporarily, he continues to maintain an innocence grounded in distasteful convictions). However, I’m still adrift vis-à-vis the late Richard Feynman and others. As a physics journalist first, I can’t help but encounter Feynman in one form or another – but how do you call out a dead man? Or does calling out the dead man’s culture, as perpetuated by the likes of Krauss today, suffice?

Chaudhuri has a similar question: “What do we do with a philosophy when there’s no philosopher in sight?” This matters because the philosopher’s “absence constitutes a problem in giving, and claiming, value. Meaning and significance in Western culture are not just features of the work, but pertain to, and arise from, the owner of the work – the author is the work’s first owner; the author’s nation or culture (“Greece” or “Germany”, say; or “the West”) its overarching one.”

So, as with the Upanishads, would we be better served if we concerned ourselves less with deities and their habits and more with the “impersonal” instruction and interrogation of what is true? This seems like a straightforward way out of the problem Mrinal Shah poses, but it doesn’t address what Chadha called the “pedagogic need to lend a face to an idea” – and “impersonal” interrogations of what is true would wrongly ignore the influence of sociological forces in science.

However, all said, I suspect that the answer is here somewhere. The ‘scientific genius’ is a construct and a shared one at that. When we contemplate a body of groundbreaking scientific work, we don’t contemplate the work alone or the scientist alone; we contemplate the work as arising from the scientist but even then only in a limited, constructive sense. But there is more at play; for example, as Chadha said, “We need to critically start engaging with how the social location of a scholar impacts the kind of work that they do”. If I write an article calling X a ‘genius’, X wouldn’t immediately occupy that position unless he is held there by social and capitalist forces as well.

The Upanishads in this context encourage us to erase the binary of ‘creator’ and ‘creation’ and with it the causal perspective’s temptation to think the scientist and the science are separable. In their stead, there is I think room to compose a communitarian story of science – where good arises not from the one but the whole, where power becomes, in keeping with the Upanishads, impersonal.