Proposed solution for Riemann hypothesis?

The hot news this week from the mathematical physics world is that the noted mathematician Michael Atiyah has claimed to have solved the Riemann hypothesis, one of the most difficult unsolved problems in mathematics and one whose resolution carries a $1 million prize. The problem is that Atiyah’s solution, while remarkable for its brevity, may not hold water.

The Riemann hypothesis is concerned with the Riemann zeta function, which – in very broad terms – provides a way to predict the position of prime numbers on the number line. Computers have been able to find prime numbers with scores of digits, and mathematicians have been able to confirm in hindsight that, yes, the zeta function predicts they exist. However, what mathematicians don’t know (and this is the Riemann hypothesis) is whether the function can predict prime numbers ad infinitum or whether it will break down at some particularly large value. And solving the Riemann hypothesis means proving that the zeta function can indeed predict the position of all prime numbers on the number line.


A more technical explanation, reproduced from my article in The Wire last year, follows; article continues below this section:

In 1859, Bernhard Riemann expanded on Euler’s work to develop a mathematical function that relates the behaviour of positive integers, prime numbers and imaginary numbers. This function, on which the Riemann hypothesis is founded, is called the Riemann zeta function. Before Riemann, Euler had formulated a mathematical series called Z (s), such that:

Z (s) = (1/1^s) + (1/2^s) + (1/3^s) + (1/4^s) + …

He found that Z (2) – i.e., substituting 2 for s in the Z function – equalled π²/6, and Z (4) equalled π⁴/90. At the same time, for many other values of s, the series Z (s) would not converge to a finite value: the sum would keep building to larger and larger numbers, unto infinity. This was particularly true for all values of s less than or equal to 1.
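To see the convergence concretely, here is a minimal Python sketch – my own illustration, not something from the original article – that sums the first million terms of Z (2) and Z (4), compares them with π²/6 and π⁴/90, and shows the sum for s = 1 running away:

```python
import math

def Z_partial(s, terms=1_000_000):
    """Partial sum of Euler's series Z(s) = 1/1^s + 1/2^s + 1/3^s + ..."""
    return sum(1 / n**s for n in range(1, terms + 1))

print(Z_partial(2), math.pi**2 / 6)   # both ≈ 1.644934
print(Z_partial(4), math.pi**4 / 90)  # both ≈ 1.082323
print(Z_partial(1, 10_000))           # harmonic series: already ≈ 9.79 and still climbing
```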

Euler was also able to find a prime number connection. Though the denominators together constituted the series of positive integers, with a small tweak, Z (s) could be expressed using prime numbers alone as well:

Z (s) = [1/(1 – 1/2^s)] * [1/(1 – 1/3^s)] * [1/(1 – 1/5^s)] * [1/(1 – 1/7^s)] * …
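Again purely as an illustration (not part of the original article), the product form can be checked numerically: multiplying the factors for all primes below 10,000 already reproduces the value of the series for s = 2 to several decimal places.

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes up to and including `limit`."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def Z_product(s, prime_limit=10_000):
    """Euler's product: the product over primes p of 1 / (1 - 1/p^s)."""
    result = 1.0
    for p in primes_up_to(prime_limit):
        result *= 1 / (1 - 1 / p**s)
    return result

print(Z_product(2))  # ≈ 1.64493... (π²/6)
print(Z_product(4))  # ≈ 1.08232... (π⁴/90)
```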

This was Euler’s last contribution to the topic. In the late 1850s, Riemann picked up where Euler left off. And he was bothered by the behaviour of the series of additions in Z (s) when the value of s dropped below 1.

In an attempt to make it less awkward (nobody likes infinities), he tried to modify it such that Z (2) and Z (4), etc., would still converge to interesting values like π²/6 and π⁴/90, etc. – but while Z (s ≤ 1) wouldn’t run away towards infinity. He succeeded in finding such a function but it was far more complex than Z (s). This function is called the Riemann zeta (ζ) function: ζ (s). And it has some weird properties of its own.

One such property is involved in the Riemann hypothesis. Riemann found that ζ (s) would equal zero whenever s was a negative even number (-2, -4, -6, etc.). These values are also called trivial zeroes. He wanted to know which other values of s would precipitate a ζ (s) equalling zero – i.e. the non-trivial zeroes. And he did find some values. They all had something in common, because they looked like this: (1/2) + 14.134725142i, (1/2) + 21.022039639i, (1/2) + 25.010857580i, etc. (i is the imaginary unit, the square root of -1.)

Obviously, Riemann was prompted to ask another question – the question that has since been found to be extremely difficult to answer, a question worth $1 million. He asked: Do all values of s that are not negative even integers and for which ζ (s) = 0 take the form ‘(1/2) + a real number multiplied by i’?

In more mathematical terms: “The Riemann hypothesis states that the nontrivial zeros of ζ (s) lie on the line Re (s) = 1/2.”
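For anyone who wants to see these zeroes for themselves, the arbitrary-precision mpmath library can evaluate ζ (s) on the critical line. A small sketch (again my own addition, using mpmath’s zeta and zetazero functions):

```python
from mpmath import mp, zeta, zetazero

mp.dps = 15  # work with 15 decimal digits of precision

# ζ(s) is (numerically) zero at the values Riemann found:
for t in (14.134725142, 21.022039639, 25.010857580):
    print(zeta(0.5 + t * 1j))  # each result is vanishingly close to 0

# zetazero(n) returns the nth non-trivial zero; every zero computed so far
# has real part exactly 1/2 – which is what the hypothesis asserts holds for all of them.
for n in range(1, 4):
    print(zetazero(n))  # (0.5 + 14.1347...j), (0.5 + 21.0220...j), (0.5 + 25.0108...j)
```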


When I first heard Atiyah’s claim, I was at a loss for how to react. Most claimed solutions for the Riemann hypothesis are dismissed quickly because they contain leaps of logic not backed by sufficient mathematical rigour. On the other hand, Atiyah isn’t just anybody. He won the Fields Medal in 1966 and the Abel Prize in 2004, and has been associated with some famous solutions for problems in algebraic topology.

Perhaps the most famous recent example of such a claim was Vinay Deolalikar’s attempted proof of another major unsolved problem in mathematics, whether P equals NP, in August 2010. The P/NP problem asks whether every problem whose solution is easy to check is also easy to solve. Though nobody has been able to settle the question, it is widely assumed by mathematicians and computer scientists that P ≠ NP, i.e. that a problem whose solution is easy to check is not necessarily easy to solve. Deolalikar, then working at Hewlett Packard Research Labs, claimed to have a proof that P ≠ NP, and it couldn’t be readily dismissed because, to borrow Scott Aaronson’s words,

What’s obvious from even a superficial reading is that Deolalikar’s manuscript is well-written, and that it discusses the history, background, and difficulties of the P vs. NP question in a competent way. More importantly (and in contrast to 98% of claimed P≠NP proofs), even if this attempt fails, it seems to introduce some thought-provoking new ideas, particularly a connection between statistical physics and the first-order logic characterization of NP.

Nonetheless, flaws were found in Deolalikar’s proof, as delineated prominently in Aaronson’s and R.J. Lipton’s blogs, and the claim was settled: P/NP remained (and remains) unsolved. Lesson: watch the blogs as a first response measure. The peers of a paper’s author(s) usually know what’s happening before the news does and, if a controversial claim has been advanced, they’re likely already further into a debate than the mainstream media realises.

So as a quick way out in Atiyah’s case, I hopped over to Shtetl Optimized, Aaronson’s blog. And there, at the end of a long post about the weirdness of quantum theory, was this line: “As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.” Aha!

Some of you will remember that three physicists made a major announcement last year about finding a potential way to solve the Riemann hypothesis because they had unearthed an eerie similarity between the Riemann zeta function, central to the hypothesis, and an equation found in quantum mechanics. While they’re yet to post an update, the physicists’ thesis was compelling and wasn’t dismissed by the wider mathematical community, raising hope that it could lead to a solution.

Atiyah’s solution also concerns itself with a famously physical concept: the fine-structure constant, denoted as α (alpha). The value of this constant determines the strength with which charged particles like electrons interact with the electromagnetic field. It has a value of about 1/137. If it were higher, the electromagnetic force would be stronger and all atoms would be smaller, apart from numerous other cascading effects. Atiyah’s resolution of the Riemann hypothesis is pegged to a new derivation for the value of α, and this is where he runs into trouble.

Sean Carroll, a theoretical physicist at Caltech, called the derivation “misguided”. Madhusudhan Raman, a postdoc at the Tata Institute of Fundamental Research, said that while he isn’t qualified to comment on the correctness of the Riemann hypothesis proof, he – like Carroll – had some problems with the physics of it.

His full explanation is as follows (paraphrased): It is tempting to think of α as a fixed number, like π (pi), but it is not. While the value of π does not change, the value of α does: it depends on the energy at which it is measured. At higher energies, such as those probed inside the Large Hadron Collider, the value of α is higher. So α is not so much a number as a function that says its value is X at energy Y. However, Atiyah appears to have worked with the assumption that α is a single, fixed number like π. This isn’t true, and therefore his derivation is suspect.
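To make that point concrete: in quantum electrodynamics, the measured value of α grows slowly – logarithmically – with the energy at which it is probed. The sketch below uses the textbook one-loop formula with only the electron’s contribution included, so it understates the full effect (with all charged particles counted, 1/α is about 128 near the Z-boson mass); it is my illustration of the running, not anything taken from Atiyah’s paper.

```python
import math

ALPHA_0 = 1 / 137.035999  # fine-structure constant at (near-)zero energy
M_E = 0.000511            # electron mass in GeV

def alpha_one_loop(energy_gev):
    """One-loop QED running of alpha, keeping only the electron loop."""
    log_term = (2 * ALPHA_0 / (3 * math.pi)) * math.log(energy_gev / M_E)
    return ALPHA_0 / (1 - log_term)

for e in (0.001, 1.0, 91.19, 13000.0):  # ~1 MeV, 1 GeV, the Z mass, LHC energies (in GeV)
    print(f"1/alpha at {e:>8} GeV ≈ {1 / alpha_one_loop(e):.1f}")
# prints roughly 136.9, 135.4, 134.5 and 133.4 – alpha creeps up as the energy rises
```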

Sabine Hossenfelder, a research fellow at the Frankfurt Institute for Advanced Studies, raised the same issues with Atiyah’s effort. Carroll went a step further and said that if he had to be very charitable, the derivation could pass muster – but not without also discussing various issues in physics associated with α. However, he wrote, “Not a whit of this appears in Atiyah’s paper.”

At the same time – and unlike in numerous previous instances – these physicists and others besides continue to have great respect for Atiyah and his work, and why not? Though he is 89, as one comment observed on Carroll’s blog, “It’s brave to fight to the last, and, who knows, with his distinguished record and doubtless vast erudition, maybe there’s some truth or useful insights in these latest papers, even if [it’s] not quite what he claims.”

And so the Riemann hypothesis endures, unresolved.

The Wire
September 28, 2018

An epistocracy

The All India Council for Technical Education (AICTE) has proposed a new textbook that will discuss the ‘Indian knowledge system’ via a number of pseudoscientific claims about the supposed inventions and discoveries of ancient India, The Print reported on September 26. The Ministry of Human Resource Development (MHRD) signed off on the move, and the textbook – drawn up by the Bharatiya Vidya Bhavan educational trust – is set to be introduced in 80% of the institutions the AICTE oversees.

According to the Bharatiya Vidya Bhavan website, “the courses of study” to be introduced via the textbook “were started by the Bhavan’s Centre for Study and Research in Indology under the Delhi Kendra after entering into an agreement with the AICTE”. They include “basic structure of Indian knowledge system; modern science and Indian knowledge system; yoga and holistic health care”, followed by “essence of Indian knowledge tradition covering philosophical tradition; Indian linguistic tradition; Indian artistic tradition and case studies”.

In all, the textbook will be available to undergraduate students of engineering in institutions other than the IITs and the NITs but still covering – according to the Bhavan – “over 3,000 engineering colleges in the country”.

Although it is hard to fathom what is going on here, it is clear that the government is not allowing itself to be guided by reason. Otherwise, who would introduce a textbook that would render our graduates even more unemployable, or under-employed, than they already are? There is also a telling statement from an unnamed scholar at the Bhavan who was involved in drafting the textbook; as told to The Print: “For ages now, we have been learning how the British invented things because they ruled us for hundreds of years and wanted us to learn what they felt like. It is now high time to change those things and we hope to do that with this course”.

The words “what they felt like” indicate that the people who have enabled the drafting and introduction of this book, including elected members of Parliament, harbour a sense of disenfranchisement and now feel entitled to their due: an India made great again under the light of its ancient knowledge, as if the last 2,000 years did not happen. It also does not matter whether the facts as embodied in that knowledge can be considered at par with the methods of modern science. What matters is that the Government of India has today created an opportunity for those who were disempowered by non-Hindu forces to flourish and that they must seize it. And they have.

In other words, this is a battle for power. It is important for those trying to fight against the introduction of this textbook or whatever else to see it as such because, for example, MHRD minister Prakash Javadekar is not waiting to be told that drinking cow urine to cure cancer is pseudoscientific. It is not a communication gap; Javadekar in all likelihood is not going to drink it himself (even though he is involved in creating a platform to tell the masses that they should).

Instead, the stakeholders of this textbook are attempting to fortify a power structure that prizes the exclusion of knowledge. Knowledge is power, after all – but an epistocracy cannot replace a democracy; “ignorance doesn’t oppress in the same way that knowledge does,” to adapt the words of David Runciman. For example, the textbook repeatedly references an older text called the ‘Yantra Sarvasva’ and endeavours to establish it as a singular source of certain “facts”. And who can read this text? The upper castes.

In turn, by awarding funds and space for research to those who claim to be disseminating ancient super-awesome knowledge and shielding them from public scrutiny, the Narendra Modi government is subjecting science to power. A person who peddles a “fact” that Indians flew airplanes fuelled by donkey urine 4,000 years ago no longer need aspire to scholarly credentials; he only has to want to belong to a socio-religious grouping that wields power.

A textbook that claims India invented batteries millennia before someone in Europe did is a weapon in this movement but does not embody the movement itself. Attempts to make this textbook go away will not make future textbooks go away, and attempts to counter the government’s messaging using the language of science alone will not suffice. For example, good education is key, and our teachers, researchers, educationists and civil society are a crucial part of the resistance. But even as they complain about rising levels of mediocrity and inefficiency, perpetrated by ceaseless administrative meddling, the government does not seek to solve the problem as much as use it as an excuse to perpetrate further mediocrity and discrimination.

There was no greater proof of this than when a member of the National Steering Committee constituted by the Department of Science and Technology to “validate research on panchgavya” told The Wire in 2017, “With all-round incompetence [of the Indian scientific community], this is only to be expected. … If you had 10-12 interesting and well-thought-out good national-level R&D programmes on the table, [the ‘cowpathy’] efforts will be seen to be marginal and on the fringe. But with nothing on the table, this gains prominence from the government, which will be pushing such an agenda.”

But we do have well-thought-out national-level R&D programmes. If they are not being picked up by the government, it must be forced to explain why, and to justify all of its decisions, instead of being allowed to bask in the privilege of our cynicism and use the excuse of our silence to sustain its incompetence. Bharatiya Vidya Bhavan’s textbook exists in the wider political economy of banning beef, lynching Dalits, inciting riots, silencing the media and subverting the law, and not in an isolated silo labeled ‘Science vs. Pseudoscience’. It is a call to action for academics and everyone else to protest the MHRD’s decision and – without stopping there – to vocally oppose all other moves by public institutions and officials to curtail our liberties.

It is also important for us to acknowledge this because we will have to redraft the terms of our victory accordingly. To extend the metaphor of a weapon: the battle can be won by taking away the opponent’s guns, but the war will be won only when the opponent finds its cause to be hopeless. We must fight the battles but we must also end the war.

The Wire
September 27, 2018

Storm-seeker

For the last two nights, the skies of Bangalore have been opening up, as if for me. Last night, it poured rivers. The sky flashed with the kind of lightning that makes you say you’ve never seen lightning like that. The entire empyrean turns that electric pink that you know is all heat, blowing like cannons through columns of air at the speed of sound. Seconds later, you hear it build to a crescendo, the sound of a mountain coming apart – and it pours, pours, pours, pours.

The petrichor is thick in the air, clogging your senses. Its name translates from the Greek to, roughly, “the fluid in the veins of the gods, in the rocks”. Its odour is due to the presence of an alcohol, geosmin, in the soil, released by actinobacteria. We pick up on petrichor the moment it is in play because we have evolved to; we know it is going to rain when there are a few parts per trillion of geosmin in the air. A biologist will tell you it is to help you find water wherever you are. I don’t think so. I think it is to help us find the storm wherever it is. We’re storm-seekers. And why not? I stand upon this crag looking at the world above on fire, the world below underwater, and I, in between heaven and hell.

It is where I have always been. Satyavrata cursed, Trishanku liberated.

Political activation

… all forms of knowledge are implicated in political structures in one way or another. If the people who actually have expertise in that form of knowledge are not the ones activating it politically, then someone else is going to do it for them.

– Curtis Dozier, publisher of Pharos. Source of quote here.

Scientists communicating their work to the people is a way for them to take control of the narrative such that they can guide it the way they want it to go, the way they think it should go. But this is a small component of the larger idea of science stewardship. Without stewards – who can chaperone scientific knowledge through corridors of power as much as through the many streams of public dialogue – science, even if just the label, is going to be appropriated by “someone else” and activated politically unto their ends. When the “someone else” is also bound to an ethno-nationalistic ideology, science is doomed.

Board games II

My second visit to Tabletop Thursday on September 20 was super-fun again. This time I played four games: Coloretto, The Lady and the Tiger, Coup and Secret Hitler. I’m pretty sure one of the people I played the last game with, who was introduced only as Amit, was Amit Varma, the author of India Uncut, the blog that got me blogging. I didn’t get a chance to talk to him – hopefully next time!

I don’t want your ideas

Tommaso Dorigo published a blog post on the Science 2.0 platform, where he’s been publishing his writing, that I would have liked to read. It was about whether neural networks could help design particle detectors for the accelerators of the future. This is an intriguing idea, considering neural networks have been pressed into service for diagnostic and problem-solving tasks in various other fields in an effort to leapfrog barriers to those fields’ expansion. And particle physics is in dire need of such efforts, given the growing gap between theoretical predictions and experimental results.

However, I couldn’t concentrate on Dorigo’s piece because the moment I realised that he was the author (having discovered the piece through its headline), my mind was befouled by the impression I have of him as a person – which is poor. This was the result of an interaction he had had on Twitter with astrophysicist Katherine Mack last year, in which he came across – from my POV – as an insensitive and small-minded person. I had written shortly after on the basis of this interaction that as much as we need more scientific insights, they or their brilliance should not excuse troubling behaviour on the scientist’s part.

In other words, no matter how brilliant the scientist, if he is going to joke about matters no one should be joking about and simply be juvenile in his conduct, then he should not be accommodated in academia – or in public discourse – without sufficient precautions to prevent him from damaging the morale of his non-male colleagues and peers. I am aware that there is no way Dorigo’s unwholesome ideas can affect my life, but at the same time I don’t want to consume what he publishes and so contribute to the demand for his writing (even passively). This isn’t a permanent write-off: Dorigo is yet to apologise for his words (that I know of), and silent repentance is not useful for those who witnessed that very public exchange with Mack.

However, at the end of all this, there is no way for me to remove the idea of neural networks designing particle detectors from my consciousness. Plus given that ideas in science have to be attributed to those who originated them, this means I can’t explore Dorigo’s idea without reading more of Dorigo’s writing.

At this point, I am tempted to ask that publishers, distributors, aggregators and platforms – all entities that share and distribute content on various platforms and through different services – ensure that the name of the author is present and accessible in the platform/service-specific metadata. This is because more and more people are starting to have discussions about whether genius should excuse, say, misogyny and concluding that it shouldn’t. People are also becoming more conscious of whose writing they are consuming and whose they are actively avoiding for various reasons. These decisions matter, and content distributors need to assist them actively.

For example, I came upon Dorigo’s article via a Google News Alert for ‘high-energy physics’. The corresponding email alert looked like this:

[Screenshot of the Google News Alert email for ‘high-energy physics’, dated September 21, 2018]

The headline, the publisher’s name and the first score or so words of the article are visible in the preview provided by Google. In the first item, the fact that it is also a press release is mentioned, though I am not sure if this is a regular feature. And although it is not immediately evident whether the publisher is who it says it is, Google does not mask the URL: hover over the link and you can see the destination behind a forwarding prefix (`google.com/url?rct=j&sa=t&url=<link>`).

I have essentially framed my argument as a contest between discovering new ideas and avoiding others. For example, by choosing to avoid Dorigo’s writing, I am also choosing to avoid discovering the arguably unique ideas that Dorigo might have – and in the long-run give up on all that knowledge. However, this is an insular counterargument because there is a lot to be learnt out there. There is no reason I should have to put up with someone like Dorigo. Should a subsequent question arise as to whether we should tolerate someone who is doing something unique while also being misogynistic, etc.: the answer is still ‘no’ because it remains that nothing should excuse bad behaviour of that kind.

‘Gardens of the Moon’

I – and all my friends who have read the Malazan Book of the Fallen series – have wondered why the first book in the series is titled Gardens of the Moon. The only Moon-related entity in the book is Moon’s Spawn, the flying fortress of Anomander Rake’s Tiste Andii, but it doesn’t possess any gardens. In fact, the only garden that finds prominent mention in the book is the one on which a festival named Gedderone’s Fete takes place. So the title has always been confusing.

Yesterday, in the middle of my third reread of the series, I came across a curious statement in Dust of Dreams, the ninth book: that Olar Ethil, the bonecaster of the Logros T’lan Imass, is called ‘Ayala Alalle’ by the Forkrul Assail. ‘Ayala Alalle’ means ‘tender of the Gardens of the Moon’. Now, Olar Ethil is a particularly interesting character in the series: she may be the mother of Draconus’s daughters Envy and Spite, was an Azathanai who may have created the Imass, and she may be Burn the Sleeping Goddess (keeping with author Steven Erikson’s persistent use of an unreliable narrator throughout the series). She was certainly the bonecaster who conducted the First Ritual of Tellann.

Olar Ethil, a.k.a. Ayala Alalle, as Burn is what is relevant here. The Malazan world is thought to be kept in existence by the dreaming of Burn. Should her dreams be poisoned, the Malazan world will be poisoned; should she awaken from her dream, the Malazan world will be destroyed. Now, if the person who was Ayala Alalle was also the person known as Burn, then ‘tending to the Gardens of the Moon’ may have been a reference to Burn’s tending to her dream or the subjects of her dream – i.e. in effect serving as a broad introduction to the world and peoples of the books.

I know this is tenuous, resting as it does on Olar Ethil being Burn – something Erikson never confirms, not even in the first two books of the Kharkhanas Trilogy (the third is yet to be published), which discuss the Azathanai before K’rul created the Warrens. However, I’m going to go with it because Erikson does not provide any other material in Gardens of the Moon that might suggest why it is named so. All the other books in the series are named very specifically, after people or events in each book.

Finally, I am going to take heart from the fact that we find out only in the series’s last book, The Crippled God, as to why the series is called so. It is just another example of Erikson being perfectly okay with explaining things as and when he pleases and not when he thinks the reader ought to know.

The sounds of science

Do you remember the sound of a telephone ringing in the early 1990s? That polyphonic ringtone so reminiscent of the life of that decade…

Do you remember the sound of using a telephone in the 1990s? The flat noises the cheap plastic buttons on the interface made when you pushed on them, the wound-up cord flopping over the wooden table, the clackety-clack of the switch when you plunged it into the chassis, wondering why you couldn’t hear a voice on the other side, the closing allegro of the handset coming to rest, almost surely time for you to stop eavesdropping on the teacher-parent phone call.

In case you were wondering, science has everything to do with these sounds, noises and other music – as much as it had everything to do with why telephones and other such devices were in your house in the first place. However, while their underlying principles are carefully recorded in the scientific literature and preserved for decades, and while our encounters with their designs are memorialised in trends and encoded in the interfaces of the future, the sounds find refuge only in our memories, where they slowly fade away.

We must endeavour to preserve them better because they embody a cultural experience of our carefully, ergonomically crafted world. They are the inadvertent, nonetheless persistent, products of an older scientific vision that only saw far enough to say every person must be able to speak to every other person almost instantaneously. The vision did not anticipate the sound but the sound is what defined our day-to-day engagement with technology.

This is what a project, called ‘Conserve the Sound’, has been trying to do. Funded by the Film and Media Foundation NRW, Germany, it is:

… an online museum for vanishing and endangered sounds. The sound of a dial telephone, a walkman, a analog typewriter, a pay phone, a 56k modem, a nuclear power plant or even a cell phone keypad are partially already gone or are about to disappear from our daily life.

Almost all the products featured on their site – from tabletop ventilators to the engines of the Junkers Ju 52 aircraft – are of German origin but that does not diminish the nostalgia trip. Why, use the site long enough and browse through enough sounds you recognise, and you might soon be tempted to sample ones that you never got the chance to hear growing up.

The Wire
September 14, 2018

Fact-checking in science journalism

The Gordon and Betty Moore Foundation has helped produce a report on fact-checking in science journalism, and it is an eye-opening read. It was drafted by Deborah Blum and Brooke Borel; there is a nice summary here.

The standout findings for me, as a science editor working with journalists for a news publication in India, all had something to do with the fact that most people like to refer to the New Yorker model as the gold standard, feeding an implicit aspiration that that is the only way fact-checking should be done. But while the thoroughness and level of quality control exemplified by the New Yorker model are very high, the aspiration itself is frequently unrealistic. The following lines from the report (paraphrased) support this view:

  • About half of all outlets surveyed for the report (mostly American) delegated fact-checking to the reporters, the editors or a combination of them
  • Fact-checkers in the US made anywhere from $15 to $75 an hour, with the average being $30; more importantly, fact-checkers cost money that publications may not always be able to afford. As one editor put it, “The difference made by incrementally-increased quality [due to fact-checking] is hard to quantify and hard to justify financially” – more so in India, where, for example, it has seemed increasingly evident that readers will not penalise a publication for working without a style guide.

(In the report, the newspaper model “does not employ fact-checkers, per se. Instead, the accuracy of the story lies mostly with the journalist. Many newspaper journalists have their own systems for double-checking facts in their stories… – for example, checking the piece line-by-line and cross-referencing to original sources. In the newspaper model all stories also go through editors, who push back on iffy claims and look for other holes in sourcing or logic. Rather than going line-by-line and checking all the facts, the editor is looking for potential problems. Finally, the story will go through the copy desk, where copy editors will check for style and grammar. At some publications, copy editors do an abbreviated fact check, confirming facts against written sources, although they don’t typically re-interview people who appear in the story.”)

  • Editors who use the newspaper model go by a ‘sniff test’, where they stay alert for facts or phrasing that sound problematic, controversial, etc.
  • The Wire Science uses the newspaper model (although I am both the ‘editor’ and the ‘copy desk’) and, relative to publications in the West, I can’t help but wonder from time to time – even if irrationally so – whether the work we are doing is somehow poorer in quality. But reading the names of the publications that employ the same model has provided overwhelming validation: “Ars Technica, Sky at Night Magazine, Chemical & Engineering News, … Environmental Health News, Gizmodo, Nature Medicine, Newsweek (both print and online), NOVA Next, PBS NewsHour, Quartz, Retraction Watch, Science, Science News for Students, Vox (except features), and the Washington Post, as well as digital-only stories from Sierra, and Smithsonian.”

(At The Wire Science, once an article is submitted, one of three workflows kicks in. If I am not familiar with the article’s topic: I check it for clarity and flow, forward it to an independent topical expert who can comment on its technical details, and proceed to edit it together with the author. If I am familiar with the article’s topic: I check it for clarity and flow, perform a ‘sniff test’ fact-check, and proceed to edit it together with the author. If the article is in long-form: I check it for clarity and flow, forward one copy to an independent topical expert who can comment on its technical details, one copy to an independent fact-checker, and finally edit it together with the author.)

  • Fact-checking has been on the decline, but it is not as easy as attributing the decline to the rise of digital publishing. In fact, the divide between print and digital newsrooms vis-à-vis fact-checking is much weaker than the divide between news and long-form publishing.
  • Some 61% of publications that had a fact-checker did not provide written guidelines and 57% did not provide training for the person in that position
  • Most editors “don’t allow anyone to share unpublished materials – whether an entire story or a short excerpt – with sources during a fact check”; The Wire Science treads the same line for the most part
  • It is easier to correct an article post-publication in digital form than in print form. But most people forget that the digital medium also aids preservation and reproducibility: a captured screenshot can last for many more years than a piece of paper bearing some words.

A concluding note from my end: facts are important, but in science journalism we are also often in the business of uncertainty and exploration – realms of endeavour in which facts are, more often than not, contingent. So fact-checking must not fetishise precision, especially when there might be an advantage in dangling doubt from a well-constructed web of contingencies; it must also give facts room to move and breathe freely.

The Wire
September 14, 2018

Will an Indian win an Ig Nobel by 2035?

The 28th First Annual Ig Nobel Prize Ceremony concluded yesterday, handing out 10 prizes to 38 recipients with institutional affiliations in 26 countries. There is one recipient with an affiliation in India, though I doubt anyone is keeping track. They should (John Barry, for the reproductive medicine prize; see below). In fact, instead of endorsing the view that an Indian will win a Nobel Prize by 2035, the Government of India should aspire to have an Indian win an Ig Nobel Prize within the next two decades (if the intent is to target a prize at all).

Although there is an apparent sense of ridicule in the prizes’ premise, it is gentle and in fact uplifting. A government should aspire to help its country’s scientists win an Ig Nobel Prize because the government, at least some department of it, has tremendous influence on the national research culture and research priorities. In this framework, to win an Ig Nobel Prize would mean being able to work on what scientists deem worth their while. This in turn would require the presence of a research evaluation scheme that is fair, efficient and not very exacting, allowing scientists the time to work on projects that catch their fancy without consequence for their career advancement or other responsibilities.

This is, of course, a lofty ambition, and it requires changes in the resource makeup of Indian academia as much as in its demographics and in structural factors like evaluation schemes. Most of all, it requires time. But as I said, if the intention is to point the R&D guns of Indian scientists towards winning a specific prize, it should be the Ig Nobel Prize. Nothing makes the case better than the citations for this year’s winners, so without further ado:

  • Medicine – “for using roller coaster rides to try to hasten the passage of kidney stones”
  • Anthropology – “for collecting evidence, in a zoo, that chimpanzees imitate humans about as often, and about as well, as humans imitate chimpanzees”
  • Biology – “for demonstrating that wine experts can reliably identify, by smell, the presence of a single fly in a glass of wine”
  • Chemistry – “for measuring the degree to which human saliva is a good cleaning agent for dirty surfaces”
  • Medical education – “for the medical report ‘Colonoscopy in the Sitting Position: Lessons Learned From Self-Colonoscopy'”
  • Literature – “for documenting that most people who use complicated products do not read the instruction manual”
  • Nutrition – “for calculating that the caloric intake from a human-cannibalism diet is significantly lower than the caloric intake from most other traditional meat diets”
  • Peace – “for measuring the frequency, motivation, and effects of shouting and cursing while driving an automobile”
  • Reproductive medicine – “for using postage stamps to test whether the male sexual organ is functioning properly”
  • Economics – “for investigating whether it is effective for employees to use Voodoo dolls to retaliate against abusive bosses”

It is not that scientists should work only on these kinds of studies, or that they shouldn’t work on them at all. There should definitely be a modicum of accountability in terms of what the funds earmarked for R&D – a limited resource – are used for. But that said, being able to work on these kinds of studies shouldn’t be rendered entirely impossible either, at least in some centres of the country. For example, it would be questionable to require every research institution to undertake blue-sky research, but those centres that are equipped for it shouldn’t be disincentivised from doing so.

More generally: The ideas that win the Ig Nobel Prizes may not be the ones that change the world but they certainly stand for the even more important super-idea that changing the world shouldn’t be our sole imperative.

The Wire
September 14, 2018