Breaking down CMB Bharat

A consortium of Indian scientists has submitted a proposal to the national space agency for a new space science mission called CMB Bharat. Let’s break it down.

What is CMB Bharat?

According to Tarun Souradeep, a senior professor at the Inter-University Centre for Astronomy and Astrophysics, Pune, the proposal is for a “comprehensive next generation cosmic microwave background mission in international collaboration, with a major Indian contribution.”

What is the cosmic microwave background?

Very simply put, the cosmic microwave background (CMB) is radiation left over from the time the first atoms formed in the universe, about 378,000 years after the Big Bang. It is the smoke of the ‘smoking gun’, as it were. It manifests as a temperature of 2.7 K in the emptiest regions of space; without the CMB, these regions would have exhibited a temperature of 0 K. The ‘microwave’ in its name alludes to the radiation’s peak frequency, 160.23 GHz, which falls in the microwave range.
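That frequency isn’t arbitrary: the CMB is near-perfect blackbody radiation, so its peak frequency follows from its temperature via Wien’s displacement law. A quick back-of-the-envelope check, using standard physical constants (the calculation is illustrative, not from the proposal):

```python
# Peak frequency of a blackbody at the CMB's temperature, using the
# frequency form of Wien's displacement law: nu_peak ~ 2.8214 * k_B * T / h.
h = 6.62607015e-34    # Planck constant, J s
k_B = 1.380649e-23    # Boltzmann constant, J/K
T_cmb = 2.725         # CMB temperature, K

nu_peak = 2.8214 * k_B * T_cmb / h
print(f"{nu_peak / 1e9:.1f} GHz")  # ~160.2 GHz, squarely in the microwave band
```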

As radiation that has been around since the dawn of space and time, it carries the signatures of various cosmic events that shaped the universe over the last 13.8 billion years. So scientists hoping to understand more about the universe’s evolution often turn to instruments that study the CMB.

What will CMB Bharat do?

Souradeep: “It proposes [a] near-ultimate survey [of] polarisation that would exhaust the primordial information in this ‘gold-mine’ for cosmology.”

The CMB contains different kinds of information, and each kind can be elicited depending on which instruments scientists use to study it. For example, the European Space Agency’s Planck space probe mapped the CMB’s small temperature variations throughout the universe. Based on this, scientists were able to obtain a clearer picture of how mass is distributed throughout space.

The other major feature of the CMB, apart from its temperature, is its polarisation. As electromagnetic radiation, the CMB is made up of oscillating electric and magnetic fields. When these fields encounter certain objects or forces in their path, the direction in which they oscillate can change. This change is a change in the radiation’s polarisation.

By studying how different parts of the CMB are polarised in different ways, scientists can understand what kinds of events might have caused those changes. It is essentially detective work to unravel the grandest mysteries ever to have existed.

The CMB Bharat proposal envisages an instrument that will study CMB polarisation to a greater extent than the Planck or NASA WMAP probes did – or, as Souradeep put it, to a “near-ultimate” extent. WMAP stands for Wilkinson Microwave Anisotropy Probe. Planck probed about 10% of the CMB’s polarisation while WMAP probed even less.

What kind of instrument will CMB Bharat be?

Souradeep said that it is an imager with “6,000 to 14,000 power detectors in the focal plane”. The focal plane is the surface on which the instrument’s optics focus incoming light – and where the detectors sit.

They will be maintained at a very low temperature, much less than 1 K. This is because the instruments will emit heat during operation, which will have to be siphoned away lest it interfere with their observations.

As a result, they will be sensitive in the attowatt range – i.e. to changes in power of the order of 0.000000000000000001 joule per second.
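To get a sense of how faint that is: at the CMB’s peak frequency, each photon carries an energy of h × ν, so an attowatt of incident power corresponds to only a few thousand photons arriving per second. (A rough, illustrative calculation, not a figure from the proposal.)

```python
# One attowatt (1e-18 W) at the CMB's peak frequency, counted in photons.
h = 6.62607015e-34        # Planck constant, J s
nu = 160.23e9             # CMB peak frequency, Hz
photon_energy = h * nu    # ~1.06e-22 J per photon

photons_per_second = 1e-18 / photon_energy
print(round(photons_per_second))  # roughly 9,400 photons per second
```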

What kind of discoveries will CMB Bharat stand to make?

Its goals are classified broadly as ultra-high energy and high energy.

The ultra-high energy regime refers to the very young universe, in which energy was packed so tightly together that gravitational and quantum mechanical effects didn’t express themselves separately, as they do today. Instead, they are thought to have manifested in the form of a unified ‘quantum gravity’.

Of course, we don’t know this for sure; and even if the universe went through this phase, we don’t really know what reality would have looked like. According to Souradeep, CMB Bharat is expected to be able to “reveal the first clear signature of quantum gravity and ultra-high-energy physics in the very early universe”.

It could also help scientists understand the quantum mechanical counterpart of gravitational waves. These are ripples of energy flowing through the spacetime continuum, released when very massive bodies accelerate through it.

The Laser Interferometer Gravitational-wave Observatory – known famously as LIGO – detected the classical form of these waves, a feat that won its makers the Nobel Prize for physics in 2017. Their quantum mechanical side, should it exist, remains a mystery.

CMB Bharat’s high-energy regime refers to constituents of the particulate realm. Per Souradeep, the mission will explore problems in neutrino physics, including helping determine how many kinds of neutrinos there actually are and the order of their masses; map the distribution of dark matter; and track baryons (composite particles like protons and neutrons) in the observable universe.

Additionally, the instrument will be able to study the Milky Way galaxy’s astrophysical properties in greater detail.

What’s the status of CMB Bharat?

“The Indian Space Research Organisation has a programmatic approach to science projects,” Souradeep said. ISRO’s Space Science Programme made an ‘announcement of opportunity’ for future astronomy programmes in February 2017. Following this, he said, a “consortium of cosmology researchers” drafted a proposal for CMB Bharat in April that year.

“The project is under review and consideration.”

Souradeep told The Hindu, “Typically, ambitious space missions of this magnitude take over a decade [to] launch. We would like to be observing for 4-6 years and the time to final release of all data and release could extend to [about] five years.”

The Wire
January 27, 2019

ISRO’s amazing tender notice

The Indian Space Research Organisation (ISRO) has provided more details about its Gaganyaan programme, including new stages for its GSLV Mk III launch vehicle, through – of all things – a tender notice. Such surreptitiousness is par for the course for India’s spaceflight organisation, which has often done next to nothing to publicise even its most high-profile space missions.

According to Google’s timestamp, the notice has been available online since at least August 2017; another version was online on January 25. In it, ISRO has invited quotations for a slew of infrastructure upgrades that will prepare its second launchpad (SLP) at the Satish Dhawan Space Centre, Sriharikota, to support a rocket that can lift humans to space, as well as heavier satellites. The last date to submit proposals is listed as February 20, 2019.

Perhaps the more tantalising details concern two rocket stages, called SC120 and SC200. The Mk III is a three-stage rocket. The first stage comprises two boosters, called S200, attached to the sides of the rocket. The second stage, the L110, is powered by liquid propellants combusted by a pair of Vikas 2 engines. The ‘S’ and ‘L’ denote solid and liquid propellants, and the numbers denote the total propellant mass the stages carry, in tonnes.

The third stage is powered by a cryogenic engine, C20. The stages are ignited in the order of their numbering.
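The naming convention reduces to a small lookup: the letter prefix encodes the propellant type and the trailing number the approximate propellant load in tonnes. A sketch of that convention (an illustration, not an ISRO utility):

```python
# Decode ISRO stage designations: prefix = propellant type,
# trailing number = approximate propellant mass in tonnes.
FUEL_CODES = {"S": "solid", "L": "liquid", "C": "cryogenic", "SC": "semi-cryogenic"}

def decode_stage(designation: str) -> str:
    prefix = "SC" if designation.startswith("SC") else designation[0]
    tonnes = int(designation[len(prefix):])
    return f"{FUEL_CODES[prefix]} stage, ~{tonnes} t propellant"

print(decode_stage("S200"))   # solid stage, ~200 t propellant
print(decode_stage("SC120"))  # semi-cryogenic stage, ~120 t propellant
```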

The GSLV Mk III. The crew module (as used in the atmospheric reentry experiment in December 2014) is visible in the topmost chamber. Credit: ISRO

The SC in ‘SC120/200’ stands for semi-cryogenic, a type of engine ISRO had already been developing for its reusable launch vehicle programme. Both of them seem to be alternatives for the Mk III rocket’s second stage, the L110.

A discussion on Reddit suggests that adapting the GSLV Mk III to use these stages would require enough changes for the modified version to differ significantly from the original. Such a rocket is then expected to be able to lift over 5,000 kg to the geostationary transfer orbit – a goal that former ISRO chairman A.S. Kiran Kumar spelled out in 2017. With the L110, the Mk III can currently lift up to 4,000 kg.

According to a technical document describing the trailer system used to transport rocket stages, the SC120 stage will be 4 m wide, 17.29 m tall and weigh 11,500 kg. A more futuristic variant is likely to see the SC120 replaced by the SC200 system. Using both together would be infeasible because of their combined weight.

The tender notice also describes a new and heavier cryogenic upper stage called C32, a variant of the C20 stage that the Mk III uses at present. In the Indian space programme, a rocket stage powered by a cryogenic engine carries liquefied oxygen and liquefied hydrogen, a combination shortened in industry parlance to ‘hydrolox’. The C32-powered upper stage, according to the transport system specs, will be 4 m wide, 14.75 m long and weigh 7,400 kg, which is 400 kg more than the C20.

The stage with the semi-cryogenic configuration will carry liquefied oxygen and a highly refined form of kerosene called RP-1 – a combination known as ‘kerolox’. Kerolox has a lower specific impulse than hydrolox. Specific impulse is a measure of “how much more push accumulates as you use that fuel” (source).

However, to its significant credit, RP-1 is about 10-times denser than liquid hydrogen, which means the same volume of kerolox will generate more thrust than the same volume of hydrolox (same source: thrust is “the amount of push a rocket engine provides to the rocket”). RP-1 is also cheaper, more stable at room temperature and presents much less of an explosion hazard. A well-known launch vehicle that uses kerolox is the SpaceX Falcon 9.
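The trade-off can be made concrete with the Tsiolkovsky rocket equation: for the same delta-v, a lower specific impulse demands more propellant mass, but kerolox’s higher bulk density means that mass fits in a much smaller tank. All the figures below (stage mass, delta-v, Isp values, bulk densities) are illustrative assumptions, not ISRO’s numbers.

```python
import math

g0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(final_mass, delta_v, isp):
    """Tsiolkovsky rocket equation: propellant needed to give
    final_mass (stage + payload) the requested delta_v at a given Isp."""
    return final_mass * (math.exp(delta_v / (g0 * isp)) - 1)

final_mass = 10_000.0  # kg (illustrative)
delta_v = 4_000.0      # m/s (illustrative)

m_hyd = propellant_mass(final_mass, delta_v, 450)  # hydrolox, Isp ~450 s
m_ker = propellant_mass(final_mass, delta_v, 340)  # kerolox, Isp ~340 s

vol_hyd = m_hyd / 360   # hydrolox bulk density ~360 kg/m^3
vol_ker = m_ker / 1030  # kerolox bulk density ~1030 kg/m^3

# Kerolox needs more propellant mass but only about half the tank volume.
```

The numbers only illustrate the direction of the trade-off; real stages differ in structural mass, engine performance and tankage.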

Additionally, kerolox engines are harder to ignite than hydrolox engines, more so when the propellant flow rate increases as the engine fires for longer. As a result, they are sometimes ignited on the ground itself, where the process can be better controlled. This is unlike the L110 stage, which ignites about 110 seconds after liftoff.

Beyond the crewed spaceflight programme itself, ISRO will need to continue its march towards a heavier-lift launch vehicle. Many commercial satellites, and India’s own GSAT communication satellites, are starting to weigh close to 7,000 kg, especially as the latter are tasked with bringing more transponders online to sate India’s growing bandwidth demand.

India currently relies on launch vehicles operated by the French company Arianespace, such as its Ariane 5 rocket, to launch such heavy missions. These contracts are very expensive (over Rs 400 crore per launch). On the other hand, using a homegrown and home-operated vehicle is likely to provide better control over the expenditure, support local manufacturing and keep vehicles ready as and when necessary.

Moreover, ISRO has a programme-wise approach to science missions, which means it typically announces opportunities based on the availability of launchers in the future, and not the other way round. In this paradigm, having a heavier lift launch vehicle, akin to China’s giant Long March 5, will present correspondingly greater opportunities to India’s scientific workforce.

At the same time, it is also important that ISRO undertake launches more frequently. This isn’t something the Mk III can help with because – unlike the Polar (PSLV) and Small Satellite Launch Vehicles (SSLV) – it is a much more complex machine, and will be even more so in the SC120/200 configuration. It can’t be set up and launched with as much ease.

This in turn requires a launchpad able to support such an intense workload, together with the logistical requirements for transporting and loading different fuels. As the notice states (lightly edited):

For servicing of semi-cryo stage at the SLP, it is necessary that new facilities and/or augmentations are established apart from augmentation of existing cryo and gas systems together with associated instrumentation and control systems.

1. Isrosene system
2. Liquid oxygen storage and filling system (LOFS)
3. Nitrogen storage and filling system (NSS)
4. Gas storage and servicing system (GSSF)
5. Instrumentation and control systems
6. Cable trench and pipe trench
7. Augmentation of [liquid oxygen] storage at SLP


(Isrosene is a grade of kerosene that ISRO has developed as a ‘greener’ fuel to be used on future missions.)

Proposed layout of the augmented SLP. Credit: ISRO

A launchpad upgraded in this fashion will also be useful for the reusable launch vehicle programme, expected to be ready by 2030. Its current design envisages a launch vehicle powered by four or five kerolox semi-cryogenic engines during its ascent (and a scramjet engine during the descent phase).

The Wire
January 26, 2019

Compare ideas with ideas

Avi Loeb in his interview to the New Yorker:

We don’t have as much data as I would like. Given the data that we have, I am putting this on the table, and it bothers people to even think about that, just like it bothered the Church in the days of Galileo to even think about the possibility that the Earth moves around the sun. Prejudice is based on experience in the past. The problem is that it prevents you from making discoveries. If you put the probability at zero per cent of an object coming into the solar system, you would never find it!

There’s a bit of Paul Feyerabend at work here. Specifically:

A scientist who wishes to maximise the empirical content of the views he holds and who wants to understand them as clearly as he possibly can must therefore introduce other views; that is, he must adopt a pluralistic methodology. He must compare ideas with other ideas rather than with ‘experience’ and he must try to improve rather than discard the views that have failed in the competition. … Knowledge so conceived is not a series of self-consistent theories that converges towards an ideal view; it is not a gradual approach to the truth. It is rather an ever increasing ocean of mutually incompatible alternatives, each single theory, each fairy-tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.

p. 13-14, ch. 2, Against Method, Paul Feyerabend, Verso 2010.

The problem with claiming “it’s aliens”

I doubt the New Yorker thinks Harvard University is a big deal the same way many Indians do, but its persistence with Avi Loeb’s ideas only suggests that it is. Or it is being sensational.

Both possibilities are unsettling.

Avi Loeb is a theoretical astrophysicist at the Harvard-Smithsonian Centre for Astrophysics. He is quite well known for his outlandish explanations for physical phenomena. Most of them are certainly grounded in known science but still exhibit an atypical affinity for the outermost limits of our knowledge. (For examples, see the preprint papers he has authored or coauthored here.)

It is for these reasons that Loeb’s ideas are to be taken with a pinch of salt. It is not that they are impossible but that they are immensely improbable. And as an astrophysicist who knows what he is talking about, Loeb also can’t not know that his claims are extraordinary, often to the extent that they do little more than draw attention, either to him or to some deeper issue he claims he is spotlighting:

My motivation, in part, is to motivate the scientific community to collect more data on the next object rather than argue a priori that they know the answer.

But none of these is a problem. The problem arises when, as a magazine of sizeable repute, the New Yorker does a poor job of contextualising his words. For example, Loeb claims in his quote above that we don’t have enough data, but in another place, he says his idea was simply following the facts. But when the interviewer asks him if he is simply plugging holes in the evidence with theories of his own, Loeb dodges with whataboutery.

As mentioned earlier, Loeb’s ideas are improbable, not impossible, which makes them that much harder to refute. If they had not been grounded in science, he could – and would – simply have been dismissed. But Loeb stays within the realm of possibility, albeit right up at the boundary. It is just that, in the process, the New Yorker fails to provide a true impression of the validity of his ideas.

In science, hypotheses that originate within its rules are more valid than those that originate without. But even among the former group, some ideas are more valid than others, and ‘aliens’ is one of the least valid. In this landscape, the New Yorker’s interview suggests that ‘aliens’ is a reasonable hypothesis by returning repeatedly to Loeb without going through the necessary trouble of clarifying that it is entertaining at best.

(It is possible that the magazine decided it would try to do this through the interview itself, by pushing back against the interviewee effectively enough to make them ‘concede’ the issues with their position. But all this does is remind me of their trouble with Steve Bannon all over again.)

My favourite way to understand this is through Bertrand Russell’s response when asked what he would say should he one day discover that god actually exists: “Well, I would say that you did not provide much evidence.”

Aliens – if they are around and nearby – are not making themselves easy to detect either. Granted, they represent a form of unknown-unknowns that we should not be so quick to dismiss. We should still look for them. However, we should not pin every other seemingly inexplicable thing on them because that also closes off the non-alien unknown-unknowns. And just like that, we would be guilty of what the New Yorker is doing with Loeb: popularising one explanation to the detriment of more valid others.

This in turn feeds an already-troublesome impression: not that the more far-fetched the claim, the more media coverage it will receive, but that only the most far-fetched claims will receive any coverage at all.

Just how many reusable rocket designs is ISRO working on?

The Indian Space Research Organisation (ISRO) is working on at least three different designs of reusable launch vehicles at the same time.

Together with its endeavours to increase the number of objectives per mission and deploy purpose-built rockets, it seems like ISRO wants to secure a competitive advantage as quickly as possible in all segments of the launch services market: light, medium and heavy.

Last week, ISRO chairman K. Sivan told the Times of India that the organisation will soon be testing a prototype two-stage rocket in which both stages will be recoverable after launch. Sivan’s specifications suggest that this project is in addition to, and not related to, two others aimed at building rockets with reusable parts.

The first project that ISRO began testing is simply called the Reusable Launch Vehicle (RLV). It is modelled on NASA’s Space Shuttle, with some differences: for one, it will be powered by five semi-cryogenic engines during ascent and a scramjet engine during descent. When completed, around 2030, it will be able to lift over 10,000 kg to the low-Earth orbit.

ISRO doesn’t yet have a testable prototype in the second project. In fact, its details emerged only a month or so ago. Called ADMIRE, it envisions a small two-stage rocket the size of an L40 booster used on the GSLV Mk II. Its payload capability is not known.

But it is known that ADMIRE’s first stage will be recoverable after launch in similar fashion to the first stage of SpaceX’s Falcon 9 rocket. The second stage will be lost after delivering the payload, just like with the Polar (PSLV) and Geosynchronous Satellite Launch Vehicles (GSLV).

The third project, according to Sivan, involves a two-stage rocket. The first stage will be like ADMIRE’s first stage. The second will resemble a smaller version of the RLV shuttle.

Reusable launch vehicles reduce cost by allowing space agencies to shave off the expense of the recovered stage for every subsequent launch. The only other expenses are capital costs for the infrastructure and a recurring refurbishment cost (which hasn’t been finalised yet).
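That cost argument can be written down as a toy model: amortise the recovered stage over its flights, and pay for the expendable stage and refurbishment on every launch. All figures below are hypothetical units, purely to show the structure of the saving.

```python
# Toy per-launch cost model for a partially reusable rocket.
def cost_per_launch(recovered_stage_cost, expendable_stage_cost,
                    refurbishment_cost, flights_per_recovered_stage):
    # The recovered stage's cost is spread over all its flights; the
    # expendable stage and refurbishment recur on every launch.
    return (recovered_stage_cost / flights_per_recovered_stage
            + refurbishment_cost
            + expendable_stage_cost)

expendable = cost_per_launch(60, 20, 0, 1)   # stage thrown away after one flight
reusable = cost_per_launch(60, 20, 5, 10)    # stage flown ten times
print(expendable, reusable)  # 80.0 31.0
```

The saving grows with the number of flights per recovered stage, as long as refurbishment stays cheap relative to building a new stage.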

Though the NASA Space Shuttle typified this paradigm for many decades, it was the Falcon 9 rocket that really popularised it. To SpaceX’s credit, it showed that reusable rockets didn’t have to be as large as the Space Shuttle and didn’t require infrastructure at that scale either. Since then, many space agencies – public and private – have been pursuing their own reusable launcher programmes.

However, why ISRO is pursuing three of them, if not more, at once is not clear. The payload capacities of the ADMIRE and the third project could help understand the organisation’s eventual plans better.

The RLV will be a heavy-lift vehicle, capable of lifting 10,000-20,000 kg to the low-Earth orbit. The second and third projects could be aimed at lower payload capabilities. This would explain their smaller sizes and they would also fit within ISRO’s broader programme of cashing in on the growing small satellites launch market.

The shuttle-like upper stage of the third project has technically been tested already. In May 2016, ISRO flew a scaled-down prototype of the RLV shuttle in a technology demonstration mission. The eventual upper stage is expected to have dimensions similar to the prototype’s.

However, there are some potential differences between the RLV shuttle and the third project shuttle. For example, the RLV shuttle is larger and will be powered by a scramjet engine during its descent. On the other hand, the third-project shuttle will – to use Sivan’s words – “glide back to Earth and land on an airstrip”.

Since such gliding will require a source of power, it is plausible that ISRO will tack on a scramjet engine to the shuttle stage as well. However, the vehicle’s potential use in lighter missions also suggests ISRO will want to keep the vehicle as light as possible.

ISRO successfully tested its scramjet engine in August 2016, atop an Advanced Technology Vehicle (ATV), essentially a modified version of the RH560 Rohini sounding rocket. An official press release said at the time that the ATV and scramjet engine together weighed 3,277 kg. Since the RH560 weighs 1,300 kg, the scramjet assembly weighed nearly 2,000 kg.

In sum, if the third project uses the ADMIRE vehicle’s first stage, then it would work the following way.

First, the two-stage rocket will take off. Once the first stage is exhausted, it will separate from the second stage, glide through the air until it is suitably over the spot it has to land on (in the Bay of Bengal), and descend using retrograde thrusters.

By this time, the second, shuttle-like stage will have reached a suitable altitude at which to deploy the payload. Once that is done, the shuttle will glide back down, similar to the first stage, and descend on an airstrip.

Apart from these tests, ISRO has also been working on accomplishing more per mission. During the PSLV C34 and C35 missions, the organisation showed off the rocket’s ability to launch satellites into multiple orbits at different altitudes.

The PSLV C44 mission will do something similar. On January 24, it will lift off with two satellites: the Microsat-R, an imaging satellite built by the Defence Research and Development Organisation, and a student-built satellite called Kalamsat.

After launching Microsat-R, the rocket’s fourth and uppermost stage will climb into a higher, more circular orbit. There, Kalamsat will switch on and use the orbiting stage as a platform to perform some experiments in space.

Finally, later this year, ISRO will conduct the first test flight of its planned Small Satellite Launch Vehicle. It will be a three-stage rocket partly derived from the PSLV, and capable of carrying 300 kg to a Sun-synchronous orbit and 500 kg to the low-Earth orbit.

Its USP is that it can be prepared for launch within 72 hours, rendering it highly available – in much the same way ISRO itself wants to be.

The Wire
January 22, 2019

A close shave with the criticism question

Pamela Philipose, the public editor of The Wire, raised an important question towards the end of her latest column:

Sudhir Angadi wants to know why The Wire is “loaded with so much of negativity”. He wants a response to his question before he decides whether he will continue to read its content. Around the same time I received another mail, this time from S.P. Mahapatra, who also found the lack of “positive news” in The Wire distressing. …

Fortunately, he took the trouble to send in his suggestions: “It will be better you write good pieces on the policies of the BJP government and their work over  the last four and a half years; how leakage of public money has stopped through the introduction of Aadhaar; how millions of poor people now have gas and electricity connections; how hundreds of thousands of children are getting immunised; how LED bulbs are being distributed at a cheaper rate and neem-coated urea has been introduced to stop the black marketing in urea.”

But after having come so close, she lets the answer drown in the (legitimate) righteousness of The Wire’s political journalism. As an employee of The Wire, but more importantly as an editor as well as a reader, I’d like to know the extent to which a focus on criticism is justified in the journalistic enterprise. Indeed, all criticism is fair if it is accurate. But does that warrant a focus on criticism alone in one’s coverage of the news? That is the question I’d like answered. I don’t need the answer to be ‘yes’ or ‘no’ either; I am only interested in the reasoning behind it.

Additionally, I suspect that when a person doesn’t see what my issue is, it’s likelier than not that they implicitly believe journalism is synonymous with adversarial journalism. Criticism can emerge in other, non-adversarial contexts as well. In science journalism, not all stories have to end with piercing state-sponsored veils of secrecy, to use Philipose’s words, and meaningful journalism is to be found as much in the courteous cross-examination of research methods as in questions raised to research administrators.

In this context, outside of the requirements of political news, could an exclusive focus on criticism be justified? Or is one required to “balance” it, in principle, with positive commentaries as well?

Not all retracted papers are fake news – but then, which ones are?

The authors of a 2017 paper about why fake news spreads so fast have asked for it to be retracted because they’ve discovered a flaw in their analysis. This is commendable because it signals that scientists are embracing retractions as a legitimate part of the scientific process (which includes discovery, debate and publishing).

Such an attitude is important because without it, it is impossible to de-sensationalise retractions, and return them from the domain of embarrassment to that of matter-of-fact. And in so doing, we give researchers the room they need to admit mistakes without being derided for it, and let them know that mistakes are par for the course.

However, two insensitive responses by influential people to the authors’ call for retraction signal that they might have been better off sweeping such mistakes under the rug and moving on. One of them is Ivan Oransky, one of the two people behind Retraction Watch, the very popular blog that’s been changing how the world thinks about retractions. Its headline for the January 9 post described the fake news paper itself as ‘fake news’.

The authors are doing a good thing and don’t deserve to have their paper called ‘fake news’. Publishing a paper with an honest mistake is just that; ‘fake news’, on the other hand, involves an actor planting information in the public domain knowing it to be false, and with which she intends to confuse or manipulate its consumers. In short, it’s malicious. The authors bear no malice – quite the opposite, in fact, going by their suggestion that the paper be retracted.

The other person who bit into this narrative is Corey S. Powell, a contributing editor at Discover and Aeon, who responded in a tweet.

His tweet isn’t as bad as Oransky’s headline but it doesn’t help the cause either. He wrongly suggests the paper’s conclusions were fake; they weren’t. They were simply wrong. There’s a chasm between these two labels, and we must all do better to keep it that way.

Of course, there are other categories of papers that are retracted and whose authors are often suspected of malice (as in the intent to deceive).

Case I – The first comprises those papers pushed through by bungling scientists more interested in scientometrics than in actual research, whose substance is cleverly forged to clear peer review. However, speaking with the Indian experience in mind, these scientists aren’t malicious either – at least not to the extent that they want to deceive non-scientists. They’re often forced to publish by an administration that doesn’t acknowledge any other measure of academic success.

Case II – Outside of the Indian experience, Retraction Watch has highlighted multiple papers whose authors knew what they were doing was wrong, yet did it anyway to have papers published because they sought glory and fame. Jan Hendrik Schön and B.S. Rajput come first to mind. To the extent that these were the desired outcomes, the authors who draft such papers exhibit a greater degree of malintent than those who are doing it to succeed in a system that gives them no other options.

But even then, the moral boundaries aren’t clear. For example, why exactly did Schön resort to research misconduct? If it was for fame/glory, to what extent would he alone be to blame? Because we already know that research administration in many parts of the world has engendered extreme competition among researchers, for recognition as much as grants, and it wouldn’t be far-fetched to pin a part of the blame for things like the Schön scandal on the system itself. The example of Brian Wansink is illustrative in this regard.

What’s wrong with that? – This brings us to that other group of papers, authored by scientists who know what they’re doing and genuinely think it’s a legitimate way of doing it – either out of ignorance or because they harbour a different worldview, one in which they interpret protocols differently, in which they think they will never have success unless they “go viral”, or, usually, both. The prime example of such a scientist is Brian Wansink.

For the uninitiated, Wansink works in the field of consumer behaviour and is notorious for publishing multiple papers from a single dataset sliced in so many ways, and so thin, as to be practically meaningless. Though he has had many of his papers retracted, he has often stood by his methods. As he told The Atlantic in September 2018:

The interpretation of [my] misconduct can be debated, and I did so for a year without the success I expected. There was no fraud, no intentional misreporting, no plagiarism, or no misappropriation. I believe all of my findings will be either supported, extended, or modified by other research groups. I am proud of my research, the impact it has had on the health of many millions of people, and I am proud of my coauthors across the world.

Of all the people who we say ‘ought to know better’, the Wansink-kind exemplify it the most. But malice? I’m not so sure.

The Gotcha – Finally, there’s that one group I think is actually malicious, typified by science writer John Bohannon. In 2015, Bohannon published a deliberately flawed paper in the journal International Archives of Medicine that claimed eating chocolate could help people lose weight. Many news outlets around the world publicised the study even though it was riddled with conceptual flaws. For example, the sample size of the cohort was too small. None of the news reports mentioned this, nor did any of their writers undertake any serious effort to interrogate the paper in any other way.

Bohannon had shown up their incompetence. But this suggests malice because it was 2015 – when everyone interested in knowing how many science writers around the world sucked at their jobs already knew the answer. Bohannon wanted to demonstrate it anew for some reason but only ended up misleading thousands of people worldwide. His purpose would have been better served had he drawn up the faked paper together with a guideline on how journalists could have gone about covering it.

The Baffling – All of the previous groups concerned people who had written papers whose conclusions were not supported by the data/analysis, deliberately or otherwise. This group concerns people who have been careless, sometimes dismissively so, with the other portions of the paper. The most prominent examples include C.N.R. Rao and Appa Rao Podile.

In 2016, Appa Rao admitted to The Wire that the text in many of his papers had been plagiarised, and promptly asked the reporter how he could rectify the situation. Misconduct that doesn’t extend towards a paper’s technical part is a lesser offence – but it’s an offence nonetheless. It prompts a more worrying question: If these people think it’s okay to plagiarise, what do their students think?

A sudden spike in meteor influx 290 Mya

About 300 million years ago, something happened. And after that, the Moon was hit by meteorites two to three times as often as before.

The Solar System has always been a dangerous place for careless travellers. Out there are large clouds of dust, millions of rocks dislodged from ancient collisions, fragments of comets, meteors and asteroids, even interstellar interlopers. The dangers of space haven’t been limited to radiation and extreme isolation.

But even in this chaotic picture, it is startling to find that the rate at which rocky objects struck the Moon suddenly spiked so sharply. What could’ve happened?

The answer to this question is important because, as a new study notes, if the Moon is being hit more often, Earth probably is as well. We just wouldn’t know it because Earth’s atmosphere burns up many of these objects before they reach the ground. Even when they do, plate tectonics erases all but traces of the more recent impacts. And then there’s the weather. Our natural satellite has none of these luxuries, and its surface is frequently pocked by rocks small and large.

So the Moon is like “a time capsule for events that happen in our corner of the Solar System,” Sara Mazrouei, a planetary scientist at the University of Toronto and one of the study’s authors, told National Geographic.

However, that doesn’t mean what strikes the Moon stays on the Moon. The lunar surface does undergo some transformations thanks to processes like erosion. Moreover, when larger bodies impact its surface, it shakes and redistributes the looser parts of its soil.

To get around these sources of confusion, the analysts – scientists from the US, the UK and Canada – devised some workarounds.

First, they focused on craters over 10 km in diameter (between the latitudes 80º N and 80º S). The diameter limit was set so high because such impacts are likely to have penetrated the bedrock and excavated rocks from there onto the surface. These excavated rocks are warmer, and stay warm for longer, than the finer material on the surface.

The researchers also figured that newer craters – formed within the last billion years – would be covered in more such rocks than older ones. This is because the longer the rocks lie around, the likelier they are to be broken up into smaller pieces by micrometeorites, the Moon’s crazy temperature shifts (which recently put paid to an intrepid cotton plant) and other disturbances.

In effect, they were left with a three-way relationship between rock abundance, rock temperature and crater age. And when they brought this to bear on data recorded by the Lunar Reconnaissance Orbiter (LRO), a NASA satellite around the Moon, they made their discovery – described richly by the chart below.

DOI: 10.1126/science.aar4058

The x-axis shows the ages of craters in millions of years. The y-axis is self-explanatory, and is a proxy for time. As a first step, look at the dotted line. It’s straight because it assumes that the rate of impacts was constant throughout time. However, the researchers found that the black lines – from LRO data – suggested the real picture wasn’t as straightforward.

Instead, they think the rate of impacts is closer to the blue line, which shows a perceptible shift around 290 million years ago. Its slope beyond this point is gentler than the slope before because the cratering rate has increased, and the fraction changes less quickly as a result.
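The reasoning behind the blue line – a single break in an otherwise constant impact rate – can be sketched with a toy model. The code below is purely illustrative, not the study’s method or numbers: it draws hypothetical crater ages from a made-up two-rate process (the rates, window and break time are all assumptions) and recovers the break with a simple likelihood scan, loosely mirroring how a changepoint in the cratering rate can be inferred from a set of crater ages.

```python
import random
from math import log

random.seed(42)

# Illustrative numbers only - not the study's values.
T_MAX = 1000.0   # look-back window, in millions of years (Myr)
T_BREAK = 290.0  # true break time in this toy model (Mya)
RATE_OLD = 0.2   # craters per Myr before the break
RATE_NEW = 0.5   # craters per Myr after it (~2.5x higher)

def simulate_ages():
    """Draw crater ages from a piecewise-constant impact rate.

    Ages are measured back from today, so 'after the break'
    means ages smaller than T_BREAK."""
    n_young = int(RATE_NEW * T_BREAK)          # ages in [0, T_BREAK)
    n_old = int(RATE_OLD * (T_MAX - T_BREAK))  # ages in [T_BREAK, T_MAX)
    return ([random.uniform(0, T_BREAK) for _ in range(n_young)] +
            [random.uniform(T_BREAK, T_MAX) for _ in range(n_old)])

def best_break(ages, step=10.0):
    """Scan candidate break times and keep the one that maximises
    the Poisson log-likelihood of a two-rate model."""
    best_t, best_ll = None, float("-inf")
    t = step
    while t < T_MAX:
        n1 = sum(a < t for a in ages)  # craters younger than t
        n2 = len(ages) - n1            # craters older than t
        if n1 and n2:
            # Log-likelihood up to a constant, with the two rates set
            # to their maximum-likelihood values n1/t and n2/(T_MAX - t).
            ll = n1 * log(n1 / t) + n2 * log(n2 / (T_MAX - t))
            if ll > best_ll:
                best_t, best_ll = t, ll
        t += step
    return best_t

ages = simulate_ages()
t_hat = best_break(ages)
print(t_hat)  # should land near the toy model's break of 290 Mya
```

In the actual study the inference is richer – crater ages themselves come out of the rock-abundance model rather than being known directly – but the changepoint logic is similar: test whether two rates separated by a break explain the ages better than one constant rate does.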

An even more interesting feature of the chart is the red line, which depicts craters on Earth in the same period. Its trend suggests that the rock barrage that began 290 million years ago is likely ongoing against Earth as well.

Earth has had a storied relationship with meteors and meteorites. One of the most famous events was when a rock wider than the height of Mt Everest struck Earth 66 million years ago, wiping out all dinosaurs that couldn’t fly and triggering a mass extinction.

If the Moon’s record-keeping has been correct, this cataclysm – and other smaller ones – were part of the ongoing wave of more frequent meteoric events. But to be sure, the scientists had to ascertain that Earth didn’t have fewer impact craters before 300 million years ago because the older craters had been eroded away.

And this they did by studying kimberlite pipes. These are tubular structures about 2 km deep, commonly found embedded within ancient landmasses. They were formed when volcanoes underground exploded in supersonic eruptions millions of years ago, drilling these formations through the crust. They are rich in diamonds and are mined for this purpose.

Kimberlite pipes underwent significant erosion before 650 million years ago, during a period called ‘Snowball Earth’ – but very little after. As a proxy for how much Earth’s surface has eroded over time, the pipes suggest Earth has fewer craters older than 300 million years not because the older ones were erased, but because the impact rate itself has been higher since.

So there we have it – the beginnings of a new mystery. Something happened in the Solar System about 300 million years ago, and Earth and its Moon have since been assailed over twice as often by meteors. We don’t yet know what this something is, but there are some ideas.

For example, the scientists suspect in their paper that the change “may be due to the breakup of one or more large asteroids in the inner and/or central main asteroid belt”. The tinier of these fragments absorb and reemit sunlight, giving themselves a small kick. As lots of fragments are kicked outward like this, they could become trapped in the gravitational fields of planets and moons, and slowly drift towards them.

Spaceflight institutions like NASA and the Indian Space Research Organisation will find this update useful because they can now recalibrate the threat to their space-based assets. They can also work with military organisations to strengthen their planetary defence systems, if any.

There’s also a second mystery here, so to speak. The scientists were able to identify the shift in impact rates 290 million years ago because they assumed there was only one such shift. It’s possible that a larger dataset, with more than the 111 craters they examined, could throw up even more shifts in the rate from a variety of causes.

This in turn could begin to reveal the full extent of the threats Earth faces, and what we can do to keep from getting wiped out.

The Wire
January 18, 2019

An Earth sciences ministry

On January 15, Harsh Vardhan, the Union science and technology minister, mulled renaming India’s Ministry of Earth Sciences (MoES) as “Bharat Mata Mantralaya”. (In Hindi, the ministry is currently called the ‘Prithvi Vigyan Mantralaya’.)

Vardhan was speaking at a function to mark the 144th foundation day of the India Meteorological Department (IMD). He said there would be no harm in calling the ministry the ‘Bharat mata mantralaya’ (BMM). Vardhan also oversees the MoES, under which the IMD functions.

He argued that the ministry and its scientists work for “the protection of the earth”, which is “indeed Bharat mata for all of us”. He continued to refer to the ministry as “Bharat mata mantralaya” during the rest of his speech.

But contrary to the minister’s beliefs, there will be some harm in calling it the BMM – especially since there’s a lot of Earth left beyond the insular borders of Bharat.

The MoES’s self-proclaimed mission (rephrased) is:

To conduct scientific and technical activities related to Earth system science to improve weather forecasting, monsoon, climate and hazards, explore polar regions, seas around India and develop technology for exploration and exploitation of ocean resources (living and non-living), and ensure their sustainable utilisation.

A lot of this would fail if the MoES was about ‘Bharat mata’ alone.

Of course, there will be those arguments that the author is reading too much into the minister’s words, and that they simply signify certain sentiments. But labels preserve meaning and intent, and it is important to retain them to keep the ministry’s purpose from being corrupted by sentiments that many of us disagree with.

Vardhan’s own government has slowly but surely bent the arc of research towards application-oriented pursuits, particularly prioritising matters of “national interest”. As such, calling it the BMM risks trapping the MoES’s image behind nationalist blinders and keeping its gaze turned inward, confined within borders that science isn’t supposed to recognise.

It could also signal to its employees that their service is to the government’s material vision of ‘Bharat mata’, not to the Earth sciences. This is antipodal to the MoES’s goals. One can’t help but be somewhat wary that the move will also excise ‘sciences’ from the name.

Finally, there’s the trailing suspicion that Vardhan was simply engaging in another gimmick, and that the MoES won’t actually be renamed. However, given the issues involved – and the issues he could be discussing – it’s worth pointing out that his words are unbecoming of his office.

Instead of tinkering with names, the minister could serve the MoES better simply by supporting the research it already conducts – research that helps keep India afloat and going in a world shaped by unprecedented forces.

CERN’s next collider looks a bit like China’s

The world’s largest particle physics laboratory has unveiled its design options for the Large Hadron Collider’s successor – what is expected to be a 100-km long next generation ‘supercollider’.

The European Organisation for Nuclear Research (CERN) submitted the conceptual design report for what it is calling the Future Circular Collider (FCC). The FCC is expected to be able to smash particles together at even higher intensities and push the boundaries of the study of elementary particles. CERN expects it can come online by 2040, when the Large Hadron Collider’s (LHC’s) final run will come to a close.

The LHC switched on in 2008. Its primary goal was to look for the Higgs boson, a fundamental particle that gives other fundamental particles their masses. The LHC found it in four years. After that, physicists expected it would find the other particles they have been looking for to make sense of the universe. It has not.

This forced physicists to confront alternative possibilities about where and how they could find these hypothetical particles – or whether they exist at all. The FCC is expected to help by enabling a deeper and more precise examination of the world of particles. It will also help study the Higgs boson in much greater detail than the LHC allows, and in the process help physicists better understand its underlying theory.

The CERN report on what the FCC could look like comes at an interesting time – when two supercollider designs are being considered in Asia. In November 2018, China unveiled plans for its Circular Electron Positron Collider (CEPC), a particle accelerator whose 100-km tunnel will be nearly four times longer than the LHC’s.

The FCC, the CEPC and the LHC are all circular machines – whereas the fourth design in the mix is different. Also in November, Japan said it would announce a final decision on its support for the International Linear Collider (ILC) within a month. As the name suggests, the ILC would accelerate particles along a straight tunnel 30-50 km long; it parallels CERN’s own idea for a similar linear machine.

But in December, a council of scientists wrote to Japan’s science minister saying they opposed the ILC because of a lack of clarity on how Japan would share its costs with other participating nations.

In fact, cost has been the principal criticism directed against these projects. The LHC itself cost $13 billion. The FCC is expected to cost $15 billion, the CEPC $5 billion and the ILC, $6.2 billion. ($1 billion is about Rs 7,100 crore.)

They are all focused on studying the Higgs boson more thoroughly as well. This is because the energy field that the particle represents, called the Higgs field, pervades the entire universe and interacts with almost all fundamental particles. However, these attributes give rise to properties that are incompatible with the universe’s behaviour at the largest scales.

Scientists believe that studying the Higgs boson closely could resolve these tensions and maybe expose some ‘new physics’. This means generating collisions that produce millions of Higgs bosons – a feat the LHC wasn’t designed for. Hence the newer accelerators.

The FCC, the CEPC and the ILC all accelerate and collide electrons and positrons, whereas the LHC does the same with protons. Because electrons and positrons are fundamental particles, their collisions are much cleaner. When composite particles like protons are smashed together, the collision energy is much higher but there’s too much background noise that interferes with observations.

These differences lend themselves to different abilities. According to Sudhir Raniwala, a physicist at the University of Rajasthan, the CEPC will be able to “search for rare processes and make precision measurements” and “likely more aggressively” than the FCC. The FCC will be able to do both those things as well as explore signs of ‘new physics’ at higher collision energies.

According to CERN’s conceptual design report, the FCC will have four phases over 15 years.

I – For the first four years, it will operate with a centre-of-mass collision energy of 90 GeV (i.e. the total energy carried by two particles colliding head-on) and produce 10 trillion Z bosons.

II – For the next two years, it will operate at 160 GeV and produce 100 million W bosons.

III – For three years, the FCC will run at 240 GeV and produce a million Higgs bosons.

IV – Finally, after a year-long shutdown for upgrades, the beast will reawaken to run at 360 GeV for five years, producing a million top quarks and anti-top quarks. (The top quark is the most massive fundamental particle known.)

After this, the report states, the FCC tunnel could be repurposed to smash protons together the way the LHC does, but at higher energies – and, after that, to smash protons into electrons to probe the structure of protons themselves.

The first part of this operational scheme is similar to that of China’s CEPC. To quote The Wire‘s report from November 2018:

[Its] highest centre-of-mass collision energy will be 240 GeV. At this energy, the CEPC will function as a Higgs factory, producing about 1 million Higgs bosons. At a collision energy of 160 GeV, it will produce 15 million W bosons and at 91 GeV, over one trillion Z bosons.

Michael Benedikt, the CERN physicist leading the FCC project, has called this a validation of CERN’s idea. He told Physics World, “The considerable effort by China confirms that this is a valid option and there is wide interest in such a machine.”

However, all these projects have been envisaged as international efforts, with funds, people and technology coming from multiple national governments. In this scenario, it’s unclear how many of them will be interested in participating in two projects with similar goals.

Benedikt did not respond to a request for comment. But Wang Yifang, director of the institute leading the CEPC, told The Wire that “the world may not be able to accommodate two circular colliders”.

When asked about the way forward, he only added, “This issue can be solved later.”

Moreover, “different people have different interests” among the FCC’s and CEPC’s abilities, Raniwala said, “so there is no easy answer to where should India invest or participate.” India is currently an associate member at CERN and has no plans for a high-energy accelerator of its own.

To the FCC’s credit, it goes up to a higher energy, is backed by a lab experienced in operating large colliders and already has a working international collaboration.

Additionally, many Chinese physicists working in the country and abroad have reservations about China’s ability to pull it off. They’re led in their criticism by Chen-Ning Yang, a Nobel laureate.

But in the CEPC’s defence, the cost Yang is opposed to – a sum of $20 billion – is for the CEPC as well as its upgrade. The CEPC’s construction will also begin sooner, in around 2022, and it’s possible China will be looking for the first-mover advantage.