A Q&A about my job and science journalism

A couple weeks ago, some students from a university in South India got in touch to ask a few questions about my job and about science communication. The correspondence was entirely over email, and I’m pasting it in full below (with permission). I’ve edited a few parts in one of two ways – to make myself clearer or to hide sensitive information – and removed one question because its purpose was clarificatory.

1) What does your role as a science editor look like day to day?

My day as science editor begins at around 7 am. I start off by catching up on the day’s headlines and other news, especially all the major newspapers and social media channels. I also handle a part of The Wire Science’s social media presence, so I schedule some posts in the first hour.

Then, from 8 am onwards, I begin going through the publishing schedule – which is a document I prepare on the previous evening, listing all the articles that writers are expected to file on that day, as well as what I need to edit/publish and in which position on the homepage. At 9.30 am, my colleagues and I get on a conference call to discuss the day’s top stories and to hear from our reporters on which stories they will be pursuing that day (and any stories we might be chasing ourselves). The call lasts for about an hour.

From 10.30-11 am onwards, I edit articles, reply to emails, commission new articles, discuss potential story ideas with some reporters, scientists and my colleagues, check on the news cycle every now and then, make sure the site is running smoothly, discuss changes or tweaks to be made to the front-end with our tech team, and keep an eye on my finances (how much I’ve commissioned for, who I need to pay, payment deadlines, pending allocations, etc.).

All of this ends at about 4.30 pm. I close my laptop at that point but I continue to have work until 6 pm or so, mostly in the form of emails and maybe some calls. The last thing I do is prepare the publishing schedule for the next day. Then I shut shop.

2) With leading global newspapers restructuring the copy desk, what are the changes the Indian newspapers have made in the copy desk after the internet boom?

I’m not entirely familiar with the most recent changes because I stopped working with a print establishment six years ago. When I was part of the editorial team at The Hindu, the most significant change related to the advent of the internet had less to do with the copy desk per se and more to do with the business model. At least the latter seemed more pressing to me.

But this said, in my view there is a noticeable difference between how one might write for a newspaper and for the web. So a more efficient copy-editing team has to be able to handle both styles, as well as be able to edit copy to optimise for audience engagement and readability both online and offline.

3) Indian publications are infamous for mistakes in the copy. Is this a result of competition for breaking news or a lack of knack for editing?

This is a question I have been asking myself since I started working. I think a part of the answer you’re looking for lies in the first statement of your question. Indian copy-editors are “infamous for mistakes” – but mistakes according to whom?

The English language came to India in different ways; it is not homegrown. British colonists brought it here, and it took root as the language of administration. English is also the de facto language worldwide for the conduct of science, so scientists have to learn it. Similarly, there are other ways in which the use of English has been rendered useful, important and necessary. English wasn’t all these things in and of itself, not without its colonial underpinnings.

So today, in India, English is – among other things – the language you learn to be employable, especially with MNCs and the like. And because of these historical relationships, English is taught only in certain schools – schools that typically have mostly students from upper-caste/upper-class families. English is also spoken only by certain groups of people, who may wish to keep it to themselves as a class symbol. I’m speaking very broadly here. My point is that English is typically reserved for people who can afford it, both financially and socio-culturally. Not everyone speaks ‘good’ English (as defined by one particular lexicon or another), nor can they be expected to.

So what you may see as mistakes in the copy may just be a product of people not being fluent in English, and, as a result, composing sentences in ways other than you might. India has a contested relationship with English, and that should be expected at the level of newsrooms as well.

However, if your question had to do with carelessness among copy-editors – I don’t know if that is a very general problem (nor do I know what the issues might be in a newsroom publishing in an Indian language). Yes, in many establishments, the management doesn’t pay as much attention to the quality of writing as it should, perhaps in an effort to cut costs. And in such cases, there is a significant quality cost.

But again, we should ask ourselves whom that affects. If a poorly edited article is impossible to read or uses words and ideas carelessly, or twists facts, that is just bad. But if a poorly composed article is able to get its points across without misrepresenting anyone, whom does that affect? No one, in my opinion, so that is okay. (It could also be the case that the person whose work you’re editing sees the way they write as a political act of sorts, and if you think such an issue might be in play, it becomes important to discuss it with them.)

Of course, the matter of getting one’s point across is very subjective, and as a news organisation we must ensure the article is edited to the extent that there can be no confusion whatsoever – and edited that much more carefully if it’s about sensitive issues, like the results of a scientific study. And at the same time we must also stick to a word limit and think about audience engagement.

My job as the editor is to ensure that people are understood, but in order to help them be understood better and better, I must be aware of my own privileges and keep subtracting them from the editorial equation (in my personal case: my proficiency with the English language, which includes many Americanisms and Britishisms). I can’t impose my voice on my writers in the name of helping them. So there is a fine line here that editors need to tread carefully.

4) What are the key points that a science editor should keep in mind while dealing with copy?

Aside from the points I raised in my previous answer, there are some issues that are specific to being a good science editor. I don’t claim to be good (that is for others to say) – but based on what I have seen in the pages of other publications, I would only say that not every editor can be a science editor without some specific training first. This is because there are some things that are specific to science as an enterprise, as a social affair, that are not immediately apparent to people who don’t have a background in science.

For example, the most common issue I see is in the way scientific papers are reported – as if they are the last word on that topic. Many people, including many journalists, seem to think that if a scientific study has found coffee cures cancer, then it must be that coffee cures cancer, period. But every scientific paper is limited by the context in which the experiment was conducted, by the limits of what we already know, etc.

I have heard some people define science as a pursuit of the truth but in reality it’s a sort of opposite – science is a way to subtract uncertainty. Imagine shining a torch within a room as you’re looking for something, except the torch can only find things that you don’t want, so you can throw them away. Then you turn on the lights. Papers are frequently wrong and/or are updated to yield new results. This seldom makes the previous paper directly fraudulent or wrong; it’s just the way science works. And this perspective on science can help you think through what a science editor’s job is as well.

Another thing that’s important to know is that science progresses in incremental fashion and that the more sensational results are either extremely unlikely or simply misunderstood.

If you are keen on plumbing deeper depths, you could also consider questions about where authority comes from and how it is constructed in a narrative, the importance of indeterminate knowledge-states, the pros and cons of scientism, what constitutes scientific knowledge, how scientific publishing works, etc.

A science editor has to know all these things and ensure that in the process of running a newsroom or editing a publication, they don’t misuse, misconstrue or misrepresent scientific work and scientists. And in this process, I think it’s important for a science editor to not be considered to be subservient to the interests of science or scientists. Editors have their own goals, and more broadly speaking science communication in all forms needs to be seen and addressed in its own right – as an entity that doesn’t owe anything to science or scientists, per se.

5) In a country where press freedom is often sacrificed, how does one deal with political pieces, especially when there is proof against a matter concerning the government?

I’m not sure what you mean by “proof against a matter concerning the government.” But in my view, the likelihood of different outcomes depends on the business model. If, for example, you the publisher make a lot of money from a hotshot industrialist and his company, then obviously you are going to tread carefully when handling stories about that person or the company. How you make your money dictates who you are ultimately answerable to. If you make your money by selling newspapers to your readers, or collecting donations from them like The Wire does, you are answerable to your readers.

In this case, if we are handling a story in which the government is implicated in a bad way, we will do our due diligence and publish the story. This ‘due diligence’ is important: you need to be sure you have the requisite proof, that all parts of the story are reliable and verifiable, that you have documentary evidence of your claims, and that you have given the implicated party a chance to defend themselves (e.g. by being quoted in the story).

This said, absolute press freedom is not so simple to achieve. It doesn’t just need brave editors and reporters. It also needs institutions that will protect journalists’ rights and freedoms, and also shield them reliably from harm or malice. If the courts are not likely to uphold a journalist’s rights or if the police refuse proper protection when the threat of physical violence is apparent, blaming journalists for “sacrificing” press freedom is ignorant. There is a risk-benefit analysis worth having here, if only to remember that while the benefit of a free press is immense, the risks shouldn’t be taken lightly.

6) Research papers are lengthy and editors have deadlines. How do you make sure to communicate information with the right context for a wider audience?

Often the quickest way to achieve this is to pick your paper and take it to an independent scientist working in the same field. These independent comments are important for the story. But specific to your question, these scientists – if they have the time and are so inclined – can often also help you understand the paper’s contents properly, and point out potential issues, flaws, caveats, etc. These inputs can help you compose your story faster.

I would also say that if you are an editor looking for an article on a newly published research paper, you would be better off commissioning a reporter who is familiar, to whatever extent, with that topic. Obviously if you assign a business reporter to cover a paper about nanofluidic biosensors, the end result is going to be somewhere between iffy and disastrous. So to make sure the story has got its context right, I would begin by assigning the right reporter and making sure they’ve got comments from independent scientists in their copy.

7) What are some of the major challenges faced by science communicators and reporters in India?

This is a very important question, and I can’t hope to answer it concisely or even completely. In January this year, the office of the Principal Scientific Advisor to the Government of India organised a meeting with a couple dozen science journalists and communicators from around India. I was one of the attendees. Many of the issues we discussed, which would also be answers to your question, are described here.

If, for the purpose of your assignment, you would like me to pick one – I would go with the fact that science journalism, and science communication more broadly, is not widely acknowledged as an enterprise in its own right. As a result, many people don’t see the value in what science journalists do. A second and closely related issue is that scientists often don’t respond on time, even if they respond at all. I’m not sure of the extent to which this is an etiquette issue. But by calling it an etiquette issue, I also don’t want to overlook the possibility that some scientists don’t respond because they don’t think science journalism is important.

I was invited to attend the Young Investigators’ Meeting in Guwahati in March 2019. There, I met a big bunch of young scientists who really didn’t know why science journalism exists or what its purpose is. One of them seemed to think that since scientific papers pass through peer review and are published in journals, science journalists are wasting their time by attempting to discuss the contents of those papers with a general audience. This is an unnecessary barrier to my work – but it persists, so I must constantly work around or over it.

8) What are the consequences if a research paper has been misreported?

The consequence depends on the type and scope of misreporting. If you have consulted an independent scientist in the course of your reporting, you give yourself a good chance of avoiding reporting mistakes.

But of course mistakes do slip through. And with an online publication such as The Wire – if a published article is found to have a mistake, we usually correct the mistake once it has been pointed out to us, along with a clarification at the bottom of the article acknowledging the issue and recording the time at which the change was made. If you write an article that is printed and is later found to have a mistake, the newspaper will typically issue an erratum (a small note correcting a mistake) the next day.

If an article is found to have a really glaring mistake after it is published – and I mean an absolute howler – the article could be taken down or retracted from the newspaper’s record along with an explanation. But this rarely happens.

9) In many ways, copy editing disconnects you from your voice. Does it hamper your creativity as a writer?

It’s hard to find room for one’s voice in a news publication. About nine-tenths of the time, each of us is working on news copy, in which a voice is neither expected nor able to add much value of its own. This said, when there is room to express oneself more – to write in one’s voice, so to speak – copy-editing doesn’t have to remove it entirely.

Working with voices is a tricky thing. When writers pitch or write articles in which their voices are likely to show up, I always ask them beforehand as to what they intend to express. This intention is important because it helps me edit the article accordingly (or decide whether to edit it at all). The writer’s voice is part of this negotiation. Like I said before, my job as the editor is to make sure my writers convey their points clearly and effectively. And if I find that their voice conflicts with the message or vice versa, I will discuss it with them. It’s a very contested process and I don’t know if there is a black-and-white answer to your question.

It’s always possible, of course, that a bad editor will simply remodel your work to suit their needs without checking with you. But short of that, it’s a negotiation.

Tech bloggers and the poverty of style

I built my writing habit by practising it for over a decade (and counting). When I first started blogging in 2008, I told myself I would write at least 2,000 words a week. By some conspiracy of circumstances, but particularly my voracious reading habit at the time, I found this target quite easy to meet. So it quickly became 5,000, and then 10,000. I kept this pace up well into 2011, when it slowed because I was studying to become a journalist and many of the words I had to write were published in places other than my blog. The pace has been more or less the same since then; these days, I manage about 1,000-2,000 words a week.

At first, I wrote because I wanted to write something. But once it became a habit, writing became one of my ways of knowing, and a core feature of my entire learning process irrespective of the sphere in which it happened. These days, if I don’t write something, I probably won’t remember it and much less learn it. How I think about writing – the process, beginnings and endings, ordering paragraphs, fixing the lengths of sentences, etc. – has also helped me become a better editor (I think; I know I still have a long way to go), especially in terms of quickly assessing what could be subpar about an article and what the author needs to do to fix it.

But this said, writing is really an art, mostly because there’s no one correct way to do it. An author can craft the same sentence differently to convey different meanings, couched in different spirits; the converse is true, too: an author can convey the same meaning through different sentences. In my view, the ergodicity of writing is constrained only by the language of choice, although a skilled author can still transcend these limitations by combining words and ideas to make better use of the way people think, make memories and perceive meaning.

This is why I resent a trend among some bloggers – especially people working with Big Tech – to adopt a style of writing that they believe is ‘designed’ to make communication effective. (I call this the ‘Gladwellian style’ because it only reminds me of how Malcolm Gladwell writes: to say what the author is going to say, then to say it, and then to remind the reader of what the author just said.)

I work in news and I can understand the importance of following a simple set of rules to communicate one’s point as losslessly as possible. But the news space is a well-defined subset of communication more broadly, and in this space, finding at least one way to make your point – and then in fact doing so – is more important than exploring ways to communicate differently, with different effects.

Many tech bloggers undermine this possibility when they seem to address writing as a science, with a small and finite number of ways to get it right, thus proscribing opportunities to do more than just get one’s point across, with various effects. Writing in their hands is on one hand celebrated as an understated skill that more engineers must master but on the other is almost always wielded as a means to a common end. (Medium is chock-full of such articles.)

There’s none of the wildness writing is capable of – no variety of voices, no quirky styles on display, of the kind an organic and anarchic evolution of the writing habit can so easily produce. Most of it is one contiguous monotonous tonescape, interspersed every now and then with quotes by famous white writers saying something snarky about writing being hard. (Examples here and here.) This uniformity is also reflected in the choice of fonts: except for Medium, almost every blog by a tech person who isn’t sticking to tech uses sans-serif fonts.

Granted, it’s possible that many of these ‘writers’ have nothing interesting to say, which in turn might make anything but a sombre style seem excessive. It’s also possible some of them are just doing what Silicon Valley tech-bros often do in general: rediscover existing concepts like coherence and clarity, and write about them as if people didn’t know them before. We’ve already seen this with everything from household technology to history. It’s also probably silly to expect the readers of a tech blog to go there looking for anything other than what a fellow techie has to say.

But I’m uncomfortable with the fact that writing as a habit and writing as an art often lead limited lives in the tech blogging space – so much so that I’m even tempted to diagnose Silicon Valley’s employees’ relationship with writing in terms of the issues we associate with the Silicon Valley culture itself, or even the products they produce.

The forgotten first lives of India’s fauna

Prof. Biju said the Rohanixalus is the 20th recognised genus of the family Rhacophoridae that comprises 422 known Old World tree frog species found in Asia and Africa. He said there are eight frog species in this genus Rohanixalus, which are known to inhabit forested as well as human-dominated landscapes right from the northeast, the Andaman islands, Myanmar, Thailand, Malaysia, Indonesia, Vietnam, Laos, and Cambodia, up to southern China.

“Our discovery of a treefrog member from Andaman Islands is unexpected and again highlights the importance of dedicated faunal surveys and explorations for proper documentation of biodiversity in a mega diverse country like India. This finding also uncovers an interesting new distribution pattern of tree frogs that provides evidence for faunal exchange between Andamans and the Indo-Burma region,” Prof. Biju said.

‘New genus of tree frog discovered, found in Andamans and Northeast India’, The Hindu, November 12, 2020

When researchers make ‘discoveries’ like this – and they make many every year, because India encompasses some of the world’s major “biodiversity hotspots” – I always wonder whether people living in the same general area as these creatures might already know of their existence, and have different names and identities for them, separate from what the scientific literature will now call them.

And if this knowledge does exist – if someone already knew about these creatures – and if they have a knowledge-organising system of their own (that is not science), which could easily be true if they are members of the many tribes of India (the Adivasi), then the scientific rediscovery of these species creates a moment in history in which the latter knowledge, more traditional and almost certainly far older and therefore more knowing, becomes a bit more forgotten by virtue of being treated as if it didn’t exist. The creatures have now been ‘discovered’ by the methods of science, and therefore they have been found for the first time (and not ‘rediscovered’).

These frogs, for example, are now temporary subjects of our celebration of the wonder that is science, even as the human knowledge of their existence that ‘lived’ earlier – in the form of tribal words, sounds, smells, experiences and memories – is not part of the conversation whatsoever in this moment. It is as if the frogs have been snatched out of the context in which they have lived all this time, into a different world, a new world that gives them new names and new purposes. The ‘old world’, the first world, continues its quiet subterranean existence, like an entire universe kept out of sight, or in our collective blindspot, offering up the resources we need for our second world – land, wood, medicines, frogs, lizards, snakes, birds, vacations, wonder – while battling ecological despair and the end of the first.

Featured image: Juvenile purple frogs somewhere in the Western Ghats, 2017. Credit: Nihaljabinedk/Wikimedia Commons, CC BY-SA 4.0.

Super-spreading, mobility and crowding

I still see quite a few journalists in India refer to “super-spreaders” vis-à-vis the novel coronavirus – implying that some individuals might be to blame for ‘seeding’ lots of new infections in the community – instead of accommodating the fact that simply breathing out a lot of viruses doesn’t suffice to infect tens or hundreds of others: you also need the social conditions that will enable all these viral particles to easily find human hosts.

In fact, going a step ahead, a super-spreading event can happen if there are no super-spreading individuals but there are enabling environmental conditions that do nothing to slow the virus’s transmission across different communities. These conditions include lack of basic amenities (or access to them) such as clean water, nutritious meals and physical space.

A new study published by a group of researchers from the US adds to this view. According to their paper’s abstract, “Our model predicts higher infection rates among disadvantaged racial and socioeconomic groups solely from differences in mobility: we find that disadvantaged groups have not been able to reduce mobility as sharply, and that the POIs [points of interest] they visit are more crowded and therefore higher-risk.”

And what they suggest by way of amelioration – to reduce the maximum occupancy at each POI, like a restaurant – applies to a mobility-centric strategy the same way reducing inequality applies to a strategy centred on social justice. In effect, disadvantaged groups of people – which currently include people forced to live in slums, share toilets, ration water, etc. in India’s cities – should have access to the same quality of life that everyone else does at that point of time, including in the limited case of housing.
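A toy sketch can make the occupancy argument concrete. This is not the researchers’ actual model – it simply assumes, for illustration, that per-visit infection risk at a POI scales with crowd density, so a uniform occupancy cap bites hardest at the most crowded, highest-risk venues. All the names and numbers below are hypothetical:

```python
# Illustrative only: assume per-visit infection risk at a point of
# interest (POI) scales with crowd density (people per square metre).

def risk(occupancy: int, area_m2: float) -> float:
    """Per-visit risk proxy: crowd density at the POI."""
    return occupancy / area_m2

def capped(occupancy: int, cap: int) -> int:
    """Apply a uniform maximum-occupancy cap."""
    return min(occupancy, cap)

# Two hypothetical POIs of equal area: one crowded, one sparse.
crowded, sparse, area, cap = 200, 40, 100.0, 80

# The cap cuts the crowded POI's risk from 2.0 to 0.8 while leaving
# the sparse POI's risk unchanged at 0.4.
print(risk(capped(crowded, cap), area), risk(capped(sparse, cap), area))
```

Unlike a blanket lockdown, which suppresses visits to the sparse POI too, the cap leaves low-risk venues untouched while sharply reducing risk where crowding is worst.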

This study is also interesting because the authors’ model was composed with mobility data from 98 million cellphones – providing an empirical foundation that obviates the need for assumptions about how people move and where. In the early days of India’s COVID-19 epidemic, faulty assumptions on just this count gave rise to predictions about how the situation would evolve in different areas that in hindsight were found to be outlandish – and in some cases in ways that could have been anticipated.

Some modellers denoted people as dots on a screen and assumed that each dot would be able to move a certain distance before it ‘met’ another dot, as well as that all the dots would have a certain total area in which to move around. But as two mathematicians wrote for Politically Math in April this year, our cities look nothing like this:

According to this report, “India’s top 1% bag 73% of the country’s wealth”. Let us say, the physical space in our simulation represents not the ‘physical space’ in real terms, but the ‘space of opportunities’ that exist. In this specific situation of a country under complete lockdown because of the pandemic, this might mean who gets to order ‘contactless’ food online while being ‘quarantined’ at home, and who doesn’t. In our segregated simulation space therefore, the top chamber must occupy 73% of the total space, and the bottom chamber 27%. Also, 1% of the total number of dots occupy the airy top chamber, while the remaining 99% of the dots occupy the bottom chamber.
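The two mathematicians’ segregated simulation space can be sketched in a few lines. This is my own illustration, not their code, and it assumes a simple well-mixed model in which a dot’s contact rate scales with the density of dots in its chamber:

```python
# Illustrative sketch of the segregated simulation space described above:
# 1% of the dots share the airy top chamber (73% of the space), while
# 99% of the dots are packed into the bottom chamber (27% of the space).

def chamber_density(share_of_dots: float, share_of_space: float) -> float:
    """Dots per unit of simulation space in one chamber."""
    return share_of_dots / share_of_space

top = chamber_density(0.01, 0.73)     # top chamber: 1% of dots, 73% of space
bottom = chamber_density(0.99, 0.27)  # bottom chamber: 99% of dots, 27% of space

# In a well-mixed model, each dot's contact rate is proportional to the
# local density of dots, so transmission is vastly faster below.
print(round(bottom / top))  # density ratio: 268
```

Even before simulating any movement, the density gap alone suggests why an epidemic would race through the bottom chamber while barely touching the top.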

Sure enough, and notwithstanding any caveats about the data-taking exercises, researchers reported that Dharavi in Mumbai had a seroprevalence of more than 50% by late July, while three wards in non-slum areas had a seroprevalence of only 16%.

The flawed models still can’t claim they would have been right had Mumbai’s slum and non-slum areas been treated as distinct entities. As T. Jacob John wrote for The Wire Science in October, one of the reasons (non-vaccine) herd immunity as a concept breaks down when applied to humans is that humans are social animals, and their populations mix regularly enough that ‘closed societies’ are rendered practically impossible.

So instead of mucking about with nationwide lockdowns and other restrictions that apply to entire populations at once, the state could simply do two things. First, in the short-term, prevent crowding in places where it’s likely to happen – including public toilets that residents of slums are forced to share, ration shops where beneficiaries of the PDS system are required to queue up, workplaces where workers are crammed too many to a room, etc.

Obviously, I don’t suggest that the government should have been aware of all these features of the epidemic’s progression in different areas from the beginning. But from the moment these issues became clear, and from the moment a government became able to reorient its COVID-19 response strategy but didn’t, it has effectively been in the dock.

This brings us to the second and longer term thing we should do: with the novel coronavirus’s transmission characteristics as a guide, we must refashion policies and strategies to reduce inequality and improve access to those resources required to suppress ‘super-spreading’ conditions at the same time.

The simultaneity is important. For example, simply increasing the average house size from 4 sq. m, say, to 8 sq. m won’t cut it. Instead, buildings have to be designed to allow ample ventilation (with fresh air) and access to sunlight (depending on its natural availability). As researchers from IDFC Institute, a think-tank in Mumbai, noted in another article:

Dharavi’s buildings and paths are irregularly laid out, with few straight routes. Based on calculations with OpenStreetMap routes and Google Earth imagery, it appears 68% of pathways and roads are less than 2 m wide. Such a dimension offers little space for air circulation, and reduces airflow relative to other, properly planned areas, and admits fewer air currents that could help break up the concentration of viral particles.

Mitigating such conditions could also impinge on India’s climate commitments. For example, noting that our present time is the hottest in recorded history, and that many countries, including India, experience periods in which the ambient temperature in some regions exceeds thresholds deemed safe for human metabolism, science writer Leigh Phillips wrote for Jacobin that air-conditioning must be a human right:

What would it mean to have a right to air-conditioning? Precisely, the right should be to have free or cheap, reliable access to the thermal conditions optimal for human metabolism (air temperatures of between 18 degrees C and 24 degrees C, according to the WHO). Neither too hot nor too cold. The right to Goldilocks’s porridge, if you will. New buildings must come with A/C as part of any “Green New Deal”. The aim of any programme of publicly subsidised mass retrofitting of old buildings shouldn’t be just to fuel-switch away from gas heating and improve insulation, but also to install quiet, efficient air-conditioning systems. At the scale of the electricity grid, this demand must also include the requirement that A/C run on cheap, clean electricity.

So really, none of what’s going on is simple – and governments that respond by offering solutions which assume the problem is simple are avoiding dealing with the real causes. For example, ‘super-spreading’ is neither a choice nor an event – it’s a condition – so solutions that address it as a choice or an event are bound to fail. Seen the other way, a community with a high prevalence of a viral infection may be much less responsible for its predicament than the simple interaction of its social conditions with a highly contagious virus.

But this doesn’t mean no solution except a grand, city-scale one can be feasible either – only that all solutions must converge, by being targeted to that effect, on eliminating inequalities.

On resource constraints and merit

In the face of complaints about how so few women have been awarded this year’s Swarnajayanti Fellowships in India, some scientists pushed back asking which of the male laureates who had been selected should have been left out instead.

This is a version of the merit argument commonly applied to demands for reservation and quota in higher education – and it’s also a form of an argument that often raises its head in seemingly resource-constrained environments.

India is often referred to as a country with ‘finite’ resources, usually when people are discussing how best to put those resources to use. There are even romantic ideals associated with working in such environments, such as doing more with less – as ISRO has for many decades – and the popular concept of jugaad.

But while holding one variable fixed and altering the other would make any problem more solvable, it’s almost always the resource variable that is presumed to be fixed in India. For example, a common refrain is that ISRO’s allocation is nowhere near that of NASA, so ISRO must figure out how best to use its limited funds – and can’t afford luxuries like a full-fledged outreach team.

There are two problems in the context of resource availability here: 1. an outreach team proper is implied to be the product of a much higher allocation than has been made, i.e. comparable to that of NASA, and 2. incremental increases in allocation are precluded. Neither of these is right, of course: ISRO doesn’t have to wait for NASA’s volume of resources in order to set up an outreach team.

The deeper issue here is not that ISRO doesn’t have the requisite funds but that it doesn’t feel a better outreach unit is necessary. Here, it pays to acknowledge that ISRO has received not-inconsiderable allocations over the years, and has enjoyed bipartisan support and (relative) freedom from bureaucratic interference, so it cops much of the blame as well. But in the rest of India, the situation is flipped: many institutions, and their members, have fewer resources than they have ideas – and that affects research in its own way.

For example, in the context of grants and fellowships, there’s the obvious illusory ‘prestige constraint’ at the international level – whereby award-winners and self-proclaimed hotshots wield power by presuming prestige to be tied to a few accomplishments, such as winning a Nobel Prize, publishing papers in The Lancet and Nature or maintaining an h-index of 150. These journals and award-giving committees in turn boast of their selectiveness and elitism. (Note: don’t underestimate the influence of these journals.)

Then there’s the financial constraint for Big Science projects. Some of them may be necessary to keep, say, enthusiastic particle physicists from being carried away. But more broadly, a gross mismatch between the availability of resources and the scale of expectations may ultimately be detrimental to science itself.

These markers of prestige and power are all essentially instruments of control – and there is no reason this equation should be different in India. Funding for science in India is only resource-constrained to the extent to which the government, which is the principal funder, deems it to be.

The Indian government’s revised expenditure on ‘scientific departments’ in 2019-2020 was Rs 27,694 crore. The corresponding figure for defence was Rs 3,16,296 crore. If Rs 1,000 crore were moved from the latter to the former, the defence spend would have dropped only by 0.3% but the science spend would have increased by 3.6%. Why, if the money spent on the Statue of Unity had instead been diverted to R&D, the hike would have nearly tripled.
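The arithmetic is easy to check – the figures below are just the budget numbers quoted above, in Rs crore:

```python
# Sanity-checking the budget figures quoted above (all amounts in Rs crore).
science = 27_694    # revised expenditure on 'scientific departments', 2019-2020
defence = 316_296   # corresponding defence expenditure
moved = 1_000       # hypothetical transfer from defence to science

defence_drop = moved / defence * 100
science_gain = moved / science * 100
print(f"defence falls by {defence_drop:.1f}%")  # 0.3%
print(f"science rises by {science_gain:.1f}%")  # 3.6%
```

The asymmetry comes entirely from the two baselines differing by more than a factor of ten.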

Effectively, the argument that ‘India’s resources are limited’ is tenable only when resources are constrained on all fronts, or specific fronts as determined by circumstances – and not when it seems to be gaslighting an entire sector. The determination of these circumstances in turn should be completely transparent; keeping them opaque will simply create more ground for arbitrary decisions.

Of course, in a pragmatic sense it’s best to use one’s resources wisely – but this position can’t be generalised to the point where optimising for what’s available becomes morally superior to demanding more (even as we must retain the moral standing to ask how much money is being given to whom). That is, constantly making the system work more efficiently is a sensible aspiration, but it shouldn’t come – as it often does at the moment, perhaps most prominently in the case of CSIR – at the cost of asking for more resources. If people are discontented because they don’t have enough, their ire should be directed at the total allocation itself more than at how a part of it is being apportioned.

In a different context, a physicist pointed out a few years ago that when the US government finally scrapped the proposed Superconducting Super Collider in the early 1990s, the freed-up funds weren’t directed back into other areas of science, as scientists had expected they would be. (I couldn’t find the link to this comment nor recall its originator – I think it was either Sabine Hossenfelder or Sean Carroll; I’ll update this post when I do.) I suspect that if those who had argued for scrapping it had known this would happen, they might have argued differently.

I don’t know if a similar story has played out in India; I certainly don’t know if any Big Science projects have been commissioned and then scrapped. In fact, the opposite has happened more often: projects have done more with less by repurposing an existing resource (examples here, here and here). (Having to fight so hard to realise such mega-projects in India could be motivating those who undertake one not to give up!)

In the non-Big-Science and more general sense, an efficiency problem raises its head. One variant of this is about research v. teaching: what does India need more of, or what’s a more efficient expense, to achieve scientific progress – institutions where researchers are free to conduct experiments without being saddled with teaching responsibilities or institutions where teaching is just as important as research? This question has often been in the news in India in the last few years, given the erstwhile HRD Ministry’s flip-flops on whether teachers should conduct research. I personally agree that we need to ‘let teachers teach’.

The other variant is concerned with blue-sky research: when are scientists more productive – when the government allows a “free play of free intellects” or when it railroads them into tackling specific problems? Given the fabled shortage of teachers at many teaching institutions, it’s easy to conclude that a combination of economic and policy decisions has funnelled India’s scholars into neglecting their teaching responsibilities. In turn, rejigging the fraction of teaching or teaching-cum-research versus research-only institutions in India in favour of the former, which are less resource-intensive, could free up some funds.

But this is also more about pragmatism than anything else – somewhat like untangling a bundle of wires before straightening them out, instead of vice versa or trying to do both at once. As things stand, India’s teaching institutions also need more money. Among the reasons for the shortage of teachers: they are often not paid well or on time, especially at state-funded colleges; teaching facilities are subpar or non-existent; and jobs are located in remote places where institutions haven’t had the leeway to upgrade recreational facilities.

Teaching at the higher-education level in India is also harder because of the poor state of government schools, especially outside tier I cities. This brings with it a separate raft of problems, including money.

Finally, a more ‘local’ example of prestige as well as financial constraints that also illustrates the importance of this PoV is the question of why the Swarnajayanti Fellowships have been awarded to so few women, and how this problem can be ‘fixed’.

If the query about which men should be excluded to accommodate women sounds like a reasonable question, it’s probably because you’re assuming that the number of fellows has to be limited to a certain number, dictated in turn by the amount of money the government has said can be awarded through these fellowships. But if the government allocated more money, we could appreciate all the current laureates as well as many others – and arguably without diluting the ‘quality’ of the competition, given just how many scholars there are.

Resource constraints obviously can’t explain or resolve everything that stands in the way of more women, trans, gender-non-binary and gender-non-conforming scholars receiving scholarships, fellowships, awards and prominent positions within academia. But it’s important to see that ‘fixing’ this problem requires action on two fronts instead of just one: making academia less sexist and misogynistic, and securing more funds. The constraints are certainly part of the problem – particularly when they are wielded as an excuse to concentrate more resources, and more power, in the hands of the already privileged – even when the constraints themselves may not be real.

In the final analysis, science doesn’t have to be a powerplay, and we don’t have to honour anyone at the expense of another. But deferring to such wisdom could let the fundamental causes of this issue off the hook.

Trump, science denial and violence

For a few days last week, before the mail-in votes had been counted in the US, the contest between Joe Biden and Donald Trump seemed set for a nail-biting finish. In this time a lot of people expressed disappointment on Twitter that nearly half of all Americans who had voted (Trump’s share of the popular vote was 48% on November 5) had effectively voted for anti-science positions and science denialism.

Quite a few commentators also went on to say that “denying science is not just another political view”, implying that Trump, who has repeatedly endorsed such denialism, isn’t so much being part of the political right as being stupid and irresponsible.

This is a reasonable deduction but I think it’s also a bit more complicated. To my mind, a belief that “denying science is not just another political view” could be unfair if it keeps us from addressing the violence perpetrated by some supporters of science, and the state in the name of science.

Almost nowhere does science live in a vacuum, churning out silver bullets for society’s various ills; and in the course of its relationship with the state, it is sometimes a source of distress as well. For example, when the scientific establishment adopts non-democratic tactics to set up R&D facilities, as in Challakere, Kudankulam and Theni (INO); when unscrupulous hospitals fleece patients by exploiting their medical illiteracy; and when ineffective communication and engagement in ‘peace time’ leads to impressions during ‘wartime’ that science serves only a particular group of people, or that ‘science knows best’. These are just a few examples.

Of course, belief in pseudo-Ayurvedic treatments and astrological predictions arises from a complicated interplay of factors, including an uncritical engagement with the status quo and the tendency to sustain caste hierarchies. We must also ask who is being empowered and why, since Ayurveda and astrology also perpetrate violences of their own.

But in this mess, it’s important to remember that science can be political as well and that choosing science can be a political act, and that by extension opposing or denying science can be a political view as well – particularly if there is also an impression that science is something that the state uses to legitimise itself (as with poorly crafted disease transmission models), often by trampling over the rights of the weak.

This is ultimately important because erasing the political context in which science denialism persists could also blind us to the violence being perpetrated by the support for science and scientism, and its political context.

When I sent a draft of the post so far to a friend for feedback, he replied that “the sympathetic view of science denialism” that I take leads to a situation where “one both can and can’t reject science denialism as a viable political position.” That’s correct.

“Well, which one is it?”

Honestly, I don’t know, but I’m not in search of an answer either. I simply think non-scientific ideas and organisations are accused of perpetrating violence more often than scientific ones are, so it’s important to interrogate the latter as well lest we continue to believe that simply and uncritically rooting for science is sufficient and good.

How do you study a laser firing for one-quadrillionth of a second?

I’m grateful to Mukund Thattai, at the National Centre for Biological Sciences, Bengaluru, for explaining many of the basic concepts at work in the following article.

An important application of lasers today is in the form of extremely short-lived laser pulses used to illuminate extremely short-lived events that often play out across extremely short distances. The liberal use of ‘extreme’ here is justified: these pulses last for no more than one-quadrillionth of a second each. In the time it takes you to blink once, 100 trillion of these pulses could have been fired. Some of the more advanced applications require pulses a thousand times shorter still.

In fact, thanks to advances in laser physics, there are branches of study today called attophysics and femtochemistry that employ such fleeting pulses to reveal hidden phenomena that many of the most powerful detectors may be too slow to catch. The atto- prefix denotes an order of magnitude of -18: one attosecond is 1 × 10⁻¹⁸ seconds and one attometre is 1 × 10⁻¹⁸ metres. To quote from this technical article, “One attosecond compares to one second in the way one second compares to the age of the universe. The timescale is so short that light in vacuum … travels only about 0.3 nanometers during 1 attosecond.”
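These comparisons can be verified with a few lines of arithmetic – the age of the universe is taken here as roughly 13.8 billion years:

```python
# Working out the attosecond scale comparisons numerically.
c = 2.998e8                                      # speed of light in vacuum, m/s
attosecond = 1e-18                               # seconds
age_of_universe = 13.8e9 * 365.25 * 24 * 3600    # ~4.35e17 seconds (assumed)

# Distance light covers in one attosecond, in nanometres:
distance_nm = c * attosecond * 1e9
print(distance_nm)            # ~0.3 nm, i.e. about three angstroms

# One second contains 1e18 attoseconds, while the universe is ~4.4e17
# seconds old - so the two ratios agree to within an order of magnitude.
print(1 / attosecond, age_of_universe)
```

In other words, an attosecond is to a second roughly what a second is to all of cosmic history.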

One of the more common applications is in the form of the pump-probe technique. An ultra-fast laser pulse is first fired at, say, a group of atoms, which causes the atoms to move in an interesting way. This is the pump. Within fractions of a second, a similarly short ‘probe’ laser is fired at the atoms to discern their positions. By repeating this process many times over, and fine-tuning the delay between the pump and probe shots, researchers can figure out exactly how the atoms responded across very short timescales.
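The logic of the delay scan can be sketched with a toy model – the ‘response’ below is a made-up damped oscillation, not data from any real experiment – in which each probe shot samples the post-pump dynamics at one delay:

```python
import numpy as np

# Toy pump-probe scan. The pump excites some dynamics; each probe shot,
# fired after a delay tau, samples those dynamics once. Repeating the
# experiment while scanning tau reconstructs the response point by point.

def response(t):
    """Hypothetical post-pump dynamics: a damped oscillation (t in fs)."""
    return np.exp(-t / 50.0) * np.cos(2 * np.pi * t / 10.0)

delays = np.linspace(0, 100, 201)   # probe delays, fs
samples = response(delays)          # one data point per pump-probe cycle

# The scan traces the full time course of the (here, known) dynamics:
print(samples[0], samples[-1])      # starts at 1.0, decays towards 0
```

In a real measurement `response` is the unknown; the experiment recovers it one delay at a time.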

In this application and others like it, the pulses have to be fired at controllable intervals and to deliver very predictable amounts of energy. The devices that generate these pulses often provide these features, but it is often necessary to independently study the pulses and fine-tune them according to different applications’ needs. This post discusses one such way and how physicists improved on it.

As electromagnetic radiation, every laser pulse is composed of an electric field and a magnetic field oscillating perpendicular to each other. Of these, consider the electric field (only because it’s easier to study; thanks to Maxwell’s equations, what we learn about the electric field can be inferred accordingly for the magnetic field as well):

Credit: Peter Baum & Stefan Lochbrunner, LMU München Fakultät für Physik, 2002

The blue line depicts the oscillating electric wave, also called the carrier wave (because it carries the energy). The dotted line around it depicts the wave’s envelope. It’s desirable to have the carrier’s crest and the envelope’s crest coincide – i.e. for the carrier wave to peak at the same point the envelope as a whole peaks. However, trains of laser pulses, generated for various applications, typically drift: the crest of every subsequent carrier wave is slightly more out of step with the envelope’s crest. According to one paper, it arises “due to fluctuations of dispersion, caused by changes in path length, and pump energy experienced by consecutive pulses in a pulse train.” In effect, the researcher can’t know the exact amount of energy contained in each pulse, and how that may affect the target.

The extent to which the carrier wave and the envelope are out of step is expressed in terms of the carrier-envelope offset (CEO) phase, measured in degrees (or radians). Knowing the CEO phase is crucial for experiments that involve ultra-precise measurements because the phase is likely to affect the measurements in question, and needs to be adjusted for. According to the same paper, “Fluctuations in the [CEO phase] translate into variations in the electric field that hamper shot-to-shot reproducibility of the experimental conditions and deteriorate the temporal resolution.”
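Mathematically, such a pulse is the product of an envelope and a carrier, E(t) = A(t)·cos(ωt + φ), where φ is the CEO phase. A small sketch – with illustrative numbers, not values from any of the papers above – shows how a nonzero phase pushes the field’s strongest crest away from the envelope’s peak:

```python
import numpy as np

# E(t) = A(t) * cos(omega*t + phi_ceo): Gaussian envelope times carrier.
# omega and tau below are illustrative numbers, not from the paper.

def pulse(t, phi_ceo, omega=2.35, tau=5.0):
    envelope = np.exp(-(t / tau) ** 2)   # envelope peaks at t = 0
    return envelope * np.cos(omega * t + phi_ceo)

t = np.linspace(-20, 20, 100001)
for phi in (0.0, np.pi / 2):
    t_peak = t[np.argmax(pulse(t, phi))]
    print(f"phi_ceo = {phi:.2f} rad -> field peaks at t = {t_peak:.3f}")
# With phi_ceo = 0 the field peaks exactly at the envelope's peak (t = 0);
# a nonzero phase shifts the strongest crest away from it.
```

The drift described above corresponds to φ creeping up from pulse to pulse, so each shot’s peak field lands at a slightly different value.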

Ignore all the symbols and notice the carrier wave – especially how its peak within the envelope shifts with every next pulse. The offset between the two peaks is called the carrier-envelope offset phase. Credit: HartmutG/Wikimedia Commons, CC BY-SA 3.0

This is why, in turn, physicists have developed techniques to measure the CEO phase and other properties of propagating waves. One of them is called attosecond streaking. Physicists stick a gas of atoms in a container, fire a laser at it to ionise them and release electrons. The field to be studied is then fired into this gas, so its electric-wave component pushes on these electrons. Specifically, as the electric field’s waves rise and fall, they accelerate the electrons to different extents over time, giving rise to streaks of motion – and the technique’s name. A time-of-flight spectrometer measures this streaking to determine the field’s properties. (The magnetic field also affects the electrons, but it suffices to focus on the electric field for this post.)

This sounds straightforward but the setup is cumbersome: the study needs to be conducted in a vacuum and electron time-of-flight spectrometers are expensive. But while there are other ways to measure the wave properties of extreme fields, attosecond streaking has been one of the most successful (in one instance, it was used to measure the CEO phase at a shot frequency of 400,000 times per second).
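The principle behind streaking can be sketched in simplified terms, with made-up units (the real measurement uses an actual time-of-flight spectrometer): an electron released at time t0 ends up with a momentum shift proportional to the field’s vector potential A(t0), so scanning the release time maps out A, and differentiating recovers the electric field:

```python
import numpy as np

# Toy streaking model in arbitrary units. An electron freed at time t0
# acquires a final momentum shift proportional to the vector potential
# A(t0) of the field being studied; scanning t0 traces out A, and the
# electric field follows from E = -dA/dt.

omega = 1.0
def A(t):
    return np.sin(omega * t)            # vector potential of the test field

t0 = np.linspace(0, 4 * np.pi, 400)     # scanned electron-release times
streak = -A(t0)                         # recorded momentum shifts

E_recovered = np.gradient(streak, t0)   # d(streak)/dt = -dA/dt = E
E_true = -omega * np.cos(omega * t0)
print(np.max(np.abs(E_recovered - E_true)))  # small; limited by grid spacing
```

Properties like the CEO phase then follow from the reconstructed field.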

As a workaround, physicists from Germany and Canada recently reported in the journal Optica a simpler way, based on one change. Instead of setting up a time-of-flight spectrometer, they propose using the pushed electrons to induce an electric current in electrodes, in such a way that the properties of the current contain information about the CEO phase. This way, researchers can drop both the spectrometer and, because the electrons aren’t being investigated directly, the vacuum chamber.

The researchers used fused silica, a material with a wide band-gap, for the electrodes. The band-gap is the amount of energy a material’s electrons must be given to ‘jump’ from the valence band to the conduction band, turning the material into a conductor. The band-gap in metals is zero: if you place a metallic object in an electric field, it will develop an internal current linearly proportional to the field strength. Semiconductors have a small band-gap, which means some electric fields can give rise to a current while others can’t – a feature that modern electronics exploit very well.

Dielectric materials have a (relatively) large band-gap. When it is exposed to a low electric field, a dielectric won’t conduct electricity but its internal arrangement of positive and negative charges will move slightly, creating a minor internal electric field. But when the field strength crosses a particular threshold, the material will ‘break down’ and become a conductor – like a bolt of lightning piercing the air.

Next, the team circularly polarised the laser pulse to be studied. Polarisation refers to the electric field’s orientation in space, and the effect of circular polarisation is to cause the electric field to rotate. And as the field moves forward, its path traces a spiral, like so:

A circularly polarised electric field. Credit: Dave3457/Wikimedia Commons

The reason for doing this, according to the team’s paper, is that when the circularly polarised laser pulse knocks electrons out of atoms, the electrons’ momentum is “perpendicular to the direction of the maximum electric field”. So as the CEO phase changes, the electrons’ direction of drift also changes. The team used an arrangement of three electrodes, connected to each other in two circuits (see diagram below), such that electrons flowing in different directions induce currents of proportionately different strengths in the two arms. Amplifiers attached to the electrodes then magnify these currents and open them up for further analysis. Since the envelope’s peak can be determined beforehand and doesn’t drift over time, the CEO phase can be calculated straightforwardly.
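A toy calculation – illustrative numbers only, not the team’s actual parameters – shows why this works: in a circularly polarised pulse, the direction of the field at the envelope’s peak rotates one-to-one with the CEO phase, and the electrons’ drift direction sits perpendicular to it:

```python
import numpy as np

# Circularly polarised pulse: the field direction rotates as
# (cos(w*t + phi), sin(w*t + phi)) under a Gaussian envelope. The field is
# strongest at the envelope's peak, where its direction equals the CEO
# phase; the freed electrons drift perpendicular to that direction.
# All numbers are illustrative.

def peak_field_angle(phi_ceo, omega=2.0, tau=3.0):
    t = np.linspace(-15, 15, 200001)
    env = np.exp(-(t / tau) ** 2)
    Ex = env * np.cos(omega * t + phi_ceo)
    Ey = env * np.sin(omega * t + phi_ceo)
    i = np.argmax(np.hypot(Ex, Ey))     # instant of maximum field strength
    return np.arctan2(Ey[i], Ex[i])     # field direction at that instant

for phi in (0.0, 0.5, 1.0):
    drift = peak_field_angle(phi) + np.pi / 2   # perpendicular drift direction
    print(f"phi_ceo = {phi:.1f} rad -> drift angle = {drift:.2f} rad")
```

This one-to-one mapping between drift direction and phase is what lets the electrode currents stand in for a spectrometer.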

(The experimental setup, shown below, is a bit different: since the team had to check whether their method works, they deliberately inserted a CEO phase into the pulse and checked whether the setup picked up on it.)

The two tips of the triangular electrodes are located 60 µm apart, on the same plane, and the horizontal electrode is 90 µm below the plane. The beam moves from the red doodle to the mirror, and then towards the electrodes. The two wedges are used to create the ‘artificial’ CEO phase. Source: https://doi.org/10.1364/OPTICA.7.000035

The team writes towards the end of the paper, “The most important asset of the new technique, besides its striking simplicity, is its potential for single-shot [CEO phase] measurements at much higher repetition rates than achievable with today’s techniques.” It attributes this feat to attosecond streaking being limited by the ability of the time-of-flight spectrometer whereas its setup is limited, in the kHz range, only by the time the amplifiers need to boost the electric signals, and in the “multi-MHz” range by the ability of the volume of gas being struck to respond sufficiently rapidly to the laser pulses. The team also states that its electrode-mediated measurement method renders the setup favourable to radiation of longer wavelengths as well.

Featured image: A collection of lasers of different frequencies in the visible-light range. Credit: 彭嘉傑/Wikimedia Commons, CC BY 2.5 Generic.