Of reason and realism

Laurie Penny writes on Longreads:

Remember the U.S. presidential debates of 2016? Remember how the entire liberal establishment thought Hillary Clinton had won, mainly because she made actual points, rather than shambling around the stage shouting about Muslims? What’s the one line from those debates that everyone remembers now? It’s “Nasty Woman.” What’s the visual? It’s Trump literally skulking around Hillary, dominating her with his body. It’s theatre. And right now the bad actors are winning.

This paragraph is on point. Many left-liberal intellectuals frequently pen opinions, editorials and commentaries for the popular press and assume, by the self-assessed weight of their arguments, that the conservative, right-wing reader must be convinced of the superiority of the authors’ philosophies and switch sides. This never happens. Specifically, it doesn’t happen 90% of the time because the authors aren’t good writers, and the ensuing back-and-forth swiftly descends into semantics. And it doesn’t happen 10% of the time because the bhakt reading the article isn’t there for the points. You can write and write and write but – as Penny proves – the theatre of fascism will always overtake the finest discussion of ideas.

I’m neither a scientist nor a philosopher, but I have often wondered if ideas from scientific realism can help make sense of the empirical information we have. It is possible the liberal intellectual assumes her audience will behave, at the individual level at least, the way she herself does; this is a reasonable assumption that we all make in our day-to-day lives: for example, we excuse a friend’s anger in a moment of frustration because we rationalise it away based on lessons we have learnt from our own experiences. Similarly, the author presumes that, since she believes she can be swayed by reason, the reader will be swayed, too, and the author’s commitment to reason becomes – in the author’s mind, at least – a common platform upon which writer and reader will stage their debate. However, the flaw in this worldview is that the bhakt is, almost by definition, inimical to reason (irrespective of whether he is in all aspects of his life unreasonable) and does not mount the stage with the same aspirations.

Now, scientific realism (in its semantic interpretation) holds that science’s claims about scientific entities – “objects, events, processes, properties and relations” (source) – should be taken literally, as if they correspond to the actual natural universe itself instead of to a natural universe we perceive with our senses. The significance of this statement is better illustrated by a counter-example: anti-realists would contend that if we cannot see electrons with the naked eye, then science’s claims about their existence don’t pertain to their actual existence but instead provide ways to instrumentalise the claims to aid our interactions with observable entities, such as an electric fan.

Similarly, the liberal intellectual behaves like an anti-realist, seeking to explain deviant social phenomena in terms she can understand and rejecting what she cannot observe herself, instead of, like a realist might, allowing ideas that don’t conform to her worldview to exist on their own terms, outside the realm of her scholarship, rather than trivialising them because their rules don’t submit to the logic of hers.

Acknowledgement, as in the latter case, is important for meaningful engagement because it is willing to look beyond the identity and aspirations of one’s own group. More importantly, classifying what is beyond one’s didactic reach as fiction – even useful fiction, as the committed anti-realist might – is flawed in the same way scientism prizes an economic logic at the cost of morals and ethics. The belief that there may be other ways to make informed choices but that they will ultimately have to be subsumed within one’s worldview prevents one from a) designing appropriate policies to govern them, b) expanding one’s own library of knowledge to include what could well be a legitimate alternative, and c) acknowledging the strength of the alternative on its terms instead of addressing it as a primitive form of one’s own politics.

So, unable to see beyond her own allegiance to reason, the scholar constantly assumes that it can and will triumph, while her diminished sense of the external world prevents her from acknowledging a different set of motivations for people on the ‘other side’. Over time, the left-liberal collective begins to reject and ignore their existence altogether, replacing their motivations and sensibilities with counterparts that the individual rooted in the primacy of logic, reason and civility can actually assimilate. This way, the left-liberal group keeps up its mindless performance of engaging with the right when in fact it is not engaging at all.

I don’t present all of this as criticism, however, because the primary function of an intellectual creature is to intellectualise, in whatever form: through speech, essays, dramatisation, etc. The act of intellectualisation, in turn, presumes that one’s interlocutor is capable of receiving knowledge so organised and assimilating it themselves. Without this caveat, intellectualism becomes solipsistic and free speech, insofar as it seeks opportunities to change minds and set society on the path of enlightenment, becomes purposeless. So where there are people willing to reason and debate and argue, they must do so; but where people resort to whataboutery, shooting the messenger and ad hominem attacks, reason alone cannot hope to succeed, if it can succeed at all.

To explain the world

Simplicity is a deceptively simple thing. Recently, a scientist who was trying to explain something in general relativity to me did so in the following way:

One simple way to understand … is as follows. Imagine that one sets up spherical polar coordinates, so that space is described by r, theta, phi and time is described by t. Then in this frame what one would normally call a non-rotating observer is one who has no angular velocity in theta and phi i.e. if the proper time of the observer is tau, then {d theta over d tau} = {d phi over d tau} = 0.

(Emphasis added)

This is anything but simple, and this problem isn’t limited to this scientist alone. Lots of them regularly conflate explanation with elaboration. More recently, another scientist – by way of describing a peer’s achievements – simply listed them in chronological order. It was the perfect example of ‘tell, don’t show’:

Starting with the discovery of strangeness, called Gell-Mann-Nishijima formula, the Eightfold Way of SU(3), current algebra, he finally reached the theory of strong interactions, namely quantum chromodynamics. So his name is there in all the components of the theory of strong interactions, now a part of Standard Model. His other fundamental contributions are in renormalisation group, an important part of quantum field theory and in the V-A form of weak interaction. He also proposed a mechanism by which neutrinos acquire very small masses, the so called the See-Saw mechanism. He had broad interests going beyond his contributions in theoretical physics.

Explanation requires the explainer to speak multiple languages. For example, explaining the event horizon to someone in class X means being able to translate what you know from the language of graduate-level physics into the language of Newtonian mechanics, first principles of optics, simple geometric shapes and carefully chosen metaphors. It means enabling the listener to synthesise knowledge in other contexts based on what you have said. But not doing any of this, sticking to just one language and using more and more words from that language, cannot be an act of explanation, or even simplification, unless your interlocutor also speaks that language fluently.

Ultimately, it seems that while not all scientists can also be good science writers, there is a part of the writing process on display here that precedes the writing itself, and which is less difficult to execute: the way you think. To be able to teach well and explain well, I think one needs to be able to think in ways that will mitigate epistemological disparities between two people such that the person with more knowledge empowers the one with less to climb up the knowledge ladder.

This in turn requires one to examine the precise differences between why you know what you know and why your audience doesn’t know what you know. This is not the same as “the difference between what you know and what the audience knows” because that is simply an exercise in comparison – an exercise in preserving the status quo, even. Instead, to know the why of the difference is also to know how the difference can be bridged – resulting in an exercise in eliminating disparity.

NYT on fire

As the world burns, is anyone paying attention to the New York Times? Because if you’re not, you should: it’s catching fire as well. On May 23, the grand old newspaper published a report by Maggie Haberman about how former Trump aide Hope Hicks had an “existential” crisis over complying with a congressional subpoena. Granted, it’s been full of embers for a while now – as Jay Rosen has been saying for years – but this particular story bares the Times’s ridiculous position vis-à-vis the Trump White House for all to see.

The first giveaway that something is rotten isn’t in the lede but in the hero image, a glamorous photograph of Hicks as if the words to come were going to discuss her clothes. The words that do come then paint Hicks as an enigmatic ex-administrator caught between a rock and a hard place when in fact the matter is far simpler: either comply with the subpoena from the House Judiciary Committee or find a legitimate reason to skip it, like (it appears) former WH counsel Donald McGahn II has been able to. It’s not existentialism; it’s potentially criminal obstruction of justice.

To quote from Rosen’s analysis above:

[Times journalists] want the support, they also want to declare independence from their strongest supporters. … They are tempted to look right and see one kind of danger, then look left to spot another, equal and opposite. They want to push off from both sides to clear a space from which truth can be told. That would make things simpler, but of course things are not that simple. The threat to truth-telling – to journalism, democracy, the Times itself – is not symmetrical. They know this. But the temptation lives.

Science in the face of uncertainty

In 2018, scientists from IISc announced they’d found a room-temperature superconductor, an exotic material that has zero resistance to electric current in ambient conditions – considered the holy grail of materials science. But in the little data the authors were willing to share with the world, something seemed off.

Within a few days, other scientists in India and around the world began to spot anomalous data points in the preprint paper. If the paper wasn’t already vague, it was now also very suspicious. And it was still hard to tell what was going on: the scientists weren’t speaking to the press, IISc kept mum and the narrative was starting to turn smelly.

The duo clearly had to walk a fine line if they wanted their claim, and themselves, to retain legitimacy. They were refusing to talk to the press until their paper had been peer-reviewed, they said. However, others said this was a weak excuse and it was easy to see why: the best way to clear up confusion is to open up, not clam up. But they refused to, as much as they refused to provide any more information about their experiment or to allow academics around India to join in. And the narrative itself had by then become noticeably befouled by suspicion that there was foul play 😱.

In a new effort to beat these dark clouds back, the duo updated their preprint paper on May 22 with a lot more data, apart from tacking on eight more collaborators to their team. (One of them was Arindam Ghosh, a particularly accomplished physicist at IISc.) This was heartening to find out, esp. as a sign that they’re receptive to feedback. In fact, they’d also made note of that anomalous data pattern (although they still aren’t able to explain how it got there).

Making the GIANT ASSUMPTION that their claim is eventually confirmed and we have a room-temperature superconductor in our midst, a lot of things about many technologies will change drastically. Theorists will also have a new line of enquiry – though some already do – to find out which materials can be superconductors under what conditions. If we figure this question out, discovering new superconducting materials will become that much easier.

IFF the claim ends up being confirmed, many people will also likely have many different takeaways from what will become encoded as an extended historical moment, the prelude to a major discovery (or invention?). At that time, I think it will be interesting to look back and consider how different scientists respond to something very new in their midst.

To adopt Thomas Kuhn’s philosophy of scientific progress, it will be interesting to examine individual attitudes to paradigm-shifts, and the different extents to which skepticism and cynicism dominate the story when the doctrine of incommensurability is in play. After all, a scientific result that has researchers scrambling for an explanation can evoke two kinds of responses, excitement or distrust, and it would be useful to find out if they’re context-specific in a contemporary, Indian setting.

In fact, the addition of Arindam Ghosh to the IISc research team reminds me of a specific incident from the not-so-distant past (and I do NOT suggest Ghosh was included only for scholastic heft). In 1982, Dan Shechtman discovered quasicrystals, whose internal crystal arrangement defied the prevailing wisdom of the time. So Shechtman was ridiculed as a “quasi-scientist” by a person no less in stature than Linus Carl Pauling, the father of molecular biology.

But Shechtman was sure of what he had seen under the microscope, so he attempted a third time to have his claim published by a journal. This time, he improved the manuscript’s presentation, and invited Ilan Blech, John Cahn and Denis Gratias to join his team. The last two lent much weight to the submission – in an enterprise that the casual historian of science frequently considers to be objective and emotionless! Their paper was finally accepted by Physical Review Letters in November 1984.

Also in the early 1980s, Dov Levine in the US had discovered quasicrystals but without knowing that Shechtman had done the same thing, and Levine was eager to publish his paper. But Paul Steinhardt, his PhD advisor, advised caution because he didn’t want Levine to be proven wrong and his career damaged for it. Wise words – but also interesting words that show science is nothing without the people that practice it, that there’s a lot to it beyond the stony face of immutable facts, etc.

This is something many people tend to forget in favour of uttering pithy statements like “science is objective”, “science is self-correcting”, etc. Scientism frequently goes overboard in a bad way, and the arc of scientific justice doesn’t bend naturally towards truths. It has to be bent that way by the people who practice it. Science is MESSY – like pretty much everything else.

The same applies in the IISc superconductivity claim case as well. Nobody can respond perfectly in the face of great uncertainty; we can all just hope to do our best. Some ways for non-experts to navigate this would be to a) talk to scientists; I know some who’d surprise you with their willingness to sit down and explain; b) pick out publications you trust and read them (that’s The Wire Science 😄 and The Hindu Science in this specific case) as well as try to discover others; and c) be nice and don’t jump to conclusions, esp. within a wider social frame in which self-victimisation and entitlement have often come too easily.

Also, three cheers for preprints!

I turned this post into a Twitter thread on May 26, 2019.

The wind and the wall

I have an undergraduate degree in mechanical engineering but I’ve always struggled with thermodynamics. To the uninitiated, this means that most of the knowledge specific to mechanical engineering, as opposed to other branches, remains out of my reach. I would struggle even with the simpler concepts, and perhaps one of the simplest among them was pressure.

When a fluid flows through a channel, like water flowing through a pipe, it’s easy to intuit as well as visualise what would happen if it were flowing really fast. For example, you just get that when water flowing like that turns a corner, there’s going to be turbulence at the elbow. In technical parlance, it’s because of the inertia of motion (among other things, perhaps). But I’ve never been able to think like this about pressure, and believed for a long time that the pressure of a fluid should be higher the faster it is flowing.

In my second or third year of college, there was a subject called power-plant engineering, a particularly nasty thing made so because it was essentially the physics of water in different forms flowing through a heat-exchanger, a condenser, a compressor, a turbine, etc. Each of these devices mollified the fluid to perform different services, each of them a step in the arduous process of using coal to generate electricity.

Somewhere in this maze, a volume of steam has to shoot through a pipe. And I would always think – when picturing the scene – that the fluid pressure has to be high because its constituent particles are moving really fast, exerting a lot of force on their surroundings, which in turn would be interpreted as their pressure, right?

It was only two years later, and seven years ago, that I learnt my mistake, when my folks moved to an apartment complex in Bangalore. This building stands adjacent to a much larger one on its right, separated by a distance of about 40 feet, with a wall that rises as high as an apartment on the sixth floor. My folks’ house is on the fourth floor. Effectively, the complex and the wall sandwich a 40-foot-wide, 80-foot-high and 500-foot-long corridor. The whole setup can be surveyed from my folks’ house’s balcony.

When there’s a storm and the wind blows fast, it blows even faster through this corridor because it’s an unobstructed space through which the moving air can build momentum for longer and because its geometry prevents the air from dissipating too much. As a result, the corridor becomes a high-energy wind tunnel, with the wind whistling-roaring through on thunderous nights. When this happens, the curtains against the window on the balcony always billow outwards, not inwards.

This is how I first realised that the pressure outside, in the windy corridor, is lower than it is inside the house. The technical explanation is (deceptively) simple: it’s composed of the Bernoulli principle and the Venturi effect.

The moving wind has some energy that’s the sum of its kinetic energy and its potential energy. The wind’s speed depends on its kinetic energy and its pressure, on its potential energy. Because the total energy is always conserved, an increase in kinetic energy can only come at the expense of the potential energy, and vice versa. This implies that if the wind’s velocity increases, then the corresponding increase in kinetic energy will subtract from the potential energy, which in turn will reduce the pressure. So much for the Bernoulli principle.

But why does the wind’s velocity increase at all in the corridor? This is the work of the Venturi effect. When a fluid flowing through a wider channel enters a narrower portion, it speeds up. This is because of an elementary accounting principle: the rate at which mass enters a system is equal to the rate at which mass accumulates in the system plus the rate at which it exits the system.

In our case, this system is composed of the area in front of the apartment complex, which is very wide and wherefrom the wind enters the narrower corridor, the other part of the system. Because the rate at which wind exits the corridor at the other end must equal the rate at which it enters, the wind speeds up.
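The two steps can be sketched in a few lines of code. The areas and inlet speed below are illustrative guesses, not measurements of the actual corridor:

```python
# A rough sketch of the Venturi effect plus the Bernoulli principle.
# All numbers here are made up for illustration, not measured.

RHO_AIR = 1.2  # kg/m^3, approximate density of air at ambient conditions

def venturi_speed(v_in, area_in, area_out):
    """Mass conservation for incompressible flow: A_in * v_in = A_out * v_out."""
    return v_in * area_in / area_out

def bernoulli_pressure_drop(v_in, v_out, rho=RHO_AIR):
    """Bernoulli: p + 0.5*rho*v^2 is conserved along the flow, so the faster
    stream's static pressure is lower by 0.5*rho*(v_out^2 - v_in^2)."""
    return 0.5 * rho * (v_out**2 - v_in**2)

# Say wind at 5 m/s over a 100 sq. m approach funnels into a 40 sq. m corridor:
v_corridor = venturi_speed(5.0, 100.0, 40.0)
dp = bernoulli_pressure_drop(5.0, v_corridor)
print(v_corridor)  # 12.5 (m/s in the corridor)
print(dp)          # 78.75 (Pa lower than in the still air indoors)
```

Even these modest made-up numbers produce a pressure deficit of tens of pascals over every square metre of window, which is plenty to shove a curtain outwards.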

So when the wind starts blowing, the Venturi effect accelerates it through the corridor, the Bernoulli principle causes its pressure to drop, and that in turn pulls the curtains out of my window. If only I’d seen this in my college days, that D might just have been a C. Eh.

A century of the proton

In 1907, a New Zealander named Ernest Rutherford moved from McGill University in Canada to the University of Manchester. There, he conducted a series of experiments in which he fired alpha particles1 at different materials. When he found that the beams deviated by about 2º when fired through air, he figured that the atomic constituents of air would have to have electric fields as strong as 100 million volts per cm to explain the effect. Over the next decade, Rutherford – with the help of Hans Geiger and Ernest Marsden – would conduct more experiments that ultimately produced two very important results in the history of physics. First, that the atom was not indivisible. Second, the discovery of the proton.

In the last year of the 19th century and the first year of the 20th, Rutherford and Paul Villard had independently isolated and classified radiation into three types: alpha, beta and gamma. Their deeper constituents (as we know them today) weren’t known until much later, and Rutherford played an important role in establishing what they were. By 1911, he had determined that the atom had a nucleus that occupied 0.1% of the total volume but contained all the positive charge – known today as the famous Rutherford model of the atom. In 1914, he returned to Canada and then Australia on a lecture tour, and didn’t return to the UK until 1915, after the start of World War I. Wartime activities would delay his studies for two more years, and he could devote his attention to the atom once more only in 1917.

That year, he found that when he bombarded different materials with alpha particles, certain long-range recoil particles called “H-particles” (a term coined by Marsden in 1913) were produced, more so when nitrogen gas was also present. This finding led him to conclude that an alpha particle could have penetrated the nucleus of a nitrogen atom and knocked out a hydrogen nucleus, in turn supporting the view that the nuclei of larger atoms also included hydrogen nuclei. The hydrogen nucleus is nothing but the proton. Rutherford couldn’t publish his papers on this finding until 1919, after the war had ended. He would go on to coin the term “proton” in 1920.

Interestingly, in 1901, Rutherford had participated in a debate, speaking in favour of the possibility that the atom was made up of smaller things, a controversial subject at the time. (His ‘opponent’ was Frederick Soddy, the chemist who proved the existence of isotopes.) It is highly unlikely that he could have anticipated that, only three or so decades later, people would begin to suspect that the proton itself was made up of smaller particles.

By the early 1960s, studies of cosmic rays and their interactions with matter indicated that the universe was made of much more than just the basic subatomic pieces. In fact, there was such a profuse number of particles that the idea that there could be a hitherto unknown organisational principle consisting of fewer smaller particles was tempting, albeit only to a few.

In 1964, Murray Gell-Mann and George Zweig independently proposed such a system, claiming that many of the particles could in fact be composed of smaller entities called quarks. By 1965, and with the help of Sheldon Glashow and James Bjorken, the quark model could explain the existence of a variety of particles as well as some other physical phenomena, strengthening their case.

Then, in a series of seminal experiments that began in the late 1960s, scientists at the Stanford Linear Accelerator Center began to do what Rutherford had done half a century prior: smash a smaller particle into a larger one with enough energy for the latter to reveal its secrets. Specifically, physicists used the linear accelerator at the SLAC to energise electrons to about 21-times the energy contained by a proton at rest, and smash them into protons. The results were particularly surprising.

A popular way to study particles, then as well as now, has been to beam a smaller particle at a larger one and scrutinise the collision for information about the larger particle. In this setup, physicists expect that the greater the energy of the probing particle, the greater the resolution at which the larger particle will be probed. However, this relationship fails with protons because of scaling: electrons at higher and higher energies don’t reveal more and more about the proton. This is because, at energies beyond a certain threshold, the proton begins to resemble a collection of three point-like entities, and the electron’s interaction with the proton is reduced to its interactions with these entities, independent of its energy.

The SLAC experiments thus revealed that the proton was indeed made up of smaller entities called quarks, of two types – or flavours – called up and down. Gell-Mann and Zweig had proposed the existence of up, down and strange quarks, and Glashow and Bjorken of the charm quark. By the 1970s, other physicists had proposed the existence of bottom and top quarks, discovered in 1977 and 1995, respectively. With that, the quark model was complete. More importantly for our story, it also made a complete mess of the proton – literally.

In the 1970s, physicists began to smash protons with neutrinos and antineutrinos to elicit information about the angular distribution of quarks inside particles like protons. They found that a proton in fact contained three valence quarks in a veritable lake of quark-antiquark pairs, as well as that the sum of all their momenta didn’t add up to the total momentum of a proton. This hinted at the presence of another then-unknown particle, which they called the gluon (which is its own mess).

In that decade, particle physicists began to build the theoretical framework called quantum chromodynamics (QCD), to explain the lives and workings of the six quarks, six antiquarks and eight gluons – all particles governed by the strong nuclear force.

Ninety years after Rutherford announced the discovery of the proton by shooting alpha particles through slices of mica and columns of air, scientists switched on the world’s largest physics experiment – the Large Hadron Collider – to study the fundamental constituents of reality by smashing protons into other protons. Using it, they have proved that the Higgs boson is real, studied intricate processes for insights into the very early universe, and pursued answers to questions that continue to baffle physicists.

Through all this, scientists have endeavoured to improve our understanding of QCD, especially by studying how quarks, antiquarks and gluons interact during a collision, knowledge that is crucial to ascertain the existence of new particles and deepen our understanding of the subatomic world.

Physicists have also been using collider experiments to examine the properties of exotic forms of matter, such as colour-glass condensates, glasma and quark-gluon plasma; narrow the search for proposed particles to explain some basal discrepancies in the Standard Model of particle physics; make precision measurements of the proton’s properties for their implications for other particles (such as this and this); and explore unsolved problems concerning the proton (like the spin crisis).

And fully – rather only – 100 years after the proton was first sussed out, particle physics itself looks very different from the way it did in Rutherford’s time, and a large part of the transformation can be attributed, one way or another, to the proton. Today, physicists pursue other, very different particles, dream of building even larger proton-smashing machines and are busy knitting together theories that describe a world much smaller than the one of quarks and gluons. It’s a different world of different mysteries, as it should be, but it’s also great that there are mysteries at all.

1 An alpha particle is actually a clump of two protons and two neutrons – i.e. the nucleus of the helium-4 atom.

Featured image credit: Kjerish/Wikimedia Commons, CC BY-SA 4.0.

The Nehru-Gandhis’ old clothes

The following tweet has been doing the rounds the last few days:

It carries an important message from India’s recent past, that a time of free-as-in-free speech actually did exist only half a century ago. It stands in stark contrast to the public political clime today, where people are jailed for sharing harmless memes and journalists gagged for doing their jobs, not to mention scholars being disinvited from lectures, musicians being prevented from singing and universities becoming less plural and more parochial.

However, Shankar’s cartoon, as depicted above, shouldn’t be paraded as a symbol of an era antithetical to this – 2014-2019 – alone but as one that doesn’t sit well with the politics of 21st century India altogether, including that of the Nehru-Gandhis. It is doubtful that whenever Rahul Gandhi comes to power, if at all he does, he is going to be okay with cartoons showing his great-grandfather’s clenched butt standing outside the doors of the UN, even if he might be willing to brook more dissent than the Bharatiya Janata Party has been.

The party that he leads with his mother has championed sycophancy and nepotism since the 1970s, when Indira Gandhi assumed power. This has often meant that those critical of their family – the First Family, so to speak – have never been able to climb the ranks and/or lead important institutions during Congress rule, even if they are otherwise qualified to do so. Perhaps the most stark example of this in recent memory was when Mridula Mukherjee assumed directorship of the Nehru Memorial Museum and Library in New Delhi after an opaque selection process, and proceeded to turn the institution into a building-sized panegyric for Sonia Gandhi et al.

Indeed, the same can be said of any political organisation that is held together by hero worship, centralisation of power and dynasticism. Some examples from around the country include the Dravida Munnetra Kazhagam, the All India Anna Dravida Munnetra Kazhagam, the Shiv Sena, the Samajwadi Party and the Rashtriya Janata Dal.

Today, Shankar’s illustration seems only to describe the extent to which the BJP has vitiated civil discourse and the need to vote it out. However, the cartoon does not say anything about the party I would like to vote in, because it says everything about what free speech really means, the kind of tolerance that political parties must harbour and, most of all, the fact that there seems to be nobody capable of that anymore. Even should the UPA somehow emerge triumphant on May 23, this cartoon will likely trigger as much wistfulness as it does today.

The worm and the world

Alanna Mitchell reports in the New York Times that boreal forests in the world’s north are being invaded by worms of the species Dendrobaena octaedra. They’re decomposing the leaf litter and releasing carbon dioxide into the atmosphere, transforming these carbon-negative forests into carbon-positive ones. In the process, they’re also disrupting climate models that scientists had prepared to understand how the climate catastrophe might pan out in these areas. There’s no question that all of this is a disaster; it certainly is.

If I had written this story, I would have been very tempted to mention Nidhogg, the worm gnawing at one of the roots of Yggdrasil, the sacred tree of Norse mythology, to bring on the end of the world. This root is placed over a hot spring called Hvergelmir in Niflheim, a place of ice and cold, akin to the climate in Alberta and Alaska, where the worms have been found (the place of fire is called Muspelheim); Nidhogg lives within the spring. According to the Völuspá, which describes the creation myths of Old Norse mythology, Nidhogg’s arrival to the surface after breaking through Yggdrasil’s root signals Ragnarök, the Nordic apocalypse.

I’m sure others would have thought of this extended metaphor as well, and probably decided against using it because it doesn’t add anything to the narrative and doesn’t make the story any easier to digest than it already is. Further, the addition – in India at least – would likely have drawn the ire of an entranced bhakt who can’t tell the difference between light-hearted allegory and full-blown prophecy, insisting that some Vedic text already knew of the worms before anyone else. It’s the sort of idiocy that easily beats the joy of curiosity, so it’s best kept to oneself, at least until the pall of gloom that many of us seem to be under passes.

The dance of the diamonds

You probably haven’t heard of the Chladni effect but you’ve likely seen it in action. Sprinkle some grains of sand on a thin metal plate and draw a violin bow across its edge, and you’ll notice that the grains bounce around for a bit before settling down into a pattern, refusing to budge after that.

This happens because of a phenomenon called a standing wave. When you drop a rock into a pond, it creates ripples on the surface. These are moving waves carrying the rock’s kinetic energy away in concentric circles. A standing wave, on the other hand (as its name implies), is a wave that rises and falls in one place instead of moving around.

Such waves are formed when two waves moving in opposite directions bump into each other. For example, in the case of the metal plate, the violin bow sets off a sound wave that travels to the opposite edge of the plate, gets reflected and encounters a newer wave on its way back. When these two waves collide, they create nodes – points where their combined amplitude is lowest – and antinodes – points where their combined amplitude is highest.
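A quick way to see why the nodes stay put is to add two identical counter-propagating waves numerically; the amplitude, wavenumber and frequency below are arbitrary illustrative values, not measurements:

```python
import numpy as np

# Two identical waves travelling in opposite directions along x.
# Their sum, A*sin(kx - wt) + A*sin(kx + wt), simplifies to
# 2A*sin(kx)*cos(wt): the sin(kx) factor pins the nodes in place
# while cos(wt) makes the rest of the wave rise and fall.
A, k, w = 1.0, 2 * np.pi, 2 * np.pi   # amplitude, wavenumber, angular frequency
x = np.linspace(0, 1, 201)            # one wavelength of the medium

for t in np.linspace(0, 1, 5):        # sample a few instants in time
    y = A * np.sin(k * x - w * t) + A * np.sin(k * x + w * t)
    # Nodes sit where sin(kx) = 0, i.e. at x = 0, 0.5 and 1 here:
    # the displacement there is (numerically) zero at every instant.
    assert abs(y[0]) < 1e-9 and abs(y[100]) < 1e-9 and abs(y[200]) < 1e-9
```

The antinodes, midway between the nodes, swing with twice the amplitude of either travelling wave.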

In 1866, a German physicist named August Kundt designed an instrument, now called a Kundt’s tube, to demonstrate standing waves. A short demonstration below from user @starwalkingphoenix:

The tube is made of a transparent material and partially filled with a soft, grainy substance like talc. One end of the tube opens up to a source of sound of a single frequency while the other end is closed off by a movable piston. As the piston moves, it increases or decreases the effective length of the tube. When the sound is switched on, the talc bounces around and settles into the nodes. The piston is used to find resonance: the tube is lengthened or shortened until the volume suddenly increases. That’s the sweet spot.
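As a rough sketch of what the piston is doing: in the simplest picture, the air column resonates whenever its length fits a whole number of half-wavelengths of the driving sound. The 343 m/s speed of sound and the 1,000 Hz frequency below are illustrative assumptions, not values from the demonstration:

```python
# Simple model of a Kundt's tube: resonance occurs when the tube
# length L fits a whole number of half-wavelengths, L = n * lambda / 2.
v = 343.0           # assumed speed of sound in air, m/s
f = 1000.0          # assumed driving frequency, Hz
wavelength = v / f  # lambda = v / f

# Piston positions (tube lengths) at which the talc pattern sharpens
# and the volume jumps -- every half-wavelength:
resonant_lengths = [n * wavelength / 2 for n in range(1, 5)]
print([round(L, 4) for L in resonant_lengths])  # in metres
```

Sweeping the piston past these lengths is what makes the sweet spots announce themselves.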

In the Chladni effect, the sand grains settle down into the nodes of the standing wave formed by the vibrations induced by the violin bow. These nodes are effectively the parts of the plate that are not moving, or are moving the least, even as the plate as a whole hosts vibrations. Here is a nice video showing different Chladni patterns; notice how they get more intricate at the higher frequencies:

The patterns and the effect are named for a German physicist and musician named Ernst Chladni, who experimented with them in 1787 and used what he learned to design violins that produced and emitted sound better. The English polymath Robert Hooke had performed the first such experiments with flour in the late 17th century. However, the patterns weren’t attributed to standing waves until the early 19th century, first by Sophie Germain, followed by Horace Lamb, Michael Faraday and John Strutt, a.k.a. Lord Rayleigh. (The term ‘standing wave’ was itself coined only in 1860 by [yet] another German physicist named Franz Melde.)
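For the idealised case of a vibrating square plate, textbooks approximate the motion as a superposition of two modes, with the sand settling wherever the combined displacement vanishes. A minimal sketch (the mode numbers and grid size below are arbitrary choices, not taken from any experiment):

```python
import numpy as np

# Idealised Chladni pattern on a unit square plate: approximate the
# vibration as u(x, y) = cos(n*pi*x)*cos(m*pi*y) + cos(m*pi*x)*cos(n*pi*y)
# and mark the nodal lines, where u is (near) zero -- that is where
# the sand grains would come to rest.
def chladni_nodes(n, m, size=200, tol=0.02):
    x = np.linspace(0, 1, size)
    X, Y = np.meshgrid(x, x)
    u = (np.cos(n * np.pi * X) * np.cos(m * np.pi * Y)
         + np.cos(m * np.pi * X) * np.cos(n * np.pi * Y))
    return np.abs(u) < tol   # True where the sand would settle

low = chladni_nodes(1, 2)    # a low mode pair: a simple, sparse pattern
high = chladni_nodes(5, 8)   # a higher pair: denser, more intricate lines
print(low.sum(), high.sum()) # higher modes cover more nodal grid points
```

The higher the mode numbers (i.e. the higher the driving frequency), the more nodal curves criss-cross the plate, which is why the patterns in the video grow more intricate.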

Now, both Chladni and Faraday had separately noticed that while the patterns formed most of the time, they did not form when finer grains were used.

A group of scientists from a Finnish university recently rediscovered this bit of strangeness and piled some more weirdness on top of it. They immersed a square silicon plate, 5 cm on a side, in a tank of water and scattered small diamond beads (each 0.75 mm wide) on top. When they applied vibrations at a frequency of 9,575 Hz, the beads moved towards the parts of the plate that were vibrating the most instead of the least – i.e. towards the antinodes instead of the nodes.

A schematic illustration of the experimental setup. Source: PRL 122, 184301 (2019)

This doesn’t make sense – at least not until you stop to consider what you might be taking for granted. In the case of the metal plate, the sand grains are bounced around by the vibrations, and those that are thrown up come back down due to gravity – unless they’re too light or the breeze is too strong, and they’re swept away.

Water is over 800 times denser than air and exerts a much stronger drag force on the diamond beads, preventing them from moving around easily. Then there are also the forces due to the vibrations and to gravity. But here’s the weird part. When the scientists combined the three forces into a single net force, they found that it always pushed a bead towards the nearest antinode.

And this was just at the resonant frequency: the frequency at which an object most readily vibrates, given its physical properties. In other words, the resonant frequency is the frequency of vibration that takes the least energy to induce in the body. For example, the silicon plate resonated at 9,575 Hz and 11,175 Hz.

But when the scientists applied vibrations at a non-resonant frequency of 10,675 Hz, the diamond beads moved around in swirling patterns that the scientists call “vortex-like”.

In 2016, another group of scientists – this one from France – had reported this swirling behaviour with polystyrene microbeads on a polysilicon membrane, both suspended in ultra-pure water. On that occasion, they had compared the beads’ paths to those of dancers performing a farandole, a community dance popular in Provence, France (see video below).

Polystyrene beads each 70 μm wide in a cavity rotating in a farandole-like manner at an applied frequency of 61,000 Hz. The time frame between each picture is 0.5 s. Source: PRL 116, 184501 (2016)

The scientists from the Finnish university recorded over 96,000 data points and used them to figure out whether they could obtain an equation that would fit the data. The exercise was successful: they obtained one that could locate the “nodal, antinodal and vortical regions” on the silicon plate using two mathematical operations commonly used to characterise vector fields, such as magnetic fields: the divergence and the curl. Specifically, the divergence of the “displacement field” – the expected displacement of all the beads from their initial positions when a note is played for 500 milliseconds – denoted the nodal and antinodal regions, and the curl denoted the parts where the diamonds would do the farandole.
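To illustrate what divergence and curl capture for a two-dimensional displacement field – using a made-up field, not the paper’s data – one can difference the field numerically:

```python
import numpy as np

# A 2D displacement field (u, v) on a grid: the divergence flags
# sources/sinks (the sort of signature nodes and antinodes leave),
# while the scalar curl flags local rotation (vortical regions).
# The field below, an inward flow plus a swirl, is purely illustrative.
x = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, x)
u = -X - Y   # x-component of displacement
v = -Y + X   # y-component of displacement

# np.gradient returns derivatives along rows (y) first, then columns (x)
du_dy, du_dx = np.gradient(u, x, x)
dv_dy, dv_dx = np.gradient(v, x, x)

divergence = du_dx + dv_dy   # = -2 everywhere: a uniform sink
curl = dv_dx - du_dy         # = +2 everywhere: uniform rotation
print(divergence.mean(), curl.mean())
```

A region where the divergence dominates would be classified as nodal/antinodal; one where the curl dominates, as vortical.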

However, to rephrase what they wrote in their paper, published in the journal Physical Review Letters on May 10, the scientists can’t explain the theory behind the patterns formed. Their equations are based only on experimental data.

The French group was able to advance an explanation rooted in theory for what was happening, although their experimental conditions were different from those of the Finnish group. Following their test, Gaël Vuillermet, Pierre-Yves Gires, Fabrice Casset and Cédric Poulain reasoned in their paper that an effect called acoustic streaming was at play.

It banks on the Navier-Stokes equations, a set of four equations that physicists use to model the flow of fluids. As Ronak Gupta recently explained in The Wire Science, these equations are linear in some contexts and nonlinear in others. When the membrane vibrates slowly, the linear form of these equations can be used to model the beads’ behaviour. This means a certain amount of change in the cause leads to a proportionate change in the effects. But when the membrane vibrates at a frequency like 61,000 Hz, only the nonlinear forms of the equation are applicable: a certain amount of change in the cause precipitates a disproportionate level of change in the effects.

The nonlinear Navier-Stokes equations are very difficult to solve or model. But in the case of acoustic streaming, scientists know that the result is for the particles to flow from the antinode to the node along the plate’s surface, then rise up and flow from the node to the antinode – in a particulate cycle, if you will.

Derek Stein, a physicist at Brown University in Rhode Island, wrote in an article accompanying the paper:

… this migration towards antinodes is a hallmark of particles being carried in acoustically generated fluid streams, and the authors were able to rule out alternative explanations. … [The] streaming effect in a liquid is only observable within a restricted window of experimental parameters. First, the buoyancy of the beads has to closely balance their weight. Second, the plate has to be sufficiently wide and thin that its resonant vibrations have large amplitudes and produce high vertical accelerations. The authors also noticed that tuning the driving frequency away from a resonance coaxed the particles to move in regular formations. This motion begged to be anthropomorphised, and the authors duly likened it to the farandole…

After this point, both research papers break off into discussing potential applications, but that’s not why I am here. My favourite part is something the Finnish university group did: they built a small maze and guided a 750-μm-wide glass bead through it simply by vibrating its floor at different frequencies. They just had to ensure that at some frequencies the node/antinode would be to the left, and at others, to the right.

Credit: K. Latifi et al., Phys. Rev. Lett. (2019)

And because they also possessed the techniques by which they could induce a particle to travel in straight lines or in curves, they could move the beads around to trace letters of the alphabet!

Source: PRL 122, 184301 (2019)

Using ‘science’ appropriately


(Setting aside the use of the word ‘faith’:) The work that some parts of the CSIR have done, and are doing, is indeed very good. However, I feel we are not all properly attuned to the difference between the words “science” and “technology”. I don’t accuse Mande of ignorance; the fault possibly lies with the New Indian Express, the publisher. In a writer-publisher relationship, the latter usually determines the headlines.

Being more aware of what these words mean is important if we as mediapersons are to use them in the right context, and this in turn is consequential because the improper overuse of one term can mask deficiencies in what it actually denotes. For example, I would rather have used ‘Technology as saviour’ as the headline for Mande’s piece, and for various pieces in the Indian mainstream news space. But by using ‘science’, I fear these publications are giving the impression that Indian science is currently very healthy, effective and true to its potential for improving the human condition.

Quite to the contrary, funding for fundamental research has been dropping in India; translational support is limited to areas of study that can “save lives” and are in line with political goals; and the political perception of science is horribly skewed towards pseudoscience.

Before that one commentator jumps in to say things aren’t all that bad: I agree. There are some pockets of good work. I am personally excited about Indian researchers’ contributions to materials science, solid-state and condensed-matter physics, biochemistry, and experimental astronomy.

However, the fact remains that we are very far from things being as they should be, rather than as political expediency needs them to be. And repeatedly using “science” when in fact we mean “technology” could keep us from noticing that. That is, if we were mindful of the difference and used the words appropriately, I bet the word “science” would only occasionally appear on our timelines and news feeds.