On September 6, 2019, two researchers from Israel uploaded a preprint to the bioRxiv preprint server entitled ‘Can scientists fill the science journalism void? Online public engagement with two science stories authored by scientists’. Two news sites invited scientists to write science articles for them, supported by a short workshop at the start of the programme and then by a group of editors during the ideation and editing process. The two researchers tracked and analysed the results, concluding:
Overall significant differences were not found in the public’s engagement with the different items. Although, on one website there was a significant difference on two out of four engagement types, the second website did not have any difference, e.g., people did not click, like or comment more on items written by organic reporters than on the stories written by scientists. This creates an optimistic starting point for filling the science news void [with] scientists as science reporters.
Setting aside questions about the analysis’s robustness: I don’t understand the point of this study (insofar as it concerns scientists being published in news websites, not blogs), as a matter of principle. When was the optimism in question ever in doubt? And if it was, how does this preprint paper allay it?
The study aims to establish whether articles written by scientists can be just as successful – in terms of drawing traffic or audience engagement – as articles penned by trained journalists working in newsrooms. There are numerous examples showing this is the case, and numerous others showing it is not. But by discussing the results of their survey in a scientific paper, the authors seem to want to elevate the possibility that articles authored by scientists can perform well into a well-bounded result – which seems questionable at best, even if it is confined strictly to the Israeli market.
To take a charitable view, the study effectively reaffirms one part of a wider reality.
I strongly doubt there’s a specific underlying principle that guarantees a successful outcome, at least beyond the mundane truism that the outcome is a combination of many things. From what I’ve seen in India, for example, how well an article performs depends on the identity of the platform, the quality of its editors, the publication’s business model and its success, the writer’s sensibilities, the magnitude and direction of the writer’s moral compass, the writer’s fluency in the language and medium of choice, the features of the audience being targeted, and the article’s headline, length, time of publication and packaging.
It’s true that a well-written article will often perform better than average and a poorly written article will perform worse than average, in spite of all these intervening factors, but these aren’t the only two states in which an article can exist. In this regard, claiming scientists “stand a chance” says nothing about the different factors in play and even less about why some articles won’t do well.
It also minimises editorial contributions. The two authors write in their preprint, “News sites are a competitive environment where scientists’ stories compete for attention with other news stories on hard and soft topics written by professional writers. Do they stand a chance?” This question ignores the publisher’s confounding self-interest: to maximise a story’s impact in rough proportion to the amount of labour expended to produce it, such as by deploying a social media team. More broadly, if there are fewer science journalists, there are also going to be fewer science editors (whatever precipitated the former will most likely precipitate the latter as well), which means there will also be fewer science stories written by anyone in the media.
Another issue here is something I can’t stress enough: science writers, communicators and journalists don’t have a monopoly on writing about science or scientists. The best science journalism has certainly been produced by reporters who have been science journalists for a while, but this is no reason to write off the potential for good journalism – in general – to produce stories that include science, nor to exclude such stories from analyses of how people get their science news.
A simple example is environmental journalism in India. Thanks to prevalent injustices, many important nuggets of environmental and ecological knowledge appear in articles written by reporters working the social justice and political economics beats. This holds an important lesson for science reporters and editors everywhere: not being employed full-time is typically a bitter prospect, but your skills don’t have to manifest in stories that appear on pages or sections set aside for science news alone.
It also indicates that replenishing the workforce (even with free labour) won’t stave off the decline of science journalism – such as it is – as much as tackling deeper, potentially extra-scientific issues such as parochialism and anti-intellectualism, and, as a second step, convincing both editors and marketers of the need to publish science journalism including and beyond considerations of profit.
Last, the authors further write:
This study examined whether readers reacted differently to science news items written by scientists as compared to news items written by organic reporters published on the same online news media sites. Generally speaking, based on our findings, the answer is no: audiences interacted similarly with both. This finding justifies the time and effort invested by the scientists and the Davidson science communication team to write attractive science stories, and justifies the resources provided by the news sites. Apparently if websites publish it, audiences will consume it.
An editor could have told you this in a heartbeat. Excluding audiences that consume content from niche outlets, and especially including audiences that flock to ‘destination’ sites (i.e. sites that cover nearly everything), authorship rarely matters unless the author is prominent or the publication highlights it. But while the Israeli duo has reason to celebrate this user behaviour, others have seen red.
For example, in December 2018, the Astronomy &amp; Astrophysics journal published a paper by an Oxford University physicist named Jamie Farnes advancing a fanciful solution to the dark matter and dark energy problems. The paper was eventually widely debunked by scientists and science journalists alike, but not before hundreds, if not thousands, of people were taken in by an article in The Conversation that seemed to support the paper’s conclusions. What many of them – including some scientists – didn’t realise was that The Conversation often features scientists writing articles about their own work, and didn’t know the problem article had been written by Farnes himself.
So even if the preprint study skipped articles written by scientists about their own work, the duo’s “build it and they will come” inference is not generalisable, especially if – for another example – someone else from Oxford University had written favourably about Farnes’s paper. I regularly field questions from young scientist-writers baffled as to why I won’t publish articles that quote ‘independent’ scientists commenting on a study they didn’t participate in but which was funded, in part or fully, by the independent scientists’ employer(s).
I was hoping to tie my observations together neatly in a conclusion but some other work has come up, so I hope you won’t mind the abrupt ending, and that, in the absence of a concluding portion, you won’t fall prey to the recency effect.