Science can be a candle in the dark, as long as you’re not actively trying to avoid the light
David Weinberg, The Skeptic

There is a long tradition of using darkness and light as metaphors for states of ignorance and knowledge. Carl Sagan famously subtitled his treatise on science “The Demon-Haunted World: Science as a Candle in the Dark”.

It is a skeptic’s cliché that ghost hunters do their best work in the dark. They assemble high-tech instruments and huddle in dark rooms. Any sound, smell, draft, change in temperature, or electromagnetic signal reinforces the presence of an apparition. Turning on the lights expands the amount of available information. That information may be sufficient to confirm or defeat some of the hypotheses constructed while in the dark. If your job is to find ghosts, darkness is a professional asset.

“Photophobia” is a medical term that describes aversion to light. If light is a metaphor for knowledge, photophobia can be a useful metaphor for an aversion to knowledge.

The ghost hunter makes the choice to keep the switch flipped “off”, and so they need not confront the inconvenient facts revealed by greater illumination. Sometimes, however, inconvenient information comes from sources beyond the control of the photophobe. For those instances more active strategies need to be employed.

I am reminded of a scene from Douglas Adams’ The Restaurant at the End of the Universe, book two of the Hitchhiker’s Guide to the Galaxy series. Zaphod Beeblebrox is a charismatic two-headed galactic wayfarer and spaceship pirate. When faced with a dangerous situation, he found great comfort in putting on specialised eyewear he carried in his pocket:

They were a double pair of Joo Janta 200 Super-Chromatic Peril Sensitive Sunglasses, …. At the first hint of trouble they turn totally black and thus prevent you from seeing anything that might alarm you.

It is human nature to avoid interaction with information that might cause discomfort or dissonance. We may wish to metaphorically don peril-sensitive sunglasses to satisfy our aversion to uncomfortable new information.

Better to light a candle!

In most real-world examples, the informational environment is not a dichotomy of darkness and light, but a continuum from darkness, to shadow, to full illumination. Darkness is a metaphor for a state in which information is very limited in both quantity and quality; as such, increasing the light refers to gathering more data and higher quality data.

The exhilarating half-light of dawn is a fertile environment for new ideas. Features are slowly emerging, but details remain unformed. Each new observation is a little ray of light. There is a furious effort to recognise patterns and identify trends. This is an important and necessary step in scientific progress. The transition from darkness to light is not a passive activity – investigators design studies, gather information, and illuminate shadows so that others can make more careful, more focused observations, further elevating the ambient light. Skeptics understand that many of these emerging ideas will eventually be discarded or amended to better-informed versions as more light is shed.

I have been in science and medicine long enough to see a pattern play out again and again: an observation by an individual or group of individuals generates interest and curiosity. Hypotheses are generated. Preliminary data emerge, showing promising results. Some enthusiastically adopt a new paradigm, while others urge caution and appeal for better data.

Sometimes the hypothesis is validated, and a new evidence-based intervention is born. With tedious frequency, however, the accumulation of more reliable data fails to confirm an emerging hypothesis, or demonstrates an effect so attenuated that it is clinically meaningless. Occasionally, a new treatment is found to be actively harmful. Like ghosts, many once-promising hypotheses vanish in the daylight.

When an emerging treatment fails higher quality investigation, most evidence-based providers accept the new information. They are disappointed to dismiss a potential weapon from their arsenal, but grateful to have sound data on which to base decisions. There are often dissenters who reject the new information and cling to the discredited hypothesis. The early advocates of the hypothesis are often the last to abandon it.

These are usually rather boring, esoteric skirmishes that take place in professional journals and scientific meetings; however, some recent clashes have been on more public display.


As the COVID-19 pandemic initially unfolded, effective treatment strategies were not yet known. This was the “dawn” phase of information-gathering and hypothesis-generation. With few proven treatments, some clinicians and investigators speculated that effective drugs could be hiding in the pharmacy. This was not an unreasonable premise. They decided to explore available medications with properties that theoretically might provide therapeutic effects in the treatment of coronavirus infection.

The two best-known of these repurposed drug candidates are hydroxychloroquine and ivermectin. These were readily available drugs with well-known, relatively favourable safety profiles, but they were highly speculative as treatments for coronavirus infection. A few investigators incorporated these (and other) drugs into their Covid treatment protocols. They were encouraged by early results.

Due to vocal advocacy by the investigators and amplification of the message by some politicians and other public figures, awareness and enthusiasm for these drugs far outweighed the strength of the contemporaneous evidence. Many seasoned experts tried to temper this premature, irrational exuberance and advocated for further study.

As the evidence accumulated, poor quality studies and even some outright fraudulent research made matters even more confusing. As is frequently seen with emerging treatments, the higher quality studies showed benefits to be attenuated or absent. Eventually, randomised clinical trials were organised.

Randomised clinical trials are cumbersome, expensive, and slow; but, when available, well-designed, well-executed randomised trials are the gold standard in evaluating the efficacy of a new treatment. The design of these studies minimises the bias inherent, to various degrees, in less “fussy” study designs. They shine the brightest, best focused light on a specific therapeutic claim. Randomised trials reliably and repeatedly failed to confirm benefit for hydroxychloroquine. Enthusiasm for hydroxychloroquine long outlived the preponderance of negative evidence, but it has largely faded from the mainstream conversation.

Canadian science writer Jonathan Jarry wrote an excellent article on the origins of the ivermectin story. Emerging data for ivermectin display the same pattern as for hydroxychloroquine, and multiple randomised trials have failed to demonstrate benefit. Yet, despite the accumulation of negative results, there remains a vocal contingent of ivermectin evangelists.

The hydroxychloroquine and ivermectin stories are quintessential examples of promising exploratory data generating enthusiasm, only to be deflated by higher quality, more definitive studies. They also illustrate the avidity with which some stakeholders fixate on lower quality positive data and avert their gaze from the higher quality disconfirming data.

The rationale for rejecting higher quality data can be sorted into a few categories, and exemplified by some common forms of argument.

Pre-emptive photophobia

We don’t need lights

“We have sufficient information. The matter is settled and doing additional studies would be a waste of resources.”

No time for lights

“Doing more studies would be a waste of time. We need to deploy this treatment now.”

How dare you turn on the lights

“The question has been answered; it would be unethical to do this research.”

Reactive photophobia

It’s just a light!

“Randomised trials are overrated.”
“You are all guilty of methodolatry and obsessed with randomised trials.”
“What about all the positive studies?”
“It worked for me.”
“I have hundreds of happy patients.”
“Debate me!”

Lights! What lights?

“This study is invalid.”
“The study was too small.”
“You studied the wrong patient population.”
“The treatment was given too early.”
“The treatment was given too late.”
“The dose was too large.”
“The dose was too small.”
“That’s not how I would have done it.”
“The study was corrupt.”

Critiquing the critics

Critical appraisal of research is a responsible and essential activity. An unnecessary study is a poor use of resources and can delay the implementation of a critical treatment. A poorly designed or poorly executed study can produce misleading results and bestow unearned influence on the acceptance or rejection of a hypothesis.

No study is perfect. Responsible investigators are forthcoming about the strengths, weaknesses, and limitations of their studies. Reasonable people can (and will) disagree on the minutiae of the design, execution, and analysis of any study.

Not all criticisms are valid. It is hard to accept that a closely held belief is a phantom that vanishes in the light of superior evidence. There are numerous biases and logical fallacies that enable dismissal of inconvenient information.

Recognising that no study is perfect, the important questions are these: are the criticisms valid? And if they are valid, are the flaws in the study sufficient to dismiss the results? There are some principles of evidence that can be used to assess the criticisms of new research results.

Context and scale

There is a difference between a quibble about the esoteric details of a study and a fatal flaw. It is possible for a study to be so flawed that the results can be dismissed. I have written about such studies. In most cases, though, it is far more nuanced than that. I may read a study and have questions regarding details of the design, conduct, and/or analysis, yet fully accept the validity of the results. I also need to acknowledge that the investigators who constructed the study are almost certainly more qualified and more knowledgeable than I am.

Do those who wish to discredit a study reflect these nuances? Do they represent every nit-pick as a fatal flaw? Are their criticisms valid and conclusions proportional?

Hierarchies of evidence

There are numerous features of a study that influence how trustworthy it is in addressing a research question. In medicine, a well-designed, well-executed randomised clinical trial will nearly always outweigh numerous studies of lesser quality. When evaluating evidence, do the critics attribute higher weight to higher quality evidence? Do they acknowledge the weaknesses in the lesser quality studies?

Anomaly hunting

Clinical trials are particularly vulnerable to this type of abuse. Clinical trials are generally designed to address a specific question. The question is usually very narrowly focused, and the answer is based on predefined criteria. Clinical trials collect an enormous amount of information and report much more than the analysis of the primary question of the study. They compare the characteristics of the treatment and placebo groups. They may explore subgroups, such as different age groups or socioeconomic groups. They also report secondary and tertiary endpoints. There may be dozens of analyses reported.

With so many analyses, there are likely to be apparent oddities, outliers, and paradoxes. For instance, one small subgroup may appear to respond to an otherwise ineffective treatment. Some secondary outcomes may trend in an unexpected direction. These data and these analyses deserve careful consideration. They may or may not have plausible explanations. They may generate hypotheses to be tested with further analysis or more research. Often they are just the result of randomness among multiple analyses.

Do critics attach outsized weight to these supplemental analyses? Do they use anomalies in these analyses to discredit a soundly designed and conducted study? Do they trumpet a favourable trend in a secondary or tertiary analysis as a triumph in a negative study?

Selective photophobia

Do the critics apply the same standards of criticism to studies that support their favoured conclusion as to the ones that refute it?


We are all tempted to shield our eyes from uncomfortable new information. When we succumb to intellectual photophobia, we prioritise comfort and security over potential growth. Awareness of this tendency is the first step in recognising it in ourselves and in calibrating our own responses to new data.

The post Science can be a candle in the dark, as long as you’re not actively trying to avoid the light appeared first on The Skeptic.