Recently I had a conversation about scientific claims in a field I haven’t mastered. We all find ourselves there once in a while. Whenever we speak outside of fields we know fairly well, the confidence with which we hold our convictions must drop. Because I don’t have much training in the discipline, I haven’t thoroughly investigated the claims offered or the nuances they imply, and I might very well be overlooking something. Nor can I, by virtue of not having researched it much, have a clear view of the field at large, so I won’t be able to situate whatever claim we’re discussing within its wider scientific landscape.
Still, none of these drawbacks prevent a thoughtful person from having an opinion or an inkling as to what may or may not be accurate. We can still be skeptical, argue over mechanisms and the interpretation of evidence, or delay inquiry for another time, but we must accept that when we are in foreign terrain, we might just be plain wrong. A large dose of humility is required.
Per standard Bayesian thinking, my priors are held with very little evidence and so new information should weigh heavily in my judgment. What was so odd in this recent conversation was how my sparring partner immediately pulled rank on me and proceeded to instruct me in the three things that she looked for when assessing the evidence presented:
- the reputation of the journal in which the referenced study was published;
- where the funding for the particular research came from and for whom the researcher worked;
- whether the result was the consensus of the field.
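The Bayesian point above – that weakly held priors should move a lot when new evidence arrives, while strongly held ones barely budge – can be illustrated with a toy calculation. The numbers below are hypothetical, chosen only to show the asymmetry:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# A weakly held prior (50/50 – near-total ignorance of the field):
# one piece of moderately diagnostic evidence moves the estimate a lot.
weak = posterior(prior=0.5, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(weak)    # 0.8 – the new information dominates

# A strongly held prior (0.99, an expert's settled view):
# the same strength of *contrary* evidence barely dents it.
strong = posterior(prior=0.99, p_evidence_if_true=0.2, p_evidence_if_false=0.8)
print(strong)  # ~0.961 – the prior dominates
```

This is all the first paragraph claims: when you argue outside your field, your priors carry little weight, so what your interlocutor tells you should weigh heavily – which is precisely why the quality of that information, rather than its packaging, matters.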
At first glance, this sounds prudent. More reputable journals may place higher demands on the quality of the research they print. Funding may sway people, consciously or not. And consensus should count for something.
Actually, none of those claims holds up. Prestigious journals are fraught with office politics, citation circles, and a surprising amount of navel-gazing. And consensus, while cozy and comfortable, is always and everywhere the wrong metric – sometimes it is flat-out wrong, as it was on evolution, on relativity, on plate tectonics, and on the 1980s AIDS epidemic.
Science is not established by a show of hands. Many fringe ideas may very well be wrong, but like Heather Heying and Bret Weinstein say in defense of the intellectual outcast, “It is exactly from the fringe that progress is made.” To quote the late Swedish economist Assar Lindbeck, “If you are never controversial, you have probably never said anything genuinely new or interesting.”
Several years ago, a friend at a highly prestigious university explained to me how she treated information that was new to her. First, she looked up the author of the argument on Wikipedia, gathering biographical information: sex, ethnicity, and where they had grown up. Second, she googled any conflicts of interest and accusations of biased funding; if she found any, she would read those first, before she read anything by the author in question. How the former set of information was supposed to improve her reading of the latter was never quite clear. What of the accuser of bias? And who fact-checks the fact-checkers?
What’s so odd about these two checklists is that none of their contents matter. From the point of view of scientific inquiry, it doesn’t matter if the author of an article is male or female, Black or Hispanic, grew up wealthy in the West or poor in the South or vice versa, or has a high h-index. The point of the scientific method as developed since Bacon and Hume, the Royal Society, and the Republic of Letters is that evidence prevails. What rules is the assessment of what is, and whether claims hold up against real-world evidence – not whether the person uttering them has financial incentives one way or another or a particular personal history that informs their position.
“Doesn’t it take you hours to even open a book or download a paper?” I remember asking my demographically obsessed friend. Seriously following her routine would quickly derail any research agenda. Yes, she said, before she complained loudly about the time it took to do her research. No wonder.
This controversy over money, funding, and credentials has emerged again and again in these two pandemic years. AIER has seen its fair share of such arguments following the Great Barrington Declaration, as reflected on by Phil Magness and James Harrigan on its anniversary:
It’s much easier to demonize an opponent by accusing them of being on the payroll of ‘dark money’ than it is to engage their arguments on a substantive and scientific evidentiary basis.
To invoke money is to pull attention away from what matters. It’s lazy armchair theorizing that turns the topic into what is at best a meta-conversation about how funding structures in industry and academia may tilt the kind of research being done or selected for. I have argued in the past that if we find even a sliver of funding connection to some despicable source:
We don’t need to engage with whatever the researcher is saying, or investigate the scientific backing of his or her claims: we can reject them without all of that – and take the rest of the afternoon off! The scientist is clearly a dupe, ‘bought and paid for,’ a quack, and must surely have faked his or her entire research agenda.
Last month, Ashley Rindsberg’s great piece in Tablet Magazine on the Lab Leak hypothesis illustrated the same point. The first wave of objections against what was then a fringe idea invoked racism and anti-Chinese hostility to dismiss what sounded like an unbelievable claim (Perhaps, but why didn’t we investigate and debunk the claim instead of throwing politicized mud at one another?). The counterargument, now that more evidence is pointing towards the leakage story, is that corrupt American news media were held financially hostage to a CCP-owned, CCP-dependent corporate press. Maybe, but that only explains some of their behavior, and it does nothing to answer the only question that matters: is it, or is it not, correct?
While financing – of science, in pharma, in politics, in climate science, and now in media and all things Covid – was always a quick-and-easy card to play in arguments, it should never be more than a first step, and certainly not the only one. At best, it raises some red flags to investigate. Extreme politicization and one-sided funding may twist what looks like consensus in a field and conceal the valid objections that the field otherwise would contain.
That’s fair enough as a starting point, but I always invoked Ludwig von Mises’ steel-manning position in questions like these. So what, said the greatest economist of the twentieth century, much more eloquently than I ever could. Even if the other party were corrupt, disingenuous, bought-and-paid-for, you would still have to prove the counterargument – with logic and the evidence available. Let’s grant that your opponent was biased, writes Mises convincingly in Theory and History: An Interpretation of Social and Economic Evolution, “But then we must realize that his alleged bias produced theorems which successfully withstood all objections.” In what sense, then, is he even biased?
To overthrow an argument or evidence or a consensus we must still show that said argument is wrong, regardless of the motives that fueled its proponents.
By constantly invoking funding, or peddling money as a potential reason for scientific outcomes or political beliefs, we denigrate one another’s intellectual capacity. We pretend, cheaply, that everyone is for sale and that anyone’s morals – scientific or political – are available to the highest bidder. If that’s the case, we have much bigger problems than unbalanced funding for politically convenient topics.
We fool ourselves that our arguments follow ethnic, sex, or demographic lines in what amounts to a wholly unscientific collapse into untethered subjectivity. By concerning ourselves with financial or structural bias, we chip away at the idea of objective reality. There is no reality but the story predetermined by payroll and demographic demands.
If that were the case, I wonder why we would bother conversing at all.
[Editor’s note: For detailed information on how to assess the credibility of scientific articles and the studies they cite, see Daryl Sas’s “What’s Incredible? Learning to Read Science Articles with a Critical Eye.”]
This piece originally appeared at the American Institute for Economic Research and has been republished here with permission.