Blog Archive

Friday, November 14, 2025

Bias and Trustworthiness in Science: ‘When Are Scientific Claims Untrustworthy?’ Review and Summary of Patrick Brown’s Article in ‘The Ecomodernist’

     Trustworthiness in science is often assumed. The main factors that undermine it are politicization and bias. In science, results must be replicable. When they are not, as in the “replication crisis” in psychology (where only 36% of published results could be replicated), distrust in science builds. Brown also cites the reversals in dietary guidelines and in mask-wearing guidance during COVID as examples where the science was not entirely trustworthy. He draws on the work of Daniel Sarewitz and Steve Rayner, two researchers who study the nexus of science and society, and notes two aspects of their work that he has found useful:

“1) how feedback-rich and falsifiable the knowledge is, and 2) how high the stakes are in terms of broader value-laden ramifications.”

     Sarewitz and Rayner distinguish the “appropriate expertise” of feedback-based practitioners, such as surgeons and pilots, from the “inappropriate expertise” of those who are credentialed but lack practical experience and demonstrable successes in the fields on which they advise.

     Brown derives a two-dimensional graphic scheme for assessing general trustworthiness in science, shown below. The y-axis asks, “How strong is the pull toward a preferred conclusion?” and the x-axis asks, “How testable is the relevant real-world conclusion?” Motivation to reach predetermined conclusions is a basic definition of bias. One might see the y-axis as the hype axis and the x-axis as the reality axis.





      Brown notes:

“…scientific knowledge is influenced by ethical intuitions, culture, and peer pressure surrounding researchers, as well as the incentive structure of the scientific funding and publishing systems.”

     He suggests that while bias is acknowledged as undesirable in science, all scientists, like all people, carry some degree of it: there are often motivations, at some level, to produce a desired result. To assess this, he divides his x/y graph into four quadrants, as shown below. The second graph shows examples that might fit into each quadrant. He then goes on to explain the four quadrants.






 

Quadrant I: Claims are reliably trustworthy and uncontroversial but relatively inconsequential.

     In this quadrant, there is very little motivation to be biased, and the real-world conclusions are readily testable. Basic, uncontested scientific facts fall into this category. Brown also notes here that credentials do not always equate to practical knowledge of the subject matter. This quadrant has the highest level of trustworthiness.

 

Quadrant II: Claims are contestable, but controversy remains academic.

     This quadrant covers situations where there is some disagreement about results but little or no motivation to reach particular conclusions. Thus, as he puts it, arguments remain mainly academic rather than carrying social or economic implications.

 

Quadrant III: Foundational claims are trustworthy, but controversy arises from different frameworks implicitly emphasizing different values.

     Here he explains:

“…controversy arises due to disagreements on which evidence deserves the most weight and which conclusions to emphasize. Due to incentives within academic publishing or the implicit preferences of researchers, there may be strong publication biases, where certain broader conclusions are sampled much more frequently than others for reasons other than scientific merit.”

     This seems to be a common situation in many of our societal debates about science, energy, economics, the environment, politics, and more. Here, he uses the example of whether raising the minimum wage helps poor people. At first, it would seem a no-brainer, since poor people will make more money. However, it can also lead to less employment for the same pool of poor people, or to fewer available working hours for some. Thus, whether it is an overall help or an overall hindrance to the poor depends on the details and how they play out. “Help” (and “hinder”) are value-laden verbs, he says, open to different interpretations that are often informed by political opinions rather than science.

 

Quadrant IV: From the perspective of desiring neat, straightforward answers, Quadrant IV is a mess.

     He again points to the work of Sarewitz and Rayner, and explains the quadrant as follows:

“Claims are reliably controversial, contestable, and difficult to adjudicate because they are framework and model-dependent, difficult to test, embed value-laden assumptions, and are strongly susceptible to personally and culturally-influenced motivated cognition by the experts producing the knowledge. The same suite of evidence or underlying facts can be legitimately assembled into coherent narratives that seem completely at odds with each other, and there is no straightforward way to adjudicate which emphasis is ‘correct.’”

     Both uncertainty and controversy are highest in Quadrant IV. Below are more examples for each quadrant.




     Next, he considers the problem of conflating Quadrant IV with Quadrant I, noting that disputing something in Quadrant I is akin to “science denial,” while doing the same in Quadrant IV should be perfectly acceptable. Quadrant IV may even contain political opinions disguised as science.

“…there is a large and seemingly increasing desire for ostensibly scientific institutions and expert bodies to overreach and use the epistemic authority granted to science by the qualities of Quadrant 1 to attempt to make authoritative statements in Quadrant IV. In its most extreme form, this amounts to dressing up political opinions as if they were scientific facts.”
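     As a reader’s aid, the two-axis scheme can be rendered as a toy classifier. This is my own illustration, not anything from Brown’s article: the 0–1 scores, the 0.5 cutoffs, and the example claims are all hypothetical reader-assigned values, and Brown’s quadrants are of course judgments, not computations.

```python
# Toy sketch of Brown's two-axis scheme (my illustration, not his method).
# Scores are hypothetical 0-1 values a reader might assign; the 0.5
# thresholds are arbitrary cutoffs chosen only to split the plane in four.

def quadrant(pull_toward_conclusion: float, testability: float) -> str:
    """Map a claim's (bias pull, real-world testability) to a quadrant.

    pull_toward_conclusion: 0 = no preferred conclusion, 1 = strong pull (y-axis).
    testability: 0 = untestable, 1 = readily testable in the real world (x-axis).
    """
    low_pull = pull_toward_conclusion < 0.5
    testable = testability >= 0.5
    if low_pull and testable:
        return "I"    # trustworthy, uncontroversial, often inconsequential
    if low_pull and not testable:
        return "II"   # contestable, but controversy stays academic
    if not low_pull and testable:
        return "III"  # sound foundations, value-laden disputes over emphasis
    return "IV"       # contested frameworks, hard to adjudicate

# Hypothetical example claims with reader-assigned (pull, testability) scores
claims = {
    "boiling point of water at sea level": (0.1, 0.9),
    "minimum-wage effects on the poor": (0.8, 0.6),
    "cost-benefit verdict on a climate treaty": (0.9, 0.1),
}
for name, (pull, test) in claims.items():
    print(f"{name} -> Quadrant {quadrant(pull, test)}")
```

The point of the sketch is only that the scheme is a plane split by two questions; in practice the “scores” are contestable judgments, which is precisely Brown’s point about Quadrant IV.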



     He cites a paper, “Paris Climate Agreement passes the cost-benefit test,” published in Nature Communications in 2020, as an example of conflating Quadrant IV with Quadrant I. Such conflations blur the lines of “epistemic authority,” since they are written in the format of science but involve value-laden conclusions. Of course, we can attempt to examine things like ethical values scientifically, but there are limitations.





     He also notes a pertinent example, and one I have noted as well:

“For example, conventional climate science and climate policy, as represented by the United Nations Intergovernmental Panel on Climate Change, are heavily guided by the underlying goal of the United Nations Framework Convention on Climate Change of ‘avoiding dangerous anthropogenic interference with the climate system.’ This framework emphasizes, for example, the precautionary principle applied specifically to climate change over cost-benefit analysis of various energy systems and their alternatives, as well as the intrinsic value of an unchanging climate over a centering of the relationship between energy and human welfare.”

     These “moral frameworks” can be akin to ideologies, he says. His final statements in the article are quite pertinent, I think, and worth reproducing:

“This all gives the impression that if we could just eliminate ‘ideological bias,’ then pure ‘science’ would illuminate a direct path forward. Ultimately, though, these are Quadrant IV discussions where a shared set of facts can be marshaled to support arguments for diametrically opposed broader conclusions. There is no such thing as eliminating ideological bias because all prescriptive recommendations on a course of action rest on some contestable moral framework, and many of the most important claims are very difficult to test.”

“Thus, it would be clarifying to focus on surfacing the ideological and moral disagreements that drive the pull towards different preferred conclusions and acknowledging why claims are difficult to adjudicate. In doing so, it would be easier to recognize that these Quadrant IV claims are inherently contestable and will always be.”

     This is an important piece of writing. I think that scientists and policymakers alike should have a well-grounded education in the biases and fallacies encountered when weighing scientific claims. This article offers some useful ways to evaluate such claims.

 

 

     

 

References:

 

When Are Scientific Claims Untrustworthy? Distinguishing between science and political opinions masquerading as science. Patrick Brown. The Ecomodernist. November 10, 2025.

 
