
Saturday, November 22, 2025

Scientific Censorship, Inappropriate Expertise, and Bias: Obstacles to Keeping Science Real


     Thomas Kuhn, in his 1962 book The Structure of Scientific Revolutions, noted that science is based in part on consensus. The prevailing scientific views of a period are agreed upon by the best scientists in the field, and that consensus becomes the paradigm. Facts are first discovered and teased out through experimentation; they must then be accepted by consensus. Consensus may introduce some subjectivity into otherwise objective science when there is enough uncertainty in a field.

     Bias and censorship exist in science as well. People have opinions, and while those opinions mostly do not affect science, which is based on facts, they come to the fore when science influences or is translated into policy, and factuality becomes less influential. Less plausible ideas are “censored” simply for being less plausible, as they should be. Scientists are most often biased toward the most plausible consensus, the prevailing paradigm, which is generally, but not always, a good thing.

 

Scientific Censorship

     A 2023 PNAS paper explores scientific censorship, defined as:

“…actions aimed at obstructing particular scientific ideas from reaching an audience for reasons other than low scientific quality.”

     The researchers found that prosocial concerns, such as self-protection, benevolence toward peer scholars, and concern for the well-being of human social groups, were motivating factors for scientific censorship. They argue that there is a clear need to improve transparency and accountability in scientific decision-making, and that the costs and benefits of these types of censorship need to be analyzed and weighed, since some censorship is warranted. They also note that scientific censorship is difficult to detect and measure, so it is rarely studied. In defining censorship, they write:

“Censorship is distinct from discrimination, if not always clearly so. Censorship targets particular ideas (regardless of their quality), whereas discrimination targets particular people (regardless of their merit).”




     The following table distinguishes types of censorship, who the censors typically are, the motivations of censors, and the outcomes of scientific censorship.

[Table from the PNAS paper not reproduced here.]


     They distinguish two types of censorship, hard and soft. Hard censorship comes from institutional authorities like governments and religious authorities, and usually involves preventing dissemination or retracting published work. Soft censorship can have different motivations, even benevolent ones like protecting the researcher. It involves “social punishments or threats of them (e.g., ostracism, public shaming, double standards in hirings, firings, publishing, retractions, and funding) to prevent dissemination of research.” It may be mild, like simply discouraging research projects that might negatively affect careers.

     Censors include governments (typically in authoritarian regimes) and institutions such as universities, journals, and professional societies; censorship also operates through more informal threats of ostracism and reputational damage to both researchers and institutions. An example of the first type is a university in Hungary that relocated to Austria after being censored by the Hungarian government. Donors to educational institutions can threaten to withhold funding if the research is not what they want to see. These kinds of deterrents also lead scientists to self-censor and avoid controversial research; most scientists report some form of self-censorship.

     They also note that soft censorship can be hard to distinguish from ordinary scientific rejection, and that rejection itself can be subjective and influenced by the pressures described above.

“…many criteria that influence scientific decision-making, including novelty, interest, “fit”, and even quality are often ambiguous and subjective, which enables scholars to exaggerate flaws or make unreasonable demands to justify rejection of unpalatable findings.”

     Thus, they note, bias and censorship can be mistaken for genuine science-based rejection, and, conversely, scientists whose work was legitimately rejected may claim that they are being censored.

     Peer review is another process that can be biased:

“…peer reviewers evaluate research more favorably when findings support their prior beliefs, theoretical orientations, and political views.”

     They also note that science is designed to root out bias, which it generally does over time.

     In exploring the psychology of censorship, they note:

“Censorship research typically explores dark psychological underpinnings such as intolerance, authoritarianism, dogmatism, rigidity, and extremism. Authoritarianism, on the political right and left, is associated with censoriousness, and censorship is often attributed to desires for power and authority.”

     They also give some interesting information about modern levels of scientific censorship:

“Hundreds of scholars have been sanctioned for expressing controversial ideas, and the rate of sanctions has increased substantially over the past 10 y. Retractions of scientific articles have increased since at least 2000, many for good reasons such as statistical errors, but some were at least partly motivated by harm concerns.”

     Some data is given below.

[Figure from the PNAS paper not reproduced here.]


     The graph below shows that scientific censorship can lead to erroneous, or at least less probable, conclusions.

[Graph from the PNAS paper not reproduced here.]


     They also note that when science journals like Nature and Scientific American endorse political candidates, they are in effect being censorious, and such actions can erode trust in science.

“Scientific censorship appears to be increasing. Potential explanations include expanding definitions of harm, increasing concerns about equity and inclusion in higher education, cohort effects, the growing proportion of women in science, increasing ideological homogeneity, and direct and frequent interaction between scientists and the public on social media.”

     The authors note that peer review was designed to be anonymous and confidential, but that this may increase bias and censorship rather than reduce them. They suggest opening up the process, the goal being to eliminate bias and censorship in the acceptance or rejection of papers. They also think that scientific journals and institutions should be audited for procedural unfairness; such audits and evaluations could make academic journals more competitive. Finally, they call for better documentation and better data availability for retractions.

 

Expertise and Policy

     This section is a review of the ideas of Daniel Sarewitz and the late Steve Rayner on the subject. Their article, published in 2021, was mostly written before COVID hit. They speak of a “post-truth condition” arising from science illiteracy, populist politics, and the proliferation of unverifiable information via the Internet and social media. Expertise in the form of skills, like those of a surgeon or a pilot, is not being contested; only certain kinds of expertise are:

“Clearly, what is contested is not all science, all knowledge, and all expertise, but particular kinds of science and claims to expertise, applied to particular types of problems.”

     They note that science is limited in the kinds of questions it can answer and the types of problems it can solve. The physicist Alvin Weinberg argued in an influential 1972 article that certain questions, typically involving complex and socially divisive topics, transcend the ability of science to answer them. They also note that “risk” has become a more prominent concern in modern society: concerns about public and environmental health rise in developed and wealthy societies, and whether risks are accepted or rejected is often determined by scientists and policymakers.

“It is thus no coincidence that the 1980s and 90s saw ‘risk’ emerge as the explicit field of competing claims of rationality.”

“…starting in the 1970s, there has been a rapid expansion in health and environmental disputes, not-in-my backyard protests, and concerns about environmental justice, invariably accompanied by dueling experts, usually backed by competing scientific assessments of potential or actual damage to individuals and communities. These types of disputes constitute an important dimension of today’s divisive national politics.”

     Scientists interpret nature, and speak for it, they note, in order to advise policymakers. Political divisiveness means that there are now experts who speak for each political viewpoint, so even views of nature can vary with political persuasion. As an example, they cite Johan Rockstrom and colleagues’ “planetary boundaries” framing of environmental issues versus challenges to those ideas by more pragmatic writers like Ted Nordhaus, who argued that the thresholds they chose are not “non-negotiable” but arbitrary. I have argued similarly. This is an important example because it shows that catastrophism, in the form of ideas like crossing planetary boundaries, tipping points, and ecosystem collapse, has gained traction in recent years. Such pessimistic views, especially if not warranted, can distort science.

     The authors also discuss the power of numbers to back up scientific claims, but also to distort them when what is being measured is more abstract than concrete. They give two main examples. The first is the prediction of the percolation flux of groundwater that may encounter nuclear waste stored underground. The second is climate sensitivity, whose predicted range has remained more or less unchanged at 1.5 to 4.5 °C.
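     For context (my gloss, not the authors’): climate sensitivity is conventionally defined as the long-run global average warming produced by a doubling of atmospheric CO2, so the expected warming for a given concentration change scales roughly as ΔT ≈ S × log2(C/C0), where S is the sensitivity (the 1.5 to 4.5 °C range above) and C/C0 is the ratio of final to initial CO2 concentration. S is an equilibrium quantity that is never directly observed, which is part of why the authors describe it as a numerical abstraction.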

“The legacy of research on climate sensitivity is thus remarkably similar to that of percolation flux: decades of research and increasingly sophisticated science dedicated to better characterizing a numerical abstraction that does not actually describe observable phenomena, with little or no change in uncertainty.”

     They note that scientific expertise is often called on essentially to predict the future, usually with numerical modeling. Weather prediction, they observe, has become very good, due in part to the fact that weather is a relatively closed system, but also to the availability of many local forecasts and to model learning from massive amounts of data. We can predict the weather accurately about a week in advance. Predicting climate is another matter and deals far more in the abstract.

“The contrast between weather and climate forecasting could not be clearer. Weather forecasts are both reliable and useful because they predict outcomes in relatively closed systems for short periods with immediate feedback that can be rapidly incorporated to improve future forecasts, even as users (picnickers, ship captains) have innumerable opportunities to gain direct experience with the strengths and limits of the forecasts.”

“Using mathematical models to predict the future global climate over the course of a century of rapid sociotechnical change is quite another matter. While the effects of different development pathways on future atmospheric greenhouse gas concentrations can be modeled using scenarios, there is no basis beyond conjecture for assigning relative probabilities to these alternative futures. There are also no mechanisms for improving conjectured probabilities because the time frames are too long to provide necessary feedback for learning. What’s being forecast exists only in an artificial world, constituted by numbers that correspond not to direct observations and measurements of phenomena in nature, but to an assumption-laden numerical representation of that artificial world.”
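     To make the feedback-loop contrast concrete, here is a minimal toy sketch of my own (not from the article): a daily forecast with an invented systematic bias, corrected a little each day using the verified error. All names and numbers are made up for illustration.

# Toy sketch (mine, not the authors'): a biased daily forecast corrected by a
# tight feedback loop. Each forecast is verified the next day, so the
# systematic error is learned away within weeks. A century-scale projection
# gets no such verification, so this kind of learning cannot occur.

import random

random.seed(0)

TRUE_BIAS = 2.0    # hypothetical systematic error in the raw forecast (deg C)
LEARN_RATE = 0.2   # fraction of each verified error folded into the correction

def raw_forecast(truth):
    """A noisy forecast that runs systematically warm."""
    return truth + TRUE_BIAS + random.gauss(0, 1)

correction = 0.0
abs_errors = []
for day in range(60):
    truth = 15 + 10 * random.random()            # tomorrow's actual temperature
    forecast = raw_forecast(truth) - correction  # corrected forecast issued today
    error = forecast - truth                     # verified against the observation
    correction += LEARN_RATE * error             # immediate feedback updates the correction
    abs_errors.append(abs(error))

print(f"mean abs error, first 10 days: {sum(abs_errors[:10]) / 10:.2f}")
print(f"mean abs error, last 10 days:  {sum(abs_errors[-10:]) / 10:.2f}")

     With daily verification the correction converges on the bias within a few weeks; stretch the verification horizon to decades and the loop never closes, which is the authors’ point about century-scale climate projections.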

     They argue that although models are numerical, and therefore look precise, they are predictions that carry some level of unresolvable uncertainty. Here, they invoke Alfred North Whitehead’s 1929 “Fallacy of Misplaced Concreteness,” in which abstractions are mistaken for concrete facts. Models used to predict the future are often successful, but not always; there are many examples of models being wrong, especially when their underlying assumptions were wrong.

     Next, they introduce three interrelated conditions that allow the establishment of causal relationships that can guide understanding and action.

“First is control: the creation or exploitation of closed systems, so that important phenomena and variables involved in the system can be isolated and studied. Second is fast learning: the availability of tight feedback loops, which allow mistakes to be identified and learning to occur because causal inferences can be repeatedly tested through observations and experiments in the controlled or well-specified conditions of a more or less closed system. Third is clear goals: the shared recognition or stipulation of sharply defined endpoints toward which scientific progress can be both defined and assessed, meaning that feedback and learning can occur relative to progress toward agreed-upon outcomes that confirm the validity of what is being learned.”

     Technology influences the fulfilment of these three conditions. Technology is what makes science real for us, they note; it is essentially proof that science works. Unfortunately, for complex social problems the three conditions are rarely fulfilled; there is simply too much uncertainty. So-called experts who opine about such problems often insist that they can predict the future with modeling, but that does not reduce the inherent uncertainty. The authors label this “inappropriate expertise,” inappropriate because it assumes that expert knowledge can make the uncertain certain. This stands in contrast to “expert-practitioners,” who can readily show that their knowledge is correct. They offer an alternative:

“Decision-makers tasked with responding to controversial problems of risk and society would be better served to pursue solutions through institutions that can tease out the legitimate conflicts over values and rationality that are implicated in the problems. They should focus on designing institutional approaches that make this cognitive pluralism explicit, and they should support activities to identify political and policy options that have a chance of attracting a diverse constituency of supporters.”

     They give three examples. The first is an environmental rule in Massachusetts that moved the debate over toxic substances from conflict to collaboration by reframing the rule as one about replacing toxics with non-toxic alternatives. The second is the estimation of hydrocarbon reserves, which is abstract in the sense that the size of reserves depends on factors that change over time, such as extraction technology and costs. Here, they note that making the estimation process more pluralistic has led to better and more accurate reserve predictions than those of the USGS alone. The third example involves macroeconomic models, which some economists argue have been very wrong over the years; discussing the models is only one part of the Fed’s decision-making process, which rests on reaching an overall consensus on interest rate decisions.

“Truth, it turns out, often comes with big error bars, and that allows space for managing cognitive pluralism to build institutional trust.”

     What, then, is appropriate expertise?

“Appropriate expertise emerges from institutions that ground their legitimacy not on claims of expert privilege and the authority of an undifferentiated ‘science,’ but on institutional arrangements for managing the competing values, beliefs, worldviews, and facts arrayed around such incredibly complex problems as climate change or toxic chemical regulation or nuclear waste storage. Appropriate expertise is vested and manifested not in credentialed individuals, but in institutions that earn and maintain the trust of the polity.”

     They also note that it is easier to trust the concrete expertise of expert-practitioners than to trust inappropriate experts who lean too heavily on modeling.

“People still listen to their dentists and auto mechanics. But many do not believe the scientists who tell them that nuclear power is safe, or that vaccines work, or that climate change is real.”

     At the end of the essay, they reiterate the importance of pluralism, and of considering the views of and impacts on other stakeholders, in decision-making and policymaking, invoking the importance of democracy.

“Successfully navigating the divisive politics that arise at the intersections of technology, environment, health, and economy depends not on more and better science, nor louder exhortations to trust science, nor stronger condemnations of ‘science denial.’ Instead, the focus must be on the design of institutional arrangements that bring the strengths and limits of our always uncertain knowledge of the world’s complexities into better alignment with the cognitive and political pluralism that is the foundation for democratic governance — and the life’s blood of any democratic society.”

     This essay is certainly food for thought on the subject of scientific expertise in the complex modern world.    

 

 

References:

 

Steve Rayner and Daniel Sarewitz. “Policy Making in the Post-Truth World: On the Limits of Science and the Rise of Inappropriate Expertise.” The Breakthrough Journal, Winter Issue 13, 2021.

Cory J. Clark, Lee Jussim, Komi Frey, et al. (including William von Hippel). “Prosocial motives underlie scientific censorship by scientists: A perspective and research agenda.” PNAS, Vol. 120, No. 48, November 20, 2023.
