This is adapted from my 2021 book, Sensible Decarbonization: Regulation, Risk, and Relative Benefits in Different Approaches to Energy Use, Climate Policy, and Environmental Impact.
Risk assessment can be generally defined as “the process of characterizing the potentially adverse consequences of human exposure to an environmental hazard.” Risk management can be defined as “the process by which policy choices are made once the risks have been determined.” In 1983, a committee of the National Research Council (NRC), a body established in 1916 to advise the federal government on science and technology, set out a four-step process for risk assessment: hazard identification, dose-response assessment, exposure assessment, and risk characterization.
Hazard identification: The determination of whether a particular chemical is or is not causally linked to particular health effects.
Dose-response assessment: The determination of the relation between the magnitude of exposure and the probability of occurrence of the health effects in question.
Exposure assessment: The determination of the extent of human exposure before or after application of regulatory controls.
Risk characterization: The description of the nature and often the magnitude of human risk, including attendant uncertainty.[1]
Hazard identification also involves weighing the nature and strength of the evidence that a substance presents a hazard. Dose-response assessment includes consideration of the intensity and patterns of exposure, along with the age and lifestyle of those exposed, which can affect susceptibility. Often animal responses must be extrapolated to human responses, and responses observed at high experimental doses must be extrapolated to the lower doses typical of human exposure. Exposure assessment also involves characterizing emissions: determining the magnitude and properties of emissions at the different points that result in exposure. When emissions cannot be directly measured and analyzed, modeling is used, which introduces additional error. Risk characterization is the final stage of assessment, where exposure and response are analyzed together to predict the probabilities of specific harms. It should also describe the distribution of risk across a population. Risk assessment is what we use to arrive at risk management: “Risk assessment is a set of tools, not an end in itself. The limited resources available should be spent to generate information that helps risk managers to choose the best possible course of action among the available options.”[2]
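To make the four steps concrete, here is a minimal sketch in Python of how they combine numerically. Every number in it is a made-up placeholder, and the linear no-threshold extrapolation is just one assumption among several that a real assessment would have to justify; actual work relies on agency-reviewed slope factors and exposure models.

```python
# Minimal sketch of the four NRC steps combined numerically.
# All values are hypothetical placeholders, not real assessment data.

# Hazard identification (assumed done): the chemical is causally
# linked to a health effect, so it enters the calculation at all.

# Dose-response assessment: a slope factor (extra risk per mg/kg-day),
# typically extrapolated downward from high-dose animal studies;
# a linear no-threshold model is assumed here.
slope_factor = 0.05  # (mg/kg-day)^-1, hypothetical

# Exposure assessment: chronic daily intake from drinking water.
concentration = 0.002  # mg/L in water, hypothetical
intake_rate = 2.0      # L/day consumed
body_weight = 70.0     # kg, average adult
chronic_daily_intake = concentration * intake_rate / body_weight  # mg/kg-day

# Risk characterization: combine exposure and dose-response into an
# estimated excess lifetime risk, then scale to a population.
individual_risk = slope_factor * chronic_daily_intake
excess_cases_per_million = individual_risk * 1_000_000

print(f"Chronic daily intake: {chronic_daily_intake:.2e} mg/kg-day")
print(f"Excess lifetime individual risk: {individual_risk:.2e}")
print(f"Expected excess cases per million people: {excess_cases_per_million:.1f}")
```

The structure, not the numbers, is the point: hazard identification decides whether the chemical belongs in the calculation at all, and the uncertainty in the slope factor and intake estimates should be carried through to the final characterization.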
The goal is to quantify risk more rigorously and not to over-rely on qualitative or descriptive risk assessment, which is more subjective and more prone to error. Methods like cost-benefit analysis attempt, often inadequately, to quantify risk.
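As a toy illustration of why cost-benefit analysis is both useful and contestable, consider the sketch below. The options, costs, and avoided-case counts are invented, and the hardest step, putting a dollar value on avoided harm, is precisely where real analyses draw fire.

```python
# Hypothetical comparison of regulatory options by net benefit.
# Monetizing avoided harm is the contested step in real analyses.

value_per_case_avoided = 9_000_000  # $, assumed value of a statistical life

options = {
    "no action":       {"annual_cost": 0,          "cases_avoided": 0.0},
    "emission limits": {"annual_cost": 4_000_000,  "cases_avoided": 1.2},
    "full ban":        {"annual_cost": 25_000_000, "cases_avoided": 1.5},
}

# Rank options by net benefit (monetized benefit minus compliance cost).
for name, opt in sorted(
    options.items(),
    key=lambda kv: kv[1]["cases_avoided"] * value_per_case_avoided
    - kv[1]["annual_cost"],
    reverse=True,
):
    benefit = opt["cases_avoided"] * value_per_case_avoided
    net = benefit - opt["annual_cost"]
    print(f"{name:15s} net annual benefit: ${net:,.0f}")
```

Note how the hypothetical full ban avoids slightly more harm but at a cost that swamps the benefit; that kind of trade-off is exactly what a ranking of net benefits is meant to expose.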
The ISO 31000 standards define risk as “the effect of uncertainty on objectives.” Risk management involves the “identification, evaluation, and prioritization of risks,” along with minimizing and monitoring them. Uncertainty is thus part of the very definition of risk, which suggests that risk is a probability, a predictive process; more specifically, it is a probability of certain impacts. Risk is often presented as a set of options, with risks ranked into a hierarchy of choices prioritized according to threat level.[3]
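A minimal sketch of that prioritization idea, assuming a simple probability-times-impact score (real ISO 31000 processes are more elaborate and partly qualitative, and these risks and numbers are hypothetical):

```python
# Rank hypothetical risks by expected impact (probability x consequence).
risks = [
    ("data breach",      0.10, 2_500_000),   # annual probability, $ consequence
    ("warehouse fire",   0.02, 10_000_000),
    ("supplier failure", 0.30, 250_000),
]

# Higher expected impact means higher priority for treatment.
for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:17s} expected annual loss: ${prob * impact:,.0f}")
```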
Using the NRC definition, one wants to know what is dangerous, how much of it is dangerous, how likely exposure at those levels is, and what should be concluded about those risks to inform policy. Dose-response and exposure are very important. We can now measure chemicals in the environment at minute levels, in parts per billion or even parts per trillion, but that does not mean those levels will elicit any biological response at all. If there is no plausible avenue of exposure, then there may be little to no actual risk. Dose-response data is widely available for some pollutants at different doses, but for others the effects of lower doses must be extrapolated. Uncertainty and sparseness of data are two persistent problems for risk assessment, even though improvement over time is expected. Data is thus central to risk evaluation: it should be applicable and relevant, have good coverage, be sufficient in quantity (not sparse), and be accurate. It should be evaluated with the best available science, and policy prescriptions and options should weigh all costs and benefits in some form of cost-benefit analysis. Periodic re-assessment of risks can be important. Insurance companies must constantly evaluate data in order to calculate financial risks, weather risks, crime risks, property liability risks, and so on. Accurate prediction of these risks allows them to set rates and avoid payouts that could have been prevented with better predictive risk management. “If risks are improperly assessed and prioritized, time can be wasted in dealing with risk of losses that are not likely to occur.”[4]
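Insurance pricing makes that quote concrete. A premium must at least cover the expected payout, plus a loading for expenses and uncertainty; here is a sketch with made-up numbers:

```python
# Hypothetical premium calculation for a simple property policy.
claim_probability = 0.004  # annual chance of a total-loss claim, assumed
average_payout = 300_000   # $ per claim, assumed
loading_factor = 1.35      # markup for expenses, uncertainty, and profit

expected_annual_loss = claim_probability * average_payout
premium = expected_annual_loss * loading_factor

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $1,200
print(f"Annual premium:       ${premium:,.0f}")               # $1,620
```

Set the claim probability too high and the insurer prices itself out of the market; set it too low and payouts exceed premiums. Either direction is the “improperly assessed” failure the quote warns about.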
The National Research Council recommended organizational and administrative separation of risk assessment, which is strictly scientific, from risk management, which involves policy decisions informed by science. The council defines risk management as a “decision-making process that entails consideration of political, social, economic, and engineering information with risk-related information to develop, analyze, and compare regulatory options and to select the appropriate regulatory response to a potential chronic health hazard. The selection process necessarily requires the use of value judgments on such issues as the acceptability of risk and the reasonableness of the costs of control.”[5]
Types of risk include operational risk, market risk, credit risk, asset-liability management risk, natural disaster risk, so-called climate risk (which overlaps natural disaster risk), information technology risks such as cyberwarfare, and a myriad of health, safety, and environmental risks. Risk communication, the practice of conveying risks to the public, is a related discipline, and public health and safety agencies are particularly concerned with it. Yale runs a climate change communication program that communicates the risks of climate change and researches “public climate change knowledge, attitudes, policy preferences, and behavior, and the underlying psychological, cultural, and political factors that influence them.”[6] The program also engages the public, companies, organizations, government, and media. However, it seems to carry a significant bias toward climate alarmism. Climate alarmism and the precautionary principle are closely related. As the blogger Riskmonger noted, precaution is uncertainty management, and uncertainty management is not risk management; it is risk avoidance. At the other extreme is ignoring risk, which is also not risk management; it is risk acceptance. We need a balanced approach to risk, of course.
Personal Quantification
of Risk Involves Human Psychology and Neurobiology: Risk Perception
Neuroscientists say that when we are confronted with potential harm, we are hard-wired for a fear response before our logic kicks in. This is not true in all situations, but it can and often does affect how people respond to risk. One’s risk response is based on one’s risk perception, and there is often a gap between perceived risk and real risk. The gap can be influenced by our amygdalar response system, our ‘fight-flight-freeze’ instinct. It can also be influenced by current events, media portrayals, how the issue is framed, whether we can easily choose to avoid the perceived risk, cognitive biases, how well we can control our exposure to the risk, and whether the risk is natural or man-made. We also calculate risk based on our prevailing interpretation of the facts before us. The high risk of Covid has certainly activated instinctual reactions, as it is a real danger. Our prefrontal cortex underlies our more logical approach to risk, which we need in order to override the amygdalar fight-flight-freeze system; that system evolved to protect us from very real imminent threats, which we encounter far less often in modern times than in the past.
A great book
exploring risk perception, including its cognitive and psychological aspects,
is How Risky is It, Really? Why Our Fears Don’t Always Match the Facts, by
David Ropeik. There are several risk perception factors. Ropeik notes that we
are hardwired to fear first and think second. Often when making decisions about
risks we must do so without having all the facts. When that is the case, we use
mental shortcuts that include heuristics and biases. We each have a risk
response that involves both facts and these mental shortcuts. How we view
things often depends on how they are presented to us, especially by those we
generally trust. How things are presented is often called “framing.” This is where the media and the extremism of both environmentalists and anti-environmentalists come into play, and it is why headlines and narrative control are deemed so important. Those who control the headlines and narratives
do the framing. The same data can have quite different effects on risk
perception depending on how it is presented and how amenable the audiences are
to those presentations. Trust is also an issue that can be manipulated. We can
extrapolate that to companies too. If Monsanto and Exxon are regularly depicted
as ruthless profit seekers who care little for people that might be affected by
their products, then it is easy to distrust them. People also tend to distrust
entities and situations in which they have no control or influence. Evidence
suggests that perceived lack of control in a traumatic situation leads to
higher rates of PTSD. Ropeik also notes that natural risks are tolerated more easily than human-caused risks. We worry about man-made pesticides but not about natural pesticides, some of which can be far more dangerous. Genetic engineering
is deemed unnatural and thus dangerous. Biolabs that work with pathogens are
deemed risky, especially if groups like the Organic Consumers Association
present them as having nefarious intentions verging on bioterrorism. New risks
are often deemed more dangerous than familiar ones, especially if they are
amplified by the media simply by focusing more on them. Risks that affect
children or the poor and disadvantaged are deemed more dangerous. If a risk
seems unfair, we tend to deem it more dangerous. Lack of control appears again here: we tend to distrust what we can’t control, which is perhaps why many perceive the risks from industrial activities to be more dangerous than they probably are; it’s not something over which they can exert any control. If people feel powerless, they are more likely to overstate risks than to understate them.[7]
There is also ample evidence that risk perception differs from person to person, at least for personal risk. Some people, so-called daredevils, thrive on personal risk while others avoid it. Most of us are somewhere in the middle of the spectrum in our approach to both novelty and risk.[8]
Favoring risk-taking or risk aversion can also be a function of ideology, education, and parental and social training. To some extent we tend to take on the views of those around us, and those views may widen or narrow the gap between real risk and perceived risk in either direction. Strangely, microbial parasites and gut bacteria have also been suggested as influences on our level of risk-taking. The brain’s so-called reward system is also likely involved in risky behaviors such as dangerous addictions. As I mentioned above, there are also cognitive biases like “loss aversion,” whereby we tend to want to keep what we have (stasis) rather than risk it for something potentially better (dynamism). This can produce what psychologists call the “endowment effect,” where we overvalue something we have acquired, particularly something we struggled to acquire or something tied to our evolutionary fitness. Another cognitive bias is the “negativity bias,” whereby we retain a larger bank of negative memories about things and events involving uncertainty, predisposing us toward pessimism. Adam Thierer notes in his book Permissionless Innovation that “innate pessimism and survival instincts combined with poor risk-analysis skills” influence people to distrust technology to the point of inducing “technopanics.”[9]
Similarly, the availability bias, fed by the constant availability of negative news, psychologically primes us for pessimism. Strongly biased websites and news sources that let us scroll an echo-chamber parade of eco-pessimistic stories can put both the negativity and availability biases into hyperdrive.
Folk wisdom, or folk psychology, also often deals with health, safety, prevention, preparation, and risk. Common sayings like “err on the side of caution,” “a stitch in time saves nine,” “better safe than sorry,” and even “leaves of three, let it be” regarding poison ivy are a few of many examples. We have these sayings because, long ago, someone figured out the advantages of being prepared and preventing unnecessary harm. It also seems very likely that natural selection would favor preparation and detailed knowledge about dangers. We memorize safety protocols, and sayings are a convenient mnemonic device. We are wired to survive, often through knowledge of our relationships to the specific environments we encounter. Thus, we quantify risk all the time. However, we do it both rationally and irrationally, owing to our neurobiological circuits, our logic, our social dispositions, and our psychology. Folk wisdom is often correct, but it can also be incorrect, leading to bias and even danger. Most people in a community do not have accurate and detailed knowledge of industrial and technological processes, which makes their risk assessments generally inadequate. It also makes risks easier to inflate than to deflate. Adequate risk assessment requires the assessors to be as knowledgeable as possible.
The irrationality of risk has played out recently in conspiracy theories about the dangers of 5G communications technology, which, like 3G and 4G before it, emits harmless non-ionizing radiation at levels well below those that could cause any damage. Long before the rollout began, alarmists were warning about the dangers of 5G, and even before that, many suspected long-term cell phone use of being potentially carcinogenic. As the rollout began, in some places amidst the start of the coronavirus pandemic, anti-5G activism was stoked in online groups, apparently influenced by Russian trolls, on both the right and left fringes of the political spectrum, and it culminated in the vandalizing and burning of new cell towers in Europe and elsewhere.[10] One driver was a conspiracy theory that 5G was somehow spreading Covid, or lowering immunity, or even that it was all a plot by Bill Gates and the World Health Organization to infect us so they could vaccinate us, or something to that effect. This is obviously just ignorance and fear. Dr. Eric van Rongen, vice chair of the International Commission on Non-Ionizing Radiation Protection, which sets the global guidelines for phone makers and telecommunications companies on how much radiation exposure is safe for humans, says those fears are baseless and that the dangers of 5G are equivalent to the heat dangers of having a cup of tea every two hours.[11]
People fall all along the spectrum from risk tolerant to risk averse. Daredevils are risk tolerant; others will decidedly avoid high-risk situations. Norwegian polar explorer Erling Kagge suggests that exposing oneself to risk helps make life more meaningful and helps one develop a more mindful, present approach to life. If we habitually avoid risk, we may have more regrets. Risk perception plays a part here too. Kagge notes that one of his heroes, the famed mountain climber Tenzing Norgay, didn’t die on a mountain but from lung cancer due to smoking. Thus, we may get good at mitigating one kind of risk yet fare poorly at mitigating others. Kagge seems to suggest that too much risk avoidance is a kind of laziness that may leave us with regrets and expose us to other, less evident risks.[12]
A recent article in Undark Magazine unpacks some interesting ideas about risk perception related to the January 6 Capitol riots. The article notes that research has shown that risk perception changes for those who see threats to their status or identity, and that it varies by demographics, particularly among white males, who are more willing to take risks to preserve their status or identity. Intense support, sometimes verging on fanaticism, goes to a president who speaks to their concerns and to their status and identity crises. The article cites research from 1994 led by Paul Slovic that asked 1,500 Americans how they perceive different risks. The results showed that white males differed in risk perception from white females and from both non-white males and females. In every threat category, white males perceived the risks as smaller and more acceptable than the other groups did. The researchers dubbed the finding “the white male effect.” Subsequent studies have confirmed the effect in America and suggested that differences in cultural identity, socioeconomic security, and attitudes toward egalitarianism and community are involved. Some have attributed it to white privilege, or more specifically white male privilege; more recently came the term “white nationalist privilege.” A similar study done in Sweden in 2011 showed no discernible difference between men and women in risk perception, and thus no white male effect there; equality between the sexes is thought to be very good in Sweden. However, the researchers did find that risk perception was significantly higher among Swedes of foreign origin and ethnicity, who have less privilege and less of a sense of equality in that society than native Swedes. They concluded that the white male effect observed in America was really a subset of what they proposed as the “societal inequality effect.”[13][14]
We are wired to detect and respond to threats. Our pre-logical threat circuits can be triggered easily, especially when we are in a hyped-up state, and they can be manipulated by shrewd politicians or activists of any orientation. Human rights activist, lawyer, and author Zach Norris writes about these human tendencies regarding threats and safety and about how they are manipulated. He describes a general “Us vs. Them” framework in which “they,” the perceived enemy, are typically dehumanized and compared to diseases in terms like “contagions, germs, pollutants, infections”: things we must act against to remain safe. He thinks the US has under-invested in social welfare and over-invested in punishment, and he argues for a care-based model of public safety rather than a fear-based one. I tend to agree, to a point.[15]
[1] National Research Council (US) Committee on the Institutional Means for Assessment of Risks to Public Health, 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academies Press. https://www.ncbi.nlm.nih.gov/books/NBK216628/
[8] Gallagher, Winifred, 2011. New: Understanding Our Need for Novelty and Change. Penguin Books.
[9] Thierer, Adam, 2014, 2016. Permissionless Innovation:
The Continuing Case for Comprehensive Technological Freedom. Mercatus Center at
George Mason University.
[10] Satariano, Adam and Alba, Davey, April 11, 2020. Burning Cell Towers, Out of Baseless Fear They Spread the Virus. New York Times. https://www.nytimes.com/2020/04/10/technology/coronavirus-5g-uk.html/
[11] Van Rongen, Dr. Eric (as told to Elle Hardy), June 23, 2020. I'm the scientist who sets the global guidelines on 5G safety. Take it from me: 5G doesn't cause cancer or spread COVID-19. Business Insider. https://www.msn.com/en-us/news/technology/im-the-scientist-who-sets-the-global-guidelines-on-5g-safety-take-it-from-me-5g-doesnt-cause-cancer-or-spread-covid-19/ar-BB15S0Ty
[12] Kagge, Erling, April 16, 2020. Polar explorer Erling Kagge: Why risk makes life meaningful. Big Think.