
Wednesday, April 16, 2025

How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts, by David Ropeik (McGraw-Hill, 2010). Summary & Review.


          This is a great book about the perception of risk and why there is often a gap between perceived risk and real risk. One goal of the book is to explain how we arrive at our risk perceptions; understanding this requires neuroscience, psychology, sociology, anthropology, and behavioral economics. To understand real risk, we need to be able to compare the risks of doing something with the risks of not doing it. Otherwise, we can choose actions that we think are reducing risks when they are really increasing risks from other sources. That, the author notes, is the danger of what he calls the Perception Gap, or risk perception gap.

 

Risk Response and the Perception Gap

     He first considers our response to risk, which is often divided into a dual-component system: a reasoned, rational response and an emotional, instinctive response. He says this dichotomy is false and that we should see the rational and the emotional/instinctual as one system.

“The system by which we respond to risk is remarkable, a fabulous and complex mix of neural wiring and chemistry, subconscious psychological processes and instincts, and fact-based cognitive firepower.”

The dichotomy is false, he says, because as humans we always use both approaches in combination when responding to risk. We often do not have all the facts about a risk and tend to fill in the blanks with our own ideas, which are often influenced by emotion and instinct; as humans, we often cannot separate out that component. He notes that in perceiving risk, people are neither wholly rational nor wholly emotional. We are Affective, utilizing a mix of facts and feelings about those facts. There are neurobiological and psychological components to our Affective risk response system. We have evolved to avoid immediate dangers, but we often do not consider the secondary effects of our choices about risk. He notes three hidden risks of the Perception Gap:

1) The Perception Gap can lead to risky personal behavior. As an example, he cites the many people who turned away from flying after 9/11 and took much riskier road trips instead; death tolls on the roads rose for months after the event.

2) The Perception Gap causes stress.

3) The Perception Gap can lead to social policies that don’t maximize the protection of public and environmental health. An example is that we spend more money on cancer risk than on heart disease risk, even though heart disease kills more people.

     We often make irrational decisions about risk. We also tend to not like people telling us about some risks, preferring to figure them out for ourselves.

     He says that the primordial risk response to perceived threats is the fight-flight-freeze response. This is our first response to danger, and it happens in a more primitive part of the brain known as the subcortex. In the initial milliseconds of the risk response, signals are sent from the thalamus both to the cortex, the thinking part of the brain, and to the amygdala, an older threat-response system that predates human evolution and triggers the fear-related physiological reactions of the fight-flight-freeze response. Signals from the thalamus reach the amygdala before they reach the cortex, so our fight-flight-freeze response often precedes our logical response; fear precedes thought when we encounter something that might be dangerous. When the amygdala is activated by possible danger, the neurotransmitter acetylcholine is released, initiating physiological changes and sharpening our senses. The amygdala also initiates the release of norepinephrine, which helps us remember the dangerous experience in detail so we are prepared for future events. Still more neurochemical reactions are associated with the fear response; for example, the release of glucose in the brain raises our energy levels when we encounter something fearful.

     In addition to the neurobiology of risk response, there is the psychology of risk response. He introduces the subject of Chapter 2, the concept of bounded rationality, which refers to the process by which we make judgments and decisions without complete knowledge. Thus, our rational basis for responding to risk is limited, or bounded. Our risk response is often a combination of gut feelings and incomplete rationality. On most risk issues, we are only partially informed, and we often need to make decisions without having all the facts. Thus, we often use mental shortcuts to arrive at decisions. Many of these allow us to be influenced by common biases and cognitive fallacies.

     The author notes the framing effect – how risks are presented to us influences how we respond to them. Framing may emphasize certain aspects and facts and de-emphasize or even omit others. The media does quite a bit of framing with headlines, usually in favor of their own biases. There are countless examples.  

     Another mental shortcut we often use is stereotyping, also known as representativeness or categorization. We categorize things, and the categories we have already formed affect how we perceive risk. One form of categorization is the small sample: a single study reaches a certain conclusion that influences us, despite other studies not supporting that conclusion. It is a form of “cherry-picking” data. When one study supports a view someone wants to keep, it seems not to matter if other studies refute it; people tend to latch onto whatever supports their preconceived views. As an example of extrapolating from a small sample, he gives the study that initially linked the MMR vaccine to autism, a study of just twelve subjects. Since then, numerous studies of many more people have refuted the link, and the evidence against it is now overwhelming (despite what RFK Jr. might think). Another example of categorization he gives is our perception of probability. Pure chance sometimes looks random, but at other times it appears to form a pattern that hints at a connection. Any such connection must be proven, though; it can’t just be assumed, as it often is.

“When we have to decide things in a hurry, which is often the case with risk, and the amygdala needs a little help figuring out whether something is dangerous enough to sound the alarm, categorization helps us quickly take the partial information we have and make sense of it so that we can make a snap judgment about how to stay safe.”
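
     This small-sample effect is easy to demonstrate. As a quick illustration of my own (not from the book), here is a minimal Python sketch that simulates many hypothetical “studies” in which an exposure has no real effect, and counts how often a small sample nonetheless shows a large difference between groups; all numbers are arbitrary:

```python
import random

random.seed(42)  # reproducible illustration

def spurious_link_rate(n_subjects, trials=10_000, base_rate=0.5, gap=0.5):
    """Fraction of no-effect 'studies' that still show a big group difference.

    Subjects are split into exposed/unexposed halves; outcomes are pure
    chance (base_rate), so any observed 'link' is a sampling artifact.
    """
    half = n_subjects // 2
    hits = 0
    for _ in range(trials):
        exposed = sum(random.random() < base_rate for _ in range(half))
        unexposed = sum(random.random() < base_rate for _ in range(half))
        # A gap of 50 percentage points between groups looks like a link.
        if abs(exposed - unexposed) / half >= gap:
            hits += 1
    return hits / trials

print(f"n=12:  apparent 'link' in {spurious_link_rate(12):.1%} of no-effect studies")
print(f"n=200: apparent 'link' in {spurious_link_rate(200):.1%} of no-effect studies")
```

With twelve subjects, a sizable fraction of pure-chance studies produce what looks like a meaningful pattern; with two hundred, almost none do. That is why a single small study should carry so little weight against many larger ones.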

     Another mental shortcut is known as loss aversion, related to the endowment effect. It is simply our tendency to keep what we have rather than trade it for something less familiar, even if the alternative is much better for us; we don’t like to lose something in which we have invested our time and money. Psychologist Daniel Kahneman helped develop the ideas of loss aversion and the endowment effect, the notion that mere ownership confers value even when something of better value is available. The point is that aversion to perceived loss influences our risk decisions.

     Another cognitive trick is known as anchoring and adjustment. It takes advantage of numbers, or rather, of our trouble with numbers. Anchoring refers to starting from an initial number and adjusting from there, and evidence shows that the initial number influences our risk response decisions. We are easily swayed by numbers that seem to either emphasize or downplay dangers, a fact exploited in debates and by the media.

     The awareness/ready recall effect is another mental shortcut we use to evaluate risk. This effect is related to categorization in that we often use them together.

“The greater our awareness, and the more readily information about a risk can be recalled, the greater our concern.”

     Another effect is innumeracy. If a risk is described to us in numbers, via statistics and percentages, we often don’t fully understand the numerical data and may still base our risk response on emotion, widening the Perception Gap.
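
     One practical antidote to innumeracy is translating relative risks into natural frequencies. Here is a small sketch of my own (the numbers are invented for illustration, not taken from the book): a headline says something “doubles your risk” of a rare disease, which sounds alarming until it is restated as counts of people:

```python
# Hypothetical numbers, chosen only to illustrate the framing effect of
# relative risk vs. natural frequencies.
baseline_risk = 1 / 10_000   # assumed risk without the exposure
relative_risk = 2.0          # the headline: exposure "doubles your risk"

exposed_risk = baseline_risk * relative_risk
extra_per_100k = (exposed_risk - baseline_risk) * 100_000

print(f"Unexposed: {baseline_risk * 100_000:.0f} cases per 100,000 people")
print(f"Exposed:   {exposed_risk * 100_000:.0f} cases per 100,000 people")
print(f"So 'doubled risk' here means {extra_per_100k:.0f} extra cases per 100,000")
```

“Doubled risk” and “10 extra cases per 100,000 people” describe exactly the same hypothetical fact, but the second framing usually provokes far less fear.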

     Many of us also tend to have an optimism bias, whereby we remain optimistic that things are not as risky as depicted. That is one reason many people don’t flee hurricanes or wear seatbelts, and why some think it’s OK to have one more drink before driving.

 

Risk Perception Factors

     Chapter 3, Fear Factors, addresses why some threats feel scarier than others. A simple example is all the people who stopped flying after 9/11, and the ensuing increase in traffic deaths. They traded what they perceived as a scarier risk (flying) for one that felt safer (driving) but was actually more dangerous. As flying has long been safer than driving, that particular risk response is not rational. This chapter gives 13 Risk Perception Factors.
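
     The flying-versus-driving comparison can be made concrete with a back-of-the-envelope per-mile calculation. The sketch below is my own illustration; the fatality rates are placeholder orders of magnitude (roughly in line with published U.S. figures, but not the book’s data), so treat them as assumptions:

```python
# Assumed, order-of-magnitude fatality rates (not from the book):
DRIVING_DEATHS_PER_MILE = 1.0e-8   # ~1 death per 100 million vehicle-miles
FLYING_DEATHS_PER_MILE = 1.0e-10   # ~1 death per 10 billion passenger-miles

def trip_risk(miles: float, deaths_per_mile: float) -> float:
    """Expected fatality risk for a single trip of the given length."""
    return miles * deaths_per_mile

drive = trip_risk(1_000, DRIVING_DEATHS_PER_MILE)
fly = trip_risk(1_000, FLYING_DEATHS_PER_MILE)

print(f"1,000-mile drive:  ~{drive:.0e} chance of death")
print(f"1,000-mile flight: ~{fly:.0e} chance of death")
print(f"Driving is ~{drive / fly:.0f}x riskier under these assumptions")
```

Under these assumed rates, the same 1,000-mile trip is on the order of a hundred times deadlier by car than by plane, which is why the post-9/11 shift from flying to driving raised, rather than lowered, actual risk.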

     One factor is our sense of control. When we feel we have some control over a situation, we feel safer; it is the feeling of control, rather than the actual control we have, that makes us feel safer. Another is the potential pain and suffering a situation can cause, whether realistic or not. (The fear factors are also called Risk Perception Factors.)

“Risk Perception Factors appear to be universal, but our perceptions also depend on our experience, education, lifestyle, and other factors that make each of us unique.”

     The first risk perception factor given is trust. The deepest levels of trust are between mother and child and are reinforced by the hormone oxytocin, which is involved in social bonding and tends to make us feel safe. As experiments have shown, oxytocin helps us trust others and decreases activity in the amygdala, damping its default setting of reacting emotionally to danger. We tend to trust the groups with which we are affiliated. Mistrust is not always fair to those mistrusted and is often misdirected; he gives examples. Our risk responses depend on how much we trust the sources of the risk, how the risk is communicated, the people and agencies tasked with protecting us against such risks, and whether the process seems trustworthy.

     The second risk perception factor given is ‘risk versus benefit.’ Risk evaluation always involves comparing benefits and risks, and these trade-offs must be weighed. We do it all the time, from using our cell phones while driving to choosing what to eat, drink, or breathe.

“As mentioned, Risk Perception Factors work both ways. The same factor can make us either less afraid or more. The Risk-versus-Benefit factor is no exception. The greater the perceived benefit, the smaller the risk will seem. But if there is little perceived benefit, the associated risk will feel bigger.”

     Fear Factor 3 is control. When we feel in control of a situation, we are less fearful. It’s important to note that it is not strictly being in control that does this, but the “feeling” of being in control; we need not actually have control for it to be a factor. The point is that we don’t always know how much control over a situation we really have, and this influences our sense of safety.

     Risk Perception Factor 4 is choice. If a risk seems involuntary, it is more likely to lead to fear. We like having choices, even if we sometimes misuse them. We generally don’t like others calling the shots for us. Lack of choice can make us risk-averse, and choice can make us take unnecessary risks. Thus, like other risk factors, it is a double-edged sword.

     Factor 5: natural or human-made risk? This one has also been called the ‘appeal to nature’ fallacy. We tend to trust nature and natural ingredients and to distrust synthetic ones. This tendency feeds the Risk Perception Gap: in reality, whether something is natural or synthetic has nothing to do with its safety or dangers. The main difference is that the long-term effects of natural substances have been established over time, while the effects of new chemicals are not yet known. But natural substances are not always safe; some can be very harmful. A case in point I heard about recently is that natural causes were found for two different clusters of ALS, one in Guam and one in the French Alps. The Guam cluster was traced to a chemical in cycad seeds that were ground into flour and eaten; the French one was traced to false morel mushrooms. The natural chemicals that caused the ALS years later were very similar. There are many other examples of dangerous natural substances, often herbs used for health. Fear of and opposition to genetic engineering is another instance of this factor.

     Factor 6 is pain and suffering. We do our best to avoid pain and suffering. We spend more money on cancer research than heart disease research, even though heart disease kills more people. This, he says, is likely due to the pain and suffering associated with cancer. The same is true, he says, of the aversion to nuclear energy versus other forms of energy. Many more people die from influenza every year than are murdered, but fear of influenza is not an issue for the vast majority of us. That is the reverse side of pain and suffering: we may underestimate risks from things like the flu, which we tend to see more as an inconvenience than as a threat to our lives.

     Factor 7 is uncertainty. The more uncertain we are about something, the more we tend to fear it. He gives some examples: nuclear safety; fear of riding the Metro during the Washington, D.C. sniper attacks; and what happens when we drive with our eyes closed for even a few seconds – we become alarmed because our uncertainty about what lies ahead has increased. He also gives examples where risk is overestimated due to uncertainty: the effects of growth hormones used on cows, cell tower radiation, and pesticides on food. He identifies three forms of uncertainty: 1) I can’t detect it, 2) I don’t understand it, and 3) nobody knows. He also notes that it is uncertainty that undergirds ideas like the Precautionary Principle. My own take is that this principle often unnecessarily impedes innovation while frequently failing to reduce risk or other undesirable outcomes.

     Factor 8 is given as catastrophic or chronic. He means here that we tend to focus more on catastrophic events than chronic ones that occur gradually over time. Plane crashes get more headlines than heart disease, but heart disease kills vastly more, though not at once. Any kind of mass death, such as from a mass shooter or some kind of massacre, seems to trigger our fears much more than many unconnected deaths per day from chronic diseases.

     Factor 9 is “Can it happen to me?” Any risk will feel bigger if you think it could happen to you, which is fairly obvious. He notes that initial fears about SARS and West Nile Virus subsided after the perceived likelihood of being infected dropped.

     Factor 10: Is the risk new or familiar? Again, he uses the West Nile Virus as an example. It was well known in some places, like the Middle East, but new here, although similar mosquito-borne diseases already existed here. As the disease became more familiar, the fear of it dropped.

     Factor 11 is risks to children. We are especially sensitive to dangers to our children. Cases of missing children and child abductions caused parents to be much more careful with their kids, including not letting them wander around without supervision. Yet even as the cases stoked fear, there was no real uptick in abductions; the extra fear was media-driven. In any case, risks to children elicit more fear. We set up Amber Alerts for missing children, and new threats, such as online threats to children, elicit new fears.

     Factor 12 is personification. When the media puts stories into a personal perspective, they affect us more. He quotes Stalin, himself a mass murderer, who said: “One death is a tragedy. One million deaths is a statistic.” Experiments have shown that people will donate more money to save one child than to save a group of children. When things are brought to a personal level, we feel them more. He compares a picture of a hurricane to a picture of an old woman sitting in the ruins of her storm-ravaged home; the second picture evokes more emotion.

     Factor 13, the last given here, is fairness. Our sense of fairness is likely innate, and perceived unfairness and injustice arouse our emotions. He thinks the source of our focus on fairness is likely reciprocal altruism, the evolved practice of giving up resources for others, as when we give money to a homeless person. The bottom line is that if something feels unfair, we deem it a higher risk.

     He points out that risk perception factors are often multiple, and several can coexist, influencing our risk decisions. He calls it a complex mix of internal and external ingredients.

 

Social and Media Influences on Risk Perception

     Chapter 4: The Wisdom or Madness of the Crowd? This section explores social and media influences on our fears and risk perceptions. First, he considers climate change and all the views about it from both experts and non-experts.

     “Risk perception is not just a matter of what your amygdala says. It’s not just a matter of what mental shortcuts you use to help you to make decisions. And it’s not just a matter of those psychological Risk Perception Factors. The perception of risk, it turns out, is also a matter of the fundamental ways we see ourselves as social creatures in a social world.”

     He notes that we are members of multiple social “tribes,” including gender, racial, national, political, and professional tribes, and those social groups influence how we evaluate risk. Researchers call this the cultural theory of risk or ‘cultural cognition.’ Dan Kahan writes that cultural cognition “refers to the tendency of individuals to form beliefs about societal dangers that reflect and reinforce their commitments to particular versions of the ideal society.” We develop worldviews about how society is supposed to work best, and those views influence our risk perceptions. This often happens at an unconscious level, Ropeik notes. Kahan and colleagues studied opinions about climate change in 2006/2007. They came up with four social types based on surveying 5000 people. These are hierarchists, individualists, communitarians, and egalitarians. In Kahan’s scheme, hierarchists are those who “believe that rights, duties, goods, and offices should be distributed ‘differentially’ and on the basis of clearly defined and stable social characteristics.” Egalitarians are those who think those rights, duties, goods, and offices should be distributed ‘equally’ without regard to social factors. Individualists, or less government/libertarian types, are those who believe individual concerns should outrank social concerns. Communitarians are those who believe societal concerns should outweigh individual concerns. The author aligns these to specific people in a hypothetical conversation about climate change. The notion is that the underlying worldviews of the four types can be used to accurately predict their opinions about different subjects and issues involving risk perception.

     Since humans are social animals, we also have a social risk response, Ropeik suggests.

“We are hard-wired to read in those around us any sign that could be relevant to our survival.”

We also have a tendency to adopt the beliefs of those around us, although I think that is sometimes not the case. As examples of social risk perceptions, he notes the stigmatization of GMOs in Europe and other places, of certain chemicals, such as mercury, and of companies such as Monsanto (now Bayer). He also mentions the idea of Groupthink, giving the example that group fear about terrorism led to the approval and early support of the U.S. invasion of Iraq. Another example is environmentalists dramatizing environmental risks. Thus, stigmatization and dramatization of risks are social factors of risk perception. Dramatization could be called “playing on our fears.”

     Other social risk perception factors include media sensationalism. He gives examples where a story can be manipulated to make a risk seem riskier or less risky without outright lying. Headlines can also be used to manipulate, a form of framing. He offers his own headline about media influence on risk perception: “If it scares, it airs.” He goes on to explore media biases and other ways media can influence risk perception, including the personal views of the journalists. He says the media don’t tell us what to think, but they do tell us what to think about. I think that is an accurate observation.

     The final section, Chapter 5, is titled Closing the Perception Gap. He notes that the Risk Perception Gap, the difference between perceived fears and realistic fears, is itself a source of risk, as evidenced by all the people who feared flying after 9/11 and ended up dying in auto accidents. Using your phone while driving is another risk many of us take that we shouldn’t, since people end up dead because of it. He also notes hysterical reactions to risks that can themselves increase risk. A recent example is the measles outbreak in Texas: after RFK Jr. claimed that Vitamin A can prevent and cure measles (not at all true), people went out and bought it and overused it to the point that dozens of emergency room visits occurred due to Vitamin A overdose and toxicity. I also remember a story about a man who received several hundred Covid vaccine doses; as far as I know he suffered no ill effects, which perhaps oddly undercuts claims that the vaccine was dangerous.

     He reiterates that a perceived risk causes stress, which often triggers the release of neurotransmitters associated with threat and fear. As a result, we can react irrationally to perceived risks.

     There can also be a Societal Risk Perception Gap, even among regulators. Regulators rank and compare risks and sometimes get it wrong. He gives the heart disease vs. cancer comparison here as well: heart disease is the greater risk, but cancer is feared more and given more attention by society, including the medical establishment. He also notes water fluoridation, which has been called one of the great public health successes, yet many want to ban it, including RFK Jr., because fluoride is toxic at much higher levels. There is justification for not fluoridating water where natural fluoride levels are already close to the desired amounts. He goes on to explore the Perception Gaps many people have about nuclear energy and climate change.

     He gives some suggestions for narrowing the Perception Gap: 1) keep an open mind, 2) give yourself time, 3) get more information, and 4) get your information from neutral sources. One should also know the hazards, the probabilities, and the routes of exposure in order to best evaluate a risk. There is often a lot to study and understand about risks, and he acknowledges that narrowing the Perception Gap is not easy. He asks why we can’t just have the government rise above the issues and give us the best evaluation of risk. His answer?

“We can’t. The ideal of perfectly rational government risk management policy is no more achievable than the goal of perfectly rational personal risk perceptions. We are Affective as individuals and we are Affective as a group of individuals acting socially through our government.”

     Governments use cost-benefit analysis (the U.S. government is required to do so), weighing benefits against risks to determine policy and its effects. This is generally a good idea, but it has limitations. Should we, or could we, incorporate some emotion into rational cost-benefit analysis? We are not always rational. I remember a story in which people in Australia were offered sewage wastewater treated to a high level of purity as their drinking-water source, and many were simply too disgusted to even consider it. I’m not sure I would consider it either, even though the science says it is very safe. We have evolved a sense of disgust that has likely increased our survival, since it triggers revulsion at consuming things likely to harm us.

     He mentions a paper by University of Pennsylvania law professor Matthew Adler titled “Cost-Benefit Analysis and the Pricing of Fear and Anxiety,” in which the costs and benefits of medical gloves and their potential for being defective were analyzed. Added to the costs were the costs of blood tests to determine whether disease or contamination had been transferred when gloves failed to protect medical personnel. The blood tests reveal our preference for safety, the better-safe-than-sorry argument. Cost-benefit analysis is one way we try to quantify risk rationally, but we must also consider fears and anxieties: we must understand not only the facts but also the psychology of each risk evaluation. He gives the example of flood insurance. Many people in flood zones fail to get flood insurance; he suggests this is because the perceived benefits of living where they want exceed the perceived risks of flooding. Thus, our own shorthand way of doing cost-benefit analysis, influenced by psychology, may expose us to higher risks.
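
     The flood-insurance example lends itself to a toy expected-value calculation. This sketch is my own illustration, not Adler’s or Ropeik’s analysis; every probability and dollar figure is invented:

```python
# Invented numbers for a hypothetical homeowner in a flood zone:
annual_flood_probability = 0.01    # assumed 1-in-100-year flood risk
loss_if_flooded = 200_000          # assumed uninsured damage, in dollars
annual_premium = 1_500             # assumed cost of a flood policy

expected_annual_loss = annual_flood_probability * loss_if_flooded

print(f"Expected annual loss, uninsured: ${expected_annual_loss:,.0f}")
print(f"Annual premium:                  ${annual_premium:,.0f}")

if annual_premium < expected_annual_loss:
    print("On expected value alone, buying the policy is the better bet.")
else:
    print("On expected value alone, skipping the policy is the better bet.")
```

Even with numbers like these, where the arithmetic favors insuring, many homeowners skip the policy; the intuitive weighing of perceived benefits against perceived risks that Ropeik describes is not the calculation above.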

     He addresses risk communication, another way to attempt to narrow the Perception Gap. To make better, more rational judgments about risk, people need to be better informed and better educated. We have nuclear-phobia, chemo-phobia, techno-phobia, and other situations where the fears are greater than the facts suggest they should be. In risk communication, focusing on facts alone is a recipe for failure, because it ignores the emotional, neurobiological, and psychological aspects of risk. Ropeik suggests that people need to be educated about risk in general as well as about individual risks, and that information about risks could, or perhaps should, be “framed” in ways that help assuage the emotional responses to risk. For risk communication to be effective, he says, it must also consider the perceptions, interests, and concerns of the audience, not only the facts the audience ought to know. Each risk evaluation involves emotions, preconceived notions based on past experience, and all the biological and psychological factors discussed above. Communicators should therefore take care to understand, and accurately predict, how the public will respond to risk communication.

 
