Cognitive Errors and Selection Effects

Cognitive Biases

In the late 1960s and throughout the 1970s, the field of social psychology, which is closely connected with areas of economics, was rocked by the research of Daniel Kahneman and Amos Tversky [NewYorkerKahnemanTversky]. They carried out a series of experiments showing that humans do not make decisions in the “rational” manner that economists had assumed. Their research led to Kahneman winning the Nobel Memorial Prize in Economics (which he would have shared with Tversky had Tversky not died young), and spurred rapid growth in the fields of social psychology and behavioral economics.

Economic theory had often assumed that humans are “rational actors”, making economic, financial, and other decisions based on an evaluation of the probabilities of events.

Kahneman and Tversky carried out experiments that demonstrated the existence and widespread presence of “availability bias” – roughly, the tendency to think that an event is more frequent or important if you can recall examples of it more readily. Their experiments also revealed several other cognitive biases, and inspired other social scientists to search for more.

There are many classic examples, some of which you can easily try out with a group of friends, in a classroom, or at a party. Here are a few:

  • Ask this question in a way that encourages a rapid response: “Are there more earthquakes every year in California or in the United States?” People who follow US media will often answer “California” because of the frequent mention of earthquakes on the San Andreas fault. After a bit of time they will say “… Ahhhh, I see, of course not” once they realize that California earthquakes are a subset of US earthquakes.

  • Have everyone in a room write down the last 3 digits of their mobile phone number. Then ask them to write down their guess of how many nations are “member states” of the United Nations. People’s guesses will mostly be incorrect, and those who have just written down a larger number from their mobile will tend to guess higher, while those with a lower number will tend to guess lower (a classic demonstration of what psychologists call anchoring).

  • Ask people who were children in the 1970s and 1980s whether there are more violent crimes today or when they were kids. Then compare their answers to the FBI’s violent crime statistics.

Daniel Kahneman wrote in his famous book Thinking, Fast and Slow:

People tend to assess the relative importance of issues by the ease with which they are retrieved from memory – and this is largely determined by the extent of coverage in the media.

This is related to other, sometimes humorously described, phenomena such as the Matthew Effect: if someone’s name is repeated often, they are more likely to get credit even for work that was done by others.

Cognitive biases are tenacious and very difficult to avoid. Many of Kahneman and Tversky’s early experiments were carried out at psychology conferences, so their subjects were highly trained in understanding how minds work. Yet they were still susceptible to these biases. Nobody should think that their awareness of cognitive bias makes them immune from its effects, but awareness does help in designing methods and checks that look for such bias in important decisions.

Selection Effects and the “Hidden Prior”

Experienced researchers pay close attention to the sample, the group of people who participate in an experiment, from which a conclusion is drawn. Disaster lurks in ignoring how the sample was selected.

A classic case study comes from the 1936 US presidential election. A magazine called Literary Digest polled 10 million individuals, of whom 2.27 million responded. This is an astonishingly large sample.

The result of the poll was that the Republican candidate, Alfred Landon, would win the election by a significant margin. Instead the election was won in a landslide by the Democratic candidate, Franklin Delano Roosevelt.

At the same time, researcher George Gallup used a much smaller sample of 50 thousand people. He predicted a strong victory by Roosevelt.

As a skeptical thinker you should be ready to ask the next question: Who were those 2.3 million people? And who were those 50 thousand people?

It turns out that the 2.3 million respondents were (a) subscribers to the magazine, (b) registered automobile owners, and (c) telephone users. These lists made the sample easy for the Literary Digest to assemble, but this laziness in selection produced a very biased sample. In the midst of the Great Depression, only relatively wealthy Americans could afford to subscribe to a magazine, let alone to own a car or have a telephone line. The result was a skewed sample that did not resemble the country as a whole.

Gallup’s 50 thousand people were carefully chosen to be a representative sample of the entire United States voting population, containing people from different economic and social backgrounds.
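
To make the selection effect concrete, here is a minimal simulation sketch in Python. All of the numbers (the fraction of “affluent” households and their voting preferences) are invented for illustration, not historical estimates; the point is only that a huge but skewed sample can be far less accurate than a small representative one.

    # A minimal sketch of selection bias in polling; all numbers are invented.
    import random

    random.seed(1)

    def make_voter():
        """Return (is_affluent, votes_for_roosevelt) for one simulated voter."""
        affluent = random.random() < 0.30           # invented fraction
        p_fdr = 0.35 if affluent else 0.75          # invented preferences
        return affluent, random.random() < p_fdr

    population = [make_voter() for _ in range(1_000_000)]
    true_share = sum(fdr for _, fdr in population) / len(population)

    # "Literary Digest" style: a huge sample drawn only from affluent voters
    # (the magazine-subscriber / car-owner / telephone-user slice).
    digest_sample = [fdr for affluent, fdr in population if affluent][:200_000]
    digest_share = sum(digest_sample) / len(digest_sample)

    # "Gallup" style: a much smaller sample drawn from the whole population.
    gallup_sample = random.sample(population, 2_000)
    gallup_share = sum(fdr for _, fdr in gallup_sample) / len(gallup_sample)

    print(f"true Roosevelt share:        {true_share:.1%}")
    print(f"huge but skewed sample:      {digest_share:.1%}")
    print(f"small representative sample: {gallup_share:.1%}")

In this toy model the skewed sample misses the true result by tens of percentage points, while the small random sample typically lands within about a percentage point of it.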

So, when you see a report of a study always ask the next question. Often that question will be one that looks out for selection effects.

One of the examples in Chapter Research examples, Section Seychelles Islands vaccine mystery, is a highly topical one as I write this section (2021-06). While there is no definitive answer for the evolving and unclear situation, you can still apply critical thinking skills to look for selection effects that are not clear from some of the headlines.

Another example is given by Julia Galef, co-founder of the Center for Applied Rationality, in a thought experiment she describes in an online lesson at [JuliaGalefBayesianThinkingVideo].

She describes how a certain approach to statistics, called Bayesian statistics, can help one understand inferences better. In her example, she imagines a university campus which has only math PhD students and business school students.

She then imagines that you see a student acting shy, and she asks the question: “Would you guess that this is a math PhD student or a business school student?”

The initial guess would be a math student: many people think that mathematicians are more likely to be shy and less socially outgoing than most other students, especially business students.

However, if you are trained in critical thinking you ask the next question: “How many math PhD students are there at that university, and how many business school students are there?” Once you account for those numbers you might get a very different result. If there are 1,000 business school students and only 3 math PhD students, the probability that the shy student is from the math PhD program is greatly reduced.

Julia calls this unstated ratio of math to business students a “hidden prior”: information that can strongly influence the accuracy of your inference, and that had not been noted. She also shows some simple useful drawings to illustrate this.
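
The arithmetic behind the hidden prior is just Bayes’ rule, and it can be written out in a few lines. In this sketch the shyness rates (how likely a math PhD student or a business student is to come across as shy) are invented for illustration; only the 3-versus-1,000 head count comes from the example above.

    # Bayes' rule applied to the campus thought experiment.
    # The head counts come from the text; the shyness rates are invented.
    n_math, n_business = 3, 1_000
    p_shy_given_math = 0.75        # assumed: most math PhD students seem shy
    p_shy_given_business = 0.15    # assumed: some business students seem shy too

    # The "hidden prior": the chance a randomly met student is a math PhD student.
    p_math = n_math / (n_math + n_business)
    p_business = 1 - p_math

    # Bayes' rule: P(math | shy) = P(shy | math) * P(math) / P(shy).
    p_shy = p_shy_given_math * p_math + p_shy_given_business * p_business
    p_math_given_shy = p_shy_given_math * p_math / p_shy

    print(f"prior P(math)           = {p_math:.4f}")            # about 0.003
    print(f"posterior P(math | shy) = {p_math_given_shy:.4f}")  # below 0.02

Even with a strong stereotype built into the assumed shyness rates, the tiny prior keeps the posterior under two percent: the shy student is almost certainly from the business school.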

Another classic and simple example comes from astronomy. If you look up at the sky at night and count all the stars you can see with the naked eye, you might conclude that the galaxy contains only about as many stars as you can see. The reason you might reach this conclusion is a strong selection effect: you are only counting the stars you can see! If a star is far away, then for you to be able to see it, it has to be very bright, so your sample overcounts the bright stars.

Astronomers put a lot of effort into modeling and understanding these effects, and the result is the knowledge that there are many more dim stars. They are harder to see, so we have to come up with clever methods to estimate their numbers correctly.
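
A small simulation can show how a brightness limit skews a sample. This is only a sketch with invented numbers (a two-class mix of intrinsically dim and bright stars at random distances), not a realistic model of the galaxy.

    # Sketch of a brightness-limited star count; all numbers are invented.
    import random

    random.seed(2)

    def make_star():
        """One star: (intrinsic luminosity, distance in arbitrary units)."""
        luminosity = 100.0 if random.random() < 0.1 else 1.0   # 10% are bright
        distance = random.uniform(1.0, 100.0)
        return luminosity, distance

    stars = [make_star() for _ in range(100_000)]

    # Apparent brightness falls off as 1/distance^2; the naked eye only
    # detects stars whose apparent brightness is above some threshold.
    threshold = 0.01
    visible = [(lum, d) for lum, d in stars if lum / d**2 > threshold]

    frac_bright_all = sum(lum > 1.0 for lum, _ in stars) / len(stars)
    frac_bright_seen = sum(lum > 1.0 for lum, _ in visible) / len(visible)

    print(f"bright stars in the full population: {frac_bright_all:.1%}")
    print(f"bright stars among visible stars:    {frac_bright_seen:.1%}")

Even though intrinsically bright stars are a small minority of the population, they dominate the visible sample, which is exactly the overcounting described above.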

Causation Versus Correlation

An often-discussed cognitive error is that of confusing correlation with causation.

Some classic examples are:

  • Rainfall and the sale of umbrellas often increase and decrease together. You might conclude that umbrella sales cause rain.

  • Patients go to doctors and demand antibiotics for viral infections. Some doctors do not fight these demands and just fill out the prescriptions. The patient takes the medicine, notices that the infection improves, and concludes that the antibiotics caused the improvement (they did not: antibiotics do not affect viral infections). Of course, what was really at work is that the infection would have faded on its own, whether the patient took the medicine or not (a small sketch of this appears at the end of this subsection).

  • Another classic phrase: “Every time that rooster crows, the sun comes up. That rooster must be very powerful and important!”

  • In the past, eclipses of the sun have been treated as dark omens. The ancient and medieval stories of eclipse coincidences are interesting, but sometimes hard to verify authoritatively. For example, there is a NASA page on solar eclipses in history that claims that the English eclipse of 1133 CE was seen as an omen that King Henry I, son of William the Conqueror, was about to die. In reality the king died in 1135, so this story has been distorted in various ways. Still, even if Henry I had died the day after the eclipse, his death was caused by eating too many lampreys, not by the solar eclipse. Clearly a case of the causation fallacy.

    On the other hand, another medieval eclipse legend has it that the son of Charlemagne, emperor Louis the Pious, died due to the terror he felt after witnessing the eclipse in 840 CE. In this case the eclipse would have actually caused his death!

    We will discuss the King Henry I solar eclipse further in Chapter Research examples, Section The King Henry I Eclipse.

  • A more subtle example is given in an xkcd comic: https://xkcd.com/552/

This fallacy comes up often enough that the Latin expression “post hoc ergo propter hoc,” meaning “after this, therefore because of this,” is sometimes used in English.
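
Coming back to the antibiotics example, here is a toy sketch of why the post hoc trap is so convincing. The symptom curve below is invented, and the “medicine” in this model does nothing at all; the patient simply starts it on the worst day of the illness.

    # Toy model of post hoc reasoning: a viral infection that fades on its own.
    # The "medicine" has no effect whatsoever on the course of the illness.

    def severity(day):
        """Invented symptom severity (0-10) for a typical short viral infection."""
        course = [2, 5, 8, 9, 8, 6, 4, 2, 1, 0]   # days 0..9
        return course[day] if day < len(course) else 0

    start_medicine = 3   # the patient starts the useless medicine at peak symptoms

    for day in range(10):
        note = "  <- starts medicine" if day == start_medicine else ""
        print(f"day {day}: severity {severity(day)}{note}")

    # Severity drops steadily after day 3 whether or not anything was taken,
    # but a patient who began "treatment" on day 3 will credit the medicine.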

Confirmation Biases

Confirmation bias is a tendency of the mind to arrange facts to fit a preconception that we have. One classic, almost caricatured, example: if we are predisposed to think that someone is a bad person, we will see them rescue a puppy and assume that they are doing it with some underlying dark intention. If we look hard enough, and are liberal with our interpretations and facts, we can find details that seem to show that bad motives were present.

Confirmation bias comes up frequently in medical care, where it is often responsible for incorrect reporting of treatment success.

Steve Hartman [Hartman2009IneffectiveTreatmentsSeemHelpful] gives a review of how causation errors and several other cognitive fallacies come up in medical treatment, including discussions of causation, of confirmation bias, and of how confirmation bias relates to the phenomenon of cognitive dissonance. For each of these he gives a description of the phenomenon, a presentation of how it comes up in medical treatment, and sometimes a discussion of the evolutionary psychology aspects of the distortion.

For our purposes, as we train to be critical thinkers, we obviously need to be aware of cognitive fallacies.

What kind of mind set avoids these cognitive problems?

In a TED talk [JuliaGalefTEDxPSU] Julia Galef gives an analysis of the “Affaire Dreyfus,” a political scandal that rocked French military and political life from 1894 to 1906. Dreyfus, a French military officer, was sentenced to life in a penal colony in South America for passing secrets to German agents.

He was innocent, and his innocence was demonstrated clearly, but the combination of prejudice and biases in examining the case resulted in his serving 5 years of his sentence. He was eventually exonerated and able to resume his military service.

The Affaire Dreyfus is a vast cocktail of poor thinking and poor behaviors (including antisemitism) by the professionals involved in handling it. It illustrates a remarkable number of cognitive biases and other errors. However, a different character emerges from the affair: Colonel Marie-Georges Picquart, who investigated the espionage case and found the true culprit. Picquart was as prejudiced as his contemporaries, but his mind set in the investigation was driven by a desire to find out what had really happened, even if that led to exonerating a man he did not like.

In her TED talk Galef contrasts two mind sets: that of the soldier, driven to force a victory for their cause, and that of the scout, driven to learn the real layout of the situation - a situation where curiosity, rather than loyalty, makes you excel. Picquart is portrayed as an example of the scout mind set.

Galef’s “scout mind set” is her proposal for how to do careful inquiry, escaping from the various cognitive defects and avoiding biases. She points out that awareness of bias barely helps with overcoming bias, similar to Kahneman and Tversky’s conclusion. The process of entering a scout mind set is a deep reimagining of one’s identity: what type of person you are and what type of person you want to be. Are you someone who whole-heartedly fights for what you believe at the moment, or are you someone who seeks justice and truth (even if they might contradict your current opinions)?

In an interview on Vox [GalefMatthewsInterviewVox], Galef points out that choosing to identify as the person who “always has the right answer” is counterproductive. One can, instead, identify as the person who “can admit being wrong” and who distinguishes between different levels of uncertainty in their beliefs.

Implicit Bias and Underrepresentation

Implicit Bias

If you knowingly reach conclusions about someone based on their membership in a group, that is called an explicit stereotype. For example, you might say: “One of the authors of this book has an Italian last name; he probably appreciates good food”.

All of this might seem benevolent and to be “all good fun”, and certainly in that example nobody is getting hurt.

If the stereotype is “One of the authors of this book has an Italian last name; he is probably not a hard worker”, then damage can occur due to unfair treatment which stems from incorrect conclusions.

These examples of explicit stereotypes can be corrected with training and adjustment of mind set.

In 1995, researchers Mahzarin Banaji and Anthony Greenwald identified a type of bias that we experience before we start thinking about a situation: implicit bias. Implicit bias is an immediate reaction to a person’s appearance, voice, or other facets that place them in a group [banaji2016blindspot]. They also created the Implicit Association Test [BanajiProjectImplicit], which anyone can take. It uses timing information on how long it takes you to answer certain questions, and estimates whether you have an implicit bias toward certain groups of people.
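
As a rough illustration of how a timing-based test can be scored, here is a simplified sketch. It is not the actual scoring procedure of the Implicit Association Test (the real test uses practice blocks, error penalties, and latency trimming); it only shows the core idea of comparing average response times between two pairing conditions, using invented response times.

    # Simplified sketch of timing-based scoring, loosely inspired by the IAT.
    # The response times (milliseconds) are invented for illustration.
    from statistics import mean, stdev

    compatible = [620, 580, 640, 600, 590, 610, 570, 630]     # pairing felt "easy"
    incompatible = [760, 820, 700, 780, 750, 810, 730, 770]   # pairing felt "hard"

    # Effect size: difference of mean latencies divided by the standard
    # deviation of all latencies from both conditions combined.
    combined_sd = stdev(compatible + incompatible)
    d_score = (mean(incompatible) - mean(compatible)) / combined_sd

    print(f"mean compatible:   {mean(compatible):.0f} ms")
    print(f"mean incompatible: {mean(incompatible):.0f} ms")
    print(f"d-score:           {d_score:.2f}")   # larger means a stronger association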

Implicit bias is insidious: it has an effect on how we hire people, select students, award grants, and take other actions that affect people’s lives. As you might have guessed by now, awareness of implicit bias does not help much in overcoming it.

A variety of techniques have been developed to try to reduce implicit bias, with various degrees of success.

For our purposes the main things we need to be aware of are:

  • Any bias based on “belonging to a group” is damaging at many levels. The biggest aspect of the matter is the social justice concern, but lesser aspects of it are still important. For example, we could exclude a good fraction of capable people from positions where they can make important contributions.

  • The cure has not yet been shown to be better than the illness. At this point in our journey as critical thinkers we are well-trained to be skeptical of breezy corporate personnel initiatives. If a company holds a one-day seminar which attempts to eliminate implicit bias, we will not be surprised to notice that the seminar involves a polished speaker, glossy brochures and hand-outs, and mild phrases that discuss problems without being provocative. These are the types of remedies that have been shown to have no effect.

  • We also know that serious, in-depth, and sometimes intrusive work can improve a scholar’s mind set, so we do not throw up our hands in frustration and say that nothing can be done – we simply prepare for hard work.

Underrepresented and Underserved Groups

One of the consequences of implicit bias is that it makes it harder to tackle the problem of underrepresentation in the more lucrative and fulfilling professions, such as research and science/engineering careers.

Note that most of the underrepresentation we deal with is based on gender and race, although other aspects do exist. I might occasionally give examples from one type or another, but they often apply to all types.

A comment by Lorena Barba (video at [BarbaFrustrationDiversityEffortsInSTEMVideo] and slides at [BarbaFrustrationDiversityEffortsInSTEMSlides]) is key here:

Occupational gender segregation is one of the leading factors in the wage gap, and therefore desegregating high-paying, high-demand occupations is a social-justice concern.

(The same goes, of course, for occupational race segregation.)

Most of the attempts at removing bias-based obstacles in the path of underrepresented groups start too late. When you apply to a university, or for a job, there are often careful safeguards in place to make sure that you are not passed over due to bias. Or rather, there were: with recent Supreme Court decisions on affirmative action, those safeguards have become more and more unstable.

Much of this is too late anyway, because of the role of implicit bias in earlier stages of screening for student aptitudes.

An example on the gender side of things: in late elementary school, students are often encouraged to start taking computer programming courses. The students sent this way are almost always boys, and the difference starts here. Implicit bias enters the picture from many different angles. One example is the frequent feeling that boys are spending too much time on video games, and that a programming course will give them a better use for their computer. Meanwhile, girls are often seen as more gifted in the artistic fields and can be told not to bother with the complexity of math and computers.

Other implicit biases have teachers correcting very young boys and girls for different types of failures, with the suggestion that girls are trained to be more risk-averse, and thus to avoid areas of very high complexity.

A few years later, this body of students has moved on, say from 6th grade to 12th grade, and formal methods are now in place to make sure that young women are given career advice that is as valid as what is given to young men. However, by then the spirit of tinkering has been largely lost. University Engineering faculty have to work to compensate for years of difference in the amount of engineering training.

For our purposes, as critical thinkers, we want to develop a few reactions to this deeply flawed situation of underrepresentation in many exciting careers. We want to feel the social justice aspect of it deeply: all people should have the same access to lucrative and fulfilling careers. We also want to question easy statements about solutions to these problems. We need to intervene early on to make sure that all the needed encouragement is given to people from underrepresented groups. Don’t worry: the people from entitled groups will still get those opportunities and they will do just fine!

A final note on bias in the workplace: I mentioned that once we enter the professional workforce we have professional norms and human resources departments with policies and procedures. And yet, the situation is not rosy even then: there are persistent efforts to belittle, and even to try to eliminate, diversity initiatives.

A portion of Lorena Barba’s talk (mentioned above) is a discussion of the notorious 10-page anti-diversity manifesto circulated by a Google employee in 2017. This shows that even in our day we still have highly educated and well-employed people who feel the urge to fight against diversity efforts.

Groupthink and Red Team Exercises

Groupthink is an insidious psychological phenomenon in which a group of people start making irrational or dysfunctional decisions because they are trying to prioritize harmony and getting along with others above all else. It’s easy to go with the flow of the people around you and default to agreement, but that is a sure way to fall into beliefs you might not hold if you were to scrutinize them.

The term is inspired by mid-20th-century dystopian political fiction: Orwell’s novel 1984. The novel described deliberate brainwashing of the population by an authoritarian leadership. While the psychological phenomenon of groupthink is not always deliberate, or forcefully imposed, it still results in a similar conforming of views.

Research psychologist Irving Janis applied his theory of groupthink to study a whole collection of policy fiascos, including Nazi Germany’s decision to invade the Soviet Union, the Bay of Pigs invasion, and the escalation of the war in Vietnam.

Historians and social psychologists have written extensively about this, and debate the role of groupthink in these events. A broad and deep reading of this subject can be thought-provoking and rich in insight.

More recently, groupthink undertones can be found in online communities, specifically in memes. The term “meme” was coined by the British evolutionary biologist Richard Dawkins in his 1976 book “The Selfish Gene”. Though the book was published long before the emergence of internet culture, Dawkins used the term to describe an idea, behavior, or cultural practice that spreads from person to person through imitation. Coming from the Greek word “mimema,” “something imitated,” the term has since been adopted, beyond its original use, to refer to rapidly spread or replicated images or jokes. Often, this rapid exchange of information at the massive scale of the internet can result in the reinforcement of existing biases, echo chambers, and the suppression of dissenting opinions. Is a meme of a cute kitty going to make you part of groupthink? No, of course not. However, darker recesses of the internet may use memes, information that screams “Copy me! Imitate me!”, to create groupthink-driven environments. When memes venture into the dark and begin to blur fact and satire, danger can arise.

Critical thinkers need to be aware of the possibility of groupthink when they see anything that is the result of a group of people pondering an issue. They should look beyond the surface and make sure that the group was chartered with explicit awareness of this danger.

To avoid the dangers of groupthink, the United States Department of Defense created the idea of “red team exercises”. A “red team” plays the role of an opposing side, fully immersed in the opposition’s goals and not influenced by one’s own side. Red teaming is now used in many areas of military and cybersecurity activity, as well as in some scientific projects, where a separate team is created to look for problems with a scientific software architecture.

Red teaming is important, but we should always ask what comes next. A breezy statement of “we have red-teamed this …” does not mean that we are safe. Management chains will often send the results of a red team to an upper layer of management without emphasizing the problems that were discovered.

Two vivid examples of red team exercises that were ignored are a 1932 simulation of an attack on Pearl Harbor, and some of the results from a review of security at Boston’s Logan Airport conducted before the 9/11 terrorist attacks – the airport from which two of the hijacked 9/11 airplanes departed.

The ways in which red teams get ignored are illustrated by one whistleblower’s testimony to the US Congress during the 9/11 hearings:

The bottom line of FAA’s response to its Red Team findings is that the Red Team was gradually working its way out of a job. The more serious the problems in aviation security we identified, the more FAA tied our hands behind our backs and restricted our activities. All we were doing in their eyes was identifying and “causing” problems that they preferred not to know about. [DzakovicNineElevenTestimony]

Red team exercises can provide a solution to the dreadful problem of groupthink, but, like anything else, they can only be effective if executed properly.