Confirmation bias in science

Alex Holcombe
May 21, 2021


It’s easy to get attached to your own ideas, and react defensively when someone challenges them. Four hundred years ago, Francis Bacon recognized that this human foible was a major obstacle in the quest for knowledge.

Francis Bacon. Image: public domain.

In his book Novum Organum, Bacon wrote:

The human understanding, when it has adopted an opinion, forces everything else to add fresh support and confirmation; and although more cogent and abundant instances may exist to the contrary, yet it either does not observe them or it despises them, or it gets rid of and rejects them by some distinction, with violent and injurious prejudice, rather than sacrifice the authority of its first conclusions.

In other words, people are biased against evidence that undermines their pre-existing beliefs. They tend to dismiss such evidence and focus instead on things that confirm their beliefs. With his discovery of the phenomenon that today we call confirmation bias, Bacon had put his finger on a major problem for progress. With confirmation bias unchecked, how could society efficiently gain new knowledge?

Bacon argued that people should concentrate on systematic observation and experimentation. This seeded the blossoming of modern science in the 17th century. Bacon thought that once his philosophy was sufficiently developed, it would “open up and lay down a new and certain pathway from the perceptions of the senses themselves to the mind.” This never came to fruition, but many scientists believed that it had. Since Bacon, researchers have often assumed that their beliefs are not important, because their methods, they thought, provided a reliable recipe for the steady advancement of knowledge.

Unfortunately, the effects of confirmation bias are too insidious to be easily overcome. Indeed, these biases have continued to contribute to false findings in every corner of science.

Every physics student learns about the oil-drop experiment that Robert Millikan began in 1909 to measure the charge of the electron, work that helped earn him the Nobel Prize in Physics in 1923. While the experiment was a triumph in that it showed that electric charge comes in discrete units and gave the first precise value for that unit, the number that Millikan got was not exactly right. As the physicist Richard Feynman explained:

It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of — this history — because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong — and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that.

In other words, subsequent researchers believed that Millikan’s number was basically right, so confirmation bias kicked in and they finessed their own results to favor Millikan’s number.

Measurements of the charge of the electron after Millikan were each slightly off, biased toward Millikan’s too-low estimate. Image: Anonymaus, CC-BY-SA.
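
To make the mechanism Feynman describes concrete, here is a toy simulation in Python. It is not Millikan’s data or analysis; the true value, the anchor value, and the rejection rate are all invented for illustration. Measurements that land well above a trusted earlier value get “double-checked” and often discarded, while lower ones are accepted without question.

```python
# Toy simulation of confirmation-biased data rejection. Illustrative only:
# the numbers are invented and this is not Millikan's actual procedure.
import random

random.seed(1)

TRUE_VALUE = 1.602   # pretend "true" value of the quantity being measured
ANCHOR = 1.591       # earlier, slightly-too-low published value
NOISE = 0.010        # measurement scatter (standard deviation)
N = 1000             # number of raw measurements

measurements = [random.gauss(TRUE_VALUE, NOISE) for _ in range(N)]

# Unbiased analysis: keep every measurement.
unbiased = sum(measurements) / len(measurements)

# Biased analysis: results far above the anchor get scrutinized and are
# usually discarded, while results near or below the anchor are kept.
kept = [m for m in measurements if m < ANCHOR + NOISE or random.random() < 0.3]
biased = sum(kept) / len(kept)

print(f"true value        : {TRUE_VALUE:.4f}")
print(f"unbiased estimate : {unbiased:.4f}")
print(f"biased estimate   : {biased:.4f}  (dragged toward the anchor, {ANCHOR})")
```

Even though every individual measurement is drawn without bias, the selective rejection drags the average toward the earlier value, producing the slow creep visible in the figure above.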

This sort of problem is not limited to physics, of course. In almost any cutting-edge research area, there are many ways a study can go wrong. New equipment and new procedures tend to be unreliable, and when the equipment is broken or the procedure not yet refined, the data should be thrown out. The problem is that it is not always easy to tell whether an unexpected result reflects a failed experiment, caused by faulty methods, or a successful one.

Because of confirmation bias, researchers will always be tempted to attribute data that don’t fit their theory to bad data rather than to a wrong theory. Being a good scientist means taking precautions to reduce the influence of confirmation bias.

The above (CC-BY) is a draft of a section of our “Good science, bad science” class at the University of Sydney.
