I was inspired by a recent video from Sixty Symbols, a physics YouTube channel run by Brady Haran, who also runs other popular educational channels like Numberphile and Periodic Videos. In this video, Brady talks to Nottingham University physicist Philip Moriarty about the nature of the scientific method. Moriarty casts doubt on there being any one “scientific method”, since the real process is far more complicated than that. They also talk about Popperian falsification, which, they argue, is an insufficient way of demarcating science from non-science. I can personally vouch for Moriarty’s account of the philosophy of science, though he explains it better than I could. It’s 15 minutes well spent, by my assessment.

This is based on a paper in The History and Philosophy of Physics by Sean Carroll, one of my personal favorite science communicators. Carroll has been a critic of falsifiability as a necessary criterion for science for quite a few years now. In the paper, he raises the issue by distinguishing different types of “unfalsifiability”: there is a difference between a claim that could not be falsified even in principle and other unfalsifiable situations, such as experiments that humans could not practically perform, or theories where we could plausibly test some predictions but not others.
Some of this philosophy is integrated into his 2016 book, The Big Picture (I know you read this one, Johno). In the book, Carroll describes what he calls a Bayesian approach* to constructing models of the universe. We never disprove a hypothesis outright in a binary sense; rather, as new evidence arrives, some models turn out to account for the data better than others, and, as always, science remains provisional. Ultimately, our scientific view of nature comes out messier and more philosophical than the empirical, rigorous picture we might prefer.
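To make that Bayesian language concrete, here is Bayes’ theorem written for credences in a model. This is just standard Bayes’ rule as a sketch of the updating Carroll describes; the symbols M (a model) and D (new data) are my own labels, not notation taken from the paper or the book:

\[
P(M \mid D) = \frac{P(D \mid M)\,P(M)}{P(D)}
\]

Here P(M) is your prior credence in the model, P(D | M) is how well the model predicts the data, and P(M | D) is your updated credence. Unless a model assigns the data a probability of exactly zero, its posterior credence never vanishes outright; it just bleeds credence to rival models that account for the data better, which is exactly the non-binary picture above.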
An excerpt from the paper:
But it is obviously true that our credences in scientific theories are affected by things other than collecting new data via observation and experiment. Choosing priors is one obvious example, but an important role is also played by simply understanding a theory better (which is the job of theoretical physicists). When Einstein calculated the precession of Mercury in general relativity and found that it correctly accounted for the known discrepancy with Newtonian gravity, there is no doubt that his credence in his theory increased substantially, as it should have. No new data were collected; the situation concerning Mercury’s orbit was already known. But an improved theoretical understanding changed the degree of belief it made sense to have in that particular model. When Gerard ’t Hooft showed that gauge theories with spontaneous symmetry breaking were renormalizable [25], the credence in such models increased dramatically in the minds of high-energy physicists, and justifiably so. Again, this change was not in response to new data, but to a better understanding of the theory.
The punchline is that falsification might just be a convenient demarcation criterion that doesn’t really get to the heart of science and empirical epistemology. It’s nice to have a rigorous criterion to make the decision for us, but at some point it becomes more of an arbitrary construct than a useful tool. Really, we should simply focus on how best to account for the data we have and forget about falsification. For those who don’t have access to the paper, here is a blog post of his on the topic.
What do you think?
*I am tempted to call it a quasi-Bayesian approach, since assigning prior probabilities to whole scientific models, rather than to rigorously defined probabilistic outcomes, seems somewhat arbitrary, and integrating Bayesian statistics into model-building seems like the wrong epistemological tool. Nonetheless, we do develop models of the world that update in a Bayesian-like manner when new evidence is introduced, so I’m not entirely opposed to Carroll’s Bayesian language here.