I appreciated much of the feedback from my last post on contextualizing science, particularly the one Jonathan chose to highlight.
Some of the feedback I received from that piece was a bit less than charitable. Aside from the comments from nazi goats, everything was mostly fair within this blog’s comment sections. When it was shared elsewhere, though, it was criticized as “postmodernism” (I am not a postmodernist, though I don’t think that movement is quite as bad as Enlightenment skeptics make it out to be). “Postmodernism” gets used as a snarl word to imply moral relativism, or to imply that the person in question doesn’t believe in objective reality, but there is a lot more to the movement than that. Apparently, acknowledging that cultural and societal forces affect scientific discovery is enough to land you in the pot of postmodernist dismissal.
It seems fairly self-evident to me that cultural and political forces can influence scientific data, even in hard sciences, but perhaps it’s worth digging into just a little more.
I got quite a bit of eye-rolling at my use of the term “contextualization”. First of all, I think Anne Fenwick’s comment (the same one linked above) gives a beautiful example of how cultural forces can shift data. In one sense, the data is objective: there is an objective metric for how a test-taker performs on an IQ test, and the number that gets churned out at the end objectively reflects how well that test-taker performed. In another sense, the test doesn’t necessarily reflect what it is ostensibly designed to measure.
Psychology, of course, is a softer science than chemistry or physics. Do these data require contextualization? I’d say perhaps, and I’d ask that you try not to reject that out of hand.
The data is, of course, data. But data can still reflect the society it is produced in. First, though, let’s explore a bit what it means to contextualize data.
You may perform an experiment and get some number out of it, which is fantastic. But the number on its own is meaningless. If you put some error bars on the number, that’s better, since it lets the reader know how much uncertainty there is in that measurement, but it still doesn’t say much. Once you put the number on a graph or a chart and apply labels and units, it starts to make a lot more sense. If you put the number up alongside some other numbers, you may see a trend. Apply some statistical tests, and you’ll see how significant that trend is. You can see how important each of these layers is to communicating the data from the person who performed the experiment to another researcher.
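To make that layering concrete, here is a minimal sketch in Python. The measurements are entirely made up (the temperatures, yields, and uncertainties are purely illustrative); the point is only to show error bars, labels and units, and a significance test being stacked onto a bare set of numbers.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Made-up measurements: reaction yield (%) at several temperatures (°C), with uncertainties.
temperature = np.array([20, 30, 40, 50, 60, 70])
yield_pct = np.array([12.1, 15.8, 21.3, 24.9, 30.2, 33.7])
yield_err = np.array([1.5, 1.2, 1.8, 1.4, 2.0, 1.6])

# Layer 1: error bars show the uncertainty in each number.
plt.errorbar(temperature, yield_pct, yerr=yield_err, fmt="o", capsize=3, label="measurements")

# Layer 2: labels and units tell the reader what the numbers mean.
plt.xlabel("Temperature (°C)")
plt.ylabel("Yield (%)")
plt.title("Hypothetical reaction yield vs. temperature")

# Layer 3: a statistical test says how much confidence the apparent trend deserves.
result = stats.linregress(temperature, yield_pct)
plt.plot(temperature, result.intercept + result.slope * temperature,
         label=f"linear trend (p = {result.pvalue:.2g})")
plt.legend()
plt.show()
```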
Communication is essential in science, because science is a collaborative enterprise. Two researchers may be working on the same project, yet if one of them does an experiment, that researcher must somehow communicate the results to the other. There is no one way to communicate data, and it is the communicator’s job to make sure the numbers are understood. You can tweak the size of the axes, shift the scale, use a different type of graph, or cut out data or experiments that don’t show anything interesting. It is hopefully clear that all the layers of contextualization described in the previous paragraph matter for communicating these objective data. Even if the data points are objective and reflect an unbiased measurement from some machine or device, how those data are presented can easily affect how they are understood.
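As a quick illustration of how presentation choices alone can change the impression the data give, here is a small sketch (again with made-up numbers) plotting the same two values once with a truncated axis and once with an axis starting at zero.

```python
import matplotlib.pyplot as plt

groups = ["Control", "Treatment"]
values = [98.0, 99.5]  # made-up numbers

fig, (ax_trunc, ax_full) = plt.subplots(1, 2, figsize=(8, 3))

# Truncated y-axis: the difference looks dramatic.
ax_trunc.bar(groups, values)
ax_trunc.set_ylim(97.5, 100)
ax_trunc.set_title("Truncated axis")

# Zero-based y-axis: the same difference looks negligible.
ax_full.bar(groups, values)
ax_full.set_ylim(0, 100)
ax_full.set_title("Axis starting at zero")

plt.tight_layout()
plt.show()
```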
At this point, the data need to be interpreted. If you’ve looked at a paper in a field you don’t understand, you may not understand the graphs, charts, and images. Most scientific tests aren’t straightforward measurements of simple parameters or characteristics. If you’re looking at, say, a nuclear magnetic resonance (NMR) or an infrared (IR) spectrum of someone’s sample, you’re just looking at a chart with lots of peaks and valleys along an x-axis of chemical shifts or wavenumbers. The researcher will end up running controls to see what happens to those peaks and valleys as, say, the compound being tested for is removed. As the concentration of a compound or chemical bond changes, the heights of certain peaks will increase or decrease accordingly.
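For readers who have never seen such a spectrum, here is a toy sketch of that idea. This is not real NMR or IR data: the peak positions, line shapes, and concentrations are invented purely for illustration, showing an analyte’s peaks growing with its concentration while an unrelated peak stays fixed.

```python
import numpy as np
import matplotlib.pyplot as plt

def lorentzian(x, center, width):
    """Simple Lorentzian line shape used as a stand-in for a spectral peak."""
    return (width / 2) ** 2 / ((x - center) ** 2 + (width / 2) ** 2)

x = np.linspace(0, 10, 2000)             # stand-in for a chemical-shift or wavenumber axis
solvent_peak = lorentzian(x, 7.3, 0.1)   # a peak that does not depend on the analyte

for concentration in (0.0, 0.5, 1.0):
    # The analyte's peaks grow in proportion to its concentration; the control
    # (concentration = 0) reveals which peaks belong to the analyte at all.
    analyte_peaks = concentration * (lorentzian(x, 2.1, 0.1) + lorentzian(x, 4.6, 0.1))
    plt.plot(x, solvent_peak + analyte_peaks, label=f"concentration = {concentration}")

plt.xlabel("Position along the spectrum (arbitrary units)")
plt.ylabel("Signal intensity (arbitrary units)")
plt.legend()
plt.show()
```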
This is, of course, just one example. To someone not familiar with NMR or IR, these peaks mean nothing; they have no frame of reference for how the peaks could possibly correspond to some underlying molecular structure. But once you spend a few years learning physical and analytical chemistry and understanding the mechanisms behind these very abstract measurements, it becomes apparent how you can draw very robust inferences from these charts.
Of course, some interpretations of data are overstated. They may not reach a high significance level, the effect size may be off, or the researchers may simply reach faulty conclusions. Measurements by the OPERA scientists, for example, seemed to strongly imply that neutrinos were travelling faster than the speed of light. At this point, most people know the punchline: those measurements were simply errors resulting from faulty equipment. Still, they seemed to imply that something could be off about our understanding, and that was a valid enough interpretation to look into until it turned out to be a measurement error. It’s not difficult to see how very competent and experienced researchers can still make errors in interpreting what their data say, even to the point of getting those interpretations through peer review.
At the very least, it should be obvious to most people that all sciences are subject to social and cultural forces when it comes to resources. Grants and funding are always scarce for researchers. Professors are always pushing to get tenure at universities, and pressing their graduate students hard to publish as much as possible. The publish-or-perish mentality is omnipresent in modern science, and those financial and career pressures incentivize researchers to do whatever they can to get their names on published papers. This can mean dishonest things such as cherry-picking data, p-hacking, and even fabricating data wholesale to tell a story that isn’t there.
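To see why p-hacking works so well, here is a toy simulation (the group sizes and number of outcomes are arbitrary) in which there is no real effect at all, yet testing enough unrelated outcomes still tends to produce “significant” results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_outcomes = 20   # number of unrelated outcomes "measured" in the same study
n_subjects = 30   # subjects per group

false_positives = []
for outcome in range(n_outcomes):
    # Both groups are drawn from the same distribution: there is no real effect.
    control = rng.normal(0, 1, n_subjects)
    treatment = rng.normal(0, 1, n_subjects)
    t_stat, p_value = stats.ttest_ind(control, treatment)
    if p_value < 0.05:
        false_positives.append((outcome, round(p_value, 4)))

print("'Significant' findings despite no real effect:", false_positives)
# With 20 independent outcomes, the chance of at least one spurious hit
# is roughly 1 - 0.95**20, or about 64%.
```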
Scientists are usually cautious about taking any of these interpretations in a paper at face value as a result. That’s okay, because science as a collaborative effort is designed to weed out faulty data and interpretations. Still, as we read these papers, we know they are coming from researchers who are likely desperate to increase their prestige and advance their careers as much as possible. Perhaps we wouldn’t need to worry about fabricated data and illegitimate practices if there weren’t such enormous pressure on researchers to churn out publications, and if economic resources weren’t so scarce we wouldn’t have to worry nearly as much about these practices. Yet this is the system we currently have to deal with, and predatory journals and faulty methodologies reflect that.
It’s easy to say that “the data is the data”, but those data aren’t produced in a vacuum. Furthermore, it’s easy to communicate thoroughly legitimate data in dishonest ways. Tobacco companies used to promote studies showing that smoking wasn’t a risk factor for certain diseases that genuinely aren’t affected by smoking. Even though smoking comes with significantly increased risks for things like various cancers and bronchitis, if a company points to a disease that tobacco doesn’t affect, it makes cigarettes appear safer.
This should be readily apparent with figures like Charles Murray, whose career rests on eliminating things like affirmative action and pointing to scientific data he feels supports his case. Even if his data is robust and sound, he is thoroughly capable of presenting that genuine data in a disingenuous way, which could easily have adverse effects for black Americans.
The fact of the matter is that what data end up being produced, what gets published, and what gets promoted are in some ways a reflection of the world around us. This isn’t the fault of science, but a product of us being flawed and biased humans with agendas.
Many of us skeptics might be hesitant to talk about ways that totally legitimate science can be abused, or about how flawed science can slip through the cracks. This is often for good reasons: dishonest creationists and charlatans are waiting in the wings to seize on these facts, use them to advance their agendas, and muddy the waters. I don’t have a perfect answer for how to discuss the limitations of science as it is currently practiced, honestly and transparently, without letting charlatans abuse those limitations. However, we definitely do not need to keep hammering home the fact that science works: it very clearly does, even though it can get things wrong, and it is profoundly useful as a tool for discovery and for advancing mankind. It’s simply useful to keep everything in perspective, and any reasonable perspective should recognize that “objective data” is not immune to flaws, while still being the best tool we have for exploring the natural world around us.
Note: I will be on a panel at OrbitCon (an online skeptics’ conference) with a few other panelists at 3 PM CDT tomorrow (April 15, 2018) to discuss misconceptions of science within our field, touching briefly on things such as scientism. I will probably bring up a couple of the points discussed in this piece or the previous one, and the other panelists seem fantastic as well. Please check it out alongside some of the other panels going on this weekend!