I'm an admirer of science.
My mother played a big part in fostering this when, in my childhood years, she bought me science kits, chemistry sets, and the like. I fondly remember cramming a card table into my bedroom closet, putting a light bulb over the clothing rod, and happily conducting experiments in my very own "laboratory."
Our home also featured a subscription to Scientific American, which I still get, along with the British science magazine New Scientist. I rarely read religious or spiritual books anymore, because I'd rather learn about reality than myths.
But until I finished reading Naomi Oreskes' book, Why Trust Science?, I hadn't given much thought to why science has been so effective at learning truths about the world.
Often on this blog I've extolled the virtues of the scientific method. However, early in her book, Oreskes -- a professor of the history of science at Harvard University -- dispels the notion that this method is the reason science works so well. She writes:
There is no identifiable (singular) scientific method. And if there is no singular scientific method, then there is no way to insist on ex ante [before the event] trust by virtue of its use.
OK. I'll agree with a Harvard historian of science on this point, since obviously she knows much more about the scientific method, or lack thereof, than I do.
By the way, some Googling of my own blogs revealed that in 2012 I'd written about Oreskes' previous book, Merchants of Doubt, which describes how mercenary scientists helped tobacco and chemical companies hide the scientific truth about the products they were peddling -- much as the fossil fuel industry now tries to make global warming denialism look reasonable.
So if the scientific method isn't why we should trust science, what is?
This quotation from Why Trust Science? provides an overview of Oreskes' argument. Which basically is that, as the book jacket says, "the trustworthiness of scientific claims derives from the social process by which they are rigorously vetted."
My arguments require a few caveats. Most important is that there is no guarantee that the ideal of objectivity through diversity and critical interrogation will always be achieved, and therefore no guarantee that scientists are correct in any given case.
The argument is rather that, given the existence of these procedures and assuming they are followed, there is a mechanism by which errors, biases, and incompleteness can be identified and corrected. In a sense, the argument is probabilistic: that if scientists follow these practices and procedures, they increase the odds that their science does not go awry.
Moreover, outsiders may judge scientific claims in part by considering how diverse and open to critique the community involved is. If there is evidence that a community is not open, or is dominated by a small clique or even a few aggressive individuals -- or if we have evidence (and not just allegations) that some voices are being suppressed -- this may be grounds for warranted skepticism.
Oreskes gives examples of where science seemingly has gone wrong. However, it turns out that these examples of bad science actually prove her thesis, because in each case there wasn't a genuine consensus among a scientific community -- just a small number of scientists who loudly claimed that something was true which later turned out to be false.
For example, in 1873 Edward H. Clarke, an American physician, argued against the higher education of women because it would cause their ovaries and uteri to shrink, thereby adversely affecting their fertility. Female physicians argued against this at the time, but since most doctors back then were male, Clarke's ideas were given more credence than they deserved.
Thus Oreskes persuasively argues that diversity in a scientific community is important if scientific findings are to receive the rigorous examination that makes them more likely to be defensible.
Our personal experiences -- of wealth or poverty, privilege or disadvantage, maleness or femaleness, heteronormativity or queerness, disability or able-bodiedness -- cannot but influence our perspectives on and interpretations of the world. Therefore, ceteris paribus [all other things being equal], a more diverse group will bring to bear more perspectives on an issue than a less diverse one.
...Put another way: objectivity is likely to be maximized when there are recognized and robust avenues for criticism, such as peer review, when the community is open, non-defensive, and responsive to criticism, and when the community is sufficiently diverse that a broad range of views can be developed, heard, and appropriately considered.
...The key point here is that often "assumptions are not perceived as such." They are so embedded as to go unrecognized as assumptions, and this is most likely to occur in homogeneous communities.
Of course, religious communities rarely exhibit any of the qualities that make science so successful.
By nature, they are closed, not open. They resist criticism, viewing faith as superior to reasoned argument. They are defensive when their belief system is challenged. And religions are notably reluctant to change their views.
On the other hand, says Oreskes:
When we observe scientists, we find that they have developed a variety of practices for vetting knowledge -- for identifying problems in their theories and experiments and attempting to correct them.
While these practices are fallible, we have substantial empirical evidence that they do detect error and insufficiency. They stimulate scientists to reconsider their views and, as warranted by evidence, to change them. This is what constitutes progress in science.