I'm an admirer of science.
My mother played a big part in fostering this when, in my childhood years, she bought me science kits, chemistry sets, and the like. I fondly remember cramming a card table into my bedroom closet, putting a light bulb over the clothing rod, and happily conducting experiments in my very own "laboratory."
Our home also featured a subscription to Scientific American, which I still get, along with the British science magazine, New Scientist. I rarely read religious or spiritual books anymore, because I'd rather learn about reality than myths.
But until I finished reading Naomi Oreskes' book, Why Trust Science?, I hadn't given much thought to why science has been so effective in learning truths about the world.
Often on this blog I've extolled the virtues of the scientific method. However, early on in her book, Oreskes -- who is a professor of the history of science at Harvard University -- dispelled this as a reason for why science works so well. She writes:
There is no identifiable (singular) scientific method. And if there is no singular scientific method, then there is no way to insist on ex ante [before the event] trust by virtue of its use.
OK. I'll agree with a Harvard historian of science on this point, since obviously she knows much more about the scientific method, or lack thereof, than I do.
By the way, some Googling of my own blogs revealed that in 2012 I'd written about Oreskes' previous book, Merchants of Doubt, which describes how mercenary scientists helped tobacco and chemical companies hide the scientific truth about the products they were peddling. Something similar is happening now with the fossil fuel industry's attempts to make global warming denialism look reasonable.
So if the scientific method isn't why we should trust science, what is?
This quotation from Why Trust Science? provides an overview of Oreskes' argument, which basically is that, as the book jacket says, "the trustworthiness of scientific claims derives from the social process by which they are rigorously vetted."
My arguments require a few caveats. Most important is that there is no guarantee that the ideal of objectivity through diversity and critical interrogation will always be achieved, and therefore no guarantee that scientists are correct in any given case.
The argument is rather that, given the existence of these procedures and assuming they are followed, there is a mechanism by which errors, biases, and incompleteness can be identified and corrected. In a sense, the argument is probabilistic: that if scientists follow these practices and procedures, they increase the odds that their science does not go awry.
Moreover, outsiders may judge scientific claims in part by considering how diverse and open to critique the community involved is. If there is evidence that a community is not open, or is dominated by a small clique or even a few aggressive individuals -- or if we have evidence (and not just allegations) that some voices are being suppressed -- this may be grounds for warranted skepticism.
Oreskes gives examples of where science seemingly has gone wrong. However, it turns out that these examples of bad science actually prove her thesis, because in each case there wasn't a genuine consensus among a scientific community -- just a small number of scientists who loudly claimed that something was true which later turned out to be false.
For example, in 1873 Edward H. Clarke, an American physician, argued against the higher education of women because it would cause their ovaries and uteri to shrink, thereby adversely affecting their fertility. Female physicians argued against this at the time, but since most doctors back then were male, Clarke's ideas were given more credence than they deserved.
Thus Oreskes persuasively argues that diversity in a scientific community is important if scientific findings are to receive the rigorous examination that makes them more likely to be defensible.
Our personal experiences -- of wealth or poverty, privilege or disadvantage, maleness or femaleness, heteronormativity or queerness, disability or able-bodiedness -- cannot but influence our perspectives on and interpretations of the world. Therefore, ceteris paribus [all other things being equal], a more diverse group will bring to bear more perspectives on an issue than a less diverse one.
...Put another way: objectivity is likely to be maximized when there are recognized and robust avenues for criticism, such as peer review, when the community is open, non-defensive, and responsive to criticism, and when the community is sufficiently diverse that a broad range of views can be developed, heard, and appropriately considered.
...The key point here is that often "assumptions are not perceived as such." They are so embedded as to go unrecognized as assumptions, and this is most likely to occur in homogeneous communities.
Of course, religious communities rarely exhibit any of the qualities that make science so successful.
By nature, they are closed, not open. They resist criticism, viewing faith as superior to reasoned argument. They are defensive when their belief system is challenged. And religions are notably reluctant to change their views.
On the other hand, says Oreskes:
When we observe scientists, we find that they have developed a variety of practices for vetting knowledge -- for identifying problems in their theories and experiments and attempting to correct them.
While these practices are fallible, we have substantial empirical evidence that they do detect error and insufficiency. They stimulate scientists to reconsider their views and, as warranted by evidence, to change them. This is what constitutes progress in science.
The philosophy of science, the requirements of evidence, and the utility of consensus among divergent constituents are all helpful to establishing truths about this creation.
But ultimately this remains a human process subject to human prejudices and error. And ultimately we must decide for ourselves what we trust based only on our own judgment.
I think, Brian Ji, that one of the foundational scientific principles of any belief about anything, which you have eloquently reiterated over the years, is that a belief is always at first an untested hypothesis, and so it must be testable, so that evidence can be gathered that will either support or discredit that view.
And that evidence can also be scrutinized.
Which means at the end of the day we make a decision about both evidence and belief.
Is our decision based upon the principles of logic, reason and science?
And are we willing to learn new facts, or review old evidence within a more enlightened context?
To that end, lifelong learning, and the open mind required for it, seem part and parcel of good science.
Finally, are we ourselves willing to be rewired by experiences and ideas we didn't understand before?
Our active participation depends upon a willingness to give up a favored opinion, however much past evidence we think supported it, when new facts prove otherwise.
Posted by: Spence Tepper | January 04, 2020 at 08:55 AM
... But if we can only test that hypothesis by putting aside a dearly held belief for a time, then what is holding us back?
We are all scientists, at the end of the day, thinking, testing, and learning. But are we good scientists? Are we learning what we want to learn? Or something unintuitive, something new? Or something old we just didn't understand at the time?
What holds us back from setting aside our beloved notions to give some idea we don't like a little airspace, a little exercise, an honest and fair test?
Posted by: Spence Tepper | January 04, 2020 at 09:03 AM
Well, I remember when I worked at the genomics research center of MIT and Harvard that there was fierce competition among the PhDs to prove a new theory correct or debunk an old one. They needed recognition through published works to further their careers. Really. Proving theories became more competitive than fact-finding. That’s a jaded inside look at the scientific community. It’s human nature, I guess.
https://bigthink.com/devil-in-the-data/can-science-be-trusted
Posted by: Sonia | January 04, 2020 at 01:41 PM
If science depends on consensus then it's a religion. Truth isn't truth because people with credentials have been persuaded to agree on it.
Posted by: Jesse | January 04, 2020 at 10:53 PM
Science has confirmed that every essence of matter vibrates. Yes, it can be trusted. It has confirmed that there is more than what science knows or will ever know out there.
Nice. Science had confined The Word. Ha
Posted by: Arjuna | January 05, 2020 at 02:15 AM