Every year, Edge online magazine poses a grand, sweeping question to its members (people drawn from the upper echelons of society and academia). 2017’s question was: What scientific term or concept ought to be more widely known? The answers run the gamut of the sciences and social sciences and are certainly worth a read (though with 204 contributors and 143,250 words it might take you a while to get through it all! I’ve been using a random sampling method…). One issue I have, though, is the hyper-specificity of the answers. I understand that each respondent would like to be unique, but reading through the list, there is no way even a well-educated and well-read individual is going to remember, or frankly care about, most of these concepts.
Mulling this over a bit, I think one pressing problem in our current post-truth age is that everyone has a pet theory or framework for how the world works, whether it’s global warming being a Chinese hoax or the Rothschilds running a secret banking cult/empire, often without evidence and usually with no contact with reality. Is there a way to cull these weird ideas before they take root?
So, in the spirit of pretending to be a public intellectual, I’m going to throw my hat into the ring with a term that surprisingly doesn’t appear by itself on the official list: Falsifiability. It’s a much-maligned concept (see the criticism section of the linked Wikipedia article), but one that finds great favor among employed and practicing scientists. Rather than get into a technical or historical discussion about it (Cliffs Notes: Karl Popper, Thomas Kuhn, black swans), I’ll take a different tack and walk you through what it means to actually apply the concept of falsifiability, and why it’s like that “one weird trick” to make you a better-informed person.
Let’s start with a colloquial definition of scientific falsifiability: scientific theories are those that can be empirically falsified (i.e. shown to be wrong). Actually, why restrict ourselves to the domain of science and precisely defined scientific theories? Good mental models are those that can be empirically falsified. Hmm… does the falsification have to be empirical? Fine: good mental models are those that can be falsified. But what if I require you to travel to Australia to test my theory that water is colorless (n.b. it’s actually very slightly blue)? That doesn’t seem related or particularly reasonable, so let’s rephrase: good mental models are those that can be falsified in a reasonable manner. Sure, the word “reasonable” is doing a fair bit of work in that sentence, and yes, I’m hand-waving a bit over what a “mental model” is, and fine, I’ll concede that falsifiable = shown to be wrong isn’t well defined… BUT this definition will suffice if we stop being pedantic and use a little bit of judgment.
Okay, so let’s pick a patently silly idea as an example: I believe that the moon is made of green cheese. How do we go about seeing whether this theory passes muster? We can recast the statement above as a question and apply it to our idea: what evidence would cause me to reconsider my view?
There are two categories of answers to that question.
- If no evidence would cause me to reconsider that the moon is made of cheese, then I simply hold a religious belief about the sanctity of moon-cheese. All hail brie! This isn’t very productive and is a strong sign that my idea is bunk.
- If there is some evidence that would cause me to change my mind, then great: I’m not in the pocket of big-cheese, and I’ve cleared the first hurdle of having a good theory. Suppose I now learn that big-government NASA folks have samples of moon rock. Logic tells me (assuming I believe the veracity of the samples) that a chemical analysis of those rocks can disprove my theory.
Focusing on the second case, let’s say I run that analysis and it conclusively shows that the rocks aren’t cheese. At this point I am again left with two options:
- I can discard my theory, in which case I’ve successfully prevented myself from becoming a crazy person espousing cheese-based conspiracies.
- I can immunize my theory by expanding it and adding caveats, effectively rendering the evidence against it moot. The trade-off, of course, is that my new theory is actually less powerful. Say my new idea is that the moon is made of cheese that morphs into rock when pieces leave the lunar surface. But this opens up whole new complications – e.g. what is the physical process that gets you from cheese to rock?
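The moon-cheese walkthrough above is really a small decision procedure: check for conceivable disconfirming evidence, and if the evidence arrives, either discard or immunize. Purely as an illustration (all names here are hypothetical, invented for this sketch), it can be written out as:

```python
# Illustrative sketch of the falsifiability check described above.
# The function and argument names are hypothetical, invented for this example.

def evaluate(theory, disconfirming_evidence, discard_when_refuted=True):
    """Walk a theory through the falsifiability decision procedure."""
    if not disconfirming_evidence:
        # No conceivable evidence could change our mind:
        # a religious belief, not a theory. A strong sign the idea is bunk.
        return "unfalsifiable: likely bunk (all hail brie!)"
    # Some conceivable evidence exists, so the first hurdle is cleared.
    # Suppose we then actually observe that evidence (e.g. the rock analysis).
    if discard_when_refuted:
        # Option 1: discard the theory outright.
        return "refuted: discard the theory"
    # Option 2: immunize by adding caveats, at the cost of explanatory power.
    return "immunized: weaker theory, repeat the cycle"

print(evaluate("moon is green cheese", ["chemical analysis of moon rock"]))
# -> refuted: discard the theory
```

The sketch makes the branching explicit: the unfalsifiable branch never even gets to engage with evidence, while the immunize branch loops you back for another cycle with a weaker theory.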
Where does this leave us, then? By working through this mental process of evaluating my ideas, I have a powerful tool to discard bad ones and sharpen my thinking. I can immediately see whether an idea is worth pursuing or is just a collection of deeply held biases.
For a real-world example, look no further than the debate about climate change that I noted above. If you are a skeptic, ask yourself the question above: what evidence would cause me to reconsider my view? Again, if your heart of hearts tells you that no evidence will change your mind, then you are committing a massive, easily correctable error. Have another go: formulate your alternative opinion (using the criterion that it has to be falsifiable), then try to break it against evidence such as the UN’s Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report – specifically the physical science basis (the summaries are a good place to start) – and see if it passes or fails. If it fails, maybe immunize it and repeat for a few cycles. Frankly, I think you’d be surprised where you end up.
Keep in mind though that if you immunize your pet theories too much they become very weak with no explanatory power and are now more likely to be offed with the tiniest bit of evidence. Anyway you can probably see at this point why falsifiability is a wonderful thing and why I consider it a scientific concept that ought to be far more widely known!