Some Thoughts About “Knowledge Is Probabilistic”

Det er vanskeligt at spå, især når det gælder Fremtiden.
[“It is difficult to predict, especially when it applies to the future.”]
—Danish saying, author unknown (early 20th century)

The question of whether the universe proceeds along deterministic lines, or according to individual will, or by blind chance, or something else, or by any combination of these, is probably, ultimately, unknowable. There is no difference that we can think of that we could observe in the world that would not be at least possible under the other models. Much ink has been spilled over the matter, to essentially no avail. It is also, despite all those papers and books, ultimately unimportant. If there is no way to distinguish between the cases, then it cannot make any difference to our lives. It is not even clear how the difference can be meaningful at all if it does not have any consequences.

What we do know is that we do not know—not anything, not for sure. Descartes was right about radical doubt, as far as it goes, and wrong in his invocation of a deus ex machina to rescue him. We do not know for certain about the external world. We do not even know for certain about abstract definitions: we have all had the experience of being wrong about a word's meaning, or about a mathematical rule. But this is again a pointless definition of knowing. If we can never “know” anything under that definition, then it is not a definition worth spending any time on, because it can never affect us. If that is what the word ‘know’ means¹, then “knowing” is irrelevant and uninteresting, and the concept we can actually be interested in deserves some other word.

We start to see that, as so often in philosophy, this problem is largely a problem of ambiguous terms: in this case, the word ‘know’ and its friends. One philosopher uses the word ‘know’ to mean absolute certainty, and thus proves the impossibility of knowledge. Another uses it to mean “justified true belief”, another thinks knowledge is whatever is coherent, another whatever is useful, and so on. Philosophers frequently argue past each other based on their own, idiosyncratic definition of what it is to know.

This is also a problem of binary thinking. Even though reality is scalar, philosophers insist on treating knowledge as an on-off switch: that is, that I either know a thing or I don't. They disagree on where to place the binary; they do not disagree that there is a binary. Some grope towards the truth by suggesting that standards of knowledge can vary in different contexts, but then fall back into their binary comfort zone. As truth-seekers, philosophers tend to be temperamentally inclined to look for certainty, even certainty about certainty.


As a statistician, I am very comfortable with understanding the world probabilistically.² When it comes to our perception of knowledge, we should treat knowledge as a probability, not a binary. When I say, “I know that x”, I am saying that I assign a high enough probability to x to treat it as true. And when I get evidence suggesting that my earlier belief was false, I revise that probability downwards. This is why radical scepticism misses the point. If when I say, “I know that x”, you respond, “But you are not certain of x”, I immediately reply, “But I don't have to be: I still know it.” I can say that I know that there is a glass of water on my desk because the probability of it being true is sufficiently high: I can see it, I have just drunk out of it, etc. If I were to wake up suddenly and realise I had been dreaming and there was no glass³, I would flip the probability. I would say that I had thought I knew, but didn't.

Note that this is still only a probability, and could be flipped back in future. Knowledge claims can only ever be provisional estimates, always open to new evidence. What matters is how strict we wish to be before describing a claim as knowledge. Again, this is an approach that statisticians are used to via significance thresholds for p-values. It also makes our approach compatible with other approaches: to us, they are disagreeing primarily over where to draw the line. I do not have to exclude the possibility of an evil demon to conclude that I know that the glass of water is on my desk. For the purposes of a decision point such as whether to drink from it when I am thirsty, the probability that it is there is (more than) high enough for me to accept it as true.
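The threshold idea above can be made concrete in a few lines of code. This is only an illustrative sketch, not anything from the text: the function name `knows` and the particular threshold values are my own hypothetical choices.

```python
def knows(probability: float, threshold: float = 0.99) -> bool:
    """Treat a belief as knowledge when its probability clears a
    context-dependent threshold. The threshold, not the belief,
    is what varies between stricter and looser standards."""
    return probability >= threshold

# My estimate that the glass is on my desk: I can see it, I just
# drank from it, and so on.
p_glass = 0.995

knows(p_glass)                    # counts as knowledge at the everyday threshold
knows(p_glass, threshold=0.9999)  # fails a much stricter threshold
```

The same probability estimate can count as knowledge for one purpose and not another; on this view, rival theories of knowledge are largely arguing over the default value of `threshold`.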

The mindset involved in this way of thinking is closely related to that promoted by Bayes' Theorem: P(A|B) = P(B|A) × P(A) / P(B). (For non-statisticians, this means that the probability of A, given that B has happened, is equal to the probability of B given A, multiplied by the probability of A and divided by the probability of B.) As well as being an important result in statistics, it supports the questioning and open-minded attitude that underlies our approach to knowledge, and again excludes the certainty model of knowledge: if a proposition is assigned probability 1 (or 0), no amount of evidence to the contrary could cause us to adjust it.
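The last point, that a prior of exactly 1 or 0 can never move, follows directly from the formula, and a small worked example makes it visible. This sketch is mine, not the author's; the helper expands P(B) using the law of total probability.

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H)*P(H) + P(E|not-H)*(1 - P(H))."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Evidence nine times likelier under H than under not-H moves a
# 50/50 prior up to about 0.9:
posterior(0.5, 0.9, 0.1)

# But a dogmatic prior of exactly 1 (or 0) never moves, however
# one-sided the evidence is:
posterior(1.0, 0.0001, 0.9999)  # stays at 1.0
posterior(0.0, 0.9999, 0.0001)  # stays at 0.0
```

This is the arithmetic behind the essay's claim: the update multiplies the prior, so certainty is a fixed point that no evidence can dislodge, which is exactly why the certainty model of knowledge is incompatible with learning.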


Justification, under that name or related jargon, is a major element of the theory of knowledge. The idea is that a belief needs to be justified in order to count as knowledge. For example, if I believe that there is a glass on my desk only because I am dreaming of one, but there is in fact, coincidentally, a glass on my desk, do I actually know that there is a glass on my desk? Let's draw up a table of some (non-exhaustive) scenarios about my glass.

Truth and Knowledge by Justification

| Situation | Justification | Any errors? | Is there a glass? | Is this knowledge? |
|---|---|---|---|---|
| I can see and feel a glass on the desk, and my wife can too. | Very strong | No | Yes | Yes |
| (as above) | (as above) | I am hallucinating. | No | No |
| (as above) | (as above) | I am hallucinating. | Yes (coincidentally) | ? |
| I can see and feel a glass on the desk, but no one else is here. | Strong | No | Yes | Yes |
| (as above) | (as above) | I am hallucinating. | No | No |
| (as above) | (as above) | I am hallucinating. | Yes (coincidentally) | ? |
| My wife tells me there is a glass on the desk, but I am not looking at it. | Moderate | No | Yes | Yes |
| (as above) | (as above) | I am hallucinating. | No | No |
| (as above) | (as above) | I am hallucinating. | Yes (coincidentally) | ? |
| (as above) | (as above) | My wife is hallucinating. | No | No |
| (as above) | (as above) | My wife is hallucinating. | Yes (coincidentally) | ? |
| I have a gut instinct there is a glass on the desk. | Weak / Invalid | No | Yes | ? |
| (as above) | (as above) | I am hallucinating. | No | No |
| (as above) | (as above) | I am hallucinating. | Yes (coincidentally) | ? |
Whenever we can answer firmly “Yes”, there is a glass; and whenever there is no glass, we can answer firmly “No”. There are, essentially, two types of edge case: the case where I have a justification that I accept but know is weak (e.g., gut instinct); and the case where I have a justification that I do not know is weak or invalid (e.g., hallucination). Since we have already rejected the certainty model of knowledge as pointless, there is no objective point at which to draw a binary line between strengths of justification, so we cannot construct a rule that eliminates weak justifications. And if we start demanding knowledge of knowledge, we end in an infinite regress where we must know that we know that we know that we know… So both of these ‘?’ categories are still forms of knowledge when the belief is true.

The reason we rightly disapprove of these approaches, then, is not that they fail to be knowledge, but that they use unreliable methods for acquiring knowledge. Justification is therefore a virtue that can apply to knowledge, much as reliability is a virtue that can apply to a manufacturing process. Imagine I throw together a meal slapdash, without following a recipe or measuring anything, even by eye, and it turns out to be delicious. My approach was unvirtuous, because it is unlikely to produce delicious meals in future; but that does not mean that this meal is not delicious. In the same way, knowledge is true belief, and justification is the approving word we use to describe methods that tend to produce true beliefs.


Our knowledge of the world is a set of probabilities. All knowledge claims are open to the possibility of revision in the light of new evidence, which would not be possible if knowledge were not probabilistic. For different purposes we are willing to set different probability thresholds for accepting a belief as true. Beliefs are either true or not, but we can only ever have probabilistic estimates of that status.
