Say that one in a thousand people has disease x. There’s a test for x, but five percent of the time it says someone has x when he doesn’t. You, the test says, have x. How likely is it that you do?

A little math goes a long way here. If one in a thousand people is infected while five in a hundred uninfected people falsely test positive, then out of every ten thousand people tested, ten will have the disease and test positive (the test, we'll assume, catches every real case), while roughly five hundred will test positive without having it. The question, then, is whether your positive result is one of the ten accurate positives among those 510. There is only a 1.96 percent chance (10/510) that you're infected.
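
To make that arithmetic explicit, here is a small Python sketch of the same calculation. The prevalence and false-positive rate come straight from the example; the assumption that the test never misses a real case is mine, made only to match the reasoning above.

```python
# Base-rate arithmetic from the disease-x example:
# 1 in 1,000 people has the disease, and 5% of healthy people
# falsely test positive. We assume, as the example implicitly does,
# that the test catches every real case.

prevalence = 1 / 1000          # P(disease)
false_positive_rate = 5 / 100  # P(positive | no disease)
sensitivity = 1.0              # P(positive | disease), assumed perfect here

population = 10_000
true_positives = population * prevalence * sensitivity                  # 10 people
false_positives = population * (1 - prevalence) * false_positive_rate   # ~500 people

p_disease_given_positive = true_positives / (true_positives + false_positives)

print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")   # ~2%
```

Running it reproduces the figure in the text: roughly a two percent chance you're infected, nowhere near ninety-five.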

Now consider this. In 2013 a few dozen doctors and medical students were asked this question, and around three-quarters of them got the answer wrong. In fact, almost half of them said there’s a ninety-five percent chance you have the disease.

Statistical thinking does not come easily to us. Our primate brains are not designed to grasp it, or even to reach for it, naturally or intuitively. Reason bows to passion. Instinct substitutes for evidence.

Much of what passes for “common sense” is nonsense. Take our aversion to ambiguity. Would you rather wager on drawing a red ball from a bowl with an equal number of red and white balls, or from a bowl with a random, unknown mix of such balls? If you’re like most people, you’d much prefer to bet on a draw from the first bowl. Yet on average half the balls in the second bowl are red too, so your odds of winning are identical. The risk is the same in both cases; only your perception of it differs.
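
A quick simulation makes the point, under one plausible reading of “a random distribution”: suppose each of the hundred balls in the second bowl is independently red or white with equal probability. That reading is my assumption, not something the example specifies.

```python
import random

TRIALS = 100_000
BOWL_SIZE = 100

def draw_from_known_bowl():
    # Exactly half red, half white.
    bowl = ["red"] * (BOWL_SIZE // 2) + ["white"] * (BOWL_SIZE // 2)
    return random.choice(bowl)

def draw_from_unknown_bowl():
    # Composition chosen at random each time, then one ball is drawn.
    bowl = [random.choice(["red", "white"]) for _ in range(BOWL_SIZE)]
    return random.choice(bowl)

known = sum(draw_from_known_bowl() == "red" for _ in range(TRIALS)) / TRIALS
unknown = sum(draw_from_unknown_bowl() == "red" for _ in range(TRIALS)) / TRIALS

print(f"Red from the 50/50 bowl:   {known:.3f}")    # ~0.500
print(f"Red from the unknown bowl: {unknown:.3f}")  # ~0.500
```

Both frequencies hover around one half. The unknown bowl is no worse a bet; it only feels like one.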