Experts are smarter than the rest of us. They perform great feats—polio has been all but eradicated, search engines exist, and there are probes in interstellar space. And they know a lot. If you want the best available information on COVID-19, disregard that chain email your uncle sent you. Consult the website of the Centers for Disease Control and Prevention.

It should already be clear that the experts in question are scientific experts. But what’s the difference between science and everything else? The demarcation problem—how to separate science from pseudoscience—is trickier than it might seem. You might assume that the key criterion is falsifiability. Science is correctly predicting, in 1705, that a comet will return in 1758. Pseudoscience is writing: “Strong opinions might come into conflict with strong emotions today, Virgo.” But requiring strict falsifiability places psychology, sociology, economics, and much else on the wrong side of the line.

So be it, some would say. “The social sciences are on a par with astrology,” declared Imre Lakatos; “it is no use beating about the bush.” And he thought falsifiability too high a bar! All the same, it’d probably be best to keep some of the social scientists employed. We cannot, and need not, “solve” the demarcation problem. As in all things, what’s called for is judgment. We might begin by simply asking, when an expert makes a claim, whether she possesses the facts she’d need to make the claim with any confidence. Someone who expects the Sun to engulf the Earth in billions of years might be saying something respectable, even though we’ll never witness the event. Someone who announces a one-in-six chance the world will soon end has not a clue what he’s talking about, whether the world ends or not.

It’s a setback for experts everywhere when someone with all the trappings of expertise mixes scientific and pseudoscientific claims together, as Toby Ord, a philosophy professor at Oxford University, does in his book The Precipice: Existential Risk and the Future of Humanity. Ord wants to raise awareness about the biggest dangers we face—the “risks that threaten the destruction of humanity’s longterm potential.” Not a bad idea. But in trying to lend urgency to his case, he repeatedly lays claim to knowledge he cannot have.

Ord’s review of naturally occurring risks is rigorous and informative. Most such risks arise from events that have occurred before; some even occur at semi-regular intervals. The fossil record gives us a sense of how often these events wipe out species, and the laws of physics can tell us much else that we want to know about them. In short, many natural risks are amenable to technical thinking. By tracking asteroids and studying past asteroid strikes and asteroid-caused extinctions, for example, we can reasonably conclude, as Ord does, that we are probably not about to be annihilated by an asteroid.
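How tractable that is can be shown in a few lines. The sketch below is mine, not a calculation from the book, and the impact rate is an illustrative round number: assume extinction-level strikes arrive at random with a long-run frequency read off the fossil record, and the century risk falls out of one line of arithmetic.

```python
import math

# A minimal sketch, not Ord's calculation: assume extinction-level
# asteroid strikes arrive as a Poisson process, with an illustrative
# long-run rate of one event per 100 million years.
rate_per_year = 1 / 100_000_000
horizon_years = 100

# P(at least one strike in the next century) = 1 - e^(-rate * horizon)
p_century = 1 - math.exp(-rate_per_year * horizon_years)
print(f"Chance this century: about 1 in {round(1 / p_century):,}")
# Prints roughly "1 in 1,000,000" -- the same order of magnitude as
# the one-in-a-million figure discussed below.
```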

But this kind of analysis gets little purchase on manmade risks. Take artificial intelligence. We know of no other time AI has appeared; we have no past AI apocalypses to study. For that matter, we have only the weakest grip on how technological innovation works. Down the ages, the brightest experts have made dazzlingly wrong predictions on the subject. How, indeed, could it be otherwise? Technology comes from humans. Human interaction is path dependent and sensitive to initial conditions. Human societies are an emergent product of billions of inscrutable thoughts and emotions. It’s chaos all the way down. Charting the paths of thousands of asteroids into the distant future is child’s play next to predicting the next ten Super Bowl champions, the price of oil six years from now, or the state of the German pharmaceutical sector when all current patents have expired.

What are the odds that, in the next hundred years, we’ll be killed by an asteroid or comet? What are the odds that, in the next hundred years, we’ll be killed by AI? Similar though they may look, these are profoundly different questions. The one is difficult, the other unintelligible. Yet Ord answers both. Around one in a million, he says in response to the first. Around one in ten, he says in response to the second. His admission that “significant uncertainty remain[s]” in the latter estimate is misleading. No certainty resides in it to begin with.

Why one in ten? “One might be surprised to see such a high number for such a speculative risk,” Ord concedes, “so it warrants some explanation.” That it does, though what Ord offers is not reassuring. He begins by declaring himself free to adopt “a Bayesian approach of starting with a prior” that “reflects” his “overall impressions.” This is expert-speak for pulling a guess out of a hat. Ord notes “the overall view of the expert community” that “there is something like a one in two chance” that general AI will emerge “in the coming century.” He then proposes that because general AI would be powerful, we “shouldn’t be shocked” if it swept us aside. And he suspects that “aligning” general AI “with our values” will be hard. That’s it. A hunch, a survey of hunches, and some speculation.
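To make the criticism concrete: in a toy Bayesian model (a standard Beta-Bernoulli setup, my illustration rather than anything Ord spells out), a prior that “reflects overall impressions” passes through untouched when there are no observations to update on.

```python
from fractions import Fraction

# A toy Beta-Bernoulli sketch (my illustration, not Ord's procedure).
# The prior encodes the one-in-ten "overall impression" directly:
# Beta(a, b) has mean a / (a + b) = 1/10.
a, b = Fraction(1), Fraction(9)

# The updating data: AI-driven extinctions observed so far.
observations = []

# Bayesian updating would shift the posterior with each observed
# catastrophe (1) or safe outcome (0). With an empty record, nothing shifts.
for outcome in observations:
    a += outcome
    b += 1 - outcome

print(a / (a + b))  # 1/10 -- the guess goes into the hat and comes back out
```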

Adding all existential risks together, Ord says, generates about a one-in-six chance that we’ll perish in the next hundred years. This figure embraces the possibility that everyone dies from something totally unforeseen. The odds that such an event will occur are by definition unknowable. In Ord’s opinion they’re about one in thirty. The headline assertion of one-in-six has been repeated in the press as though it were a real thing, a real statement about the world. It is not. It is the kind of pronouncement scientists sometimes wryly describe as not even wrong.
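For readers who want to see the arithmetic, here is one way such figures could be combined, assuming the component risks are independent; the independence assumption is mine, and the book does not reduce its bottom line to this one formula.

```python
# Combining per-century risks under an assumed independence model.
# The inputs are the estimates quoted in this review; the book lists
# others (engineered pandemics, nuclear war, and so on).
risks = {
    "asteroid or comet": 1 / 1_000_000,
    "unaligned AI": 1 / 10,
    "unforeseen": 1 / 30,
}

# P(surviving everything) is the product of the survival probabilities;
# the total risk is its complement.
p_survive = 1.0
for p in risks.values():
    p_survive *= 1 - p

print(f"Combined risk: about 1 in {1 / (1 - p_survive):.1f}")
# Prints about "1 in 7.7": the one-in-ten AI guess alone does almost
# all the work in pushing the total toward one in six.
```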

Armed with specious probabilities, Ord turns to formulating a “grand strategy for humanity.” It has three steps. The third, to “achieve our potential,” is the only desirable, obtainable, or coherent one. We will check it off (or not) regardless of what Ord cares to say about it. Scientists, researchers, and entrepreneurs are not all following some philosopher’s program.

At any rate, that third step must wait, Ord insists. The two others must precede it. In the first step, we will obtain “existential security.” We will “reach a place of safety—a place where existential risk is low and stays low.” We will do this by giving more money and power to government agencies, such as the World Health Organization; by creating new government mandates and entitlements, such as legislative “representation” for future generations; and by creating new international governing institutions, such as a court that considers the safeness of scientific experiments.

Then, in the second step, we will undertake what Ord calls “the Long Reflection.” We will think and talk our way to “a final answer to the question of which is the best kind of future for humanity.” Moral philosophy will “play a central role” in this process. “The conversation should be courteous and respectful to all perspectives,” Ord writes; but it also must be “robust,” because it is to “deliver a verdict that stands the test of eternity.”

The first step can be described as the precautionary principle run amok. Scaremongers excel at political debate. Cries for more safety lend themselves to slogans; warnings about the dangers of too much safety do not. And harms that arise from action (say, deaths from a novel drug the FDA approves) are usually more visible than harms that arise from inaction (deaths from the absence of a drug the FDA delays). It is in the nature of government to say no.

Money just makes matters worse. A growing budget encourages mission creep—a search for more things to say no about. The CDC’s purpose was to track and manage the deadliest diseases. It now lectures people about guns, vape pens, and obesity. As its budget doubled and doubled again, it seems to have lost focus. After all, it hindered early efforts to test for COVID‑19 (the FDA was even worse) and blocked researchers’ efforts to study and combat the virus. The CDC remains an expert organization; it is, to repeat, a better source of information about COVID-19 than most alternatives. This does not mean that the CDC is well run, or that it could use more money or power.

Another problem is that public servants come to conflate what’s good for them with what’s good for everyone. Courts are some of the worst offenders. Just as moral philosophers fool themselves into thinking the world needs heroic moral philosophers, judges fool themselves into thinking it needs heroic judges. Beware a new judicial body. The glory a judge will gain by expanding its jurisdiction! The plaudits she will receive when she pens the next Marbury v. Madison! Ord says he “would envisage very few experiments being denied” by his international science court. It’s almost touching.

“Precautionary principle” is just a polite way to say “sclerosis by design.” Rent-seekers and entrenched interests benefit. It can’t be assumed that anyone else does. Letting people try new things creates hazards, but so does letting the government get in people’s way. Moving is risky. Standing still is risky. There is no risk-free default. As Michael Crichton observed, the precautionary principle, properly applied, forbids the precautionary principle. (And therefore, added Crichton, the principle “cannot be spoken of in terms that are too harsh.”)

No one, not even a government of Toby Ords, can deliver “existential security.” We cannot know what we would need to know. In fact, the great threat might lie in ennobling the really smart people who assume otherwise: the specialists who offer an answer when “I don’t know” is the only plausible response. Perhaps the surest way to get us all killed is to ask a panel of experts to save us. Like the servant fleeing for Samarra, they’ll blindly rush to an appointment with Death.

If the flaw in the first step is that experts aren’t wizards, the flaw in the second is that professional moral philosophers aren’t experts. Professors of moral theory specialize in arguing about moral theory with other professors of moral theory. Their main talent is lobbing meaningless abstractions at one another. What qualifies these insular theologians to guide the world is unclear, although their own conviction that they can do so is remarkably persistent. Ord’s “Long Reflection” taps into an abiding conceit that the wise philosophers can form the virtuous plan that produces the beautiful society. Not even philosophy departments run like that.

We should not expect a philosopher-dominated “Long Reflection” to achieve much. As Ord himself will tell you, moral philosophers “disagree about almost every topic.” And what little most of them agree on is not revealed through the practice of some distinct and productive discipline. Science seeks to discover facts about what is “out there,” in the world. There being no moral facts “out there” to find—or, at minimum, no agreed method for finding them—academic moral theory retreats to cleverly restating the opinions of the social set from which the theorists are drawn. A handful of professors dress up conservative views in terms like “natural law.” Many others busy themselves cloaking progressive views in terms like “reflective equilibrium.” Societal norms and sentiments drive the theorists, not the other way around.

Speaking of which, is a prominent faction of the academy’s orthodox (i.e., progressive) moral philosophers falling behind the times? They persist in thinking that their logic games can produce right answers that a reasonable person must find convincing. How privileged do you have to be, to believe yourself an oracle of moral objectivity? The whole endeavor smacks of oppression and imperialism. Academic moralists’ claim on others’ attention was always weak. It’s hardly likely to grow stronger while problematization remains the coin of the realm on the intellectual Left.

Ord has produced a scholarly yet accessible work. But it’s not without its flaws. The odds that it will contribute to our salvation are about one in apple minus pogo stick.
