Risky Business

Global Catastrophic Risks
Edited by Nick Bostrom and Milan M. Ćirković
(Oxford University Press, 554 pages, $50)

“The battle to feed humanity is over. In the 1970s the world will experience famines—hundreds of millions of people are going to starve to death.” So wrote Stanford professor Paul Ehrlich in The Population Bomb (1968). The sky-is-falling crowd soon tired of Malthusian famine and embraced a new calamity: global cooling. A 1975 Newsweek article noted “sudden, large increases in Northern Hemisphere snow cover” and agonized over “a reversion to [a] little ice age.” A decade later, the end was still imminent, this time through the reduced sunlight that would follow a nuclear war. Nuclear winter, as it was called, was turned into a consciousness-raising fright movie, The Day After (1983), an entry in a genre that has proven profitable. Count on Americans to make a buck on Armageddon.

Now, of course, it is global warming, the temporarily triumphant meme in the apocalyptic mind. The lyrics change, but the tune is eternal: government powers need to be expanded; optimists need to be browbeaten and separated from their property.

The Martians/asteroids/glaciers are coming, and when they do, everyone will be grateful for the costly precautions forced upon us. Of course, who or what is coming is hard to say. The world will someday end with fire or ice, but we await clarification as to the proximate causes. The menu of looming catastrophes is a long one, growing with our advancing knowledge of the universe and powers of self-immolation.

Global Catastrophic Risks, a collection of two dozen learned and generally balanced essays, canvasses this dismal scene and dishes up warnings and advice. This is a book in which shriveling retirement accounts and the looming bankruptcy of the automobile industry do not register. The editors, Nick Bostrom and Milan Ćirković, direct our gaze at “existential risks,” dangers so grave that, should they happen once, “there would be no opportunity to learn from the experience.” These are catastrophes that threaten humanity, intelligent life of any kind, and possibly all life on earth. Or worse. Imagine, if you will, “permanent and extreme forms of slavery or mind control” at the hands of a genetically enhanced Stalin or supercharged computer. Extinction would be a blessing.

AS A SPECIES, we face threats from the cosmos and from ourselves. Geological records suggest five ruptures over the past half-billion years, when most of the then-extant species died out. Only for the last mass extinction, 65 million years ago and likely coinciding with the impact of a massive asteroid, is there anything approaching a scientific consensus. The other calamities are still shrouded in mystery, possible explanations being meteors, supervolcanoes, solar flares, supernova explosions, or (my favorite) gamma ray bursts.

Then there are the new threats, of man’s own making. Two chapters consider nuclear war and terrorism, but others sketch far more imaginative and comprehensive catastrophes. The nascent fields of genetic engineering, nanotechnology, and artificial intelligence may give rise to weapons more powerful by multiples than anything we can conceive today. Furthermore, building an atomic bomb is complicated; this may be less true for newer technologies.

As a gloss on the increasing ease with which humanity-imperiling weapons might be invented, Eliezer Yudkowsky rewrites the canonical Moore’s Law as: “Every 18 months, the minimum IQ necessary to destroy the world drops by one point.” Monitoring every nation-state determined to build an atomic bomb is hard enough; monitoring every rogue scientist and angst-ridden teenager bent on Armageddon may prove impossible. As Chris Phoenix and Mike Treder note in their chapter on nanotechnology, “The likelihood of at least one powerful actor being insane is not small.”

Ruminations along these lines lead, in chapter after chapter, to the so-called Fermi Paradox. The legend behind the paradox is a lunchtime conversation between the great physicist Enrico Fermi and some colleagues at Los Alamos in 1950. Fermi noted that our solar system is billions of years younger than others in the Milky Way, allowing, at least in theory, ample time for advanced civilizations to develop; but if so, he asked, where are they? Why the Great Silence? One explanation is that Earth, either through providential design or cosmic fluke, is rare in its hospitality to life. Another explanation is that all habitable planets are, as one author puts it, “quasiperiodically reset by exogenous events, like gamma ray bursts.”

The window of time between the origins of intelligent life and a catastrophic event is never wide enough, this argument runs, to allow the development of technologies that surmount those threats. But perhaps the most haunting answer to Fermi’s paradox is that the ingenuity of intelligent life quickly outstrips its caution and prudence. As Yudkowsky writes, “perhaps breakout technology leads to catastrophic runaway technology.”

By temperament, some will find such considerations gloomy, others liberating, and still others pointless. There is much to be said for the last perspective. The economist Robin Hanson proposes that we construct a doomsday cave, stockpiled with food and colonized by a thousand people, to hunker down and wait out the decades of death that follow some catastrophe. This is all rather fanciful: who would submit to life in a cave in the absence of an immediate threat, and once the threat is identified, who will survive the stampede? As Hanson points out, given the small number of cave dwellers, humanity could not soon re-create civilization, with its mass-scale agriculture and intricate division of labor, so we would need to return to a hunter-gatherer existence. This suggests that the cave needs fewer senators and trophy second wives (the likely inhabitants) and more MacGyvers and Annie Oakleys.

Several authors complain that humanity is afflicted with cognitive biases that lead us to underinvest in precautions against very low-probability events (“black swans”) that could entail enormous losses. Federal judge, law professor, and versatile public intellectual Richard Posner proposes to address this problem by requiring a cost-benefit analysis before embarking on scientific experiments of uncertain ramifications. Yet for some projects the numbers are so speculatively gigantic or infinitesimal that the ordinary model crumbles. There is a nonzero probability that a supercollider will disturb the laws of physics in a way that jeopardizes the entire universe. What number should we assign to that cost? What is the probability of its occurrence?

Posner estimates the cost of human extinction at $600 trillion, which he multiplies by the annual chance of a strangelet disaster (guessed to be 1 in 10 million) to come up with a number with a lot of zeros, likely more than the expected cost to the world of Iran’s acquisition of an atomic bomb within the next year or two. Which risk is more worthy of attention: strangelets or Iranian nukes? Sufficient unto the day is the evil thereof.
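For readers who want the arithmetic spelled out, Posner’s figure is a standard expected-value product. A sketch using only the numbers reported above (the one-in-ten-million annual probability is, recall, his guess):

\[
E[\text{cost}] = \underbrace{\frac{1}{10{,}000{,}000}}_{\text{annual probability}} \times \underbrace{\$600\ \text{trillion}}_{\text{cost of extinction}} = \$60\ \text{million per year.}
\]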

The book’s editors argue that global risks should be assessed collectively because we need to make comparative judgments about where to allocate time and money. Some chapters follow through on this project and consider risks in the aggregate, but many focus on the author’s specialty: there are the supervolcano partisans, the plague partisans, the global warming partisans, etc. The supervolcano experts in particular must find it galling that these days the climatologists get all the press—not to mention the audiences on private jets with movie stars and has-been politicians. Given that humanity’s closest brush with extinction was likely the result of a massive volcanic eruption 70,000 years ago (which lowered global temperatures by several degrees and may have reduced our total population to the thousands), the supervolcano experts need a better publicity agent.

The chapter on global warming adopts the “consensus” view of a gradual 3.5-degree rise in global temperatures over the next century. The claim that all climate experts agree on this model is overstated; in any event, such 100-year projections would be more alarming if meteorologists accurately predicted rain five days out. And one must note the natural proclivity to play up the importance of one’s own expertise, and how bias of this sort skews judgment.

The global warming experts insist that recent warming trends are largely the result of human-generated greenhouse gases, but a dissenter on this point is Arnon Dar, author of a fascinating chapter on supernova explosions and other cosmic terrors. Dar tentatively locates the cause of global warming in solar activity, which may abate over the next decade.

IT IS TELLING THAT the only authors so confident of their place in the catastrophe pantheon that they are emboldened to propose taxes—styled as price controls on carbon dioxide emissions—are the global warming experts. However ill advised this might be, far scarier ideas have been floated in recent years to prevent global warming. Some have suggested that we launch balloons fitted with adjustable mirrors, others that we scatter billions of refractors to dim the sun. Reading of such proposals, one cannot help but think of such triumphs of human ingenuity as the introduction of cane toads in Australia in 1935. Intended to combat a troublesome species of beetle, the cane toad, which has no natural predators there, is now an unchecked marauder far more of a nuisance than the beetle ever was. Which makes me wonder: how will we retrieve all those refractors if, more effectively than planned, they blot out the light of the sun?

With such harebrained ideas in the works, it is likely that man himself, without the intervention of comets or supervolcanoes, will be the author of his own demise. And our preparations for catastrophic risks might propel us down the road to extinction, or to one of those even darker destinations. Aldous Huxley in Brave New World imagined a cataclysmic war in the 21st century, in the aftermath of which people submitted to a world state as the only perceived safeguard against future horrors. In Global Catastrophic Risks’ final chapter, Bryan Caplan suggests that “extreme pessimism about the environment could become the rationale for a Green totalitarianism.” Those who trumpet looming catastrophes often end up calling for, or acquiescing in, power transfers to global governments or entities, which are said to be best able to address the gravest of threats faced by a common humanity. Several of the 20th century’s greatest scientists, starting with Albert Einstein and Linus Pauling, have argued that a world government is necessary to prevent nuclear war. One wonders if such scientists have abandoned their famed method; it would be nice if they considered the actual data—how have the United Nations or even the European Union managed the most minor of crises?—before investing transnational bodies with vast authority.

The last time human beings joined together and aspired to godlike permanence, they set out to build a tower. God was not pleased by our efforts in Babel, and He may register displeasure over our recent efforts in that direction. Accepting a certain amount of risk, even catastrophic risk, is preferable to pursuing quixotic—or worse—plans to secure our species forever.
