It should be obvious from this that being skeptical about predictions does not render people unable to make a decision; it just makes them cautious. This is not a bad thing, and, indeed, in some circumstances, it can be a very good thing.
This is evident in the early-1970s debate about how governments should respond to the food crisis. William and Paul Paddock, authors of Famine 1975!, advocated a policy they called "triage": Rich nations should send all their food aid to those poor countries that still had some hope of one day feeding themselves; hopeless countries like India and Egypt should be cut off immediately. "To send food is to throw sand in the ocean," they wrote. The Paddocks knew countries that lost the aid would plunge into famine. They were quite explicit about that. But famine was going to come anyway, and this would at least improve the odds for countries that still had a chance. In The Population Bomb, Paul Ehrlich lavishly praised Famine 1975!-it "may be remembered as one of the most important books of our age"-and declared that "there is no rational choice except to adopt some form of the Paddocks' strategy as far as food distribution is concerned."
The Paddocks' proposal was not adopted, which is fortunate because the prediction of inevitable famines on which it was based was wrong. If the Paddocks' policy had been implemented, food shipments to India, Egypt, and elsewhere would have been cut off at a time when tens of millions of people stood at the brink of famine, and this would likely have pushed them over the edge: William and Paul Paddock, Paul Ehrlich, and all the other experts who were certain they knew what would happen in the future would have created the famines they predicted.
Ehrlich should have paid closer attention to something he wrote at the end of The Population Bomb. "Any scientist lives constantly with the possibility that he may be wrong," he wrote. Recognizing this, it's critical to ask, "What if my prediction doesn't pan out? What if I'm wrong? Will the course of action I've recommended still be a good one?" Ehrlich thought the answer in his case was obvious. "If I'm right, we'll save the world," he wrote. "If I'm wrong, people will still be better fed, better housed, and happier, thanks to our efforts." Even in 1968, it should have been clear this was glib nonsense. Cutting off food aid could tip nations into famine, and if the prediction of coming famines was indeed false, cutting off food aid would cause famines that would not otherwise happen. The logic was plain but Ehrlich didn't see it because he didn't seriously consider the possibility that he was wrong.
And yet, Ehrlich did have the right idea: While our decisions have to be made on the basis of what we think is going to happen, we must always consider how our decisions will fare if the future turns out to be very different. A good decision is one that delivers positive results in a wide range of futures.
Consider climate change. For the record, I accept that anthropogenic climate change is all too real. But as the reader may guess, I am skeptical of climate models that purport to forecast changes in the climate decades and even centuries out. Climate scientists are quite blunt that there is much about climate that science does not understand, which is precisely why scientists find the field exciting to work in. Combine that ignorance with the almost indescribably complex interactions at work in the massive, nonlinear systems that make up climate and there are huge uncertainties woven into every climate prediction. This does not mean we should shrug and walk away, however. The models may overestimate the extent of climate change and the damage it does. But they may also underestimate it, as seems to have already happened in some cases. Walking away would be foolhardy. We have to decide what to do.
In making that decision, we must seriously consider the possibility that we are wrong. That cuts both ways. For those who scoff at climate change, it means considering the possibility that decades from now, as ice caps melt and coasts flood and agricultural yields fall, only a stiff dose of hindsight bias will save them from the humiliation of recalling what they once believed. For those who think climate change is real, it means imagining a future where people who look back at the hand-wringing about climate change today are as amazed and amused as we are when we look back at scary predictions from the 1970s. These are radically different futures. Is there anything we can do now that will look good in either future, or in the many possible futures that lie somewhere between those two extremes?
Some proposals definitely flunk this test. Expensive schemes for "carbon sequestration"-pumping carbon dioxide emissions back underground-would be a waste of money if climate change is a dud. But many others pay off no matter what. Capturing methane emitted from landfills not only stops a potent greenhouse gas from entering the atmosphere, it delivers a fuel that may be burned to generate electricity. That's a winner in any future. As are improvements in energy efficiency that not only reduce carbon dioxide emissions but also save money. The same can be said of many other choices, including the big climate change proposal endorsed by most economists: a stiff carbon tax with the revenues returned to the economy in the form of cuts to other taxes. Would it deliver benefits even if climate change turns out to be bunk? Absolutely. Carbon taxes raise the effective cost of fossil fuels, making alternative energy more competitive and spurring research and development. And reducing the use of fossil fuels while increasing the diversity of our energy sources would be wonderful for a whole host of reasons aside from climate change. It would reduce local air pollution, reduce the risk of catastrophic oil spills, buffer economies against the massive shocks inflicted by oil price spikes, and lessen the world's vulnerability to instability in the Middle East and elsewhere. It would also reduce the torrent of cash flowing from the developed world to the thuggish governments that control most major oil-producing nations, including Saudi Arabia, Iran, and Russia. And of course there's peak oil. If the peaksters finally turn out to be right, how much of our economy is fueled by oil will determine how badly we will suffer-so carbon taxes would steadily reduce that threat too.
For Americans, in particular, there's some unfortunate history to keep in mind. In the 1970s, when oil prices were surging and most experts agreed oil was only going to get much more expensive, huge advances were made in conservation and the development of alternative energy. But in the mid-1980s, when the price crashed, the advances slowed or stopped; at least they did in the United States. While Americans rejoiced at the return of cheap gas, most northern European countries kept the price high with stiff taxes. As a result, Europe got dramatically more energy efficient than the United States. In 2008, with oil prices soaring to previously unimaginable highs, the conservative American columnist Charles Krauthammer fumed that a quarter of a century earlier he and many others had called for an energy tax in order to curtail consumption "and keep the money at home." It didn't happen. And so, "instead of hiking the price ourselves by means of a gasoline tax that could be instantly refunded to the American people in the form of lower payroll taxes, we let the Saudis, Venezuelans, Russians, and Iranians do the taxing for us-and pocket the money that the tax would have recycled back to the American worker." But the United States did do something in the twenty years between the fall of oil prices in the 1980s and the surge in the first decade of the twenty-first century: It massively escalated its military involvement in the Persian Gulf region, which was done primarily to protect the flow of oil that is the global economy's lifeblood. Two wars and trillions of dollars later, the cost of that approach, in both treasure and blood, is staggering. Seen in this light, Jimmy Carter's dire oil forecast of 1977 and his call for the "moral equivalent of war" look very different. The forecast was wrong, but Carter's call for a concerted national effort to improve energy efficiency and develop alternative energy was exactly right. If the United States had kept at it in the 1980s and 1990s, it would have been far more secure in the twenty-first century. Tragically, it did not. It abandoned Carter's direction after his forecast collapsed, and so, almost three decades after Carter's famous speech, the United States was still addicted to oil-a fact bemoaned by environmentalists, economists, generals, national security experts, and politicians ranging from George W. Bush to Barack Obama.
What all this means, very simply, is that accurate prediction often isn't needed in order to make good decisions. A rough sense of the possibilities and probabilities will often do. We can't predict earthquakes, but we do know where they are more and less likely and we make building codes less or more strict accordingly. That works. Similarly, it wasn't necessary to predict the 9/11 terrorist attack in order to know that having reinforced cockpit doors on jets is a good idea. In the 1990s, several incidents, including the stabbing of a Japan Airlines pilot by a deranged man, convinced many safety advocates and regulators that reinforced doors were a wise investment. They weren't in place on 9/11 because the airlines didn't want the extra cost and they successfully lobbied politicians to block the proposal. It was politics, not unpredictability, that left planes vulnerable that fateful morning. Commodity speculation is another model. Every day, traders buy and sell futures contracts, which, as the name suggests, are based on what people predict future prices of commodities will be. This might look like people making money by predicting the future, but the traders' maxim "Cut your losses and let profits run" hints at what's really going on. Traders aren't so foolish as to think they can predict the future. Instead, they make a large number of bets. When one goes bad, they quickly sell it off. The loss is minimal. But the profit from a good call is likely to be substantial. As a result, the traders make money even if the overwhelming majority of their forecasts are wrong-which they fully expect they will be.
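The arithmetic behind that maxim is easy to check. Here is a minimal sketch in Python of a trader who is wrong 70 percent of the time yet still profits, because losses are capped and wins are allowed to run; the hit rate and payoff sizes are invented purely for illustration:

    import random

    random.seed(42)

    def simulate_trader(n_bets=10_000, hit_rate=0.30, avg_loss=1.0, avg_win=4.0):
        # Toy model of "cut your losses and let profits run": losing bets
        # are closed quickly for a small fixed loss, while winning bets
        # are left open and pay off several times as much. All numbers
        # are invented for illustration.
        pnl = 0.0
        for _ in range(n_bets):
            if random.random() < hit_rate:
                pnl += avg_win   # a rare good call, allowed to run
            else:
                pnl -= avg_loss  # a wrong call, sold off quickly
        return pnl

    print(simulate_trader())  # positive despite a 70 percent miss rate

With these invented numbers the expected profit per bet is 0.3 x 4 - 0.7 x 1 = 0.5, so the hit rate matters far less than the asymmetry between wins and losses.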
Accepting uncertainty stops us from striding confidently through the darkness, but it's still possible to make decisions and take actions: We can stretch out our hands and cautiously grope our way through the darkness, always alert to the possibility of surprise. This may not be as thrilling as believing we possess a map to the future and setting out boldly for some distant El Dorado, but it is considerably less likely to end in a collision of one's nose and reality.
DOING IT BETTER.
Alan Barnes wasn't satisfied. His unit churned out forecasts about political and economic events but he didn't know how good they were. He and his staff had a sense, of course. "It happened anecdotally," he says. "You make a judgment about something and six months later it blows up in your face. Something happens that you said wasn't going to happen. Your colleagues make it clear." But aside from collegial ribbing or boasting when someone nailed a big call, there was nothing to go by. The accuracy of the unit's forecasts had never been systematically analyzed because the people who used the unit's forecasts had never asked if they were any good. If they wouldn't hold the unit accountable, Barnes would.
Barnes's unit is part of the Privy Council Office of the Canadian government. The sort of questions it handles-who will win the Russian election? will China strike a deal to develop Nigerian oil reserves?-is standard stuff in intelligence circles. The problem Barnes faced is also typical. Intelligence agencies don't track the accuracy of their forecasts. Instead, they use process standards. The Central Intelligence Agency, for one, has a checklist of best practices its analysts should follow. Did you examine other hypotheses? Did you consider contrary evidence? If all the boxes are ticked, the analyst's forecast is considered sound. Whether events actually unfold as the analyst expected is considered irrelevant in judging the quality of the forecast.
Barnes thought this was a mistake. His unit was constantly issuing reports saying it was unlikely that this or that would happen in the Russian election and probable that China would sign that Nigerian oil deal. But nobody knew how often the outcome matched the forecast, and if he wanted to improve his unit's judgments, he first had to know how good those judgments were.
To do that, Barnes had to overcome the problem of language. What does probably mean? To one person, in one context, it may mean there's a slightly better than fifty-fifty chance of something-say, 55 or 60 percent. But to another person, or in another context, it may mean that thing has a very good chance-something like 75 or 80 percent. The language of forecasting is riddled with this ambiguity, so Barnes created a numerical scale from 0 to 10 that attached precise numbers to key terms. Probably means 7 or 8 out of 10. So does likely. Almost certain and highly likely are a 9 out of 10. At the other end of the scale, very unlikely and little prospect are a 1 while unlikely is a 2 or 3. This cut the ambiguity and made statistical analysis possible.
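A scale like this amounts to a simple lookup table. The sketch below, in Python, uses the scores given above; where the text gives a range (7 or 8 for "probably"), the midpoint is my own simplification:

    # Barnes's 0-to-10 scale as described above; where the text gives
    # a range (e.g., "probably" = 7 or 8), the midpoint is an assumption.
    TERM_SCORES = {
        "almost certain": 9,
        "highly likely": 9,
        "probably": 7.5,
        "likely": 7.5,
        "unlikely": 2.5,
        "very unlikely": 1,
        "little prospect": 1,
    }

    def term_to_probability(term):
        # Convert a verbal forecast into a probability between 0 and 1.
        return TERM_SCORES[term.lower()] / 10

    print(term_to_probability("likely"))  # 0.75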
The problem in assessing probability judgments is that it's impossible to say if any one such judgment is right or wrong. Most people would consider a forecast of "It is likely that X will happen" to be right if X does happen. But what if X doesn't happen? Was the forecast wrong? No-because the forecast implicitly said, "Although X is likely to happen, X may not happen." The same is true if the probabilities are expressed in numbers. But if numbers are used, and many different forecasts are collected, the numbers can be crunched and the "calibration" of the judgments determined. If the forecasts are perfectly accurate, the probability calls they make should match the probabilities that actually played out: So events that are said to be 90 percent likely should happen 90 percent of the time; 70 percent of the 70 percent calls should happen; and so on. With the help of David Mandel, a psychologist who studies judgment with the Canadian Department of National Defense, Barnes got to work.
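The text does not give Mandel's exact formula, but the standard calibration index is the weighted mean squared gap between stated probabilities and observed frequencies, with zero meaning perfect calibration; the small values reported below are consistent with that form. A minimal sketch, with invented toy data:

    from collections import defaultdict

    def calibration_index(forecasts, outcomes):
        # forecasts: stated probabilities (e.g., 0.8 for "probable")
        # outcomes:  1 if the event happened, 0 if it did not
        bins = defaultdict(list)
        for p, o in zip(forecasts, outcomes):
            bins[p].append(o)
        n = len(forecasts)
        # Weighted mean squared gap between each stated probability and
        # the observed frequency of events given at that probability.
        return sum(len(v) * (p - sum(v) / len(v)) ** 2
                   for p, v in bins.items()) / n

    # Toy data: four 90 percent calls (three happened), two 20 percent calls.
    stated = [0.9, 0.9, 0.9, 0.9, 0.2, 0.2]
    happened = [1, 1, 1, 0, 0, 1]
    print(round(calibration_index(stated, happened), 3))  # 0.045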
Barnes and Mandel gathered data from fifty-one intelligence memoranda issued by Barnes's unit over an eighteen-month period. In all, they were able to compare 580 predictions with real-world outcomes. In order to avoid skewing the results by the difficulty of the predictions being made, they subdivided the calls into three categories of difficulty and calculated each separately.
The results were impressive: Overall, there was almost none of the overconfidence that is usually found in these sorts of tests. "The calibration value for that sample was 0.014, which is a very high degree of calibration," Mandel says. That's almost as good as the results obtained from the one professional group that has long been shown to have the best calibration-meteorologists-and the analysts were looking much further into the future than meteorologists ever do, even as far ahead as a year. And, of course, the analysts were dealing with people-complex, self-aware, unpredictable people. As expected, when the results of the three difficulty levels were broken out, the analysts did best on the easiest calls and worst on the hardest. But that didn't explain the overall result. "The calibration index for the hardest level of judgments was still 0.05," Mandel says, "which is still very good calibration."
But how did they do it? Barnes's first response when asked this question is revealing. "To a certain extent," he says, "I would have to express some skepticism about the outcome." That's right. After careful testing revealed his team has far better judgment than most, Barnes didn't boast about the results and cite them as proof that he and his a.n.a.lysts are uniquely insightful people. Instead, he suggested the methodology of the testing may have been somewhat flawed and his team might not be as good as the tests made them out to be.
So we discussed Barnes's concern about the methodology. It was modest. Even if correct, it didn't overturn the general results. As Barnes put it, "I'm not convinced that what we've done is accurate to three decimal points. There is still a significant fudge factor in this. I think it's broadly indicative but I wouldn't be quite as confident in the level of precision."
The very fact that Barnes's first instinct was to express doubt is essential to understanding why his team did so well, because doubt is the hallmark of the fox.
Recall that people whose thinking style marks them as "foxes," in Philip Tetlock's terms, are modest about their ability to forecast the future, comfortable with complexity and uncertainty, and very self-critical-they are always questioning whether what they believe to be true really is. Foxes also reject intellectual templates, preferring to gather ideas and information from as many sources as they can get their hands on.
When I summarized "fox-thinking" with Alan Barnes, he nodded. "That's fundamental to how we approach analysis."
Three key elements explain why foxes' cognitive style improves the accuracy of their forecasts. First is aggregation. Heaps of research show that combining multiple sources of information is more likely to produce good results than using a single source of information. This fundamental fact was popularized by journalist James Surowiecki in The Wisdom of Crowds, which is an unfortunate phrase because crowds in a literal sense are not wise. In fact, they tend toward conformity and "groupthink," which makes them a terrible structure for decision making. What is "wise" is the combined judgment of large numbers of people making decisions independently. In these circumstances, the mistakes of any one person are likely to be canceled out by the countervailing mistakes of others while the solid information each person brings is combined with that brought by others. The net result: a collective judgment that will almost certainly be superior to the judgment of any one person. Even combining the judgments of laypeople is likely to produce a better judgment than that of one expert, however well informed. This powerful phenomenon is the basis for "prediction markets," like the famous Iowa Presidential Election Markets, in which people bet on political predictions in much the same way they buy and sell stocks. And it's not only the judgments of individuals that can be aggregated. Combining poll results is a good way to produce an aggregate poll that is more accurate than any of the polls that went into it.
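The statistical core of this claim is easy to demonstrate: when errors are independent, they tend to cancel in the average. A minimal sketch, with an invented quantity and invented noise:

    import random

    random.seed(0)
    TRUTH = 0.62  # the quantity being estimated (invented for illustration)

    # Each person's judgment is the truth plus independent noise.
    people = [TRUTH + random.gauss(0, 0.2) for _ in range(1000)]
    crowd = sum(people) / len(people)

    typical_error = sum(abs(p - TRUTH) for p in people) / len(people)
    print(f"typical individual error: {typical_error:.3f}")
    print(f"error of the average:     {abs(crowd - TRUTH):.3f}")

Run it and the average lands far closer to the truth than the typical individual does-exactly the effect prediction markets exploit.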
By grabbing on to whatever information is available, from whatever source, foxes aggregate. They may not do it as well as a prediction market, but the fundamental process is the same, and it helps make their judgment superior to that of hedgehogs who know One Big Thing and aren't interested in finding out more. Alan Barnes's analysts are urged to aggregate, and they're given every opportunity to do so, first by selecting the topics they wish to work on, then with access to classified and unclassified information, and finally by being allowed the time they need to do the job. And that's just the first step. After they come to a conclusion, they talk it over with Barnes, and then it's sent to external experts for comment. The conclusion is constantly revised in light of new information and new views, so the final judgment is the product of a wide array of inputs-which is to say, it is the product of aggregation.
The second element at work is what psychologists call metacognition. This is simply thinking about thinking. Alan Barnes constantly pushes his analysts to reflect on their conclusions, to question them, to ask themselves where they came from and whether they really make sense or not. "I find that in the more instinctive way of drafting these kinds of reports . . . people quite often make judgments without even really consciously thinking about making those judgments," Barnes says. "It just sort of flows out as part of the drafting process. It just feels right." Requiring analysts to explain their judgments forces them to think consciously about them, and conscious thought is the only way to catch the mistakes intuition often makes. Barnes has also found that attaching numbers to probability statements such as "unlikely" helps, because the precision of the number forces analysts to pause and think a little more carefully.
None of this is likely to work, however, without knowledge of the psychological traps people can fall into. "The biggest problem is not the bias per se," observes David Mandel. "It's the ease with which we proceed with those biases and are unaware of their impact on our judgment." Overcoming biases is a major challenge because, as psychologists have shown, people taught to watch out for psychological biases readily spot their pernicious influence in the thinking of other people. But we perceive our own judgment to be objective and factual. This, too, is a psychological bias. It goes by the clever name "bias bias." It is why Barnes's analysts are not only required to learn about "confirmation bias" and the many other hazards identified by psychology, they are asked to use techniques designed to catch and correct cognitive biases. The learning part is easy. Putting it to work is much harder. A basic method for overcoming confirmation bias, for example, is to draft a list of reasons that your belief may be wrong, but the analysts "are reluctant to use even that extremely simple tool," Barnes says. "Even myself, when I am drafting a paper, I must admit I don't use it as often as I should. It's just not a normal way of thinking." But such measures are essential in order to catch mistakes and make sound judgments.
The last of the three key elements at work is humility. "I think when you're dealing with future events, pretending that you have absolute certainty is doing the reader a great disservice," Barnes says. His unit will use the far ends of the probability scale if, after rigorous consideration, they think that's where the evidence points. But they'll also deliver middling probability calls, even though people want to hear "This will happen," not "There is an 80 percent chance this will happen." Barnes will even draw people's attention to the simple fact that when his unit says something is "very likely to happen," it may not-which is necessary because even very sophisticated people often treat a forecast of 80 percent as if it were 100 percent. When they do, Barnes has to remind them of some old and wise advice: "Don't put all your eggs in one basket, because the world is ultimately unpredictable," he says.
It's not only the strutting certainty of the TV talking head that Barnes avoids, it's predictions that are impossible. That means, among other things, not trying to peer too far into the future. A narrow question in a time frame of six months or a year, fine. But Barnes and his analysts don't predict the fate of China decades out. They don't forecast the price of oil in 2020. And they wouldn't dream of imagining what the world will look like when a baby boy born this year becomes a father. That may be the stuff of best sellers, but in a complex, nonlinear world, it's beyond human judgment. And Barnes knows his limits, as do all foxes. It's an essential feature of the species. "Whenever I start to feel certain I am right," one fox-expert told Philip Tetlock, "a little voice inside tells me to start worrying."
The billionaire financier George Soros is a classic fox. As a young man, he studied philosophy with Karl Popper, who taught him the value of introspection, humility, and pluralism. As far removed as Popperian philosophy might seem from the cutthroat world of finance, this was practical training. In a career spanning almost six decades, Soros has made an uncountable number of predictions about matters of enormous complexity, and his record-far from perfect but much better than most-is testament to the quality of his judgment. So is his wealth. Soros can also boast that he saw the real estate bubble building long before it popped in 2008, and that he had correctly warned about instability in the financial system. But Soros is not the sort of man who boasts. In January 2009, as the world plunged into the crisis Soros had long feared, a reporter asked him, in effect, why he's so good at what he does. Few would have objected if he had said, "It's because I'm so much smarter than everyone else." But he did not. Instead, he repeated his favorite theme. "I think that my conceptual framework, which basically emphasizes the importance of misconceptions, makes me extremely critical of my own decisions," he said. "I know that I am bound to be wrong, and therefore am more likely to correct my own mistakes." Soros's old teacher would have smiled. "Instead of posing as prophets we must become the makers of our fate," Karl Popper wrote. "We must learn to do things as best we can, and to look out for our mistakes." The student learned well.
Another man who saw the explosion of 2008 coming was Vince Cable. Formerly chief economist at Shell, Cable became a member of the British Parliament and chief spokesperson on economics for the Liberal Democrats, and it was in this role that he furiously sounded the alarm on the real estate and finance bubbles. Like Soros, he was vindicated by the crash but he did not boast. Indeed, in his 2009 book The Storm: The World Economic Crisis and What It Means, Cable insisted he is no Nostradamus. "When I was paid for attempting to predict future economic developments for a leading multinational company," he wrote, "I was frequently reminded of the Arabic saying: 'Those who claim to foresee the future are lying, even if by chance they are later proved right.' The extraordinary speed with which the crisis has unfolded and overwhelmed the unready should underline the need for caution in anticipating the next few months, let alone years. It is perhaps more helpful to think of plausible scenarios than likely developments, and to frame any policy proposals in a spirit of humility, recognizing that no one fully understands what has happened or how the current drama will play out."
In his book The Sages, former banker Charles R. Morris looks at Soros, esteemed former chairman of the Federal Reserve Paul Volcker, and legendary investor Warren Buffett. The quality all three share, according to Morris-who knows Soros and Volcker personally-is "humility in the face of what one does not know." That is no accident. When Socrates was told that the Oracle of Delphi had deemed him the wisest man in Athens, he was characteristically skeptical. He felt he really didn't know much, while all around him were smart people who were confident they knew lots. So Socrates wandered about, questioning people to determine what they really knew. He discovered they were as ignorant as he. But unlike everyone else, Socrates knew he was ignorant, and this meant, Socrates decided, that he really was the wisest man in Athens.
That's how foxes think. They may eventually be convinced by the evidence that they are somewhat better than others at predicting the future, but only somewhat. As any fox worthy of the name would quickly add, their ability to predict is modest and strictly limited. The world is infinitely complex and the human mind fallible, so the future will forever be uncertain.
And foxes are just fine with that.
A FINAL OBJECTION.
I have to acknowledge that in an important sense my skepticism about predictions is itself based on a prediction. Chaos theory and nonlinearity set strict limits on our ability to see into the future, as does our imperfect understanding of human consciousness and decision making. But this is only true based on what we know now. What about the future? Scientific theories are occasionally overthrown. What is unknown sometimes becomes known. Isn't it possible that, as scientists explore and computing power grows, we may one day be able to do what today is theoretically and practically impossible? Isn't that, after all, a good summary of the history of science? As scientist and author Arthur C. Clarke sagely observed, "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
I must concede this point. Work on a vast array of theoretical issues and forecasting models continues at a furious pace. Occasionally, it stumbles, as when the very impressive models developed by political scientists revealed in the spring of 2000 that the presidential election later that year would be won in a landslide by Al Gore. And it was, let us not forget, the latest and greatest modeling that told economists nothing especially bad was going to happen in 2008. But still, real progress has been made. Prediction markets, for example, are a genuine advance. And on a more theoretical level, skeptics like me have to accept the possibility that the bases for our skepticism may be overturned because scientific knowledge is always subject to revision in light of new evidence. It is never fixed and final. "The normal state of affairs in science is unsettled and uncertain," writes geophysicist Henry Pollack, "and no amount of new research will completely eliminate uncertainty."
So I have to accept that someone, someday, may be able to do what people have tried and failed to do for millennia. Which is why, when that claim is made, it deserves our attention, and why we should have a quick look at the work of Bruce Bueno de Mesquita.
"Politics is predictable," Bueno de Mesquita declared in The Predictioneer's Game, his 2009 best seller. For decades, Bueno de Mesquita has been a respected political scientist with a sideline running a consulting firm whose clients include global corporations and intelligence agencies. They want to know what will happen in the future and he tells them.
Bruce Bueno de Mesquita is very much a hedgehog. The idea that is the foundation of all his thinking is that people do what they believe is in their self-interest. He's not the only one to embrace this foundation, of course. Over the last several decades, most economists have treated it as axiomatic, although that view is waning thanks to advances in the study of decision making and the failure of events-notably the crash of 2008-to conform to the "rational man" model. But Bueno de Mesquita is faithful to his One Big Idea and he uses it, along with concepts springing from the related field of "game theory," to predict what people, corporations, and nations will do. In practice, Bueno de Mesquita enlists experts to help him identify who has a stake in the matter at hand, what they want, how badly they want it, and how much clout they have. Everything else is ignored. Cultural traditions, historical background, and personalities are all irrelevant. Bueno de Mesquita then takes his key information, plugs it into a computer programmed to run the algorithm he invented, and out pops the future.
Bueno de Mesquita is sure his method works, and in The Predictioneer's Game he regales readers with fascinating stories full of drama and amazing outcomes. He also has plenty of client testimonials. And he has a statistic, which he repeats like a mantra: "According to a declassified CIA assessment, the predictions for which I've been responsible have a 90 percent accuracy rate." Many people find this evidence impressive. "Some of you may be skeptical," an official of the Carnegie Council said in an introduction that began with a list of Bueno de Mesquita's predictive hits. "But be forewarned. Professor Bueno de Mesquita claims a ninety percent accuracy rate in his use of game theory to predict political trends, and his fans include many Fortune 500 companies, the CIA, and the Department of Defense."
This sort of credulity is all too common when people assess predictions and the judgment of those who make them. Bueno de Mesquita has made thousands of predictions, and even if he had made them all by flipping a coin, some would have been right. Anecdotes are suggestive only. And nice words from clients are not terribly compelling when we keep in mind the long list of smart people who believed dumb things. George Washington swore by the Perkins Metallic Tractor-a contraption said to draw out any disease when it was waved over the afflicted body part-but the patronage of that great man didn't change the fact that the Perkins Metallic Tractor was junk. The whole point of modern science is to get beyond the illusory insight of anecdotes and testimonials, and since Bueno de Mesquita claims his methods are strictly scientific, it seems especially retrograde to fall back on such "evidence" here.
Then there is that "90 percent" figure. As Philip Tetlock wrote in a review of Bueno de Mesquita's book, we need to know much more before accepting it as compelling evidence. "A 90 percent hit rate is, for example, no great achievement for meteorologists predicting that it will not rain in Phoenix. And it is no big deal to achieve a 100 percent hit rate of predicting X-no matter what X may be-if doing so comes at the cost of an equally high false-alarm rate. Anyone can predict every war from now until eternity by simply predicting war all the time." A reviewer for The New York Times who located the document that is the source of Bueno de Mesquita's number was even less impressed. "The passage in question describes forecasts about political outcomes in 30 countries between October 1982 and October 1985. It says: 'Forecasts done with traditional methods and with Policon'-Bueno de Mesquita's system-'were found to be accurate about 90 percent of the time. . . . Both traditional approaches and Policon often hit the target, but Policon analyses got the bull's eye twice as often.'" Summing up, the reviewer noted that the passage "refers to a small sample of analyses done long ago on limited problems and with not overwhelming success. It also didn't come from some super-secret document written by the head of the agency: It came from an analyst whose last prominent appearance in the press was for his post-CIA adventure running a sausage company."
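Tetlock's Phoenix example takes only a few lines to verify. Assume, purely for illustration, that it rains on 10 percent of days; a forecaster with no skill at all, who always predicts "no rain," scores about 90 percent:

    import random

    random.seed(1)

    # Invented base rate: it rains on roughly 10 percent of days.
    days = [random.random() < 0.10 for _ in range(10_000)]  # True = rain

    # The skill-free forecaster always predicts "no rain," so every
    # dry day counts as a hit.
    hits = sum(1 for rained in days if not rained)
    print(f"hit rate: {hits / len(days):.0%}")  # about 90%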
The ability to consistently and reliably predict major events in the future could do incalculable good for humanity-it certainly would have helped in 1914-and so it's no exaggeration to say it would be at least as valuable as a cure for cancer. And what does a rational person demand when someone claims to have invented a cure for cancer? Evidence, naturally. Not anecdotes and testimonials, but proper scientific testing, whether it's the gold standard of double-blind trials conducted by disinterested third parties or one of the many other testing methods. Unless and until the person making the claim produces such evidence, we wouldn't take him seriously.
That's the rational way of handling big claims about important matters, but that's not how we deal with predictions. We don't think carefully and demand evidence. If the prediction feels right, we go with our gut. As a result, there is little or no accountability in the prediction business. Many practitioners find that state of affairs quite acceptable. Those who don't are not given the resources to do better and so, like Alan Barnes, they are forced to make do with what they have, producing results that fall below the level of rigor they would like to demand of themselves. "Our clients have been too willing to accept analyses that are not as good as they could be and should be," Barnes says. "And because we're not getting that kind of pressure, there's not much incentive to improve the way we do business." Philip Tetlock has proposed that major consumers of forecasting-big corporations and intelligence agencies-fund carefully conducted research that would rigorously assess forecasting methods, but there hasn't been much interest.
A SPOONFUL OF SKEPTICISM.
Skepticism is a good idea at all times, but when the news is especially tumultuous and nervous references to uncertainty are sprouting like weeds on the roof of an abandoned factory, it is essential. The 1970s were one such time. As I write, we are in another.
The crash of 2008 was a shock. The global recession of 2009 was a torment. Unemployment is high, economies weak, and government debt steadily mounts. The media are filled with experts telling us what comes next. We watch, frightened and fascinated, like the audience of a horror movie. We want to know. We must know.
For the moment, what the experts are saying is, in an odd way, rea.s.suring. It's bleak, to be sure. But it's not apocalyptic, which is a big improvement over what they were saying when the crash was accelerating and the gloomier forecasters, such as former Goldman Sachs chairman John Whitehead, were warning it would be "worse than the Great Depression." To date, things aren't worse than the Great Depression, nor are they remotely as bad as the Great Depression. Naturally, the gloomsters would be sure to add "so far." And they would be right. Things change. The situation may get very much worse. Of course it may also go in the other direction, slowly or suddenly, modestly or sharply. The range of possible futures is vast.
As it always is. Anyone who reads history with sufficient imagination to overcome hindsight bias knows this. What appears most likely, or even certain, does not happen, while what happens is something quite unexpected. It was true for my grandfather's generation. It was true for my mother's. It is true for mine. And it will be true for my children's, which is one of the few grand-scale predictions I am comfortable making. As journalist James Fallows observed, "What looks like tomorrow's problem is rarely the real problem when tomorrow rolls around."
Fallows's point was proved by the very article in which he wrote those words. It was a book review published in the bleak and frightening year of 1974, and the subject of the review was Robert Heilbroner's crushingly grim An Inquiry into the Human Prospect. Fallows didn't care for the book, but it wasn't the pessimism that put him off. It was Heilbroner's "over-inflated certainty," a quality that, Fallows noted, Heilbroner occasionally shared with optimists like Herman Kahn. It's a mistake to be so sure we know what's coming, Fallows wrote. A little humility is in order. Almost four decades later, it's hard to read An Inquiry into the Human Prospect without laughing, and even harder not to think Fallows was a smart man.
In fact, Fallows was a smart young man at the time he wrote that review. Three years later, at the beginning of the Carter administration, he became the youngest person ever to hold the post of chief presidential speechwriter. He left after two years to begin a long and distinguished career in journalism, and so it was that in the bleak and frightening year of 2009 a considerably older James Fallows attempted, like Robert Heilbroner before him, to peer into the future. But it was not the human prospect Fallows grappled with. It was America's.
All the talk is of American decline, Fallows noted. Is it true? Do the best days of the United States lie in the past? Plenty of experts are sure they know the answer. It's no, say George Friedman and others. "Wrong," respond Chris Hedges, author of Empire of Illusion, and many more pundits. And Fallows? He starts by putting the question in perspective. Even in the colonies that would later become the United States, he notes, blasts against a society that had lost its way were routinely heard and they have been a fixture of American life ever since. In the modern era, fears surge and ebb, jeremiads come and go, but in literally every decade, there have been substantial numbers of experts proclaiming that the United States is a setting sun. "Through the entirety of my conscious life," Fallows wrote, "America has been on the brink of ruination, or so we have heard, from the launch of Sputnik through whatever is the latest indication of national falling apart or falling behind. Pick a year over the past half century, and I will supply an indicator of what at the time seemed a major turning point for the worse." Fallows then canvasses a wide array of factors working in America's favor, from the quality of top-tier university research to the country's continued ability to attract the best and brightest from around the world. It seems as if Fallows will side with Friedman and predict sunny days ahead.
But then, as foxes always do, Fallows considers the opposite perspective. The fact that decline was predicted in the past and did not come does not mean all predictions of decline must fail, he cautions. Only that they may fail. America's problems are real. And substantial. Fallows lists many, putting particular emphasis on a sclerotic federal government that may be incapable of making the changes necessary to prevent decline.
Fallows's conclusion? He refuses to draw one. The "only sensible answer" to the question of whether the United States is Rome in the waning days of the empire, he writes, is "maybe." To a mind craving certainty, that's not an answer at all. But it is the correct answer. What will happen to the United States is contingent on innumerable choices that will be made by more than three hundred million Americans, individually and collectively. It is further contingent on the individual and collective choices of billions of others on the planet. And it is contingent on factors in the natural world about which human understanding and ability to predict are strictly limited. That's a lot of contingencies. Gather them together, pile them one on top of the other, and you get a very thick deck of cards. Shuffle and deal. What hand will the United States draw? Experts who think they can answer that are fooling themselves and those who listen to them. We'll know when we see the cards.
Whatever disagreements one may have with the particular analyses offered by Fallows, his essay is a model of how a fox works his way forward in the darkness of the future. It is informed by the past, it is revealing about the present, and it surveys a wide array of futures. It is infused with metacognition ("Maybe I'm biased," Fallows cautions at one point). It offers hopeful visions of what could be; it warns against dangers that also could be. It explores our values by asking us what we want to happen and what we don't. And it goes no further. It raises issues, questions, and choices, and it suggests possibilities and probabilities. But it does not peddle certainties, and it does not predict.
What I've just described may sound commonplace. "No serious futurist deals in 'predictions,'" Alvin Toffler wrote in the introduction to Future Shock. "These are left for television oracles and newspaper astrologers." Similar statements can be found in countless essays and books about the future. "It is impossible to predict the future, and all attempts to do so in any detail appear ludicrous within a very few years," wrote Arthur C. Clarke in the introduction to Profiles of the Future, published in 1962. "This book has a more realistic yet at the same time more ambitious aim. It does not try to describe the future, but to define the boundaries within which possible futures must lie." But as common as statements like this are, so is what follows: predictions about the future. "The gas engine is on its way out, as any petroleum geologist will assure you in his more unguarded moments," Clarke wrote a mere forty-eight pages after announcing that he would not attempt to predict the future. Also finished, according to Clarke, were ships and cars. They would be replaced by hovercraft. Clarke even knew what "the characteristic road sign of the 1990s" would say: NO WHEELED VEHICLES ON THIS HIGHWAY.
It's easy to say, in the abstract, that the world is unpredictable. But it's a struggle to live by that belief. Medieval monks would test the strength of their commitment to celibacy by lying in bed, naked, with a woman, and anyone who contemplates the future faces a similar temptation. Embracing uncertainty may be the cold intellectual ideal, but it's the soft, warm sensation of certainty we crave.
So what will happen in our future? To repeat a phrase that appears often in this book, I don't know. No one does. The future will be determined by an almost infinite array of what the philosopher Michael Oakeshott called "interlocking contingencies." Certainty about the outcome is seductive. It's also ridiculous. The best that we can do is study, think, and choose as best we can in the spirit of building toward the future, as James Fallows put it. Then hope for a little luck. That's not a satisfying conclusion. It's even a little frightening. But, if it's any consolation, we can remember that it was no different for earlier generations, whether they knew it or not.
At the end of that 1975 episode of All in the Family, Mike despairs for the child he is about to bring into the world. A friend gives him a newspaper clipping. Chastened, he hands it to Gloria, who reads it aloud. It's something Alistair Cooke wrote, she says.
"Who?" her father, Archie, whispers to his wife. "Alice the cook," says the confused Edith.
"In the best of times our days are numbered anyway," it begins. "And so it would be a crime against nature for any generation to take the world's crisis so solemnly that it put off enjoying those things for which we were designed in the first place. The opportunity to do good work, to fall in love, to enjoy friends, to hit a ball, and to bounce a baby."
A wise person, that Alice.