Future Babble Part 3

Once it settles on a conclusion, Sir Francis Bacon wrote, "the human understanding . . . draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate." Heaps of research conducted in the twentieth and twenty-first centuries have only confirmed Bacon's wisdom. Dubbed "confirmation bias" by psychologist Peter Wason, the tendency is as simple as it is dangerous: Once we form a belief, for any reason, good or bad, rational or bonkers, we will eagerly seek out and accept information that supports it while not bothering to look for information that does not. And if we are unavoidably confronted with information that doesn't fit, we will be hypercritical of it, looking for any excuse to dismiss it as worthless.

One famous experiment was conducted in 1979, when capital punishment was a hot issue in the United States. Researchers assembled a group of people who already had an opinion about whether the death penalty was an effective way to deter crime. Half the group believed it was; half did not. They were then asked to read a study that concluded capital punishment does deter crime. This was followed by an information sheet that detailed the methods used in the study and its findings. They were also asked to read criticisms of the study that had been made by others and the responses of the study's authors to those criticisms. Finally, they were asked to judge the quality of the study. Was it solid? Did it strengthen the case for capital punishment? The whole procedure was then repeated with a study that concluded the death penalty does not deter crime. (The order of presentation was varied to avoid bias.) At the end, people were asked if their views about capital punishment had changed.

The studies were not real. The psychologists wrote them with the intention of producing two pieces of evidence that were mirror images of each other, identical in every way except for their conclusions. If people process information rationally, this whole experience should have been a wash. People would see a study of a certain quality on one side, a study of the same quality on the other, and they would shrug, with little or no change in their views. But that's not what happened. Instead, people judged the two studies (which were methodologically identical, remember) very differently. The study that supported their belief was deemed to be high-quality work that got to the facts of the matter. But the other study? Oh, it was flawed. Very poor stuff. And so it was dismissed. Because people processed the information in this blatantly biased fashion, the final outcome was inevitable: They left the experiment more strongly convinced than when they came in that they were right and those who disagreed were wrong.

"If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others," wrote psychologist Raymond Nickerson, "the confirmation bias would have to be among the candidates for consideration." In Peter Wason's seminal experiment, he provided people with feedback so that when they sought out confirming evidence and came to a false conclusion, they were told, clearly and unmistakably, that it was incorrect. Then they were asked to try again. Incredibly, half of those who had been told their belief was false continued to search for confirmation that it was right: Admitting a mistake and moving on does not come easily to h.o.m.o sapiens.

Like everyone else, experts are susceptible to confirmation bias. One study asked seventy-five social scientists to examine a paper that had been submitted for publication in an academic journal. This sort of peer review is routine and is intended to weed out work that is uninformative or methodologically weak. What it's not supposed to do is screen papers based on their conclusions. Research is either solid or not. Whether it happens to confirm the reviewer's beliefs is irrelevant. At least, it's supposed to be irrelevant. But it's not, as this study demonstrated. One version of the paper sent out for peer review came to conclusions that were in line with the commonly held view in the field; a second version of the paper was methodologically identical but its conclusions contradicted the conventional wisdom. Reviewers who got the paper that supported their views typically judged it to be relevant work of sound quality and they recommended it be published; those who got the paper that contradicted their views tended to think it was irrelevant and unsound and they said it should be rejected. "Reviewers were strongly biased," the researcher concluded. Not that they were aware of their bias, mind you. In fact, they would have been offended at the very suggestion.



Perhaps we should call this the "Toynbee phenomenon," because there is no more spectacular example than Arnold Toynbee's A Study of History. By 1921, Toynbee's vision was locked in. He was certain there was a pattern in classical and Western histories. That pattern became the outline of A Study of History. Then Toynbee started rummaging through the histories of other civilizations and found that they, too, followed the same pattern, not because the pattern was real but because Toynbee's information processing was profoundly biased. To paraphrase Sir Francis Bacon, Toynbee energetically searched for and collected information that supported his convictions while "neglecting or despising" information that did not, and when contrary evidence was too big to dismiss or ignore, he cobbled together ingenious stories that transformed contradiction into confirmation. "His whole scheme is really a scheme of pigeon-holes elaborately arranged and labelled, into which ready-made historical facts can be put," wrote the philosopher and historian R. G. Collingwood. A.J.P. Taylor's judgment was even more severe. "The events of the past can be made to prove anything if they are arranged in a suitable pattern, and Professor Toynbee has succeeded in forcing them into a scheme that was in his head from the beginning."

BETTER A FOX THAN A HEDGEHOG

It is a heartening fact that many experts saw through the delusions of Arnold Toynbee. In a phrase, they showed better judgment. That's worth emphasizing because it's tempting to become cynical about experts and their opinions. We should resist that temptation, for all experts are not alike.

In his research, Philip Tetlock was careful to have experts make predictions on matters both within and beyond their particular specialty. Only when they were operating within their specialty were experts really predicting as capital-E Experts. Otherwise, they were more like smart, informed laypeople. Analyzing the numbers, Tetlock found that some experts were more accurate when they made predictions as Experts than when they made them as laypeople. No surprise: they know more, so they should be more accurate. More surprising is that others were actually less accurate.

As the reader should be able to guess by now, the experts who were more accurate when they made predictions within their specialty were foxes; those who were less accurate were hedgehogs. Hedgehogs are bad at predicting the future under any circumstances, but it seems the more they know about what they're predicting, the worse they get. The explanation for this important and bizarre result lies, at least in part, in the psychological mechanisms discussed here.

Expertise means more knowledge, and more knowledge produces more detail and complication. More detail and complication make it harder to come to a clear and confident answer. At least it should make it harder. Say the question is "How will the economy do next year?" Someone who has only a few facts to go by may find they all point in one direction. But someone who has masses of information available (facts about economic history and theory, about finance, bonds and stocks, production and consumption trends, interest rates, international trade, and so on) won't find all the facts neatly lined up and pointing like an arrow in one direction. It's far more likely the facts will point to boom, and bust, and lots of places in between, and it will be a struggle to bring even modest clarity to the whole chaotic picture.

Foxes are okay with that. They like complexity and uncertainty, even if that means they can only draw cautious conclusions and they have to admit they could be wrong. "Maybe" is fine with them.

But not hedgehogs. They find complexity and uncertainty unacceptable. They want simple and certain answers. And they are sure they can get them using the One Big Idea that drives their thinking. With this mindset, the hedgehog's greater knowledge doesn't challenge the psychological biases we're all prone to. Instead, it supercharges them. As Arnold Toynbee demonstrated so well, expertise boosts the hedgehog's ability to see patterns that aren't there and to deal with contradictory evidence by rationalizing it away or twisting it so it supports what the hedgehog believes. In this way, the hedgehog gets an answer that will almost certainly be, to quote H. L. Mencken, clear, simple, and wrong. Of course the hedgehog isn't likely to accept that he may be wrong. Confidence is a defining feature of the species: Not only are hedgehogs more overconfident than foxes, they are far more likely to declare outcomes "certain" or "impossible." Could they be wrong? Never!

In his classic 1952 examination of pseudoscience, Fads and Fallacies in the Name of Science, Martin Gardner took a fascinating look at the work of late nineteenth- and early twentieth-century "pyramidologists." These obsessive investigators measured every nook and cranny of the pyramids, inside and out, using every imaginable unit and method. They then tried to prove "mathematically" that the pyramid's dimensions were encoded with a vast trove of knowledge, including a complete record of all the great events of the past and future. With masses of data at hand, and an unrestrained desire to prove what they were certain was right, they succeeded. In a sense. And only up to a point. "Many great books have been written on this subject, some of which have been presented to me by their authors," Bertrand Russell dryly observed. "It is a singular fact that the Great Pyramid always predicts the history of the world accurately up to the date of publication of the book in question, but after that date it becomes less reliable." As Gardner demonstrated, the pyramidologists were filled with passionate belief. Almost without exception, they were devout Christians, and by picking the numbers that fit, while ignoring the rest, they made the pyramid's dimensions align with past events. Projecting forward, they then "discovered" that the events described in the Book of Revelation would soon unfold. One of the earliest pyramidologists claimed 1882 would mark the beginning of the end. Later investigators predicted it would come in 1911, 1914, 1920, or 1925. When those predictions failed to pan out, claims were made for 1933 and 1936. As one prediction after another passed without Jesus descending from the clouds, interest in this first wave of pyramidology slowly faded.

By the time Gardner wrote his book in 1952, most people had forgotten pyramidology. Or they thought it was silly. In 1952, smart people knew the future was written in the pages of Toynbee.

Gardner wasn't so sure. The same tendency to fit data to belief can be seen, he wrote, "in the great cyclical theories of history-the works of men like Hegel, Spengler, Marx, and perhaps, though one must say it in hushed tones, the works of Toynbee. The ability of the mind to fool itself by unconscious 'fudging' on the facts-an overemphasis here and an underemphasis there-is far greater than most people realize. The literature of Pyramidology stands as a permanent and pathetic tribute to that ability. Will the work of the prophetic historians mentioned above seem to readers of the year 2000 as artificial in their constructions as the predictions of the Pyramidologists?"

It's fitting that Gardner made his point by asking a question about the future rather than making a bold and certain claim. Martin Gardner was a classic fox. So were the historians who scoffed when so many other smart people were venerating Toynbee as a prophet. History is immensely complex, they insisted, and each event is unique. Only the delusional see a simple pattern rolling smoothly through the past, present, and future. "He dwells in a world of his own imagining," wrote Pieter Geyl in one of his final attacks on Arnold Toynbee, "where the challenges of rationally thinking mortals cannot reach him."

The foxes were right. About history. And about Arnold Toynbee. That brilliant hedgehog never understood how badly he deceived himself and the world, which makes his life story, for all the man's fame and wealth, a tragedy.

4.

The Experts Agree: Expect Much More of the Same

[Against the menace of Japanese economic power] there is now only one way out. The time has come for the United States to make common cause with the Soviet Union.

-GORE VIDAL, 1986.

"We are definitely at war with j.a.pan," says the American hero of Rising Sun, Michael Crichton's 1992 suspense novel. Americans may not know it; they may even deny it. But the war rages on because, to the j.a.panese, business is war by other means. And j.a.pan is rolling from victory to victory. "Sooner or later, Americans must come to grips with the fact that j.a.pan has become the leading industrial nation in the world," Crichton writes in an afterword. "The j.a.panese have the longest lifespan. They have the highest employment, the highest literacy, the smallest gap between rich and poor. Their manufactured goods have the highest quality. They have the best food. The fact is that a country the size of Montana, with half our population, will soon have an economy equal to ours."

More op-ed than potboiler (not many thrillers come with bibliographies), Rising Sun was the culmination of a long line of American jeremiads about the danger in the East. Japan "threatens our way of life and ultimately our freedoms as much as past dangers from Nazi Germany and the Soviet Union," wrote Robert Zielinski and Nigel Holloway in the 1991 book Unequal Equities. A year earlier, in Agents of Influence, Pat Choate warned that Japan had achieved "effective political domination over the United States." In 1988, the former American trade representative Clyde Prestowitz worried that the United States and Japan were "trading places," as the title of his book put it. "The power of the United States and the quality of American life is [sic] diminishing in every respect," Prestowitz wrote. In 1992, Robert Reich, economist and future secretary of labor, put together a list of all the books he could find in this alarming subgenre. It came to a total of thirty-five, all with titles like The Coming War with Japan, The Silent War, and Trade Wars.

Japan blocked American companies from selling in its domestic market, these books complained, while it ruthlessly exploited the openness of the American market. Japan planned and plotted; it saved, invested, and researched; it elevated productivity. And it got stronger by the day. Its banks were giants, its stock markets rich, its real estate more valuable than any on earth. Japan swallowed whole industries, starting with televisions, then cars. Now, with Japan's growing control of the semiconductor and computer markets, it was high tech. In Crichton's novel, the plot revolves around a videotape of a murder that has been doctored by Japanese villains who are sure the American detectives, using "inferior American video technology," will never spot the fake. Meanwhile, American debt was piling up as fast as predictions of American economic decline; in 1992, a terrifying book called Bankruptcy 1995 spent nine months on the New York Times best-seller list. American growth was slow, employment and productivity were down, and investment and research were stagnant.

Put it all together and the trend lines revealed the future: the Japanese economy would pass the American, and the victors of the Second World War would be defeated in the economic war. "November, 2004," begins the bleak opening of Daniel Burstein's Yen!, a 1988 best seller. "America, battered by astronomical debts and reeling from prolonged economic decline, is gripped by a new and grave economic crisis." Japanese banks hold America's debt. Japanese corporations have bought out American corporations and assets. Japanese manufacturers look on American workers as cheap overseas labor. And then things get really bad. By the finish of Burstein's dramatic opening, the United States is feeble and ragged while Japan is no longer "simply the richest country in the world." It is the strongest.

Less excitable thinkers didn't see Japan's rise in quite such martial terms but they did agree that Japan was a giant rapidly becoming a titan. In the 1990 book Millennium, Jacques Attali, the former adviser to French president François Mitterrand, described an early twenty-first century in which both the Soviet Union and the United States ceased to be superpowers, leaving Japan contending with Europe for the economic leadership of the world. Moscow would fall into orbit around Brussels, Attali predicted. Washington, DC, would revolve around Tokyo. Lester Thurow sketched a similar vision in his influential best seller Head to Head, published in 1992. The recent collapse of the Soviet Union meant the coming years would see a global economic war between Japan, Europe, and the United States, wrote Thurow, a famous economist and former dean of the MIT Sloan School of Management. Thurow examined each of the "three relatively equal contenders" like a punter at the races. "If one looks at the last 20 years, Japan would have to be considered the betting favorite to win the economic honors of owning the 21st century," Thurow wrote. But Europe was also expanding smartly, and Thurow decided it had the edge. "Future historians will record that the 21st century belonged to the House of Europa!" And the United States? It's the weakest of the three, Thurow wrote. Americans should learn to speak Japanese or German.

The details varied somewhat from forecast to forecast, but the views of Thurow and Attali were received wisdom among big thinkers. "Just how powerful, economically, will Japan be in the early 21st century?" asked the historian Paul Kennedy in his much-discussed 1987 best seller The Rise and Fall of the Great Powers. "Barring large-scale war, or ecological disaster, or a return to a 1930s-style world slump and protectionism, the consensus answer seems to be: much more powerful." As they peered nervously into the future, the feelings of many Americans were perfectly expressed by an ailing President George H. W. Bush when he keeled over and vomited in the lap of the Japanese prime minister.

They needn't have worried. The experts were wrong.

By the time the Hollywood adaptation of Rising Sun was released in 1993, Japan was in big trouble. Real estate had tanked, stocks had plunged, and Japan's mammoth banks staggered under a stupendous load of bad debt. What followed would be known as "the lost decade," a period of economic stagnation that surprised experts and made a hash of forecasts the world over.

Europe did better in the 1990s but it, too, failed to fulfill the forecasts of Lester Thurow and so many others. The United States also surprised the experts, but in quite a different way. The decade that was so widely expected to see the decline, if not the fall, of the American giant turned into a golden age as technology-driven gains in productivity produced strong growth, surging stocks, rock-bottom unemployment, a slew of social indicators trending positive, and, miracle of miracles, a federal budget churning out huge surpluses. By the turn of the millennium, the United States had become a "hyperpower" that dominated "the world as no empire has ever done before in the entire history of humankind," in the purple words of one French observer. The first decade of the twenty-first century was much less delightful for the United States (it featured a mild recession, two wars, slow growth, the crash of 2008, a brutal recession, and soaring deficits) but Europe and Japan still got smaller in Uncle Sam's rearview mirror. Between 1991 and 2009, the American economy grew 63 percent, compared to 16 percent for Japan, 22 percent for Germany, and 35 percent for France. In 2008, the gross national income of the United States was greater than that of Germany, the United Kingdom, France, Italy, and Spain combined. And it was more than three times that of Japan.

How could so many experts have been so wrong? A complete answer would be a book in itself. But a crucial component of the answer lies in psychology. For all the statistics and reasoning involved, the experts derived their judgments, to one degree or another, from what they felt to be true. And in doing so, they were fooled by a common bias.

In psychology and behavioral economics, status quo bias is a term applied in many different contexts, but it usually boils down to the fact that people are conservative: We stick with the status quo unless something compels us otherwise. In the realm of prediction, this manifests itself in the tendency to see tomorrow as being like today. Of course, this doesn't mean we expect nothing to change. Change is what made today what it is. But the change we expect is more of the same. If crime, stocks, gas prices, or anything else goes up today, we will tend to expect it to go up tomorrow. And so tomorrow won't be identical to today. It will be like today. Only more so.

This tendency to take current trends and project them into the future is the starting point of most attempts to predict. Very often, it's also the end point. That's not necessarily a bad thing. After all, tomorrow typically is like today. Current trends do tend to continue. But not always. Change happens. And the farther we look into the future, the more opportunity there is for current trends to be modified, bent, or reversed. Predicting the future by projecting the present is like driving with no hands. It works while you are on a long stretch of straight road, but even a gentle curve is trouble, and a sharp turn always ends in a flaming wreck.

In 1977, researchers inadvertently demonstrated this basic truth when they asked eight hundred experts in international affairs to predict what the world would look like five and twenty-five years out. "The experts typically predicted little or no change in events or trends, rather than predicting major change," the researchers noted. That paid off in the many cases where there actually was little or no change. But the experts went off the road at every curve, and there were some spectacular crashes. Asked about Communist governments in 2002, for example, almost one-quarter of the experts predicted there would be the same number as in 1977, while 45 percent predicted there would be more. As a straight-line projection from the world of 1977, that's reasonable. As insight into the world as it actually was in 2002, more than a decade after most Communist governments had been swept away, it was about as helpful as a randomly selected passage from Nostradamus.

Similar wrecks can be found in almost any record of expert predictions. In his 1968 book The End of the American Era, for example, the political scientist Andrew Hacker insisted race relations in the United States would get much, much worse. There will be "dynamiting of bridges and water mains, firing of buildings, assassination of public officials and private luminaries," Hacker wrote. "And of course there will be occasional rampages." Hacker also stated with perfect certainty that "as the white birth rate declines," blacks will "start to approach 20 or perhaps 25 percent of the population." Hacker would have been bang on if the trends of 1968 had continued. But they didn't, so he wasn't. The renowned sociologist Daniel Bell made the same mistake in his landmark 1976 book The Cultural Contradictions of Capitalism. Inflation is here to stay, he wrote, which is certainly how it felt in 1976. And Bell's belief was widely shared. In The Book of Predictions, a compilation of forecasts published in 1981, every one of the fourteen predictions about inflation in the United States saw it rising rapidly for at least another decade. Some claimed it would keep growing until 2030. A few even explained why it was simply impossible to whip inflation. And yet, seven years after the publication of Bell's book, and two years after The Book of Predictions, inflation was whipped.

Some especially breathtaking examples of hands-free driving can be found in The World in 2030 A.D., a fascinating book written by F. E. Smith in 1930. Smith, also known as the earl of Birkenhead, was a senior British politician and a close friend of Winston Churchill. Intellectually curious, scientifically informed, well-read, and imaginative, Smith expected the coming century to produce astonishing change. "The child of 2030, looking back on 1930, will consider it as primitive and quaint as the conditions of 1830 seem to the children of today," he wrote. But not even Smith's adventurous frame of mind could save him from the trap of status quo bias. In discussing the future of labor, Smith noted that the number of hours worked by the average person had fallen in recent decades and so, he confidently concluded, "by 2030 it is probable that the average 'week' of the factory hand will consist of 16 or perhaps 24 hours." He was no better on military matters. "The whole question of future strategy and tactics pivots on the development of the tank," he wrote. That was cutting-edge thinking in 1930, and it proved exactly right when the Second World War began in 1939, but Smith wasn't content to look ahead a mere nine years. "The military mind of 2030 will be formed by what engineers accomplish in this direction during the next 60 or 70 years. And, in view of what has been accomplished since 1918, I see no limits to the evolution of mobile fortresses." Smith was even worse on politics, his specialty. "Economic and political pressure may make it imperative that the heart of the [British] Empire should migrate from London to Canada or even to Australia at some date in the next century or in the ages which are to follow," he wrote. But no matter. "The integrity of the Empire will survive this transplantation without shock or disaster." And India would still be the jewel in the crown. "British rule in India will endure. By 2030, whatever means of self-government India has achieved, she will still remain a loyal and integral part of the British Empire."

Among literary theorists and historians, it is a truism that novels set in the future say a great deal about the time they were written and little or nothing about the future. Thanks to status quo bias, the same is often true of expert predictions, a fact that becomes steadily more apparent as time passes and predictions grow musty with age. The World in 2030 is a perfect demonstration. Read today, it's a fascinating book full of marvelous insights that have absolutely nothing to do with the subject of its title. In fact, as a guide to the early twenty-first century, it is completely worthless. Its value lies entirely in what it tells us about the British political class, and one British politician, in 1930. The same is true of Rising Sun and so many of the other books written during the panic about Japan Inc. They drew on the information and feelings available to the authors at the time they were written, and they faithfully reflect that moment, but the factors that actually made the difference in the years that followed seldom appear in these books. The Internet explosion was a surprise to most, as was the rise of Silicon Valley, abetted by the shift in high-tech development from hardware to software. The turnaround in the American budget and the decline of Japan's banks were dramatically contrary to current trends. A few observers may have spotted some of these developments coming but not one foresaw them all, much less understood their cumulative effect. And perhaps most telling of all, these books say little or nothing about one of the biggest economic developments of the 1990s and the first decade of the twenty-first century: the emergence of China and India as global economic powers. In Clyde Prestowitz's Trading Places, the Asian giants are ignored. In Jacques Attali's Millennium, the existence of China and India was at least acknowledged, which is something, but Attali was sure both would remain poor and backward. Their role in the twenty-first century, he wrote, would be to serve as spoils in the economic war waged by the mighty Japanese and European blocs; or, if they resisted foreign domination, they could instigate war. Attali did concede that his forecast would be completely upended if China and India "were to be integrated into the global economy and market," but "that miracle is most unlikely." Lester Thurow did even worse. In Head to Head, he never mentioned India, and China was dismissed in two short pages. "While China will always be important politically and militarily," Thurow wrote, "it will not have a big impact on the world economy in the first half of the 21st century. . . ."

Why didn't the experts and pundits see a problem with what they were doing? Trends end, surprises happen; everyone knows that. And the danger of running a straight-line projection of current trends into the future is notorious. "Long-term growth forecasts are complicated by the fact that the top performers of the last ten years may not necessarily be the top performers of the next ten years," noted a 2005 Deutsche Bank report. "Who could have imagined in 1991 that a decade of stagnation would beset Japan? Who would have forecast in the same year that an impressive rebound of the U.S. economy was to follow? Simply extrapolating the past cannot provide reliable forecasts." Wise words. Curiously, though, the report they are found in predicts that the developed countries whose economies will grow most between 2005 and 2020 are, in order, Ireland, the United States, and Spain. That looks a lot like a straight-line projection of the trend in 2005, when those three countries were doing wonderfully. But three years later, when Ireland, the United States, and Spain led the world in the great cliff-plunge of 2008, it looked like yet another demonstration that "simply extrapolating the past cannot provide reliable forecasts."

Daniel Burstein's Yen! has a chart showing Japanese stock market levels steadily rising for the previous twenty years under the headline "Tokyo's One-Way Stock Market: Up." That's the sort of hubris that offends the gods, which may explain why, three years later, Japanese stocks crashed and the market spent the next twenty years going anywhere but up. One would think it's obvious that a stock market cannot go up forever, no matter how long it has been going that way, but the desire to extend the trend line is powerful. It's as if people can't help themselves, as if it's an addiction. And to understand an addiction, it's back to the brain we must go.

PICK A NUMBER

You are in a university lab where a psychologist is talking. Already you are on high alert. There's a trick question coming because that's what they do, those psychologists. But you're ready.

The psychologist shows you a wheel of fortune. He gives it a spin. The wheel whips around and around, slows, and finally the marker comes to rest on a number. That number is randomly selected. You know that. It means nothing. You know that too.

Now, the psychologist says, What percentage of African countries are members of the United Nations?

This strange little experiment was devised by Daniel Kahneman and Amos Tversky, two psychologists whose work on decision making has been enormously influential in a wide array of fields. It even launched a whole field of economics known as "behavioral economics," which is essentially a merger of economics and psychology. In 2002, Kahneman was awarded the Nobel Prize in economics (Tversky died in 1996), which is particularly impressive for a man who had never taken so much as a single class in the subject.

The wheel-of-fortune experiment was typical of the work of Kahneman and Tversky. It appeared trivial, even silly. But it revealed something profoundly important. When the wheel landed on the number 65, the median estimate on the question about African countries in the UN was 45 percent (this was at a time when UN membership was lower than it is today). When the wheel stopped at the number 10, however, the median guess was 25 percent. With this experiment, and many others like it, Kahneman and Tversky showed that when people try to come up with a number, they do not simply look at the facts available and rationally calculate the number. Instead, they grab on to the nearest available number, dubbed the "anchor," and they adjust in whichever direction seems reasonable. Thus, a high anchor skews the final estimate high; a low anchor skews it low.

This result is so bizarre it may be hard to accept, but Kahneman and Tversky's experiment has been repeated many times, with many variations, and the result is always the same. In some versions, people are asked to make a number out of the digits of their telephone number; in others, the anchor is constructed from the respondent's Social Security number. Different versions slip a number in surreptitiously. In one experiment, people were asked whether Gandhi was older or younger than nine when he died. It's a silly question, of course. But when people were subsequently asked, "How old was Gandhi when he died?" the number nine influenced their answer. We know that because when others were first asked whether Gandhi was older or younger than 140 and then asked how old Gandhi was when he died, their average answer was very different: In the first case, the average was 50; in the second, 67. Researchers have even found that when they tell people that the first number they are exposed to is irrelevant and should not have any bearing on their estimate, it still does. The "anchoring and adjustment heuristic," as it is called, is unconscious. The conscious mind does not control it. It cannot turn it off. It's not even aware of it: When people are asked if the anchor number influenced their decision, they insist it did not.

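The mechanics are easy to caricature in a few lines of code. Here is a minimal sketch, my own illustration rather than Kahneman and Tversky's model, that treats an estimate as an insufficient adjustment from the anchor toward the respondent's private belief (every number in it is an assumption chosen for the demonstration):

```python
# A toy model of anchoring and adjustment (illustrative only): the
# respondent starts at the anchor and moves toward his or her own
# belief, but only part of the way.
def anchored_estimate(anchor: float, private_belief: float,
                      adjustment: float = 0.5) -> float:
    """Estimate produced by adjusting from the anchor toward the belief."""
    return anchor + adjustment * (private_belief - anchor)

belief = 35.0  # suppose an unanchored respondent would guess 35 percent

print(anchored_estimate(anchor=65, private_belief=belief))  # 50.0
print(anchored_estimate(anchor=10, private_belief=belief))  # 22.5
# The same private belief yields a high estimate under a high anchor
# and a low estimate under a low one: the signature of the heuristic.
```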
When experts try to forecast numbers, they don't begin by spinning a wheel of fortune, so the number that acts as the unconscious anchor isn't likely to be so arbitrary. On the contrary. If an expert is predicting, say, the unemployment rate in three years, he will likely begin by recalling the unemployment rate today. If another tries to anticipate how many countries will have nuclear weapons a decade in the future, she will start by calling to mind the number with nuclear weapons now. In each case, the current number serves as the anchor, which is generally a reasonable way of coming up with an estimate. But it does mean the prediction starts with a built-in bias toward the status quo. And bear in mind that sometimes the current number is not a reasonable starting point. Consider that in 2006, plenty of people in the United States, the United Kingdom, Ireland, and elsewhere were estimating how much their houses would be worth in 2009 or 2010 because house prices had been rising rapidly and they wanted to know if refinancing their mortgages made sense, or if they should buy another property or make some other investment in real estate. Many factors weighed on their judgment, naturally, but one of them was certainly the anchoring and adjustment heuristic. How much will my house be worth in a few years? Ask that question and you inevitably bring to mind how much it's worth now. And then you adjust: up. But by 2006, a real estate bubble had grossly inflated house prices and so the anchor value people used was unrealistic, and they paid the price when the bubble burst. The very same process undoubtedly went on in the brains of the bankers and financial wizards who bundled mortgage debt into arcane products for sale on global markets, setting in place the explosives that detonated in 2008. In every case, a number that should not have been the starting point very likely was.

This anchoring and adjustment heuristic is one source of status quo bias. But it's a minor one, admittedly. A much bigger contributor is another discovery of Kahneman and Tversky.

THINK OF AN EXAMPLE

In one of their earliest experiments, Kahneman and Tversky had people read a list of names. Some were men, some were women, but all were famous to some degree. The researchers then asked people to judge whether the list had more men or women. In reality, there were equal numbers of men and women, but that's not how people saw it. They consistently said there were more women. Why?

When people attempt to judge how common something is-or how likely it is to happen in the future-they attempt to think of an example of that thing. If an example is recalled easily, it must be common. If it's harder to recall, it must be less common. Kahneman and Tversky called this "the availability heuristic"-a simple rule of thumb that uses how "available" something is in memory as the basis for judging how common it is. In Kahneman and Tversky's experiment, the women on the list were more famous than the men, and this made their names more memorable. After reading the list, it was easier to recall examples of women and so, using the availability heuristic, people concluded there must be more women than men on the list. Kahneman and Tversky confirmed these results with different trials in which the men on the list were more famous than the women, and, predictably, people concluded there were more men than women.

Again, this is not a conscious calculation. The "availability heuristic" is a tool of the unconscious mind. It churns out conclusions automatically, without conscious effort. We experience these conclusions as intuitions. We don't know where they come from and we don't know how they are produced, they just feel right. Whether they are right is another matter. Like the other hardwired processes of the unconscious mind, the availability heuristic is the product of the ancient environment in which our brains evolved. It worked well there. When your ancestor approached the watering hole, he may have thought, "Should I worry about crocodiles?" Without any conscious effort, he would search his memory for examples of crocodiles eating people. If one came to mind easily, it made sense to conclude that, yes, he should watch out for crocodiles, for two reasons: One, the only information available in that environment was personal experience or the experience of the other members of your ancestor's little band; two, memories fade, so recent memories tend to be easier to recall. Thus, if your ancestor could easily recall an example of a crocodile eating someone, chances are it happened recently and somewhere in the neighborhood. Conclusion: Beware crocodiles.

Needless to say, that world is not ours, and one of the biggest differences between that environment and the one we live in is information. We are awash in images, stories, and data. Sitting here in my Ottawa office, I Googled the words Rome, live, feed and now I'm looking at real-time pictures of Palatine Hill in the Eternal City. The whole process took about six seconds. Of course, no one is impressed by this, as the Internet, cell phones, satellites, television, and all our other information technologies are old hat for most people living in developed countries. But they're only old hat from the perspective of an individual living today. From the perspective of our species, and biology, they are startlingly new innovations that have created an information environment completely unlike anything that has ever existed. And we are processing information in this dazzling new world of information superabundance with a brain that evolved in an environment of extreme information scarcity.

To see how profound the implications of this mismatch can be, look at the reactions to the terrorist attacks of September 11, 2001.

Almost everyone on the planet remembers the attacks. We saw the second plane hit live, on television, and we stared as the towers crumbled. It was as if we watched the whole thing through the living room window, and those images-so surprising and horrific-were seared into memory. In the weeks and months after, we talked about nothing but terrorism. Will there be another attack? When? How? Polls found the overwhelming majority of Americans were certain terrorists would strike again and it would be catastrophic. Most thought they and their families were in physical danger. These perceptions were shaped by many factors, but the availability heuristic was certainly one. How easy is it to think of an example of a terrorist attack? After 9/11, that question would have sounded absurd. It was hard not to think of an example. Nothing was fresher, more vivid, more available to our minds than that. The unconscious mind shouted its conclusion: This is going to happen again!

And yet it did not. Years later, this was considered proof that the government had successfully stopped the attacks that would certainly have come. The absence of attacks was "contrary to every expectation and prediction," wrote conservative pundit Charles Krauthammer in 2004. "Anybody-any one of these security experts, including myself-would have told you on September 11, 2001, we're looking at dozens and dozens and multiyears of attacks like this," former New York mayor Rudy Giuliani said in 2005. Both Krauthammer and Giuliani concluded that the Bush administration deserved the credit. And maybe it did. After all, it's impossible to know conclusively what would have happened if the administration had taken different actions. But we do know the government did not smash "dozens and dozens" of sophisticated terrorist plots. We also know that the most recent Islamist terrorist attack in the United States prior to 9/11 was the bombing of the World Trade Center in 1993, so the fact that the United States went years without another attack was actually in line with experience. "Occam's razor" is the rule of logic that says a simpler explanation should be preferred over a more complex explanation, and here there is a very simple explanation: After 9/11, our perception of the threat was blown completely out of proportion. And for that we can thank, in part, the Stone Age wiring of our brains.

Bear in mind this fits perfectly with how people routinely think. When something big happens, we expect more of the same, but when that thing hasn't happened in ages, or never has, we expect it won't. Earthquakes illustrate the point perfectly. When tectonic plates push against each other, pressure builds and builds, until the plates suddenly shudder forward in the spasm of brief, violent motion that ends when the built-up energy is dissipated. Then the whole process begins again. This means that the probability of a serious earthquake is generally lowest after an earthquake and it grows as time passes. If people assessed the risk of earthquake rationally, sales of earthquake insurance should follow the same pattern: lowest after an earthquake but steadily growing as time passes. Instead, the opposite happens. There is typically a surge in sales after an earthquake, which is followed by a long, slow drop-off as time passes and memories fade. That's the availability heuristic at work.

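The rational pattern can be made concrete with a toy model. The sketch below is my own illustration, not from the book: it assumes the gap between major quakes follows a Weibull distribution with shape greater than 1, a standard way to model a hazard that rises as stress accumulates, and the specific numbers are invented for the example:

```python
# Toy model (illustrative assumption, not from the book): treat the gap
# between major earthquakes as Weibull-distributed with shape k > 1, so
# the hazard rate, meaning the chance of a quake now given none since
# the last one, starts low and climbs as tectonic stress accumulates.
MEAN_GAP_YEARS = 100.0  # assumed average interval between major quakes
SHAPE_K = 2.0           # k > 1 gives a hazard that increases over time

# Weibull scale chosen so the mean interval is MEAN_GAP_YEARS
# (mean = scale * Gamma(1 + 1/k), and Gamma(1.5) is about 0.886).
SCALE = MEAN_GAP_YEARS / 0.886

def hazard(years_since_last_quake: float) -> float:
    """Instantaneous quake hazard, in expected events per year."""
    t = years_since_last_quake
    return (SHAPE_K / SCALE) * (t / SCALE) ** (SHAPE_K - 1)

for t in [1, 10, 50, 100]:
    print(f"{t:>3} years after a quake: hazard = {hazard(t):.4f} per year")
# The hazard is lowest right after a quake and rises steadily, which is
# when rational demand for insurance should be highest. Actual sales
# follow the reverse curve: the availability heuristic at work.
```

With these assumed numbers the hazard is a hundred times higher a century after a quake than a year after, while insurance sales run in exactly the opposite direction.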
The mental rut of the status quo is often bemoaned, particularly after there's a disaster and people ask, "Why didn't anyone see it coming?" It was a "failure of imagination" that blinded the U.S. government to the terrorist attacks of 9/11, concluded a national commission. "A failure of the collective imagination" prevented economists from foreseeing the credit crisis of 2008, a group of eminent British economists wrote in an open letter to Queen Elizabeth. The solution to this dearth of imagination seems obvious. After 9/11, the U.S. government assembled science fiction authors, Hollywood screenwriters, futurists, novelists, and a wide array of other creative people to dream up ways terrorists could strike. After the crash of 2008, people again wanted to understand what had gone wrong, and what would go wrong next, and, obligingly, the shelves of bookstores overflowed with imaginative descriptions of how the crash would be followed by catastrophic depression or worse.

But notice what's really happening here. Shocking terrorist attack? Didn't see it coming? Let's imagine more shocking terrorist attacks. Economic disaster? Big surprise, wasn't it? So let's imagine more economic disasters. This sort of reaction doesn't actually get us out of the mental rut of the status quo. It merely creates a new rut. And all that imagining carves it deep. In a 1976 experiment, psychologists asked Americans to imagine either Jimmy Carter or Gerald Ford winning that year's presidential election and then to rate each man's chances. The researchers discovered that those who had imagined Jimmy Carter winning tended to give him a better shot at winning, while those who imagined Ford winning did the same for Ford. What this and similar experiments showed is that when we imagine an event, we create a vivid image that is easily recalled from memory. If we later try to judge how likely it is that the event will actually happen, that memory will drive up our estimate via the availability heuristic. And of course we will be unaware that our judgment was biased by the imagining we did earlier. Our conclusion will simply feel right.

So all that imagining we do after getting walloped by a surprise may not prepare us for the next surprise. In fact, it is likely to make us all the more convinced that tomorrow will be like today, only more so-setting us up for another shock if it's not.

EVERYONE KNOWS THAT!

So far I've been discussing judgment as if it's something done locked away in a dark, lonely corner of the basement. It's not, of course. People are connected. They talk, they swap information, and they listen very carefully to what everyone's saying because people are profoundly social animals. And experts are people too.

One of those experts is Robert Shiller. An economist at Yale University, Shiller is a leading scholar, a tenured professor, an innovator, and the author of the 2000 book Irrational Exuberance, which warned that the boom in tech stocks was really a bubble set to burst. Shortly after Irrational Exuberance was published, the tech bubble burst and the book became a best seller. Only a few years later, Shiller worried that another bubble was inflating, this time in real estate. If regulators didn't act, it would keep growing, and the inevitable pop would be devastating. There could be "a substantial increase in the rate of personal bankruptcies, which could lead to a secondary string of bankruptcies of financial institutions as well," he wrote in a 2005 edition of Irrational Exuberance. A recession would follow, perhaps even "worldwide." Thus, Robert Shiller can reasonably claim to be one of the very few economists who predicted the disaster of 2008. Unlike anyone else, however, Shiller was in a position to do something about the disaster he foresaw because, from 1990 until 2004, he was a member of a panel that advises the president of the Federal Reserve Bank of New York. And the president of the Federal Reserve Bank of New York is the vice chairman of the committee that sets interest rates. If interest rates had been raised in 2002 or 2003, the housing bubble likely would have stopped inflating and the disaster of 2008 might have been averted or at least greatly reduced in scale.

But when the advisory panel met in 2002 and 2003, Shiller didn't shout and jump up and down on the table. "I felt the need to use restraint," he recalled. The consensus in the group was that there was no bubble and no need to raise interest rates. To suggest otherwise was distinctly uncomfortable. Shiller did make his point, but "I did so very gently, and felt vulnerable expressing such quirky views. Deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated." Don't be misled by the reference to "termination," which suggests that Shiller's reluctance to speak out was solely the product of a conscious calculation of self-interest. There's much more to it than that, as Shiller knows better than most. He's a pioneer of behavioral economics, the field that merges economics and psychology, and one thing psychology has demonstrated beyond dispute is that the opinions of those around us (peers, colleagues, co-workers, neighbors) subtly and profoundly influence our judgments.

In the 1950s, Solomon Asch, Richard Crutchfield, and other psychologists conducted an extensive series of experiments that revealed an unmistakable tendency to abandon our own judgments in the face of a group consensus, even when the consensus is blatantly wrong. In Asch's experiments, three-quarters of test subjects did this at least once. Overall, people ignored what they could see with their own eyes and adopted the group answer in one-third of all trials. They did this even though there were no jobs, promotions, or appointments at stake. They even did it when they were anonymous and there was no risk of embarrassment. But the very fact that there was nothing at stake in these experiments may suggest that people didn't take them seriously. If there are no consequences to their judgment, why not shrug and go with the group? Another experiment, conducted in 1996, put that possibility to the test. This time, people were led to believe that their judgments would have important consequences for the justice system. Did this raising of the stakes reduce the rate of conformity? When the task was easy, yes, it did, although conformity did not disappear entirely. But when the task was hard and the stakes high, conformity shot up, and those who conformed were more certain their group-influenced judgment was right.

It's tempting to think only ordinary people are vulnerable to conformity, that esteemed experts could not be so easily swayed. Tempting, but wrong. As Shiller demonstrated, "groupthink" is very much a disease that can strike experts. In fact, psychologist Irving Janis coined the term groupthink to describe expert behavior. In his 1972 classic, Victims of Groupthink, Janis investigated four high-level disasters (the defense of Pearl Harbor, the Bay of Pigs invasion, and the escalation of the wars in Korea and Vietnam) and demonstrated that conformity among highly educated, skilled, and successful people working in their fields of expertise was a root cause in each case.

In the mid-1980s, the Japanese future looked dazzling. After two decades of blistering growth, Japan was a bullet train racing up the rankings of developed nations. Extend the trend lines forward and Japan takes the lead. The argument was compelling. Not only did it make sense rationally, it felt right. More and more experts agreed, and that fact helped persuade others. An "information cascade" developed, as the growing numbers of persuaded experts persuaded still more experts. By 1988, there was, as Paul Kennedy wrote, an expert "consensus" about Japan's shiny future. In that environment, those who might not have been entirely convinced, or were worried about factors that might derail the bullet train, found themselves in the same position as Robert Shiller sitting down with colleagues who all agreed there was no real estate bubble. It was worse for private economists and consultants. Tell clients what they and all informed people believe to be true and they will be pleased. We all enjoy having our beliefs confirmed, after all. And it shows that we, too, are informed people. But dispute that belief and the same psychology works against you. You risk saying good-bye to your client and your reputation. Following the herd is usually the safer bet.

Of course, people don't always bow to the consensus. In fact, when a strong expert consensus forms, critics inevitably pop up. Often, they do not express their disagreement "very gently," as Robert Shiller did, but are loud and vehement, even extreme. As the housing bubble was building in the early years of this century, Peter Schiff was one such critic. In many TV appearances, Schiff, a money manager, insisted that there was a bubble, that it would soon burst, and the result would be catastrophic. "The United States economy is like the Titanic," he said in 2006. Contrarians like Schiff are almost always outsiders, which is not a coincidence. They don't have a seat at the table and so they aren't subject to the social pressures identified by Asch and other psychologists. Which is not to say that the group consensus doesn't influence them. The frustration of having critical views shut out or, worse, ignored altogether can drive critics to turn up the volume. Calculation may also be involved. An outsider who wants the attention of insiders won't get it if he agrees with the group consensus or politely expresses mild disagreement. He must take great exception to the consensus and express his objection with strong language, and hope that subsequent events demonstrate his genius and make his name.

Thus, in a very real sense, the group opinion also influences the views of outsider-critics: It drives them in the opposite direction.

IT'S 2023 AND AN ASTEROID WIPES OUT AUSTRALIA . . .

So are we forever doomed to be trapped inside the mental universe of the status quo? Futurists would say no, but there's plenty of evidence futurists are as stuck in the present as the rest of us. One analyst looked at articles published in Futures, an academic journal, to see what sort of relationship there was between the articles appearing in the journal and real-world events over the four decades Futures has been published. "One might hope for a causal correlation showing that a surge of articles about the economy precedes any significant change of the world economy," he wrote. If that were so, it would show that futurists are anticipating developments and not merely reflecting the status quo. Alas, it was not to be. "Statistical correlation suggests the reverse since changes in the number of articles lag changes in economic growth rates." That's a polite way of saying the futurists consistently failed to see change coming, but when it arrived, they wrote about it.

Still, we keep trying. One popular method of getting out of the trap of the status quo is scenario planning.

As they were originally devised by Herman Kahn in the 1950s, scenarios were intended to deal with uncertainty by dropping the futile effort to predict. "Scenarios are not predictions," emphasizes Peter Schwartz, the guru of scenario planning. "They are tools for testing choices." The idea is to have a clever person dream up a number of very different futures, usually three or four. One may involve outcomes that seem likely to happen; another may be somewhat more unlikely; and there's usually a scenario or two that seem a little loopy. Managers then consider the implications of each, forcing them out of the rut of the status quo, and thinking about what they would do if confronted with real change. The ultimate goal is to make decisions that would stand up well in a wide variety of contexts.

No one denies there may be some value in such exercises. But how much value? What can they do? The consultants who offer scenario-planning services are understandably bullish, but ask them for evidence and they typically point to examples of scenarios that accurately foreshadowed the future. That is silly, frankly. For one thing, it contradicts their claim that scenarios are not predictions and shouldn't be judged as predictions. Judge them as predictions and all the misses would have to be considered, and the misses vastly outnumber the hits. It's also absurd because, given the number of scenarios churned out in a planning exercise, it is inevitable that some scenarios will "predict" the future for no reason other than chance.

Consultants also cite the enormous popularity of scenario planning as proof of its enormous value, as Peter Schwartz did when I asked him for evidence. "There's the number of companies using it," he said. "Most surveys indicate that something like 70 percent do scenario planning." That's interesting, but it's weak evidence. In 1991, most companies thought the Japanese were taking over; in 1999, most were sure Y2K was a major threat; in 2006, most thought real estate and mortgage-backed securities were low-risk investments. Sometimes all the smart people are wrong.

Lack of evidence aside, there are more disturbing reasons to be wary of scenarios. Remember that what drives the availability heuristic is not how many examples the mind can recall but how easily they are recalled. Even one example easily recalled will lead to the intuitive conclusion that, yes, this thing is likely to happen in the future. And remember that the example that is recalled doesn't have to be real. An imagined event does the trick too. And what are scenarios? Vivid, colorful, dramatic stories. Nothing could be easier to remember or recall. And so being exposed to a dramatic scenario about terrorists unleashing smallpox-or the global economy collapsing, or whatever-will make the depicted events feel much more likely to happen. If that brings the subjective perception into alignment with reality, that's good. But scenarios are not chosen and presented on the basis that they are likely. They're chosen to shake up people's thinking. As a result, the inflated perceptions raised by scenario planning may be completely unrealistic, leading people to make bad judgments.

And that's only one psychological button pushed by scenarios. There is another. Discovered by Kahneman and Tversky, it goes by the clunky name of "the representativeness heuristic."

In 1982, the Cold War was growing increasingly frosty, as a result of the Soviet invasion of Afghanistan, the declaration of martial law in Poland, and the harder line taken by U.S. president Ronald Reagan. In this chilly atmosphere, Kahneman and Tversky attended the Second International Congress on Forecasting and put the assembled experts to the test. One group of experts was asked how likely it was that in 1983 there would be "a complete suspension of diplomatic relations between the USA and the Soviet Union." Another group was asked how likely it was that there would be a Soviet invasion of Poland that would cause "a complete suspension of diplomatic relations between the USA and the Soviet Union." Logically, the first scenario has to be more likely than the second because the second requires that the breakdown in diplomatic relations happen as a result of a Soviet invasion of Poland, whereas the first covers a breakdown of diplomatic relations for any reason. And yet the experts judged the second scenario to be more likely than the first.
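The logic can be put in a single line. What follows is simply the reasoning above restated in standard probability notation, not anything taken from Kahneman and Tversky's materials:

\[
P(\text{invasion} \wedge \text{breakdown}) \;=\; P(\text{invasion}) \times P(\text{breakdown} \mid \text{invasion}) \;\le\; P(\text{breakdown})
\]

However plausible the invasion seems, adding it to the forecast can only shrink the probability of the whole, never raise it.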

What Kahneman and Tversky demonstrated, in this experiment and many others, is the operation of a mental shortcut using "representativeness," which is simply the presence of something typical of the category as a whole. Collectively, basketball players are very tall, so a representative basketball player is very tall. Just the words basketball player are enough to conjure an image of a tall man. And chances are the tall man you are imagining is black, at least if you are an American, because that's another feature of the representative basketball player in American culture. If that sounds like stereotyping, you get the idea. For better and worse, the brain is a relentless stereotyper, automatically and incessantly constructing categories with defining characteristics. These categories and characteristics are the basis of the representativeness heuristic.

In 1953, the Soviet Union put down a workers' uprising in Communist East Germany. In 1956, it crushed opposition in Communist Hungary. In 1968, it sent tanks into Communist Czechoslovakia. In the Western mind, invading satellite countries was typical Soviet behavior: It was "representative." And so, when the experts quizzed by Kahneman and Tversky read about a Soviet invasion of Poland, it made intuitive sense. Yes, the brain concluded, that sounds like something the Soviets would do. That feeling then boosted the experts' assessment of the likelihood of the whole scenario. But experts who were asked to judge a scenario that didn't mention a Soviet invasion of Poland did not get the same charge of recognition and the boost that came with it-and so they rated the scenario to be less likely.

Simple logic tells us that a complicated sequence of events like "A will happen, which will cause B, which will lead to C, which will culminate in D" is less likely to unfold as predicted than a simple forecast of "D will happen." After all, in the complicated scenario, a whole chain of events has to unfold as predicted or D won't happen. But in the simple forecast, there's only one link-D either will happen or won't-and only one chance of failure. So the simple forecast has to be more likely. But thanks to the representativeness heuristic, it's unlikely to feel that way. If any of the events in the complicated forecast seem "typical" or "representative," we will have a feeling-"That's right! That fits!"-and that feeling will influence our judgment of the whole scenario. It will seem more likely than it should. "This effect contributes to the appeal of scenarios and the illusory insight they often provide," Kahneman and Tversky wrote. And it's so easy to do. Just add details, color, and drama. And pile on the predictions. The actual accuracy will plummet but the feeling of plausibility will soar.
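The arithmetic behind that logic is worth seeing once. The numbers below are purely illustrative assumptions, not figures from any study: suppose each link in the chain, given the links before it, is a hefty 80 percent likely.

\[
P(A \wedge B \wedge C \wedge D) \;=\; 0.8 \times 0.8 \times 0.8 \times 0.8 \;\approx\; 0.41
\]

Four confident links compound into a forecast that is more likely to fail than to succeed-yet, because each link feels representative, the chain as a whole feels more plausible than the bare prediction of D.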

So there's the danger. Scenario planning-and any other sort of imaginative speculation about the future-can indeed push us out of the rut of the status quo. But it can also shove us right over to the other extreme, where we greatly overestimate the likelihood of change. The more sophisticated advocates of scenario planning argue that it's a matter of counterbalancing one set of psychological biases that blinds us to change with another set that makes us overestimate it: Get the balance right and you'll get a realistic appraisal of the future and solid decision making. It's an interesting theory, but there's little evidence on the matter, and what there is provides scant encouragement.

In the early 1990s, at a time when Quebecers increasingly supported the drive to separate their province from Canada, Philip Tetlock ran a scenario exercise with some of his experts. First, he asked them to judge the likelihood of outcomes ranging from the continuation of the status quo to the crumbling of Canada. Then the experts read scenarios describing each outcome in dramatic detail and they were asked again to judge how likely it was that they would happen. Tetlock found the scenarios were effective, in that they boosted the experts' estimates across the board. In fact, they were too effective. After reading the scenarios, the experts' estimates tended to be so high they didn't make sense: After all, it's not possible that there is a 75 percent chance that Quebec will break away and a 75 percent chance that it will not! Tetlock also discovered that scenarios of change tended to cause the biggest jump in estimated likelihood, and that the more flexible cognitive style of foxes caused them to get more carried away than hedgehogs-the one and only example of foxes falling into a trap hedgehogs avoided.
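The incoherence is elementary arithmetic: an outcome and its negation are complementary, so their probabilities must sum to one-a constraint the scenario-primed estimates violated.

\[
P(\text{Quebec separates}) + P(\text{Quebec does not}) = 1, \qquad \text{but } 0.75 + 0.75 = 1.50
\]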

Tetlock confirmed these results with more experiments, and although he still thinks scenarios may provide value in contingency planning, he is wary. Scenarios aren't likely to pry open the closed minds of hedgehogs, he says, but they may befuddle foxes. Unless and until contrary evidence arises, there doesn't seem to be a solution here.

FORGET WHAT WE SAID ABOUT THAT OTHER ASIAN COUNTRY AND LISTEN TO THIS. . . .

So we are left where we began. People who try to peer into the future-both experts and laypeople-are very likely to start with an unreasonable bias in favor of the status quo. Today's trends will continue, and tomorrow will be like today, only more so. With that belief in place, the confirmation bias that so misled Arnold Toynbee kicks in. Evidence that fits the belief is embraced; evidence that contradicts it is doubted, discounted, or ignored.
