Future Babble Part 2

One afternoon in the lobby of a London hotel, Peter Schwartz let me in on a little secret. "The truth is," he said, "in the oil industry today, the most senior executives don't even try to pretend they can predict the price of oil." As a former strategic planner for Shell whose work preparing the oil giant for the price collapse of the mid-1980s is famous in the industry, Schwartz knows oil. As a consultant to the biggest corporations at the highest levels, Schwartz also knows corporate executives, and he underscores his point by waving over a friend of his. Lord John Browne, the legendary former chief executive officer of British Petroleum, worked all his life in the oil business, and he is convinced the price of oil is fundamentally unpredictable. "I can forecast confidently that it will vary. After that, I can gossip with you. But that's all it is, because there are too many factors which go into the dynamics of the pricing of oil."

Still, the oil forecasting industry keeps growing because lots of people are prepared to pay for something top oil executives consider worthless. "There's a demand for the forecasts, so people generate them," Schwartz says with a shrug.

BUT WHAT ABOUT THE PREDICTABLE PEAK?

Anyone familiar with the history of oil forecasts will object. What about peak oil? Yes, predictions about prices have failed over and over. But one oil forecaster made an important prediction that proved exactly right.

That forecaster was M. King Hubbert, a geophysicist who worked with Shell and, later, the U.S. Geological Survey. Hubbert wrote a paper in 1956 that predicted overall petroleum production in the United States would peak sometime between the late 1960s and early 1970s, after which it would irreversibly decline. Experts scoffed. They stopped scoffing when American oil production peaked fourteen years later.



Hubbert's methods were relatively simple. When an oil field is discovered, its production rises steadily until it peaks and starts to fall as smoothly as it rose. Although the field may never run completely dry, what matters is that more drilling and pumping will not significantly change the downward slope of production. Eventually, a chart of the field's production will look like a classic bell curve. If this happens to a single field, Hubbert reasoned, it can happen to a group of fields, or a whole oil-producing region. Or a nation. By examining reserves and production rates, Hubbert calculated the moment when the chart of American oil production would hit its peak and begin its long, slow decline. And he was right.
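
To make the logic concrete, here is a minimal sketch in Python of the logistic idea behind the "Hubbert curve." It is not Hubbert's actual calculation or data; the reserve total, growth rate, and peak year below are invented purely for illustration. Cumulative production follows an S-shaped curve, so the annual production rate is a bell curve that peaks when roughly half of the ultimately recoverable oil has been produced.

```python
# A minimal sketch of the logistic ("Hubbert curve") idea, with made-up
# illustrative numbers, not Hubbert's data or his full method.
import numpy as np

URR = 200.0      # assumed ultimately recoverable reserves (billion barrels)
k = 0.07         # assumed steepness of the logistic (per year)
t_peak = 1970.0  # assumed midpoint, where cumulative output reaches URR / 2

years = np.arange(1900, 2051)
cumulative = URR / (1.0 + np.exp(-k * (years - t_peak)))  # S-shaped cumulative output
rate = np.gradient(cumulative, years)                     # bell-shaped annual production

peak_index = np.argmax(rate)
print("Peak year:", years[peak_index])                               # ~1970
print("Cumulative output at peak:", round(float(cumulative[peak_index]), 1))  # ~URR / 2
```

Given an estimate of total reserves and enough production history to pin down the curve, the peak year falls out of the arithmetic; as the next paragraphs note, the answer is only as good as the assumptions fed into it.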

If American production can peak and decline, so can the world's, and that is the sense in which the term peak oil is generally used today. No one disputes that peak oil will come. Oil is a finite resource and so a peak in production is inevitable. What matters is the timing. Some analysts think we've already reached the peak. Many others see it coming within a few years, or perhaps in a decade or two. Some insist peak oil lies in a future far too distant to worry about, but the ranks of these optimists have thinned in recent years. This is a debate with enormous ramifications. Hitting peak oil in a world of expanding economies whose expansion depends on oil would shake the foundations of the global economy.

"Peak oil" advocates are convinced that we are at, or near, the top of the bell curve, but it's important to understand that Hubbert's prediction for American production was based on a linear equation and some big assumptions. For one thing, it took as a given that demand growth wouldn't change in a big way. Nor would technology. And of course it assumed that Hubbert's estimate of total reserves in the ground was right. These assumptions turned out to be right, in that case. But will they always be? We can answer that question by noting that Hubbert applied his methods to global oil production. It would peak in 1995, he predicted. The decline that followed would be rapid. By 2010, it would be down a terrifying 17 percent. "The end of the Oil Age is in sight," Hubbert proclaimed in 1974.

The fault was not Hubbert's alone. A very long list of experts got the same call wrong. An international group of analysts brought together by MIT, for example, concluded two and a half years of work in 1977 with a declaration that global oil production would peak around 1990 "at the latest," although the group thought it more likely that the peak would be reached in the early 1980s. No wonder Jimmy Carter was gloomy.

Lord John Browne only chuckles when I mention the latest panic over peak oil. "In my career this must be about number seven," he says.

ASKING THE RIGHT QUESTION.

So why can't we predict the price of oil? That's the wrong question. What we should ask is, in a nonlinear world, why would we think oil prices can be predicted? Practically since the dawn of the oil industry in the nineteenth century, experts have been forecasting the price of oil. They've been wrong ever since. And yet this dismal record hasn't caused us to give up on the enterprise of forecasting oil prices.

Vast numbers of intelligent people continue to spend their days analyzing data and crafting forecasts that are no more likely to be right than all those that came before. Nassim Taleb, author of The Black Swan, recalled giving a lecture to employees of the U.S. government about the futility of forecasting. Afterward, he was approached by someone who worked for an agency doomed, like Sisyphus, to do the impossible. "In January, 2004, his department was forecasting the price of oil for 25 years later at $27 a barrel, slightly higher than what it was at the time. Six months later, around June 2004, after oil doubled in price, they had to revise their estimate to $54," Taleb wrote. "It did not dawn on them that it was ludicrous to forecast a second time given that their forecast was off so early and so markedly, that this business of forecasting had to be somehow questioned. And they were looking 25 years ahead!" That twenty-five-year prediction was actually modest, believe it or not. The International Energy Agency routinely issues thirty-year forecasts.

We never learn. When the terrifying forecasts of the 1970s were followed by the return of cheap oil in the 1980s, it wasn't the concept of forecasting that was humiliated and discarded. It was only those particular forecasts. New forecasts sprang up. When prices stayed low year after year, the consensus of the late 1990s was that any price increases would quickly be offset by conservation and stepped-up production. Thus, the era of cheap oil would last far into the future.

It lasted until 2004.

In 2005, as oil prices climbed steadily and sharply, Steve Forbes, publisher of Forbes magazine, said it was a bubble that would soon pop. "I'll make a bold prediction," he said. "In 12 months, you're going to see oil down to $35 to $40 a barrel." The price kept rising. By 2007, a new consensus had emerged: Oil was going higher. And it did. In the first half of 2008, oil pushed above the previously unimaginable level of $140 a barrel. "The Age of Oil is at an end," declared environmental writer Timothy Egan in The New York Times, echoing, whether he knew it or not, M. King Hubbert's declaration of thirty-five years earlier. Experts who had led in forecasting the rising trend became media stars, quoted everywhere, and what they had to say was not good: The price would continue to climb. It will break the $200 mark soon, predicted Arjun Murti at Goldman Sachs. Jeff Rubin, the chief economist at CIBC World Markets, agreed. "It's going to go higher. It might go way higher," investment banker and energy analyst Matthew Simmons said on CNBC. "It's not going to collapse." Simmons said that in July 2008, when oil was selling at $147 a barrel.

It didn't go higher. In September 2008, financial markets melted down, precipitating a dramatic slowing in the global economy. The experts hadn't foreseen that. The decline in growth drove down demand for oil and what Matthew Simmons said would not happen did. By December, oil traded at $33 a barrel. If any analyst had made that call six months earlier, when the price was more than four times higher, he would have found himself out of a job.

At the time I'm writing these words-early 2010-the price has risen, then fallen, then risen again. It's now a little more than $80 a barrel. Where will it go from here? I don't know. But plenty of other people have no doubt at all. "Petroleum is a finite resource that is going to $200 a barrel by 2012," wrote a business columnist with the confidence of someone predicting that trees will lose their leaves in the autumn. The British newspaper The Independent declared, "The era of cheap oil has come to an end," without mentioning that this was not cheap oil's first death notice. A poll conducted in June 2009 revealed that business executives are even more sure of themselves than journalists: Asked what the price of oil would be in five years, only 5 percent answered, "Don't know." Asked about the price in ten years, 10 percent answered, "Don't know." Even when they were asked about the price twenty years in the future, a mere one-third of business executives doubted their powers of divination.

Maybe the era of Mad Max really is coming, finally. Or maybe cheap oil will rise from the dead once again. Or maybe new technologies will surprise us all and create a future quite unlike anything we imagine. The simple truth is no one really knows, and no one will know until the future becomes the present. The only thing we can say with confidence is that when that time comes, there will be experts who are sure they know what the future holds and people who pay far too much attention to them.

3.

In the Minds of Experts.

It is a singular fact that the Great Pyramid always predicts the history of the world accurately up to the date of publication of the book in question, but after that date it becomes less reliable.

-BERTRAND RUSSELL.

Arnold Toynbee was brilliant. About that, even his critics agreed. The British historian's magnum opus, A Study of History, was stuffed with so many historical details drawn from so many times and places it seemed Toynbee knew more history than anyone on the planet. Even more dazzling was the central revelation of A Study of History: Toynbee claimed to have discovered an identical pattern of genesis, growth, breakdown, and disintegration in the history of every civilization that had ever existed. The implication was obvious. If there is a universal pattern in the past, it must be woven into the present. And the future. In the late 1940s and the 1950s, as the United States and the world entered a frightening new era of atom bombs and Cold War, A Study of History became a massive best seller. Arnold Toynbee was celebrated and revered as the man who could see far into the frightening maelstrom ahead. He was no mere historian or public intellectual. He was a prophet-"a modern St. Augustine," as a reviewer put it in The New York Times in 1949.

But for all Toynbee's brilliance, A Study of History never won the respect of historians. They didn't see the pattern Toynbee saw. Instead, what they saw was a man so obsessed with an idea-a hedgehog, to put it in this book's terms-that he had devoted his energy, knowledge, and decades of toil to the construction of a ten-volume illusion. In the blunt words of the Dutch historian Pieter Geyl, "The author is deceiving himself."

Arnold Joseph Toynbee was born April 14, 1889, to a proper Victorian family that valued education and religious piety above all else. Young Arnold did not disappoint. In 1902, he won a scholarship to the illustrious Winchester College, where he was immersed in the history, language, and literature of ancient Greece and Rome. The experience shaped him profoundly. The ancient world became as comfortable and familiar to him as the Britain of cricket pitches and Empire. It was his foundation, his universal frame of reference. He even dreamed in Latin.

In 1906, Toynbee landed a scholarship to Balliol College, Oxford, and proceeded to win awards at a satisfyingly brisk pace. An appointment to the faculty naturally followed graduation.

Toynbee's tidy world ended with the outbreak of the First World War. Rejected from military service on medical grounds, he conducted political analysis for the British government, an involvement with current affairs he maintained through the rest of his life. It was a natural fit. Tying together past, present, and future was something Toynbee did intuitively. "What set me off," he wrote decades later in an essay explaining where he got the idea to write A Study of History, "was a sudden realization, after the outbreak of the First World War, that our world was just then entering an experience that the Greek World had been through in the Peloponnesian War."

In the summer of 1920, a friend gave Toynbee a copy of The Decline of the West by the German Oswald Spengler. The book was a sensation in postwar Germany, where defeat had been followed by revolutionary turmoil and the sense that the world as people knew it was collapsing. Spengler captured the gloomy mood perfectly. All civilizations rise and fall as predictably and inescapably as the swing of a clock's pendulum, he wrote. The West was old and doomed to decay, senility, and death. There would be no more science and art. No creation, innovation, and joy. "The great masters are dead," Spengler proclaimed. The West could do nothing now but dig its grave.

"As I read those pages teeming with firefly flashes of historical insight," Toynbee later recalled, "I wondered at first whether my whole inquiry had been disposed of by Spengler before even the question, not to speak of the answers, had fully taken shape in my own mind." But Toynbee was appalled by the absence of concrete evidence in The Decline of the West. Spengler's argument consisted of nothing more than assertions, Toynbee realized. "You must take it on trust from the master. This arbitrary fiat seemed disappointingly unworthy of Spengler's brilliant genius; and here I became aware of a difference of national traditions. Where the German a priori method drew a blank, let us see what could be done by English empiricism. Let us see alternative possible explanations in the light of the evidence and see how they stood the ordeal." Toynbee would do what Spengler had done, but scientifically.

One evening in 1921, leaving Turkey aboard the fabled Orient Express, Toynbee took out a fountain pen and sketched an outline of what would become A Study of History. Although Toynbee had yet to study non-Western history seriously, his outline confidently stated that the course taken by all civilizations follows a pattern: Birth was followed by differentiation, expansion, breakdown, empire, universal religion, and finally, interregnum. Toynbee's terminology changed a little over the years, and the list of laws and regularities in history steadily expanded, adding layers of complexity, but the basic scheme never changed from the publication of the first three volumes in 1934 to the release of the final four in 1954.

With grand outline in hand, Toynbee set out to study Chinese, Japanese, Indian, Incan, and other non-Western histories. Over and over again, he found he was right: They did indeed follow the pattern. The parallels were most pronounced in the terminal stage of "disintegration," he found. In what he called the "Time of Troubles," a dispossessed minority founds a new religion, there are increasingly violent wars, and the civilization rallies to form a "universal state." This is followed by a brief respite-an "Indian Summer"-from the internal decline. But the clashes resume and decay worsens. Gradually, the universal state collapses, and the civilization with it. But that is not the end. For the new religion continues to grow, holding out the promise of renewal in some distant future.

It's not hard to see Greece, Rome, and the Christian church in Toynbee's allegedly universal pattern of history. But did the other twenty civilizations identified by Toynbee all follow the same course? He insisted they did. And his spectacular erudition-only Arnold Toynbee could write a sentence like "If Austerlitz was Austria's Cynoscephalae, Wagram was her Pydna"-cowed the average reader. The man seemed to know everything about everywhere. He was Wikipedia made flesh. Who could doubt him?

But historians did doubt Arnold Toynbee. Pieter Geyl and others who took the trouble to carefully sift through Toynbee's vast heap of evidence found he routinely omitted inconvenient facts, twisted others, and even fabricated out of whole cloth. A blatant example was his handling of Mohammed and the Islamic explosion in the seventh century, in which a handful of peripheral tribes on the Arabian peninsula suddenly swept across much of North Africa and the Middle East. This was a problem for Toynbee's system because it created a single government-the Umayyad Caliphate-ruling over a vast swath of territory. That's a "universal state" in Toynbee's terms. But in Toynbee's scheme, universal states come about only when a civilization is old and on its way down. Yet here was a universal state that seemed to spring up out of the sand. Holding a square peg, Toynbee pounded it into his round hole: "He declared that the Arab conquerors, inspired by Mohammed's newly minted revelation, were 'unconscious and unintended champions' of a 'Syriac' civilization that had gone underground a thousand years before at the time of Alexander's conquest," wrote William H. McNeill. "No one before Toynbee had conceived of a Syriac civilization, and it seems safe to assume that he invented the entire concept in order to be able to treat the Umayyad Caliphate as a universal state with a civilization of its own."

With few exceptions, Toynbee's fellow historians thought his project was absurd. The criticisms began with the publication of the first works in A Study of History. They grew louder as more books appeared. "His methods, he never ceases to tell us, are empirical," wrote Hugh Trevor-Roper, one of the major historians of the twentieth century. "In fact, wherever we look, it is the same. Theories are stated-often interesting and suggestive theories; then facts are selected to illustrate them (for there is no theory which some chosen facts cannot illustrate); then the magician waves his wand, our minds are dazed with a mass of learned detail, and we are told that the theories are 'empirically' proved by the facts and we can now go on to the next stage in the argument. But in truth this is neither empiricism nor proof, nor even argument: it is a game anyone can play, a confusion of logic with speculation." Another eminent historian was even more cutting. "This is not history," declared A. J. P. Taylor.

To be fair to Toynbee, historians were a tough audience. Most were-and are-deeply skeptical of the notion that there are universal patterns and "laws" to be discovered in history. Some go further and argue that all events are unique and history is simply "one damned thing after another"-an idea H.A.L. Fisher put more elegantly when he wrote, "I can see only one emergence following upon another, as wave follows upon wave, only one great fact with respect to which, since it is unique, there can be no generalizations, only one safe rule for the historian, that he should recognize in the development of human destinies the play of the contingent and the unforeseen." (Although he wasn't aware of it, the very reason Toynbee was in Turkey on the night he drafted the outline of A Study of History supports Fisher's view that "the contingent and the unforeseen" played leading roles in history: As a correspondent for the Guardian newspaper, Toynbee had been reporting on the disastrous Greek military campaign whose origins lay, as we saw in the last chapter, in a monkey bite.) The chorus of catcalls from historians, which reached a crescendo in the mid-1950s, shook Toynbee. But it didn't make the slightest difference to how the public received Toynbee and his work. When volumes one to three were released in 1934, and again following the release of volumes four to six in 1939, A Study of History was acclaimed, particularly in the United Kingdom. Sales were brisk. In the depths of the Depression, with British power waning, totalitarianism rising, and another horrific war increasingly likely, pessimism flourished. Like Spengler before him, Toynbee captured the mood perfectly. Although he explicitly dealt with the prospects of modern Western civilization in only one of the last volumes of A Study of History, his writing was covered in a pall of gloom and he left no doubt about where, in the grand pattern of civilizations, the West found itself. Breakdown was now long past and we were deep into disintegration. The end was approaching, as it had for Rome all those centuries before.

In 1942, when Toynbee traveled to the United States on behalf of the British government, he expounded on his theories, and what they meant for the coming postwar future, with American officials and notables. One was Henry Luce, the publisher of Time, Life, and Fortune magazines, whose circulation and influence were immense. Luce was deeply impressed. A devout Christian and a passionate advocate of American leadership in world affairs, Luce was thrilled by Toynbee's message because, gloomy as it was, it was not without hope. Western civilization had not yet brought forth a "universal state" and the "Indian Summer" that follows, Toynbee noted. So that must be what lies ahead. Who could bring about such a universal state? Britain was finished. The Nazis or the Soviets could, but that would be a horror. No, it must be done by the United States.

Luce asked Toynbee to speak to the editors of Time and they, in turn, made his views a fixture of the most important magazine in the United States. In March 1947, when a mercifully abridged version of A Study of History was released, Toynbee's somber face graced Time's cover and an effusive story detailed his great work for the mass American audience. "The response has been overwhelming," Time's editors wrote in the next edition. Academics, governors, congressmen, journalists, and "plain citizens" wrote in unprecedented numbers. The military asked for seventeen hundred reprints "for distribution to Armed Forces chaplains everywhere." The abridgement of A Study of History became a best seller and "Toynbee's name," a writer in Time recalled, "tinkled among the martini glasses of Brooklyn as well as Bloomsbury."

Once again, timing was everything. The prewar order was shattered, a terrifying new weapon had entered the world, and it seemed obvious that the United States could not go back to isolationism. But what should America do instead? Friction with the Soviet Union-which some had taken to calling a "Cold War"-was growing steadily. "Western man in the middle of the twentieth century is tense, uncertain, adrift," wrote Arthur Schlesinger Jr. in the famous opening to 1949's The Vital Center. "We look upon our epoch as a time of troubles, an age of anxiety. The grounds of our civilization, of our certitude, are breaking up under our feet, and familiar ideas and institutions vanish as we reach for them, like shadows in the falling dusk." Arnold Toynbee was the man for the moment.

For literary critics, philosophers, theologians, politicians, writers, journalists, and others interested in big ideas, Toynbee's vision was electrifying. "If our world civilization survives its threatened ordeals, A Study of History will stand out as a landmark, perhaps even a turning point," wrote the critic Lewis Mumford. Toynbee was hailed as "the most renowned scholar in the world" and "a universal sage." "There have been innumerable discussions of Toynbee's work in the press, in periodicals, over radio and television, not to mention countless lectures and seminars," marveled the anthropologist Ashley Montagu in 1956. "Through the agency of all these media Toynbee has himself actively assisted in the diffusion of his view." Toynbee loved his fame, not so much for the money and adoration that went with it-or at least, not only for that-but for the opportunities to expound on the state of the world and where it was headed.

Toynbee's vision of the future never wavered, as might be expected given his belief that there is a universal pattern woven into all civilizations. If there is such a pattern, after all, it suggests a deterministic process is at work: All civilizations must and will follow the same path, no exceptions. Oswald Spengler had no trouble with such determinism, and he bluntly concluded that an old civilization could no more avoid collapse than an old man could avoid death. But not Toynbee. It chafed against his Christian conception of free will, which insists that people are free to choose their actions, and so, even as he promoted the idea of a universal pattern in the life of civilizations and made predictions for the future based on that pattern, Toynbee insisted choices matter. And in the Atomic Age, the choice was between a universal state followed by a profound religious revival, or a war that would bring the violent end of humanity. One or the other. Nothing else was possible.

Toynbee repeated this general prognosis constantly. Occasionally, he was more specific. For a 1962 volume on the population explosion, he foresaw the creation, by the end of the century, of an international agency with unchecked power to control the production and distribution of food. It would be "the first genuine executive organ of world government that mankind will create for itself." Other iterations of his vision were more ambitious. In a 1952 lecture, Toynbee sketched the world of 2002. "The whole face of the planet will have been unified politically through the concentration of irresistible military power in some single set of hands," he declared. Those hands wouldn't inevitably be American but he thought that most likely. Nor was it clear whether the unification would come about by world war. But "if a modern westernizing world were to be unified peacefully, one could imagine, in 2002, a political map not unlike the Greco-Roman world in A.D. 102. . . ." Nominally, the universal state would be democratic but the public would no longer exercise real control over the government, and the government-faced with severe overcrowding and resource shortages-would regulate every aspect of its citizens' lives. People would accept this control as the price of order and prosperity, but the suppression of freedom "on the material plane" would generate an explosion of freedom "on the spiritual plane." Humanity "will turn back from technology to religion," Toynbee predicted. The leaders of the future would no longer be men of business and power; they would be spiritual guides. "There will be no more Fords and Napoleons, but there may still be St. Francises and John Wesleys." As for the origins of this religious revival, "it might not start in America or in any European or Western country, but in India. Conquered India will take her matter-of-fact American conqueror captive. . . . The center of power will ebb back from the shores of the Atlantic to the Middle East, where the earliest civilizations arose 5,000 or 6,000 years ago."

In his later years, Toynbee's support for American leadership faded but his belief that a universal state must come into being, and soon, was unshakable. In 1966, he mused about the possibility of a "Russo-American consortium" and suggested that if the Cold War antagonists couldn't get the job done, China would. In any event, freedom would certainly be extinguished, perhaps brutally. "I can imagine," he told a Japanese interviewer in 1970, "the world being held together and kept at peace in the year 2000 by an atrociously tyrannical dictatorship which did not hesitate to kill or torture anyone who, in its eyes, was a menace to the unquestioning acceptance of its absolute authority." As horrible as this version of the universal state sounded, Toynbee insisted it would be for the best. In an age of nuclear weapons and overpopulation, it is simply impossible to have freedom, peace, and national sovereignty at the same time. Humanity "has to choose between political unification and mass suicide."

In 1961, long after A Study of History had provoked the condemnation of historians and made Toynbee wealthy and famous, he published a final volume, simply entitled Reconsiderations. In it, Toynbee conceded much and failed to respond to more. "By the time Toynbee had agreed with some points made by his critics, met them halfway on others, and left questions unresolved in still other instances, little was left of the original, and no new vision of human history as a whole emerged from Reconsiderations," wrote his biographer, the historian William H. McNeill. Of course, Toynbee didn't consider this to be final proof that the pattern he saw in history was an illusion and his whole project a waste of a brilliant mind. But that's what it was.

The collapse of Toynbee's vision was not the end of his renown, because neither Toynbee nor his adoring public noticed that it had collapsed. Toynbee continued to publish books and commentary at a furious pace, and demand for his views about the present and future never flagged. He even won new acclaim in Japan, where "Toynbee societies" sprang up and the great man was invited to lecture the royal family. "No other historian, and few intellectuals of any stripe," concluded McNeill, "have even approached such a standing."

But when Toynbee died in 1975, his fame and influence were buried with him. The grand schema of A Study of History left no lasting mark on historical research and his visions of the future all came to naught. The "thick volumes of A Study of History sit undisturbed on the library shelves," Hugh Trevor-Roper wrote in 1989. "Who will ever read them? A few Ph.D. students, perhaps, desperate for a subject."

And so we are left with a riddle. Here was a man who probably knew more history than anyone alive. His knowledge of politics and current affairs was almost as vast. He brimmed with intelligence, energy, and imagination. And yet, his whole conception of the past and present was based on a mirage, and his supposed visions of the future were no more insightful than the ramblings of a man lost and wandering beneath a desert sun.

How could such a brilliant man have been so wrong?

ENTER THE KLUGE.

The answer lies in Arnold Toynbee's brain. It was one of the finest of its kind, which is saying something because any human brain is a truly marvelous thing. But no brain is perfect, not even the brain of a genius like Toynbee.

The brain was not designed by a team of engineers. It was not beta-tested, reworked, and released with a big ad campaign; it evolved. When the ancestors of today's humans parted ways with the ancestors of today's chimpanzees some five to seven million years ago, the protohuman brain was much smaller than the modern brain's fourteen hundred cubic centimeters. Around 2.5 million years ago, and again 500,000 years ago, our ancestors' brains went through growth spurts. The final ballooning occurred some 150,000 to 200,000 years ago. Throughout all this vast stretch of time-and over the far longer span when the brains of our ancestors' ancestors were forming-evolutionary pressures shaped the brain's development.

Genes normally replicate and make exact copies of themselves, but occasionally they misfire and produce mutations. If a mutation makes the person who has it significantly less likely to survive and reproduce, it will die off along with the unlucky person who got it. But mutations that assist survival and reproduction spread. They may even, eventually, become universal features of the species. This is true of mutations involving muscles, bones, and organs. And it's true of mutations involving the brain.

So positive changes proliferate. Mistakes are removed. And we get smarter and smarter. It's all so simple, neat, and efficient.

And yet the human brain is anything but "simple, neat, and efficient." Borrowing a term from engineering, psychologist Gary Marcus has dubbed the brain a "kluge"-an inelegant but effective solution to a problem. When the carbon dioxide filters on board the Apollo 13 capsule failed, engineers at mission control dreamed up a replacement made out of "a plastic bag, a cardboard box, some duct tape, and a sock." That's a kluge.

"Natural selection, the key mechanism of evolution, is only as good as the random mutations that arise," Marcus writes. "If a given mutation is beneficial, it may propagate but the most beneficial mutations imaginable, alas, never appear." As a result, an evolutionary solution to an environmental problem that is flawed or suboptimal but nonetheless does the job-a kluge, in other words-may spread and become standard operating equipment for the species. Once in place, the new equipment may be used to deal with other problems if, once again, it does the job adequately. And when new challenges arise, it may be the platform on which new less-than-perfect solutions will be built-thus multiplying the quirks and oddities. This is how we got spines that allow us to walk upright but are so flawed they routinely leave us bent over with back pain; vision marred by a built-in blind spot caused by the absurd design of our retina; and wisdom teeth that emerge to inflict pain for no particular reason. And then there is the brain. Imagine its mass, complexity, and general kluginess growing as our ancestors encountered one problem after another, across unfathomable spans of time, and it becomes obvious why the brain is anything but simple, neat, and efficient.

It's also critical to remember that natural selection operates in response to pressures in a particular environment. Change the environment and a solution may no longer be so helpful. Consider pale skin. It was a useful adaptation for human populations living at high latitudes, where sunlight is weaker, because it allowed the body to maximize production of vitamin D. But that advantage is strictly limited to the environment in which pale skin evolved. Not only is pale skin unnecessary at lower latitudes, where the sun's rays are stronger, it puts people at greater risk of skin cancer. The fact that evolutionary adaptations are specific to the environments in which they evolved didn't matter much throughout most of human history, for the simple reason that the environments in which we lived changed slowly. But as a result of the explosion of technology and productivity of the last several centuries, most people live in human-constructed environments that are dramatically different from the natural environments in which their ancestors lived-producing such novel sights as pasty-faced Englishmen clambering aboard airplanes for tropical destinations where the lucky vacationers will lie in the sun, sip fruity drinks, and boost their risk of skin cancer. From the perspective of one person, several centuries is a very long time, but in biological terms, it is a blink of a chimpanzee's eye. Human evolution doesn't move at anything like that speed and thus we are left with one of the defining facts of modern life: We live in the Information Age but our brains are Stone Age.

These two facts-the brain's kluginess and the radically changed environment in which we live-have a vast array of consequences. And almost all of us, almost always, are blissfully unaware of them.

Consider an experiment in which psychologists dropped 240 wallets on various Edinburgh streets. In each wallet, there was a personal photo, some ID, an old raffle ticket, a membership card or two, and a few other minor personal items. There was no cash. The only variation in the wallets was the photograph, which could be seen through a clear plastic window. In some, it showed a smiling baby. In others, there was a puppy, a family, or an elderly couple. A few of the wallets had no photo at all. The researchers wanted to know how many of the wallets would be dropped in mailboxes, taken to the police, or otherwise returned. More specifically, the researchers wanted to know if the content of the photograph in each wallet would make a difference. It shouldn't, of course. A lost wallet is important to whoever loses it and returning it is a bother no matter what's in it. In strictly rational terms, the nature of the photograph is irrelevant.

And yet psychologist Richard Wiseman discovered that the photograph made an enormous difference. Only 15 percent of those without one were returned. A little more than one-quarter of the wallets with a picture of an elderly couple were returned, while 48 percent of the wallets with a picture of a family, and 53 percent with the photo of a puppy, were returned. But the baby walloped them all; an amazing 88 percent of wallets with pictures of infants were returned.

This doesn't make sense-until we consider the "two-system" model of decision making. Researchers have demonstrated that we have not one mind making decisions such as "Should I bother sending this wallet back to its owner?" We have two. One is the conscious mind, and since that mind is, by definition, aware only of itself, we think of it as being the single, unified, complete entity that is "me." Wrong. Quite wrong, in fact. Most of what the brain does happens without our having any conscious awareness of it, which means this "unconscious mind" is far more influential in our decision making than we realize.

The two minds work very differently. Whereas the conscious mind can slowly and carefully reason its way to a conclusion-"On the one hand, returning the wallet is the nice thing to do, but when I weigh that against the time and bother of returning it . . ."-the unconscious mind delivers instantaneous conclusions in the form of feelings, hunches, and intuitions. The difference in speed is critical to how the two systems work together. "One of psychology's fundamental insights," wrote psychologist Daniel Gilbert, "is that judgments are generally the products of nonconscious systems that operate quickly, on the basis of scant evidence, and in a routine manner, and then pass their hurried approximations to consciousness, which slowly and deliberately adjusts them." The unconscious mind is fast so it delivers first; the conscious mind then lumbers up and has a look at the unconscious mind's conclusion.

When someone spots a wallet on the streets of Edinburgh, picks it up, and decides to return it, she thinks that's all there is to the story. But far more happened. Even before her conscious thoughts got rolling, unconscious systems in her brain took a look at the situation and fired off a conclusion. That conclusion was the starting point for her conscious thoughts.

One unconscious mental system equates a photographed object with the real thing. That sounds mad-only a deranged person thinks a photo of a puppy is a puppy-until you recall that in the environment in which our brains evolved, there were no images of things that were not what they appeared to be. If something looked like a puppy, it was a puppy. Appearance equals reality. In the ancient environment in which our brains evolved, that was a good rule, which is why it became hardwired into the brain and remains there to this day. Of course, the reader will object that people do not routinely confuse photos of puppies with puppies. This is true, fortunately, but that's only because other brain systems intervene and correct this mistake. And a correction is not an erasure. Thus there remains a part of our brain that is convinced an image of something actually is that something. This quirk continues to have at least a little influence on our behavior, as jilted girlfriends reveal every time they tear up photos of the cad who hurt them or parents refuse to throw duplicate photos of their children in the trash. Still doubt this? Think of a lovely, tasty piece of fudge. But now imagine this fudge is shaped like a coil of dog poo. Still want to eat it? Right. That's exactly the reaction psychologists Paul Rozin and Carol Nemeroff got when they asked people to eat fudge shaped like dog poo, among other experiments involving a gap between image and reality. "In these studies, subjects realized that their negative feelings were unfounded but they felt and acknowledged them anyway."

We really like babies. The sight of a chubby little infant gurgling and grinning is enough to make even Scrooge smile. Babies are the best. And for good reason. In evolutionary terms, nothing is more important than reproducing. Among our ancestors, parents who didn't particularly care if their babies were well-fed, healthy, happy, and safe were much less likely to see those babies become adults with children of their own. So that attitude was going nowhere. But those who felt a surge of pleasure, compassion, and concern at the very sight of their darling little ones would take better care of them and be more likely to bounce grandchildren on their knees. Thus the automatic emotional response every normal person feels at the sight of a baby became hardwired, not only among humans but in every species that raises its young to maturity: Never come between a bear and her cubs.

But there's a problem here. Evolution is ruthless. It puts a priority on your reproduction, which means it cares about your offspring, not somebody else's. And yet that's not how we respond to babies. The sight of any gurgling and grinning infant makes us feel all warm and compassionate. Why is that? Our compassionate response to babies is a kluge. In prehistoric environments, we would seldom encounter a baby that was not our own, or that of our kin or our neighbor. Thus an automatic surge of compassion in response to the sight of any baby was not a perfect response-in ruthless reproduce-above-all terms-but it didn't cause us to do anything too foolish. So it did the job; it was good enough.

Now let's go back to those wallets on the streets of Edinburgh. Someone comes along, spots one, picks it up-and finds there is no picture. What does she do? Well, for her, there's not much more than rational calculation to go on. She knows the owner would probably like the wallet back, but there's no money in it so the loss wouldn't be too bad. And besides, returning it would be a hassle. Hence, only 15 percent of these wallets are returned. But another person picks up a different wallet, looks in it, and sees a photo of an elderly couple. This humanizes the problem-literally so, for the brain system that mistakes a photo of an elderly couple for the elderly couple themselves. An intuitive impulse is elicited, a feeling, a sense of compassion. Then the conscious brain steps in and thinks about the situation. Result: 25 percent of these wallets are returned, a significant increase.

The photo of a family generates a stronger unconscious response and a 48 percent return rate. And the baby, of course, produces an amazing 88 percent return rate. But what about the photo of the puppy? At 53 percent, its return rate is roughly equal to the photo of a family and double that of the elderly couple. And it's not even human! One might think that makes no sense from an evolutionary perspective, and yet, it does. The puppy and the baby may be different species but both have big eyes, a little mouth and chin, and soft features. Our automatic response to babies is triggered by these features, so anything or anyone that presents them can elicit a similar response. It's not a coincidence that all the animals and cartoon characters we find cute and adorable-from baby seals to Mickey Mouse-have the same features. Nor is it a coincidence that, as psychologists have shown, people often stereotype "baby-faced" adults as innocent, helpless, and needy. It's our kluge at work.

SEEING THINGS.

It would be nice if feeling compa.s.sion for puppies and baby seals were the worst thing that happens when cognitive wires get crossed. Unfortunately, it's not. Thanks to the brain's evolutionary character, we often make mistakes about far more consequential matters.

In the last years of the Second World War, Germany pounded London with V-1 and V-2 rockets. These "flying bombs" were a horrible new weapon, unlike anything seen before. At first, Londoners didn't know what to make of the threat. All they knew was that, at any moment, with little or no warning, a massive explosion would erupt somewhere in the city. But gradually people began to realize that the rocket strikes were clustered in certain parts of the city, while others were spared. Rumors spread. Nazi spies were directing the missiles, some said. The spies must live in the parts of the city that weren't being hit. But what was the German strategy? The East End was being particularly hard hit and the East End is working class. Aha! The Germans must be trying to inflame class resentment in order to weaken the war effort.

It was a compelling explanation and yet it was wrong. As terrifying as the rockets were, they lacked precision guidance equipment and the best the Germans could do was point them at London and let them explode where they might. In 1946, statistician R. D. Clarke made a simple one-page calculation that compared the extent of clustering in the flying-bomb attacks to the clustering that could be expected if the bombs had been randomly distributed. There was a near-perfect match.
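
Clarke's check is easy to reproduce in spirit. The Python sketch below uses invented totals rather than Clarke's actual figures: it scatters hits uniformly at random over a grid of equal squares, then compares how many squares receive 0, 1, 2, and so on hits with the counts a Poisson distribution predicts. Random scattering alone produces exactly the kind of clusters that fed the rumors.

```python
# A minimal sketch of a Clarke-style comparison, using simulated hits and
# made-up totals, not Clarke's actual data.
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
n_squares = 500   # hypothetical number of equal-sized map squares
n_hits = 400      # hypothetical number of hits, scattered at random

# Assign each hit to a square uniformly at random, then count hits per square.
hits_per_square = np.bincount(rng.integers(0, n_squares, size=n_hits),
                              minlength=n_squares)

lam = n_hits / n_squares  # average hits per square
print(" k   squares observed   Poisson prediction")
for k in range(5):
    observed = int(np.sum(hits_per_square == k))
    expected = n_squares * exp(-lam) * lam**k / factorial(k)
    print(f"{k:2d}   {observed:16d}   {expected:18.1f}")
# The observed and predicted columns track each other closely, even though
# some squares end up with several hits and others with none.
```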

We have a hard time with randomness. If we try, we can understand it intellectually, but as countless experiments have shown, we don't get it intuitively. This is why someone who plunks one coin after another into a slot machine without winning will have a strong and growing sense-the "gambler's fallacy"-that a jackpot is "due," even though every result is random and therefore unconnected to what came before. Similarly, someone asked to put dots on a piece of paper in a way that mimics randomness will distribute them fairly evenly across the page, so there won't be any clusters of dots or large empty patches-an outcome that is actually very unlikely to happen in a true random distribution. And people believe that a sequence of random coin tosses that goes THTHHT is far more likely than the sequence THTHTH, even though they are equally likely.
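
That last claim is easy to check empirically. The short Python simulation below is my own illustration, not part of the research described here: any specific run of six fair coin flips has probability 1/64, so in a long random string of flips the lumpy-looking pattern THTHHT and the orderly-looking pattern THTHTH turn up about equally often.

```python
# THTHHT and THTHTH are equally likely runs of six fair coin flips
# (each specific sequence has probability 1 / 2**6 = 1/64).
import random

def count_occurrences(flips, pattern):
    # Count every position at which the pattern appears (overlaps allowed).
    return sum(flips[i:i + len(pattern)] == pattern
               for i in range(len(flips) - len(pattern) + 1))

random.seed(1)
flips = "".join(random.choice("HT") for _ in range(1_000_000))

for pattern in ("THTHHT", "THTHTH"):
    print(pattern, count_occurrences(flips, pattern))
# Both counts land near 1_000_000 / 64, i.e. roughly 15,600.
```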

Many people experienced this intuitive failure listening to an iPod. When it's set on "shuffle," it's supposed to choose and play songs randomly. But it often doesn't seem random. You may have heard six in a row from one artist and wondered if the program is biased in favor of that guy. Or maybe it gave top billing to your favorites and you suspected it's actually mimicking your nonrandom choices. Or perhaps-as some conspiracy-minded bloggers insisted-it seemed to favor songs from record companies that have a close relationship with Apple, the maker of the iPod. Peppered with complaints and accusations, Apple subsequently reprogrammed the shuffle feature: The idea was to make it "less random to make it feel more random," Steve Jobs said.

People are particularly disinclined to see randomness as the explanation for an outcome when their own actions are involved. Gamblers rolling dice tend to concentrate and throw harder for higher numbers, softer for lower. Psychologists call this the "illusion of control." We may know intellectually that the outcome is random and there's nothing we can do to control it, but that's seldom how we feel and behave. Psychologist Ellen Langer revealed the pervasive effect of the illusion of control in a stunning series of experiments. In one, people were asked to cut cards with another person to see who would draw the higher card and to make bets on the outcome. The outcome is obviously random. But the competitor people faced was in on the experiment, and his demeanor was carefully manipulated. Sometimes he was confident and calm; sometimes he was nervous. Those who faced a nervous competitor placed bigger bets than those who squared off against a confident opponent. Langer got the same result in five other experiments testing the "illusion of control"-including an experiment in which people put a higher value on a lottery ticket they chose at random than a ticket they were given, and another in which people rated their chances of winning a random-outcome game to be higher if they were given a chance to practice than if they had not played the game before.

Disturbing as these findings were, it was another of Langer's experiments that fully revealed how deluded the brain can be. Yale students were asked to watch someone flip a coin thirty times. Before each flip, the students were asked to predict whether the flip would come up heads or tails; after each, they were told whether they had "won" or "lost." In reality, the results were rigged so there would always be fifteen wins and fifteen losses, but some of the students would get a string of wins near the beginning while others first encountered a string of losses. At the end of the thirty flips, students were asked how many wins they thought they got in total, how good they thought they were at predicting coin tosses, and how many wins they thought they would get if they did the test again with a hundred flips in total. Langer discovered a clear tendency: Students who got a string of wins at the beginning thought they did better than those who didn't; they said their ability to predict was higher; and they said they would score significantly more wins in a future round of coin flipping. So the string of early wins had triggered the illusion of control. Students then focused on subsequent wins and paid little attention to losses, which led them to the false conclusion that they had notched more wins than losses and that they could do it again.

Langer's results are particularly startling when we consider the full context of the experiment. These are top-tier students at one of the world's best universities. They're in a clinical environment in which they believe their intelligence is being tested in some way. Under the circumstances, Langer noted, they "are likely to be 'superrational.'" And this is hardly a tricky task. A flipped coin is the very symbol of randomness and any educated person knows it is absurd to think skill has anything to do with calling "heads" or "tails." And yet Langer's test subjects still managed to fool themselves.

Langer's research inspired dozens more studies like it. Psychologists Paul Presson and Victor Benassi of the University of New Hampshire brought it all together and noticed that although psychologists use the term illusion of control, much of the research wasn't about "controlling" an outcome. As in Langer's coin-flipping experiment, it was really about predicting outcomes. They also found the illusion is stronger when it involves prediction. In a sense, the "illusion of control" should be renamed the "illusion of prediction."

This illusion is a key reason experts routinely make the mistake of seeing random hits as proof of predictive ability. Money manager and Forbes columnist Kenneth Fisher recalled attending a conference at which audience members were invited to predict how the Dow would do the next day. At the time, the index hovered around 800, so Fisher guessed it would drop 5.39 points. "Then I noticed the gent next to me jotting down a 35-point plunge," Fisher recalled in 1987. "He said he hadn't the foggiest idea what might happen," but he certainly had a strategy. "If you win," the man told Fisher, "the crowd will think you were lucky to beat everyone else who bets on minor moves. But if my extreme call wins, they'll be dazzled." The next day the Dow dropped 29 points. "That afternoon, folks bombarded the winner for details on how he had foreseen the crash. He obliged them all, embellishing his 'analysis' more with each telling. That night, when I saw him alone, he had convinced himself that he had known all along, and became indignant when I reminded him that his call was based on showmanship."

Blame evolution. In the Stone Age environment in which our brains evolved, there were no casinos, no lotteries, and no iPods. A caveman with a good intuitive grasp of randomness couldn't have used it to get rich and marry the best-looking woman in the tribe. It wouldn't have made him healthier or longer-lived, and it wouldn't have increased his chances of having children. In evolutionary terms, it would be a dud. And so an intuitive sense of randomness didn't become hardwired into the human brain and randomness continues to elude us to this day.

The ability to spot patterns and causal connections is something else entirely. Recognizing that the moon waxes and wanes at regular intervals improved the measurement of time, which was quite handy when someone figured out that a certain patch of berries is ripe at a particular period every summer. It was also good to know that gazelles come to the watering hole when the rains stop, that people who wander in the long grass tend to be eaten by lions, and a thousand other useful regularities. Pattern recognition was literally a matter of life and death, so natural selection got involved and it became a hardwired feature of the human brain.

And not only the human brain. Birds and animals also benefit from spotting patterns, and thus their cognitive wiring makes them adept at seeing connections. Sometimes they are too good at it. When B. F. Skinner put pigeons in his famous "Skinner box" and gave them food at randomly selected moments, the pigeons quickly connected the appearance of the food to whatever they happened to be doing when it appeared. A pigeon that happened to be thrusting its head into a corner, for example, ate the food, then went back to the lucky corner and resumed thrusting its head, over and over, expecting more food to drop. It would be nice to blame this behavior on the limited intelligence of pigeons, but they are far from the only species that draws false connections between unrelated events. Humans do it all the time. Skinner believed it was a root cause of superstition. "The birds behaved as if they thought that their habitual movement had a causal influence on the reward mechanism, which it didn't," wrote biologist Richard Dawkins. "It was the pigeon equivalent of a rain dance."

Rain dances are ineffective, but they aren't harmful. Someone who does a dance, gets rained on, and concludes that dancing causes rain has made a serious mistake, but he won't increase his chances of an early death if he dances when he wants rain. That's typical of false positives. Seeing patterns that aren't there isn't likely to make a big difference to a person's chances of surviving and reproducing-unlike failing to see patterns that do exist. This profound imbalance is embedded in our cognitive wiring. We consistently overlook randomness but we see patterns everywhere, whether they are there or not. The stars may be scattered randomly across the night sky, but people see bears, swans, warriors, and the countless other patterns we call constellations. We see faces in clouds, rocks, and the moon. We see canals on Mars and the Virgin Mary on burnt toast. Of course, we also see a great many patterns that really are there, often with astonishing speed and accuracy. But the cost of this ability is a tendency to see things that don't exist.

Although humans may share this tendency with other animals, at least to some extent, there is something quite different about the human quest to spot patterns. In a classic experiment that has been conducted with many variations, people sit before a red light and a green light. The researchers ask them to guess which of the two lights will come on next. At first, there isn't much to go on. There seems to be no pattern. And indeed, there isn't a pattern. The flashing of the lights is random, although the test subjects aren't told this. But as lights continue to flash and time passes, it becomes apparent that there is one regularity: The red light is coming on much more often than the green. In fact, the distribution is 80 percent red, 20 percent green. Faced with this situation, people will tilt their guesses to match the frequency with which the lights are coming on-so they'll guess red about 80 percent of the time and green 20 percent. In effect, they are trying to match the "pattern" of the flashes. But that's impossible because it's random. Needless to say, people don't do very well on this test.

But pigeons do. So do rats and other animals. Put to the same test, they follow a different strategy. Since there are more red flashes than green, they simply choose red over and over. That yields far better results. You might say it's the rational thing to do. But we rational humans don't do it.
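The gap between the two strategies is easy to work out. Guessing red 80 percent of the time against a light that is red 80 percent of the time pays off only about 68 percent of the time (0.8 × 0.8 + 0.2 × 0.2 = 0.68), while always guessing red pays off 80 percent of the time. A minimal simulation, in Python, makes the point; the numbers and code are illustrative sketches of the 80/20 setup described above, not anything from the original experiments.

```python
import random

random.seed(0)

P_RED = 0.8        # the light flashes red 80% of the time, green 20%
TRIALS = 100_000

# A random sequence of flashes (True = red, False = green).
flashes = [random.random() < P_RED for _ in range(TRIALS)]

# "Matching": guess red 80% of the time and green 20%, as people tend to do.
matching_hits = sum((random.random() < P_RED) == flash for flash in flashes)

# "Maximizing": always guess the more frequent light, as pigeons and rats do.
maximizing_hits = sum(flashes)

print(f"matching accuracy:   {matching_hits / TRIALS:.1%}")    # roughly 68%
print(f"maximizing accuracy: {maximizing_hits / TRIALS:.1%}")   # roughly 80%
```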

Why not? That's the question University of California neuroscientist Michael Gazzaniga explored in a fascinating experiment. For decades, Gazzaniga has worked with people who have had the connection between the right and left hemispheres of their brain severed, usually as a form of treatment for severe epilepsy. These "split-brain patients" function surprisingly well under most circumstances. But because the hemispheres control different aspects of perception, thought, and action, severing the two does produce some startling results. Most important for researchers is the fact that it is possible to communicate with one hemisphere of the brain-by revealing information to one eye but not the other, for example-while keeping the other hemisphere in the dark.

When Gazzaniga and his colleagues put the left hemispheres of split-brain patients to the red-green test, they got the usual results: The patients tried to figure out the pattern and ended up doing poorly. But the right hemispheres did something startling: Like rats and other animals, they guessed red over and over again and thus got much better results. For Gazzaniga, this was important proof of an idea he has pursued for many years. In the left hemisphere of the brain-and only the left hemisphere-is a neural network he calls "the Interpreter." The Interpreter makes sense of things. After the brain experiences perceptions, emotions, and all the other processes that operate at lightning speed, the Interpreter comes along and explains everything. "The left hemisphere's capacity for continual interpretation means it is always looking for order and reason," Gazzaniga wrote, "even when they don't exist."

The Interpreter is ingenious. And relentless. It never wants to give up and say, "This doesn't make sense," or "I don't know." There is always an explanation. In one experiment, Gazzaniga showed an image to the left hemisphere of a split-brain patient and another to the right hemisphere. An array of photos was spread out on a table. The patient was asked to pick the photo that was connected to the image they had seen. In one trial, the left hemisphere of the patient was shown an image of a chicken claw; on the table was a photo of a chicken. The right hemisphere was shown a snow scene; on the table was a photo of a snow shovel. When the patient's left hand-which is controlled by the right hemisphere-pointed to the shovel, Gazzaniga asked the left hemisphere why. It had no idea, of course. But it didn't say that. "Oh, that's simple," the patient answered confidently. "The chicken claw goes with the chicken and you need a shovel to clean out the chicken shed."

For humans, inventing stories that make the world sensible and orderly is as natural as breathing. That capacity serves us well, for the most part, but when we are faced with something that isn't sensible and orderly, it's a problem. The spurious stories that result can seriously lead us astray, and, unfortunately, more information may not help us. In fact, more information makes more explanations possible, so having lots of data available can actually empower our tendency to see things that aren't there. Add a computer and things only get worse. "Data mining" is now a big problem for precisely this reason: Statisticians know that with plenty of numbers and a powerful computer, statistical correlations can always be found. These correlations will often be meaningless, but if the human capacity for inventing explanatory stories is not restrained by constant critical scrutiny, they won't appear meaningless. They will look like hard evidence of a compelling hypothesis-just as the apparent clustering of rocket strikes in London looked like evidence that the Nazis were targeting certain neighborhoods in order to advance their cunning strategy.
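A rough sketch of why mining data breeds spurious findings: generate nothing but random noise, then search it for correlations with a target that is also random noise. Some candidates will look impressively related, and the Interpreter will happily supply a story for each one. The code and numbers below are purely illustrative, not drawn from any study mentioned here.

```python
import random
import statistics

random.seed(1)

N_SERIES = 1_000   # hypothetical "variables" mined from a database
LENGTH = 30        # observations per variable

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One target series and a thousand candidate "predictors" - all pure noise.
target = [random.gauss(0, 1) for _ in range(LENGTH)]
candidates = [[random.gauss(0, 1) for _ in range(LENGTH)] for _ in range(N_SERIES)]

# Count how many noise series correlate "impressively" with the noise target.
strong = [c for c in candidates if abs(correlation(target, c)) > 0.4]
print(f"{len(strong)} of {N_SERIES} random series show |r| > 0.4")
```

Every one of those hits is meaningless, yet each could be dressed up as evidence for some plausible-sounding hypothesis.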

We can all fall victim to this trap, but it's particularly dangerous for experts. By definition, experts know far more about their field of expertise than nonexperts. They have read all the books, and they have masses of facts at their fingertips. This knowledge can be the basis of real insight, but it also allows experts to see order that isn't there and to explain it with stories that are compelling, insightful, and false. In a PBS interview, Jeff Greenfield, an American journalist, recalled how he and other pundits were tripped up during the presidential election of 1988. Vice President George H.W. Bush wouldn't win, they believed. The reason was an obscure fact known only to political experts: "No sitting vice-president has been elected since Martin Van Buren." Aha! A meaningful pattern! Or so it seemed. But as it turned out, Bush didn't lose and the pundits would have been better off if they had never heard of Martin Van Buren.

This is the quicksand that consumed Arnold Toynbee. His lifelong project began with an intuition-a "flash of perception," he called it-that the trajectory of Western history was following that of ancient Greece and Rome. After spotting that pattern, Toynbee elaborated on it and committed it to paper in 1921. When, in the course of writing A Study of History, Toynbee was confronted with information that didn't fit his tidy scheme-such as the sudden appearance of the Islamic "universal state"-he was in the position of the hapless split-brain patient whose hand was pointed at a photo of a shovel for some reason. It didn't make sense; it didn't fit the pattern. So Toynbee's left hemisphere got busy. Drawing on his intelligence and his vast store of knowledge, Toynbee created ingenious stories that explained the seemingly inexplicable and maintained order in his mental universe.

Arnold Toynbee wasn't deluded despite his brilliance. He was deluded because of it.

ALWAYS CONFIDENT, ALWAYS RIGHT.

Is absinthe a precious stone or a liquor? You probably know the right answer. But how certain are you that your answer is right? If you are 100 percent certain, you are dead sure. There's no way you can be wrong. Ninety percent certainty is a little lower but still quite confident. Eighty percent a little lower still. But if you only give yourself a 50 percent chance of being right, it's a toss-up, a random guess, and you're not confident at all. In 1977, psychologists Paul Slovic, Sarah Lichtenstein, and Baruch Fischhoff used a series of questions and a rating system like this one in order to test the confidence people have in their own judgments. What they were looking for was not how many questions people got right or wrong. They were interested in calibration: When people said they were 100 percent confident, were they right 100 percent of the time? Were they right in 70 percent of the cases in which they gave a 70 percent confidence rating? That's perfect calibration-proof that they are exactly as confident as they should be.
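To make the scoring concrete, here is a small sketch of how a calibration check can be tallied; the responses are invented purely for illustration. Group answers by stated confidence and compare each level with the actual hit rate: a positive gap means overconfidence, and perfect calibration means a gap of roughly zero in every bucket.

```python
from collections import defaultdict

# Invented (stated confidence, was the answer right) pairs, for illustration only.
responses = [
    (1.0, True), (1.0, True), (1.0, False), (1.0, True),
    (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (0.5, True), (0.5, False),
]

# Group the answers by the confidence the person reported.
buckets = defaultdict(list)
for confidence, correct in responses:
    buckets[confidence].append(correct)

# Compare stated confidence with the actual hit rate in each bucket.
for confidence in sorted(buckets, reverse=True):
    answers = buckets[confidence]
    hit_rate = sum(answers) / len(answers)
    gap = confidence - hit_rate   # positive gap = overconfidence
    print(f"said {confidence:.0%} sure -> right {hit_rate:.0%} (gap {gap:+.0%})")
```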

The researchers found that no one was perfectly calibrated. In fact, their confidence was consistently skewed. When the questions were easy, people were a little underconfident. But as the questions got harder, they became more sure of themselves and underconfidence turned into overconfidence. Incredibly, when people said they were 100 percent sure they were right, they were actually right only 70 to 80 percent of the time.

This pattern has turned up in a long list of studies over the years. And, no, it's not just undergrads lacking in knowledge and experience. Philip Tetlock discovered the same pattern in his work with expert predictions. Other researchers have found it in economists, demographers, intelligence analysts, doctors, and physicists. One study that directly compared experts with laypeople found that both expected experts to be "much less overconfident"-but both were, in fact, equally overconfident. Piling on information doesn't seem to help, either. In fact, knowing more can make things worse. One study found that as clinical psychologists were given more information about a patient, their confidence in their diagnosis rose faster than their accuracy, resulting in ever-greater overconfidence.
