A rule describing the abc → abd change has been built: "Replace letter-category of rightmost letter by successor." The current version of Copycat assumes that the example change consists of the replacement of exactly one letter, so rule-building codelets fill in the template "Replace _______ by _______," choosing probabilistically from descriptions that the program has attached to the changed letter and its replacement, with a probabilistic bias toward choosing more abstract descriptions (e.g., usually preferring rightmost letter to C).
FIGURE 13.6.
The temperature has fallen to 53, resulting from the increasing perceptual organization reflected in the structures that have been built.
Figure 13.6: Two hundred twenty-five codelets have run. The letter-to-letter cj correspondence has been defeated by the letter-to-group cJ correspondence. Reflecting this, the rightmost → rightmost mapping has been joined by a letter → group mapping underlying the correspondence. The cJ correspondence is stronger than the cj correspondence because the former covers more objects and because the concept group is highly active and thus seen as highly relevant to the problem. However, in spite of its relative weakness, the cj correspondence is again being considered by a new team of codelets.
Meanwhile, the rr group has been built. In addition, its length (represented by the 2 next to the R) has been noticed by a codelet (a probabilistic event). This event activated the node length, creating pressure to notice other groups' lengths.
A new rule, "Replace the letter category of the rightmost letter by 'D,' " has replaced the old one at the top of the screen. Although this rule is weaker than the previous one, competitions between rival structures (including rules) are decided probabilistically, and this one simply happened to win. However, its weakness has caused the temperature to increase to 58.
If the program were to stop now (which is quite unlikely, since a key factor in the program's probabilistic decision to stop is the temperature, which is still relatively high), the rule would be adapted for application to the string mrrjjj as "Replace the letter category of the rightmost group by 'D,' " obeying the slippage letter → group spelled out under the cJ correspondence. This yields answer mrrddd, an answer that Copycat does indeed produce, though on very rare occasions.
FIGURE 13.7.
Codelets that attempt to create an answer run frequently throughout the program (their attempts are not displayed here) but are not likely to succeed unless the temperature is low.
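To make the role of temperature concrete, here is a minimal Python sketch of a temperature-dependent stopping decision. The linear formula, the sample temperatures, and the success criterion are illustrative assumptions of mine, not Copycat's actual rule:

import random

def answer_attempt_succeeds(temperature, max_temp=100.0):
    # Toy rule: the chance of "freezing in" an answer rises as temperature falls.
    # This linear formula is an illustrative assumption, not Copycat's actual one.
    return random.random() < (max_temp - temperature) / max_temp

random.seed(0)
for temp in (90, 58, 36, 11):
    outcomes = [answer_attempt_succeeds(temp) for _ in range(1000)]
    print(f"temperature {temp:3d}: {sum(outcomes) / 10:.0f}% of answer attempts succeed")

At a temperature near 90 almost no answer attempts get through, while at a temperature of 11 nearly all of them do, which is the qualitative behavior described above.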
Figure 13.7: Four hundred eighty codelets into the run, the rule "Replace letter-category of rightmost letter by successor" has been restored after it out-competed the previous weaker rule (a probabilistic event). However, the strong cJ correspondence was broken and replaced by its weaker rival, the cj correspondence (also a probabilistic event). As a consequence, if the program were to stop at this point, its answer would be mrrjjk, since the c in abc is mapped to a letter, not to a group. Thus the answer-building codelet would ignore the fact that b has been mapped to a group. However, the (now) candidate correspondence between the c and the group J is again being strongly considered. It will fight again with the cj correspondence, but will likely be seen as even stronger than before because of the parallel correspondence between the b and the group R.
In the Slipnet the activation of length has decayed since the length description given to the R group hasn't so far been found to be useful (i.e., it hasn't yet been connected up with any other structures). In the Workspace, the salience of the group R's length description 2 is correspondingly diminished.
The temperature is still fairly high, since the program is having a hard time making a single, coherent structure out of mrrjjj, something that it did easily with abc. That continuing difficulty, combined with strong focused pressure from the two sameness groups that have been built inside mrrjjj, caused the system to consider the a priori very unlikely idea of making a single-letter sameness group. This is represented by the dashed rectangle around the letter m.
FIGURE 13.8.
Figure 13.8: As a result of these combined pressures, the M sameness group was built to parallel the R and J groups in the same string. Its length of 1 has been attached as a description, activating length, which makes the program again consider the possibility that group length is relevant for this problem. This activation now more strongly attracts codelets to the objects representing group lengths. Some codelets have already been exploring relations between these objects and, probably due to focused pressures from abc to see successorship relationships, have built a successorship link between the 1 and the 2.
A consistent trio of letter → group correspondences has been made, and as a result of these promising new structures the temperature has fallen to the relatively low value of 36, which in turn helps to lock in this emerging view.
If the program were to halt at this point, it would produce the answer mrrkkk, which is its most frequent answer (see figure 13.12).
Figure 13.9: As a result of length's continued activity, length descriptions have been attached to the remaining two groups in the problem, jjj and abc, and a successorship link between the 2 and the 3 (for which there is much focused pressure coming from both abc and the emerging view of mrrjjj) is being considered. Other less likely candidate structures (a bc group and a cj correspondence) continue to be considered, though at considerably less urgency than earlier, now that a coherent perception of the problem is emerging and the temperature is relatively low.
Figure 13.10: The link between the 2 and the 3 was built, which, in conjunction with focused pressure from the abc successor group, allowed codelets to propose and build a whole-string group based on successorship links, here between numbers rather than between letters. This group is represented by a large solid rectangle surrounding the three sameness groups. Also, a correspondence (dotted vertical line to the right of the two strings) is being considered between the two whole-string groups abc and mrrjjj.
FIGURE 13.9.
FIGURE 13.10.
Ironically, just as these sophisticated ideas seem to be converging, a small renegade codelet, totally unaware of the global movement, has had some good luck: its bid to knock down the cJ correspondence and replace it with a cj correspondence was successful. Of course, this is a setback on the global level; while the temperature would have gone down significantly because of the strong mrrjjj group that was built, its decrease was offset by the now nonparallel set of correspondences linking together the two strings. If the program were forced to stop at this point, it would answer mrrjjk, since at this point, the object that changed, the c, is seen as corresponding to the letter j rather than the group J. However, the two other correspondences will continue to put much pressure on the program (in the form of codelets) to go back to the cJ correspondence.
FIGURE 13.11.
Figure 13.11: Indeed, not much later in the run this happens: the cj correspondence has been broken and the cJ correspondence has been restored. In addition, the proposed whole-string correspondence between abc and mrrjjj has been built; underlying it are the mappings whole → whole, successor-group → successor-group, right → right (direction of the links underlying both groups), successor → successor (type of links underlying both groups), letter-category → length, and 3 → 3 (size of both groups).
The now very coherent set of perceptual structures built by the program resulted in a very low temperature (11), and (probabilistically) due to this low temperature, a codelet has succeeded in translating the rule according to the slippages present in the Workspace: letter → group and letter-category → length (all other mappings are identity mappings). The translated rule is "Replace the length of the rightmost group by its successor," and the answer is thus mrrjjjj.
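The translation step can be pictured as substituting the slipped concepts into the rule. The following Python sketch is a deliberately crude cartoon: Copycat represents rules as structured descriptions rather than strings, so the word-by-word substitution here is only an illustrative simplification of the idea.

def translate_rule(rule, slippages):
    # Apply concept slippages (e.g. letter -> group) to a rule, word by word.
    # A cartoon of rule translation, not Copycat's actual representation.
    return " ".join(slippages.get(word, word) for word in rule.split())

rule = "Replace the letter-category of the rightmost letter by its successor"
slippages = {"letter": "group", "letter-category": "length"}  # slippages from the cJ correspondence
print(translate_rule(rule, slippages))
# -> Replace the length of the rightmost group by its successor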
It should be clear from the description above that because each run of Copycat is permeated with probabilistic decisions, different answers appear on different runs. Figure 13.12 displays a bar graph showing the different answers Copycat gave over 1,000 runs, each starting from a different random number seed. Each bar's height gives the relative frequency of the answer it corresponds to, and printed above each bar is the actual number of runs producing that answer. The average final temperature for each answer is also given below each bar's label.
The frequency of an answer roughly corresponds to how obvious or immediate it is, given the biases of the program. For example, mrrkkk, produced 705 times, is much more immediate to the program than mrrjjjj, which was produced only 42 times. However, the average final temperature on runs producing mrrjjjj is much lower than on runs producing mrrkkk (21 versus 43), indicating that even though the latter is a more immediate answer, the program judges the former to be a better answer, in terms of the strength and coherence of the structures it built to produce each answer.
FIGURE 13.12. A bar graph plotting the different answers Copycat gave over 1,000 runs, each starting from a different random number seed.
Summary.
Via the mechanisms illustrated in this run of the program, Copycat avoids the Catch-22 of perception: you can't explore everything, but you don't know which possibilities are worth exploring without first exploring them. You have to be open-minded, but the territory is too vast to explore everything; you need to use probabilities in order for exploration to be fair. In Copycat's biologically inspired strategy, early on there is little information, resulting in high temperature and high degree of randomness, with lots of parallel explorations. As more and more information is obtained and fitting concepts are found, the temperature falls, and exploration becomes more deterministic and more serial as certain concepts come to dominate. The overall result is that the system gradually changes from a mostly random, parallel, bottom-up mode of processing to a deterministic, serial, focused mode in which a coherent perception of the situation at hand is gradually discovered and gradually "frozen in." As I illustrated in chapter 12, this gradual transition between different modes of processing seems to be a feature common to at least some complex adaptive systems.
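One way to picture how temperature mediates this shift from random, parallel exploration to deterministic, serial focus is a temperature-weighted choice among competing structures. The sketch below is only an illustration of the general idea; the exponential weighting, the strength values, and the structure names are assumptions of mine, not Copycat's actual decision formula:

import math, random

def choose(candidates, temperature):
    # Weight each candidate structure by exp(strength / temperature):
    # at high temperature the choice is nearly uniform (parallel, random exploration);
    # at low temperature the strongest candidate almost always wins (serial, focused).
    weights = [math.exp(strength / max(temperature, 1e-6)) for _, strength in candidates]
    names = [name for name, _ in candidates]
    return random.choices(names, weights=weights)[0]

candidates = [("cJ correspondence", 80.0), ("cj correspondence", 40.0)]  # illustrative strengths
for temp in (100, 10):
    picks = [choose(candidates, temp) for _ in range(1000)]
    share = picks.count("cJ correspondence") / 10
    print(f"temperature {temp:3d}: stronger structure chosen about {share:.0f}% of the time")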
Analogies such as those between Copycat and biological systems force us to think more broadly about the systems we are building or trying to understand. If one notices, say, that the role of cytokines in immune signaling is similar to that of codelets that call attention to particular sites in an analogy problem, one is thinking at a general information-processing level about the function of a biological entity. Similarly, if one sees that temperature-like phenomena in the immune system-fever, inflammation-emerge from the joint actions of many agents, one might get some ideas on how to better model temperature in a system like Copycat.
Finally, there is the ever-thorny issue of meaning. In chapter 12 I said that for traditional computers, information is not meaningful to the computer itself but to its human creators and "end users." However, I would like to think that Copycat, which represents a rather nontraditional mode of computation, perceives a very primitive kind of meaning in the concepts it has and in analogies it makes. For example, the concept successor group is embedded in a network in which it is linked to conceptually similar concepts, and Copycat can recognize and use this concept in an appropriate way in a large variety of diverse situations. This is, in my mind, the beginning of meaning. But as I said in chapter 12, meaning is intimately tied up with survival and natural selection, neither of which are relevant to Copycat, except for the very weak "survival" instinct of lowering its temperature. Copycat (and an even more impressive array of successor programs created in Hofstadter's research group) is still quite far from biological systems in this way.
The ultimate goal of AI is to take humans out of the meaning loop and have the computer itself perceive meaning. This is AI's hardest problem. The mathematician Gian-Carlo Rota called this problem "the barrier of meaning" and asked whether or when AI would ever "crash" it. I personally don't think it will be anytime soon, but if and when this barrier is unlocked, I suspect that analogy will be the key.
CHAPTER 14.
Prospects of Computer Modeling.
BECAUSE COMPLEX SYSTEMS ARE TYPICALLY, as their name implies, hard to understand, the more mathematically oriented sciences such as physics, chemistry, and mathematical biology have traditionally concentrated on studying simple, idealized systems that are more tractable via mathematics. However, more recently, the existence of fast, inexpensive computers has made it possible to construct and experiment with models of systems that are too complex to be understood with mathematics alone. The pioneers of computer science-Alan Turing, John von Neumann, Norbert Wiener, and others-were all motivated by the desire to use computers to simulate systems that develop, think, learn, and evolve. In this fashion a new way of doing science was born. The traditional division of science into theory and experiment has been complemented by an additional category: computer simulation (figure 14.1). In this chapter I discuss what we can learn from computer models of complex systems and what the possible pitfalls are of using such models to do science.
What Is a Model?
A model, in the context of science, is a simplified representation of some "real" phenomenon. Scientists supposedly study nature, but in reality much of what they do is construct and study models of nature.
Think of Newton's law of gravity: the force of gravity between two objects is proportional to the product of their masses divided by the square of the distance between them. This is a mathematical statement of the effects of a particular phenomenon-a mathematical model. Another kind of model describes how the phenomenon actually works in terms of simpler concepts-that is, what we call mechanisms. In Newton's own time, his law of gravity was attacked because he did not give a mechanism for gravitational force. Literally, he did not show how it could be explained in terms of "size, shape, and motion" of parts of physical objects-the primitive elements that were, according to Descartes, necessary and sufficient components of all models in physics. Newton himself speculated on possible mechanisms of gravity; for example, he "pictured the Earth like a sponge, drinking up the constant stream of fine aethereal matter falling from the heavens, this stream by its impact on bodies above the Earth causing them to descend." Such a conceptualization might be called a mechanistic model. Two hundred years later, Einstein proposed a different mechanistic model for gravity, general relativity, in which gravity is conceptualized as being caused by the effects of material bodies on the shape of four-dimensional space-time. At present, some physicists are touting string theory, which proposes that gravity is caused by minuscule, vibrating strings.
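In modern symbols the law reads

    F = G * m1 * m2 / r^2,

where m1 and m2 are the two masses, r is the distance between them, and G is the gravitational constant; the symbolic form and the constant G are the standard modern notation, not Newton's own.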
FIGURE 14.1. The traditional division of science into theory and experiment has been complemented by a new category: computer simulation. (Drawing by David Moser.)

Models are ways for our minds to make sense of observed phenomena in terms of concepts that are familiar to us, concepts that we can get our heads around (or in the case of string theory, that only a few very smart people can get their heads around). Models are also a means for predicting the future: for example, Newton's law of gravity is still used to predict planetary orbits, and Einstein's general relativity has been used to successfully predict deviations from those predicted orbits.
Idea Models.
For applications such as weather forecasting, the design of automobiles and airplanes, or military operations, computers are often used to run detailed and complicated models that in turn make detailed predictions about the specific phenomena being modeled.
In contrast, a major thrust of complex systems research has been the exploration of idea models: relatively simple models meant to gain insights into a general concept without the necessity of making detailed predictions about any specific system. Here are some examples of idea models that I have discussed so far in this book:

Maxwell's demon: An idea model for exploring the concept of entropy.
Turing machine: An idea model for formally defining "definite procedure" and for exploring the concept of computation.
Logistic model and logistic map: Minimal models for predicting population growth; these were later turned into idea models for exploring concepts of dynamics and chaos in general.
Von Neumann's self-reproducing automaton: An idea model for exploring the "logic" of self-reproduction.
Genetic algorithm: An idea model for exploring the concept of adaptation. Sometimes used as a minimal model of Darwinian evolution.
Cellular automaton: An idea model for complex systems in general.
Koch curve: An idea model for exploring fractal-like structures such as coastlines and snowflakes.
Copycat: An idea model for human analogy-making.
Idea models are used for various purposes: to explore general mechanisms underlying some complicated phenomenon (e.g., von Neumann's logic of self-reproduction); to show that a proposed mechanism for a phenomenon is plausible or implausible (e.g., dynamics of population growth); to explore the effects of variations on a simple model (e.g., investigating what happens when you change mutation rates in genetic algorithms or the value of the control parameter R in the logistic map); and more generally, to act as what the philosopher Daniel Dennett called "intuition pumps"-thought experiments or computer simulations used to prime one's intuitions about complex phenomena.
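As a concrete instance of that last kind of exploration, here is a small Python sketch that iterates the logistic map from chapter 2, x(t+1) = R x(t) (1 - x(t)), for a few values of the control parameter R; the starting value, the number of steps, and the particular R values are illustrative choices of mine:

def logistic_map(R, x0=0.2, steps=50):
    # Iterate x(t+1) = R * x(t) * (1 - x(t)) and return the whole trajectory.
    xs = [x0]
    for _ in range(steps):
        xs.append(R * xs[-1] * (1 - xs[-1]))
    return xs

# Illustrative parameter values: a fixed point, a period-two oscillation, and chaos.
for R in (2.0, 3.2, 4.0):
    tail = logistic_map(R)[-4:]
    print(f"R = {R}: last few values {[round(x, 3) for x in tail]}")

Changing nothing but R moves the model from settling at a single value, to oscillating between two values, to wandering chaotically, which is exactly the kind of variation-of-a-simple-model exploration described above.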
Idea models in complex systems also have provided inspiration for new kinds of technology and computing methods. For example, Turing machines inspired programmable computers; von Neumann's self-reproducing automaton inspired cellular automata; minimal models of Darwinian evolution, the immune system, and insect colonies inspired genetic algorithms, computer immune systems, and "swarm intelligence" methods, respectively.
To ill.u.s.trate the accomplishments and prospects of idea models in science, I now delve into a few examples of particular idea models in the social sciences, starting with the best-known one of all: the Prisoner's Dilemma.
Modeling the Evolution of Cooperation.
Many biologists and social scientists have used idea models to explore what conditions can lead to the evolution of cooperation in a population of self-interested individuals.
Indeed, living organisms are selfish-their success in evolutionary terms requires living long enough, staying healthy enough, and being attractive enough to potential mates in order to produce offspring. Most living creatures are perfectly willing to fight, trick, kill, or otherwise harm other creatures in the pursuit of these goals. Common sense predicts that evolution will select selfishness and self-preservation as desirable traits that will be passed on through generations and will spread in any population.
In spite of this prediction, there are notable counterexamples to selfishness at all levels of the biological and social realms. Starting from the bottom, sometime in evolutionary history, groups of single-celled organisms cooperated in order to form more complex multicelled organisms. At some point later, social communities such as ant colonies evolved, in which the vast majority of ants not only work for the benefit of the whole colony, but also abnegate their ability to reproduce, allowing the queen ant to be the only source of offspring. Much later, more complex societies emerged in primate populations, involving communal solidarity against outsiders, complicated trading, and eventually human nations, governments, laws, and international treaties.
Biologists, sociologists, economists, and political scientists alike have faced the question of how such cooperation can arise among fundamentally selfish individuals. This is not only a question of science, but also of policy: e.g., is it possible to engender conditions that will allow cooperation to arise and persist among different nations in order to deal with international concerns such as the spread of nuclear weapons, the AIDS epidemic, and global warming?
FIGURE 14.2. Alice and Bob face a "Prisoner's Dilemma." (Drawing by David Moser.).
THE PRISONER'S DILEMMA.
In the 1950s, at the height of the Cold War, many people were thinking about how to foster cooperation between enemy nations so as to prevent a nuclear war. Around 1950, two mathematical game theorists, Merrill Flood and Melvin Drescher, invented the Prisoner's Dilemma as a tool for investigating such cooperation dilemmas.
The Prisoner's Dilemma is often framed as follows. Two individuals (call them Alice and Bob) are arrested for committing a crime together and are put into separate rooms to be interrogated by police officers (figure 14.2). Alice and Bob are each separately offered the same deal in return for testifying against the other. If Alice agrees to testify against Bob, then she will go free and Bob will receive a sentence of life in prison. However, if Alice refuses to testify but Bob agrees to testify, he will go free and she will receive the life sentence. If they both testify against the other, they each will go to prison, but for a reduced sentence of ten years. Both Alice and Bob know that if neither testifies against the other they can be convicted only on a lesser charge for which they will go to jail for five years. The police demand a decision from each of them on the spot, and of course don't allow any communication between them.
If you were Alice, what would you do?
You might reason it out this way: Bob is either going to testify against you or keep quiet, and you don't know which. Suppose he plans to testify against you. Then you would come out best by testifying against him (ten years vs. life in prison). Now suppose he plans to keep quiet. Again your best choice is to testify (go free vs. five years in prison). Thus, regardless of what Bob does, your best bet for saving your own skin is to agree to testify.
The problem is that Bob is following the exact same line of reasoning. So the result is, you both agree to testify against the other, giving each of you a worse outcome than if you had both kept quiet.
Let me tell this story in a slightly different context. Imagine you are the U.S. president. You are considering a proposal to build a powerful nuclear weapon, much more powerful than any that you currently have in your stockpile. You suspect, but don't know for sure, that the Russian government is considering the same thing.
Look into the future and suppose the Russians indeed end up building such a weapon. If you also had decided to build the weapon, then the United States and Russia would remain equal in fire power, albeit at significant cost to each nation and making the world a more dangerous place. If you had decided not to build the weapon, then Russia would have a military edge over the United States.
Now suppose Russia does not build the weapon in question. Then if you had decided to build it, the United States would have a military edge over Russia, though at some cost to the nation, and if you had decided not to build, the United States and Russia would remain equal in weaponry.
Just as we saw for Bob and Alice, regardless of what Russia is going to do, the rational thing is for you to approve the proposal, since in each case building the weapon turns out to be the better choice for the United States. Of course the Russians are thinking along similar lines, so both nations end up building the new bomb, producing a worse outcome for both than if neither had built it.
This is the paradox of the Prisoner's Dilemma-in the words of political scientist Robert Axelrod, "The pursuit of self-interest by each leads to a poor outcome for all." This paradox also applies to the all too familiar case of a group of individuals who, by selfishly pursuing their own interests, collectively bring harm to all members of the group (global warming is a quintessential example). The ecologist Garrett Hardin has famously called such scenarios "the tragedy of the commons."
The Prisoner's Dilemma and variants of it have long been studied as idea models that embody the essence of the cooperation problem, and results from those studies have influenced how scholars, businesspeople, and governments think about real-world policies ranging from weapons control and responses to terrorism to corporate management and regulation.
The Dilemma is typically formulated in terms of a two-person "game" defined by what mathematical game theorists call a payoff matrix-an array of all possible outcomes for two players. One possible payoff matrix for the Prisoner's Dilemma is given in figure 14.3. Here, the goal is to get as many points (as opposed to as few years in prison) as possible. A turn consists of each player independently making a "cooperate or defect" decision. That is, on each turn, players A and B independently, without communicating, decide whether to cooperate with the other player (e.g., refuse to testify; decide not to build the bomb) or to defect from the other player (e.g., testify; build the bomb). If both players cooperate, each receives 3 points. If player A cooperates and player B defects, then player A receives zero points and player B gets 5 points, and vice versa if the situation is reversed. If both players defect, each receives 1 point. As I described above, if the game lasts for only one turn, the rational choice for both is to defect. However, if the game is repeated, that is, if the two players play several turns in a row, both players' always defecting will lead to a much lower total payoff than the players would receive if they learned to cooperate. How can reciprocal cooperation be induced?
FIGURE 14.3. A payoff matrix for the Prisoner's Dilemma game.
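The payoff structure of figure 14.3 and the one-turn reasoning can be written out in a few lines of Python; the ten-turn comparison at the end is an added illustration of why repetition changes the picture:

# Payoffs as in figure 14.3: (A's points, B's points) for each pair of moves.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# One turn: whatever B does, A scores more by defecting (5 > 3 and 1 > 0) ...
for b_move in ("cooperate", "defect"):
    print(f"If B plays {b_move}: A gets {PAYOFF[('cooperate', b_move)][0]} by cooperating, "
          f"{PAYOFF[('defect', b_move)][0]} by defecting")

# ... yet over many turns, mutual cooperation beats mutual defection.
turns = 10
print("10 turns of mutual cooperation:", turns * PAYOFF[("cooperate", "cooperate")][0], "points each")
print("10 turns of mutual defection:  ", turns * PAYOFF[("defect", "defect")][0], "points each")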
Robert Axelrod, of the University of Michigan, is a political scientist who has extensively studied and written about the Prisoner's Dilemma. His work on the Dilemma has been highly influential in many different disciplines, and has earned him several prestigious research prizes, including a MacArthur Foundation "genius" award.
Axelrod began studying the Dilemma during the Cold War as a result of his own concern over escalating arms races. His question was, "Under what conditions will cooperation emerge in a world of egoists without central authority?" Axelrod noted that the most famous historical answer to this question was given by the seventeenth-century philosopher Thomas Hobbes, who concluded that cooperation could develop only under the aegis of a central authority. Three hundred years (and countless wars) later, Albert Einstein similarly proposed that the only way to ensure peace in the nuclear age was to form an effective world government. The League of Nations, and later, the United Nations, were formed expressly for this purpose, but neither has been very successful in either establishing a world government or instilling peace between and within nations.
Robert Axelrod. (Photograph courtesy of the Center for the Study of Complex Systems, University of Michigan.).
Since an effective central government seems out of reach, Axelrod wondered if and how cooperation could come about without one. He believed that exploring strategies for playing the simple repeated Prisoner's Dilemma game could give insight into this question. For Axelrod, "cooperation coming about" meant that cooperative strategies must, over time, receive higher total payoff than noncooperative ones, even in the face of any changes opponents make to their strategies as the repeated game is being played. Furthermore, if the players' strategies are evolving under Darwinian selection, the fraction of cooperative strategies in the population should increase over time.
COMPUTER SIMULATIONS OF THE PRISONER'S DILEMMA.
Axelrod's interest in determining what makes for a good strategy led him to organize two Prisoner's Dilemma tournaments. He asked researchers in several disciplines to submit computer programs that implemented particular strategies for playing the Prisoner's Dilemma, and then had the programs play repeated games against one another.
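The structure of such a tournament can be sketched as follows. The two strategies shown are simple illustrative examples (one that always defects, one that simply echoes its opponent's previous move), not a reproduction of the actual tournament entries, and the number of turns is an arbitrary choice:

import itertools

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def echo_opponent(my_history, their_history):
    # Cooperate on the first turn, then copy whatever the opponent did last turn.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, turns=200):
    # Play a repeated Prisoner's Dilemma and return each player's total points.
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(turns):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

entries = {"ALWAYS DEFECT": always_defect, "ECHO OPPONENT": echo_opponent}
totals = dict.fromkeys(entries, 0)
for name_a, strat_a in entries.items():
    for name_b, strat_b in entries.items():   # round-robin, including self-play
        score_a, _ = play(strat_a, strat_b)
        totals[name_a] += score_a
print(totals)

Even in this two-entry toy tournament, the strategy that is willing to cooperate ends up with the higher total, because it earns the large mutual-cooperation payoff against itself while losing only a little against the pure defector.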