Complexity: A Guided Tour - Part 7

So far all the complexity measures I have discussed have been based on information-theoretic or computation-theoretic concepts. However, these are not the only possible sources of measures of complexity. Other people have proposed using concepts from dynamical systems theory to measure the complexity of an object or process. One such measure is the fractal dimension of an object. To explain this measure, I must first explain what a fractal is.

The classic example of a fractal is a coastline. If you view a coastline from an airplane, it typically looks rugged rather than straight, with many inlets, bays, prominences, and peninsulas (Figure 7.2, top). If you then view the same coastline from your car on the coast highway, it still appears to have the exact same kind of ruggedness, but on a smaller scale (Figure 7.2, bottom). Ditto for the close-up view when you stand on the beach and even for the ultra close-up view of a snail as it crawls on individual rocks. The similarity of the shape of the coastline at different scales is called "self-similarity."

The term fractal was coined by the French mathematician Benoit Mandelbrot, who was one of the first people to point out that the world is full of fractals-that is, many real-world objects have a rugged self-similar structure. Coastlines, mountain ranges, snowflakes, and trees are often-cited examples. Mandelbrot even proposed that the universe is fractal-like in terms of the distribution of galaxies, clusters of galaxies, clusters of clusters, et cetera. Figure 7.3 illustrates some examples of self-similarity in nature.

Although the term fractal is sometimes used to mean different things by different people, in general a fractal is a geometric shape that has "fine structure at every scale." Many fractals of interest have the self-similarity property seen in the coastline example given above. The logistic-map bifurcation diagram from chapter 2 (figure 2.6) also has some degree of self-similarity; in fact, the attractors in the chaotic region (R greater than about 3.57) of this and many other systems are sometimes called fractal attractors.

Mandelbrot and other mathematicians have designed many different mathematical models of fractals in nature. One famous model is the so-called Koch curve (Koch, pronounced "Coke," is the name of the Swedish mathematician who proposed this fractal). The Koch curve is created by repeated application of a rule, as follows.



FIGURE 7.2. Top: Large-scale aerial view of Ireland, whose coastline has self-similar (fractal) properties. Bottom: Smaller-scale view of part of the Irish coastline. Its rugged structure at this scale resembles the rugged structure at the larger scale. (Top photograph from NASA Visible Earth [http://visibleearth.nasa.gov/]. Bottom photograph by Andreas Borchet, licensed under Creative Commons [http://creativecommons.org/licenses/by/3.0/].)

FIGURE 7.3. Other examples of fractal-like structures in nature: A tree, a snowflake (microscopically enlarged), a cluster of galaxies. (Tree photograph from the National Oceanic and Atmospheric Administration Photo Library. Snowflake photograph from [http://www.SnowCrystals.com], courtesy of Kenneth Libbrecht. Galaxy cluster photograph from NASA Space Telescope Science Institute.)

1. Start with a single line.

_______________________________.

2. Apply the Koch curve rule: "For each line segment, replace its middle third by two sides of a triangle, each of length 1/3 of the original segment." Here there is only one line segment; applying the rule to it yields: [figure not reproduced]

3. Apply the Koch curve rule to the resulting figure. Keep doing this forever. For example, here are the results from a second, third, and fourth application of the rule: [figures not reproduced]

This last figure looks a bit like an idealized coastline. (In fact, if you turn the page 90 degrees to the left and squint really hard, it looks just like the west coast of Alaska.) Notice that it has true self-similarity: all of the subshapes, and their subshapes, and so on, have the same shape as the overall curve. If we applied the Koch curve rule an infinite number of times, the figure would be self-similar at an infinite number of scales-a perfect fractal. A real coastline of course does not have true self-similarity. If you look at a small section of the coastline, it does not have exactly the same shape as the entire coastline, but is visually similar in many ways (e.g., curved and rugged). Furthermore, in real-world objects, self-similarity does not go all the way to infinitely small scales. Real-world structures such as coastlines are often called "fractal" as a shorthand, but it is more accurate to call them "fractal-like," especially if a mathematician is in hearing range.
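
The construction is easy to express in code. Below is a minimal Python sketch (my own construction, not from the book) that applies the Koch rule repeatedly, representing the curve as a list of 2-D points stored as complex numbers, and reports how the number of segments and the total length grow with each application.

    import cmath

    def koch_step(points):
        """Apply the Koch rule once: each segment becomes four segments,
        each 1/3 the original length, with a triangular bump in the middle."""
        new_points = [points[0]]
        for a, b in zip(points, points[1:]):
            d = (b - a) / 3
            peak = a + d + d * cmath.exp(1j * cmath.pi / 3)  # apex of the bump
            new_points += [a + d, peak, a + 2 * d, b]
        return new_points

    points = [0j, 1 + 0j]            # step 1: a single line of length 1
    for level in range(1, 5):        # steps 2 and 3: apply the rule repeatedly
        points = koch_step(points)
        length = sum(abs(b - a) for a, b in zip(points, points[1:]))
        print(f"level {level}: {len(points) - 1} segments, length {length:.4f}")

Each application multiplies the number of segments by 4 and the total length by a factor of 4/3, which anticipates the dimension calculation below.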

Fractals wreak havoc with our familiar notion of spatial dimension. A line is one-dimensional, a surface is two-dimensional, and a solid is three-dimensional. What about the Koch curve?

First, let's look at what exactly dimension means for regular geometric objects such as lines, squares, and cubes.

Start with our familiar line segment. Bisect it (i.e., cut it in half). Then bisect the resulting line segments, continuing at each level to bisect each line segment. Each level is made up of two half-sized copies of the previous level.

Now start with a square. Bisect each side. Then bisect the sides of the resulting squares, continuing at each level to bisect every side. Each level is made up of four one-quarter-sized copies of the previous level.

Now, you guessed it, take a cube and bisect all the sides. Keep bisecting the sides of the resulting cubes. Each level is made up of eight one-eighth-sized copies of the previous level.

This sequence gives a meaning to the term dimension. In general, each level is made up of smaller copies of the previous level, where the number of copies is 2 raised to the power of the dimension (2^dimension). For the line, we get 2^1 = 2 copies at each level; for the square we get 2^2 = 4 copies at each level; and for the cube we get 2^3 = 8 copies at each level. Similarly, if you trisect instead of bisect the lengths of the line segments at each level, then each level is made up of 3^dimension copies of the previous level. I'll state this as a general formula: create a geometric structure from an original object by repeatedly dividing the length of its sides by a number x. Then each level is made up of x^dimension copies of the previous level.
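
Inverting that relation gives a way to compute the dimension from the number of copies and the shrink factor x. Here is a minimal Python sketch (mine, not the book's):

    import math

    def dimension(copies, x):
        """Solve copies = x**dimension for the exponent."""
        return math.log(copies) / math.log(x)

    print(dimension(2, 2))   # line:   1.0
    print(dimension(4, 2))   # square: 2.0
    print(dimension(8, 2))   # cube:   3.0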

Indeed, according to this definition of dimension, a line is one-dimensional, a square two-dimensional, and a cube three-dimensional. All well and good.

Let's apply an analogous definition to the object created by the Koch rule. At each level, the line segments of the object are three times smaller than before, and each level consists of four copies of the previous level. By our definition above, it must be true that 3^dimension is equal to 4. What is the dimension? Solving 3^dimension = 4 for the exponent gives dimension = log 4 / log 3, which is approximately 1.26 (the calculation is detailed in the notes). That is, the Koch curve is neither one- nor two-dimensional, but somewhere in between. Amazingly enough, fractal dimensions need not be integers. That's what makes fractals so strange.
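
The value is quick to confirm in Python (any logarithm base works, since the bases cancel):

    import math
    print(math.log(4) / math.log(3))   # 1.2618..., the Koch curve's dimension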

In short, the fractal dimension quantifies the number of copies of a self-similar object at each level of magnification of that object. Equivalently, fractal dimension quantifies how the total size (or area, or volume) of an object will change as the magnification level changes. For example, if you measure the total length of the Koch curve each time the rule is applied, you will find that each time the length has increased by a factor of 4/3. Only perfect fractals-those whose levels of magnification extend to infinity-have a precise fractal dimension. For real-world finite fractal-like objects such as coastlines, we can measure only an approximate fractal dimension.

I have seen many attempts at intuitive descriptions of what fractal dimension means. For example, it has been said that fractal dimension represents the "roughness," "ruggedness," "jaggedness," or "complicatedness" of an object; an object's degree of "fragmentation"; and how "dense the structure" of the object is. As an example, compare the coastline of Ireland (figure 7.2) with that of South Africa (figure 7.4). The former has higher fractal dimension than the latter.

One description I like a lot is the rather poetic notion that fractal dimension "quantifies the cascade of detail" in an object. That is, it quantifies how much detail you see at all scales as you dive deeper and deeper into the infinite cascade of self-similarity. For structures that aren't fractals, such as a smooth round marble, if you keep looking at the structure with increasing magnification, eventually there is a level with no interesting details. Fractals, on the other hand, have interesting details at all levels, and fractal dimension in some sense quantifies how interesting that detail is as a function of how much magnification you have to do at each level to see it.

This is why people have been attracted to fractal dimension as a way of measuring complexity, and many scientists have applied this measure to real-world phenomena. However, ruggedness or cascades of detail are far from the only kind of complexity we would like to measure.

FIGURE 7.4. Coastline of South Africa. (Photograph from NASA Visible Earth [http://visibleearth.nasa.gov].)

Complexity as Degree of Hierarchy.

In his famous 1962 paper "The Architecture of Complexity," Herbert Simon proposed that the complexity of a system can be characterized in terms of its degree of hierarchy: "the complex system being composed of subsystems that, in turn, have their own subsystems, and so on." Simon was a distinguished political scientist, economist, and psychologist (among other things); in short, a brilliant polymath who probably deserves a chapter of his own in this book.

Simon proposed that the most important common attributes of complex systems are hierarchy and near-decomposability. He lists a number of complex systems that are structured hierarchically-e.g., the body is composed of organs, which are in turn composed of cells, which are in turn composed of cellular subsystems, and so on. In a way, this notion is similar to fractals in the idea that there are self-similar patterns at all scales.

Near-decomposability refers to the fact that, in hierarchical complex systems, there are many more strong interactions within a subsystem than between subsystems. As an example, each cell in a living organism has a metabolic network that consists of a huge number of interactions among substrates, many more than take place between two different cells.
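
To make the idea concrete, here is a toy Python sketch (my construction, not Simon's) that counts within-subsystem versus between-subsystem interactions in a tiny two-module system:

    # Elements a1-a3 form subsystem A; b1-b3 form subsystem B.
    interactions = [
        ("a1", "a2"), ("a1", "a3"), ("a2", "a3"),   # within subsystem A
        ("b1", "b2"), ("b1", "b3"), ("b2", "b3"),   # within subsystem B
        ("a1", "b1"),                               # the lone link between them
    ]

    # Pairs whose names share a leading letter lie within one subsystem.
    within = sum(x[0] == y[0] for x, y in interactions)
    between = len(interactions) - within
    print(f"within-subsystem: {within}, between-subsystem: {between}")

In a nearly decomposable system the first count dominates the second, which is why each subsystem's short-term behavior can be studied almost independently of the rest.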

Simon contends that evolution can design complex systems in nature only if they can be put together like building blocks-that is, only if they are hierarchical and nearly decomposable; a cell can evolve and then become a building block for a higher-level organ, which itself can become a building block for an even higher-level organ, and so forth. Simon suggests that what the study of complex systems needs is "a theory of hierarchy."

Many others have explored the notion of hierarchy as a possible way to measure complexity. As one example, the evolutionary biologist Daniel McShea, who has long been trying to make sense of the notion that the complexity of organisms increases over evolutionary time, has proposed a hierarchy scale that can be used to measure the degree of hierarchy of biological organisms. McShea's scale is defined in terms of levels of nestedness: a higher-level entity contains as parts entities from the next lower level. McShea proposes the following biological example of nestedness:

Level 1: Prokaryotic cells (the simplest cells, such as bacteria)

Level 2: Aggregates of level 1 organisms, such as eukaryotic cells (more complex cells whose evolutionary ancestors originated from the fusion of prokaryotic cells)

Level 3: Aggregates of level 2 organisms, namely all multicellular organisms

Level 4: Aggregates of level 3 organisms, such as insect colonies and "colonial organisms" such as the Portuguese man o' war
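
McShea's scale can be read as depth of nesting. A minimal Python sketch (the tokens and aggregation counts are illustrative, not McShea's data) represents each level as an aggregate of entities from the level below and computes the degree of hierarchy as nesting depth:

    prokaryote = "prokaryotic cell"                 # level 1
    eukaryote = [prokaryote, prokaryote]            # level 2: aggregate of level 1
    multicellular = [eukaryote, eukaryote]          # level 3: aggregate of level 2
    colony = [multicellular, multicellular]         # level 4: e.g., an insect colony

    def level(entity):
        """Degree of hierarchy = depth of nesting."""
        if isinstance(entity, str):
            return 1
        return 1 + max(level(part) for part in entity)

    print(level(prokaryote), level(eukaryote), level(multicellular), level(colony))
    # -> 1 2 3 4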

Each level can be said to be more complex than the previous level, at least as far as nestedness goes. Of course, as McShea points out, nestedness only describes the structure of an organism, not any of its functions.

McShea used data both from fossils and modern organisms to show that the maximum hierarchy seen in organisms increases over evolutionary time. Thus this is one way in which complexity seems to have quantifiably increased with evolution, although measuring the degree of hierarchy in actual organisms can involve some subjectivity in determining what counts as a "part" or even a "level."

There are many other measures of complexity that I don't have space to cover here. Each of these measures captures something about our notion of complexity but all have both theoretical and practical limitations, and have so far rarely been useful for characterizing any real-world system. The diversity of measures that have been proposed indicates that the notions of complexity that we're trying to get at have many different interacting dimensions and probably can't be captured by a single measurement scale.

PART II.

Life and Evolution in Computers.

Nature proceeds little by little from things lifeless to animal life in such a way that it is impossible to determine the exact line of demarcation.

-Aristotle, History of Animals.

[W]e all know intuitively what life is: it is edible, lovable, or lethal.

-James Lovelock, The Ages of Gaia.

CHAPTER 8.

Self-Reproducing Computer Programs.

What Is Life?

CHAPTER 5 DESCRIBED SOME OF THE HISTORY of ideas about how life has evolved. But a couple of things were missing, such as, how did life originate in the first place? And what exactly constitutes being alive? As you can imagine, both questions are highly contentious in the scientific world, and no one yet has definitive answers. Although I do not address the first question here, there has been some fascinating research on it in the complex systems community.

The second question-what is life, exactly?-has been on the minds of people probably for as long as "people" have existed. There is still no good agreement among either scientists or the general public on the definition of life. Questions such as "When does life begin?" or "What form could life take on other planets?" are still the subject of lively, and sometimes vitriolic, debate.

The idea of creating artificial life is also very old, going back at least two millennia to legends of the Golem and of Ovid's Pygmalion, continuing in the nineteenth-century story of Frankenstein's monster, all the way to the present era of movies such as Blade Runner and The Matrix, and computer games such as "Sim Life."

These works of fiction both presage and celebrate a new, technological version of the "What is life?" question: Is it possible for computers or robots to be considered "alive"? This question links the previously separate topics of computation and of life and evolution.

You can ask ten biologists what the ten key requisites for life are and you'll get a different list each time. Most are likely to include autonomy, metabolism, self-reproduction, survival instinct, and evolution and adaptation. As a start, can we understand these processes mechanistically and capture them in computers?

Many people have argued a vehement "no" for the following reasons:

Autonomy: A computer can't do anything on its own; it can do only what humans program it to do.

Metabolism: Computers can't create or gather their own energy from their environment like living organisms do; they have to be fed energy (e.g., electricity) by humans.

Self-reproduction: A computer can't reproduce itself; to do so it would have to contain a description of itself, and that description would have to contain a description of itself, and so on ad infinitum.

Survival instinct: Computers don't care whether they survive or not and they don't care how successful they are. (For example, in a lecture I attended by a prominent psychologist, the speaker, discussing the success of computer chess programs, asserted that "Deep Blue may have beat Kasparov, but it didn't get any joy out of it.")

Evolution and adaptation: A computer can't evolve or adapt on its own; it is restricted to change only in ways specified ahead of time by its programmer.

Although these arguments are still believed by many people, all of them have been claimed to be disproven in one way or another in the field of artificial life, whose purview is the simulation and "creation" of life on computers. In this chapter and the next I focus on those issues most closely related to Darwinism-self-reproduction and evolution.

Self-Reproduction in Computers.

The self-reproduction argument is the most mathematical one: it states that self-reproduction in a computer would lead to an infinite regress.

Let's investigate this issue via the simplest version of the computer self-reproduction problem: write a computer program that prints out an exact copy of itself and nothing else.

I've written the following programs in a simple computer language that even nonprogrammers should be able to understand. (It's actually a pseudo-language, with a few commands that most real languages wouldn't have, but still plausible and thrown in to make things simpler.) Here's a first attempt. I start out with the name of the program:

program copy

Now I need to add an instruction to print out the name of the program:

program copy
    print("program copy")

The print command simply prints on the computer screen the characters between the first and last quotation marks, followed by a carriage return. Now I need to add an instruction to print out that second line:

program copy
    print("program copy")
    print("    print("program copy")")

Note that, since I want the program to print out an exact copy of itself, the second print command has to include the four spaces of indentation I put before the first print command, plus the two quotation marks, which are now themselves being quoted (the print command prints anything, including quotation marks, between the outermost quotation marks). Now I need another line to print out that third line:

program copy
    print("program copy")
    print("    print("program copy")")
    print("    print("    print("program copy")")")

and so forth. By now you can probably see how this strategy, in which each command prints an exact copy of the command preceding it, leads to an infinite regress. How can this be avoided? Before reading on, you might spend a few moments trying to solve this puzzle.

This simple-sounding problem turns out to have echoes in the work of Kurt Gödel and Alan Turing, which I described in chapter 4. The solution also contains an essential means by which biological systems themselves get around the infinite regress. The solution was originally found, in the context of a more complicated problem, by the twentieth-century Hungarian mathematician John von Neumann.

Von Neumann was a pioneer in fields ranging from quantum mechanics to economics and a designer of one of the earliest electronic computers. His design consisted of a central processing unit that communicates with a random access memory in which both programs and data can be stored. It remains the basic design of all standard computers today. Von Neumann was also one of the first scientists who thought deeply about connections between computation and biology. He dedicated the last years of his life to solving the problem of how a machine might be able to reproduce itself; his solution was the first complete design for a self-reproducing machine. The self-copying computer program I will show you was inspired by his "self-reproducing automaton" and illustrates its fundamental principle in a simplified way.

FIGURE 8.1. A simplified picture of computer memory, with numbered locations 1-5 and beyond, four of which contain lines of a program. The instruction pointer points to the instruction currently being executed by the computer. The lines sometimes contain leading spaces, which are ignored when the instruction is executed.

Before showing you the self-copying program, I need to explain a few more things about the programming language I will be using.

Consider the picture of computer memory given in figure 8.1. Computer memory, in our highly simplified case, consists of numbered locations or "addresses," here numbered 1-5 and beyond. Each location contains some text. These lines of text can be interpreted by the computer as commands in a program or as data to be used by a program. The program currently stored in memory, when executed, will print

hello, world!
Goodbye

To accomplish this, the computer has an "instruction pointer"-a number also stored in memory, which is always equal to the memory location of the instruction currently being executed by the computer. The instruction pointer-let's call it ip for short-is initially set to the memory location containing the program's first line. We say it "points to" that instruction. At each step in the computation the instruction pointed to by ip is carried out, and ip is increased by 1.

For example, in figure 8.1, the value of ip is 2, and we say that ip is pointing to the line print("hello, world!").

We call ip a variable since its value changes ("varies") as the computation is carried out.

We also can define another variable, line[n], as equal to the string of characters in memory location n. For example, the command print(line[2]) will print

print("hello, world!")

Finally, our language contains a loop command. For example, the following lines of computer code,

x = 0
loop until x = 4
{
    print("hello, world!")
    x = x + 1
}

will print

hello, world!
hello, world!
hello, world!
hello, world!

The commands inside the two curly brackets get repeated until the loop condition (here x = 4) is true. The variable x is used as a counter-it starts off at zero and is increased by 1 each time the loop is performed. When it gets to 4, the loop stops.
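
For readers who know a mainstream language, here is a direct Python translation of that loop (my rendering; the pseudo-language's loop-until becomes a while with the condition negated):

    x = 0
    while x != 4:            # "loop until x = 4"
        print("hello, world!")
        x = x + 1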

Now we are ready for the self-copying program, which appears in its entirety in figure 8.2. The best way to understand a computer program is to hand-simulate it; that is, to go through it line by line keeping track of what it does.

Suppose this program has been loaded into computer memory as shown in figure 8.2, and suppose a user comes along and types selfcopy on the computer's command line. This signals the computer to start interpreting the program called selfcopy. The interpreter-part of the computer's operating system-sets the instruction pointer to 1, which points to the name of the program. The ip then moves down, line by line, executing each instruction.

FIGURE 8.2. A self-copying program.

In memory location 2 a variable L is set to ip - 1. Recall that ip is the location of the instruction currently being executed. So when line 2 is executed, ip is set to 2 and L is set to 2 - 1 = 1. (Note that L will now stay equal to 1 until it is reset, even though ip changes as each instruction is executed.) Next, a loop is entered, which will be iterated until line[L] is equal to the character string end. Remember that line[L] is equal to the string located in memory location L. Right now, L is set to 1, so line[L] is equal to the string program selfcopy. This is not equal to the string end, so the loop is continued. In the loop, line[L] is printed and L is incremented. First, with L = 1, program selfcopy is printed; then L is set to 2.

Now, line[L] is the second line of the program, namely L = ip - 1. Again, this string is not equal to end, so the loop is continued. In this way, each line of the program is printed out. A particularly interesting line is line 5: when line 5 is being executed with L = 5, the instruction print(line[L]) prints itself out. When L = 9 and line[L] is equal to end, the loop ends. At this point, lines 1-8 have been printed. The instruction pointer moves to line 8 (the instruction immediately following the loop), which, when executed, prints out the string "end," completing the self-copying.
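
Since figure 8.2 itself is not reproduced here, the sketch below is a hypothetical Python reconstruction of it based on the walkthrough above; the exact line numbering and spacing are my assumptions. The program's nine lines sit in a memory table, and the Python statements mirror what each pseudo-language line does when executed.

    # Reconstructed figure 8.2: the selfcopy program stored in memory.
    memory = {
        1: 'program selfcopy',
        2: '    L = ip - 1',
        3: '    loop until line[L] = "end"',
        4: '    {',
        5: '        print(line[L])',
        6: '        L = L + 1',
        7: '    }',
        8: '    print("end")',
        9: 'end',
    }

    ip = 2                                # line 2 is executing, so ip = 2
    L = ip - 1                            # L = 1, the program's first line
    while memory[L].strip() != 'end':     # the loop spanning lines 3-7
        print(memory[L])                  # when L = 5, this prints line 5 itself
        L = L + 1                         # loop stops once L = 9 ("end")
    print('end')                          # line 8: print the final "end"

Running this prints the nine stored lines verbatim: an exact copy of the program.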

The essence of self-copying in this program is to use the same information stored in memory in two ways: first as instructions to be executed, and second as data to be used (i.e., printed) by those instructions. This dual use of information is what allows us to avoid an infinite regress of the kind ill.u.s.trated earlier by my first attempt at a self-copying program.
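
The same trick works in real programming languages. Here is a classic two-line Python quine (a standard construction, not from the book): the string s is used both as data, since it is printed, and, through the % formatting operation, as the template for the very instructions that print it. Run on their own, these two lines print an exact copy of themselves.

    s = 's = %r\nprint(s %% s)'
    print(s % s)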

The Deeper Meaning of the Self-Reproducing Computer Program.

The dual use of information is also at the heart of Gödel's paradox, embodied by his self-referential sentence "This statement is not provable."

This is a bit tricky to understand. First, let's note that this sentence, like any other English sentence, can be looked at in two ways: (1) as the literal string of letters, spaces, and punctuation contained in the sentence, and (2) as the meaning of that literal string, as interpreted by an English speaker.

To be very clear, let's call the literal string of characters S. That is, S = "This statement is not provable." We can now state facts about S: for example, it contains twenty-six letters, four spaces, and one period.
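
Those facts about S are easy to verify mechanically, which underscores that S, taken literally, is just data:

    S = "This statement is not provable."
    print(sum(c.isalpha() for c in S))   # 26 letters
    print(S.count(" "))                  # 4 spaces
    print(S.count("."))                  # 1 period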

Let's call the meaning of the sentence M. We can rewrite M as follows: "Statement S is not provable." In a way, you can think of M as a "command" and of S as the "data" for that command. The weird (and wonderful) thing is that the data S is the same thing as the command M. The chief reason Gödel was able to translate his English sentence into a paradox in mathematics was that he was able to phrase M as a mathematical statement and S as a number that encoded the string of characters of that mathematical statement.

This is all very tricky. This kind of distinction between literal strings of characters and the meaning of those strings, and the paradoxes that self-reference can produce, are discussed in a detailed and very entertaining way in Douglas Hofstadter's book Gödel, Escher, Bach: an Eternal Golden Braid.
