Incomplete Nature Part 2

The term homunculus has also made its way into contemporary science in other, less problematic ways. For example, neurologists use the term to describe the maplike patterns of tactile and motor responsive areas on the surface of the brain because they preserve a somewhat distorted topography of human body regions. Running vertically up and down the midline on either side of the cerebral cortex there are regions that topographically map tactile responsiveness of body surfaces and movement control of muscles. Anatomy and psychology texts often depict these regions as distorted body projections, with disproportionately enlarged mouth, face, hand, and foot for the opposite half of the body. Similar topographic projections are characteristic of the other major senses as well. Thus, the cortical area that receives retinal inputs is organized as a retinotopic map (a map that reflects position in the visual field), and the cortical area that receives inputs originating in the inner ear or cochlea is organized as a tonotopic map (a map organized from low- to high-frequency sounds).

This recapitulation of body organization in the brain is actually a bit more fragmentary and irregular than usually depicted. In tactile and motor areas, for example, the maps are not always continuous (for comparison, think of flat maps of the continents that are "cut" in ways that minimize distortion of continent size and shape). In the brain, the relative size distortions of certain homunculus features often reflect the density of innervation of the corresponding peripheral structure. So the center of gaze, which settles on the most receptor-dense region of the retina (called the fovea), is represented by a proportionately much larger area of the primary visual cortex than is the periphery of the visual field, which in actual dimensions is much larger though less dense in receptors. Similarly, the body regions with the greatest tactile sensitivity and the finest movement control are represented in the cerebral cortex by disproportionately larger areas. So these distorted homunculi provide a useful heuristic for representing brain organization.

Unfortunately, the existence of these homuncular representations can also be an invitation to misleading shortcuts of explanation. This is because a map is a representation, and it can be tempting to mistake a map for the territory, in the sense of treating it as intrinsically interpreted. Although a map is similar in structure to what it represents, it is not intrinsically meaningful. It requires reading and comparison. But who or what does this in the brain? Using a map of a city, for example, requires knowing that certain lines correspond to roads, that distances are uniformly and systematically reduced to some tiny fraction of the real distances, and so forth. Tracing directions on the map is a reduced version of navigating through the city. In this sense, carrying the map is carrying a miniature version, a model, of the city. It is not surprising that people often feel as though finding a miniature map of the body in the brain somehow explains how sensation works. The correspondence is indeed important to its functionality, but this tells us nothing about how it is interpreted. Even mapping maps to other maps, for example between the retinotopic map of vision and the somatotopic map of the body, won't get us far. It just passes the explanatory buck. We need to know something about the processes that constitute the interpretation. This homuncular form tells us only that the representations of different positions on a body surface are not jumbled or radically decoupled in the brain. It tells us nothing about the implicit homunculus that is required to interpret these maps.

In recent scientific literature, the concept of a homunculus has come to stand for something more subtle, and related to the misuse of teleological assumptions that Skinner criticizes. This is the sense of the concept I focus on here. Homunculi are the unacknowledged gap-fillers that stand behind, outside, or within processes that assume teleological properties such as those exhibited by life and mind, while pretending to be explanations of these properties. Treating aspects of brain function as if they are produced by little demonic helpers begs the main questions that need answering. Doing so cryptically, disguised as something else, actually impedes our efforts to understand these functions. Consider this analogy from Christof Koch and Francis Crick for the neural basis of perceptual consciousness: A good way to begin to consider the overall behavior of the cerebral cortex is to imagine that the front of the brain is "looking at" the sensory systems, most of which are at the back of the brain.4 Does the front of the brain "looking at" the back of the brain improve significantly on the following statement, written more than a century earlier by Samuel Butler in Erewhon: What is a man's eye but a machine for the little creature that sits behind in his brain to look through?

These sorts of blatantly homuncular rhetorical framings are often followed by caveats arguing that they are mere anthropomorphic heuristics, later to be replaced with actual mechanistic details; but even beginning this way opens the door for homuncular connotations to be attributed to whatever mechanism is offered. Are Koch and Crick arguing that the frontal cortex has its own form of internal visual perception to "see" what the visual system is merely registering? Of course, that can't be what they mean, but in even caricaturing the explanation in these terms, they invite their readers to frame the question in terms of looking for an ultimate homunculus.



The ancestors of today's scientific homunculi were gods, demigods, elves, fairies, demons, and gremlins that people held responsible for meaningful coincidences, human disasters, and unexpected deviations from the norm. Malevolent spirits, acts of sorcery, poltergeists, fates, and divine plans imbued natural events with both agency and meaningfulness. In past millennia and in technologically undeveloped societies, homunculus accounts were routinely invoked in cases where mechanical causes were obscure or where physical correlations appeared meaningfully linked. Although these agents from a netherworld have no legitimate place in contemporary science, they remain alive and well, hiding at the frayed edges of our theories and popular culture, wearing various new guises. These modern theoretical gremlins often sit at their virtual control panels behind names for functions that carry out some teleologically glossed task, such as informing, signaling, adapting, recognizing, or regulating some biological or neurological process, and they lurk in the shadows of theoretical assumptions or unnoticed transitions in explanation where mechanistic descriptions get augmented by referring to functions and ends.

Homunculi figure more explicitly in the beginnings of science. Not surprisingly, considering the alchemical preoccupation with the purification of metals as both a goal of science and a route to the purification of the spirit, and possibly the secret of immortality, the spontaneous generation of life from non-life was a central concern of alchemy. This preoccupation was entangled in other confusions about the nature of the reproductive process, particularly the assumption of a privileged role for the male principle (in semen) in forming matter (provided by the female) into an organism. For example, alchemical notebooks contained recipes for the spontaneous production of simple animals and even animate human forms. The presumably not-quite-fully-human products of these imagined experiments were thus also referred to as homunculi.

The use of this concept that I wish to focus on is, however, more abstract than a miniature human or even a mental self. This broader conception might be usefully compared to Aristotle's notion of an entelechy. He argued that there must necessarily be some active principle present within each organism that guides growth and development to an adult form. The term modifies the Greek root tele with the prefix en- to indicate an internalized end. An entelechy can be understood both as a general template for the final result and as the active principle that determines its development toward that end. A horse embryo develops into a horse and an acorn develops into an oak tree because, according to Aristotle, they are each animated by a distinctive end-directed tendency that is intrinsic to their material forms. Aristotle saw this as indisputable evidence that living processes are goal-directed and not merely passively reactive or accidental, as are many non-living processes. And since these target qualities are not present in the earliest stages of development, this entelechy must be present only as a potential at this stage. In this respect it is not a specific material attribute, but something like a purpose or agency embodied in some way within the matter of the undeveloped organism. An entelechy is, then, a general formative potential that tends to realize itself.

Aristotle's notion was probably in part influenced by his mentor Plato's notion of an ideal form that exists independent of any specific embodiment and that gets expressed in material form, though imperfectly. Thus, all physical spheres are imperfect spheres in some respect, deviating from the geometric ideal. Spherical objects are an expression of the influence of this ideal, to which they partially conform. Aristotle parted from his mentor on this point, arguing that all form is embodied form, and that the conception of disembodied ideals is a generalization from these exemplars, but not their cause. His notion of entelechy is often compared to the modern metaphor of a developmental program. This currently fashionable account of the causes of organism development likens the genome to a computer algorithm. A computer program is a set of instructions to be followed, and in this respect is in itself a passive artifact until embodied. In contrast, Aristotle considered the entelechy to be an active principle, intrinsic to the material substance of the organism. Millennia later, a disembodied conception of this self-actualizing living potential was also developed. This view was called "vitalism," and although it retained the idea of an end-directed essence implicit in the organism, it also retained the Platonic focus on a disembodied source for this force.

Vitalists argued that living processes could not be understood in merely mechanistic terms. The early twentieth-century biologist Hans Driesch, for example, argued that the laws of chemistry and physics were incapable of explaining this end-directedness of organisms. Thus something else, something non-material, a vital force or energy, was presumed necessary to endow merely mechanistic matter with the end-directedness that characterizes life. Indeed, given what was known at the time, the critique could not be easily dismissed; but the presumed alternative which vitalists offered, that living matter is distinguished from non-living matter by virtue of being infused with some ineffable and non-physical energy or essence, an élan vital, provided no basis for empirical study. In this respect, it too offered a homunculus argument in the broader sense of the term as I am using it. It is homuncular to the extent that the élan vital confers the property of living end-directedness, but simply posits the existence of this physically atypical complex property in this non-substance. So, curiously, both strict preformationism and strict vitalism are at core homuncular theories, since each locates the entelechy of living organisms in some essence that pre-exists its expression (whether in physical form or formative essence).

Whether it takes the form of a little man in my head, a demon who causes the crops to wither, the active principle that actualizes the adult body form, or an eternal and ineffable source of human subjectivity, there is an intuitive tendency to model processes and relationships that seem to violate mechanistic explanation on human agency. Because of this apparent dichotomy, it is tempting to reify the distinction in the image of the difference that we are hoping to account for: the difference between a person and inanimate matter, between the apparent freedom of thought and the rigid predictability of clockwork. This is a difference that we directly encounter in our own experience, and see reflected in other people. It has the phenomenology of emerging as if without antecedent cause. So it is natural that this mirror image should appear intuitively acceptable as an account of these kinds of phenomena. For this reason, I think it is appropriate that the homunculus should symbolize this critical essence of a human life and of mind. It is the model on which the élan vital, the life principle, the cryptic source of animacy and agency, and the source of meaning and value are all based. In Enlightenment terms, it is Descartes' cogito, the rational principle, or one's conscience and "inner voice." In spiritual traditions, it is the eternal soul, the Hindu atman, or the ghost that persists after the material body succumbs. These many images all point to something more fundamental that they share in common with some of the most basic phenomena in our lives: meaning, mental representation, agency, inclination toward a purpose or goal, and the normative value of that goal. Some of these attributes are characteristically human, but many are also characteristic of animate life in general. What they share in common is something more basic than mind. It is a property that distinguishes the living from the non-living.

THE VESSEL OF TELEOLOGY.

For all of these reasons, I see the homunculus as the enigmatic symbol of the most troubling and recalcitrant challenge to science: the marker for the indubitable existence of ententional phenomena in all their various forms, from the simplest organic functions to the most subtle subjective assessments of value. Their existence has posed an unresolved paradox to scientists and philosophers since the beginnings of Western thought, and the tendrils of this paradox remain entangled in some of the most technologically developed branches of modern science, where a failure to successfully integrate these properties within current theoretical systems continuously interferes with their growth. The homunculus symbolizes ententional phenomena as they are contrasted against the background of mechanistic theories describing otherwise typical physical causes and effects. Homuncular concepts and principles are the residue left behind, placeholders for something that is both efficacious and yet missing at the same time.

Historically, human beings invoke homunculi or their abstract surrogates whenever it is necessary to explain how material bodies or natural events exhibit active ententional principles. For the most part, the fact that these are "man-analogues" often goes unnoticed. The tendency not to recognize homuncular explanations as placeholders probably derives from the familiarity of interacting with other persons whose thoughts and motivations are hidden from us. But a willingness to leave the causality of these sorts of phenomena only partially explained probably also stems from a deep intuition that a radical discontinuity distinguishes the behavior of material systems influenced by purposes and meanings from those that are not. Homunculi mark this boundary. What makes ententional processes so cryptic is that they exhibit properties that appear merely superimposed on materials or physical events, as though they are something in addition to and separate from their material-physical embodiment.

This feature epitomizes human behaviors and human-designed artifacts and machines, especially when these involve meaningful relationships or actions designed to achieve a specific consequence. Human beliefs and purposes can shape events in ways that often have little direct relationship to current physical conditions and are produced in opposition to physical tendencies that are intrinsically most likely. So it is only natural that we should consider that a humanlike guiding principle is the most obvious candidate for events that appear to exhibit these discontinuities. Of course, eventually we need to explain how such a discontinuity of causality could arise in the first place, without any appeal to prior homunculi, or else just give up and decide that it is beyond any possible explanation.

Faith that these properties are not forever beyond the reach of scientific explanation is drawn from the fact that the range of phenomena for which homuncular attributions and explanations are still common has been steadily shrinking. With the rise in prominence of scientific explanations, more and more of these processes have succumbed to non-teleological explanation. So it is legitimate to speculate about an extrapolated future in which all teleological explanations have been replaced by complex mechanical explanations.

Why should ententional explanations tend to get eliminated with the advance of technological sophistication? The simple answer is that they are necessarily incomplete accounts. They are more like promissory notes standing in for currently inaccessible explanations, or suggestive shortcuts for cases that at present elude complete analysis. It has sometimes been remarked that teleological explanations are more like accusations or assignments of responsibility than accounts of causal processes. Teleological explanations point to a locus or origin but leave the mechanism of causal efficacy incompletely described. Even with respect to persons, explaining their actions in terms of motives or purposes is effectively independent of explaining these same events in terms of neurological or physiological processes, irrespective of any physical objects or forces.

Like an inscrutable person, an ententional process presents us with a point at which causal inquiry is forced to stop and completely change its terms of analysis. At this point, the inquiry is forced to abandon the mechanistic logic of masses in motion and can proceed only in terms of functions and adaptations, purposes and intentions, motives and meanings, desires and beliefs. The problem with these sorts of explanatory principles is not just that they are incomplete, but that they are incomplete in a particularly troubling way. It is difficult to ascribe energy, materiality, or even physical extension to them.

In this age of hard-nosed materialism, there seems to be little official doubt that life is "just chemistry" and mind is "just computation." But the origins of life and the explanation of conscious experience remain troublingly difficult problems, despite the availability of what should be more than adequate biochemical and neuroscientific tools to expose the details. So, although scientific theories of physical causality are expected to rigorously avoid all hints of homuncular explanation, the assumption that our current theories have fully succeeded at this task is premature.

We are taught that Galileo and Newton slew the Aristotelean homunculus of a prime mover, Darwin slew the homunculus of a divine watchmaker, Alan Turing slew the homunculus of disembodied thought, and Watson and Crick slew the homunculus of the élan vital, the invisible essence of life. Still, the specter of homunculus assumptions casts its shadow over even the most technologically sophisticated and materialistically framed scientific enterprises. It waits at the door of the cosmological Big Bang. It lurks behind the biological concepts of information, signal, design, and function. And it bars access to the workings of consciousness.

So the image of the homunculus also symbolizes the central problem of contemporary science and philosophy. It is an emblem for any abstract principle in a scientific or philosophical explanation that imports an unanalyzed attribution of information, sentience, reference, meaning, purpose, design, self, subjective experience, value, and so on (attributes often associated with mental states) into scientific explanations. I will call such theories homuncular insofar as these attributes are treated as primitives or unanalyzed "black boxes," even if this usage is explicitly designated as a placeholder for an assumed, yet-to-be-articulated mechanism. The points in our theories where we must shift into homuncular terminology can serve as buoys marking the shoals where current theories founder, and where seemingly incompatible kinds of explanation must trade roles. Homunculi both indicate the misapplication of teleological principles where they don't apply and offer clues to loci of causal phase transitions where simple physical accounts fail to capture the most significant features of living processes and mental events.

HIDING FINAL CAUSE.

As suggested in the previous chapter, homunculi are placeholders for sources of what Aristotle called "final causality": that for the sake of which something occurs. Aristotle described this form of causality as "final" because it is causality with respect to ends; it is something that occurs or is produced because of the consequences that are likely to result. Superficially, this applies to all sorts of human actions, indeed, everything that is done for some purpose. All designed objects, all purposeful performances, all functions, all things produced for communicating, and ultimately all thoughts and ideas are phenomena exhibiting final causality in some form. They are produced in order to bring about a consequence that contributes to something else. As a result, they exist with respect to something that does not exist yet, and for something that they are not. They exist in futuro, so to speak, and in this sense they are incomplete in a crucial respect.

As we have seen, Aristotle compared final causality to three other ways of understanding the concept of cause: material cause, efficient cause, and formal cause. Since the Renaissance, the concept of efficient cause has become the paradigm exemplar for all fully described conceptions of cause in the natural sciences, and Aristotle's other modes of causality have fallen into comparative neglect. However, in human affairs, from psychology to anthropology to law, some version of final causality is still the presumed fundamental influence. One's personal experience of causal agency directed to achieve some end is the archetypical expression of final causality. Although some philosophers dispute the veracity of this experience, it is hard to argue with the very strong intuition that much of our behavior is directed to change things with respect to achieving some desired state, or to keep them from changing from such a state. And where there is some argument against it, it is not based upon a denial of this experience, but rather on the argument that it is in some way logically incoherent, and therefore that our experience is in some sense false.

In its literal sense, final causality is logically incoherent. The material future does not determine what goes on in the present, but a conceivable future, as represented, can function to influence present circumstances with respect to that possibility. But we are not on any more solid footing attributing this causal influence to a represented future. That which does the representing is intrinsically homuncular as well, and the represented content is neither this homunculus nor the signs that do the representing. The content of a thought has no more of a material existence than a possible future.

There is one important difference between ancient and modern scientific homunculi: those that show up in modern scientific theories are by definition not supposed to exist. They are understood as markers of something yet to be explained. So this denial, paired with the inclusion of cryptically homuncular terminology, indicates the locus of a particular type of conceptual problem. As a result, homunculus has become a pejorative term applied to errors of reasoning in the natural sciences which invoke intentional properties where otherwise physical-chemical relationships should be brought to bear. Appealing to the schemes, plans, and whims of invisible demons allowed our ancestors to make sense of obscure natural processes and explain meaningful coincidences and bad luck. But appealing to an agency that is just beyond detection to explain why something happened is an intellectual dead end in science. It locates cause or responsibility without analyzing it. When this unexplained cause is treated as though it has the attributes of an intentional agent, it can masquerade as a power to explain what even mechanical explanations cannot. This is because an intentional agent can bring together diverse mechanisms to correspond with its interpretations of possible outcomes or meanings. But in the natural sciences, whose purpose is to leave the fewest gaps in an explanation, the introduction of such a wild card is for the most part counterproductive.

There is one exception: when it is done with the full disclosure that it is merely a marker for unfinished analysis. If investigators are unable to fill in all the pieces of a multipart scientific puzzle, it is considered a legitimate tactic to temporarily set aside one aspect of the problem, by assuming that it somehow "just gets done," in order to be able to work on another part of the puzzle. Here, an explicitly posited man-analogue stands in for the yet-to-be-completed analysis. The unexplained aspects are simply treated as though they are given as "accomplished," and are hopefully not critical to explaining those aspects of the puzzle that are the focus of explanatory effort. These are provisional homunculi, recognized for what they are, and slated for replacement as soon as possible. What is not accepted scientific practice is to allow a homuncular concept to play a central explanatory role. When the surrogates for little men in the works handle all the important jobs, suspension of disbelief has allowed what is most in need of explanation to be bracketed out of consideration.

GODS OF THE GAPS.

A politically contentious contemporary example of the explicit use of homunculus thinking is the so-called Intelligent Design (ID) explanation for evolution. Politically, ID is a thinly veiled battle by Christian religious fundamentalists to sneak vestiges of biblical creationism or supernaturalism into the educational system. As a cover story for this infiltration effort, ID inherits homunculus assumptions distilled from a long history of beliefs in gods and goddesses thought to hold the strings of fate. But although I see ID as a troublesome mix of proselytization, ideological propaganda, and disingenuous politics, I will not comment further on these judgments, and will focus only on its rhetorical use of a homunculus argument. Ultimately, it is neither the religious implications nor the criticisms of evolutionary theory that trouble scientists most about ID claims, but rather the deep incompatibility of this use of a homunculus argument with the very foundations of science. This threatens not just evolutionary theory but also the very logic and ethic of the scientific enterprise.

The theory of Intelligent Design almost entirely comprises critiques of contemporary explanations of evolution by natural selection. The claim is that explanatory weaknesses in current Darwinian-based theory are so serious that they demand another sort of explanation as well, or in its place. What ID advocates consider to be the most devastating criticism is that Darwinian approaches appear to them insufficient to account for certain very complex structures and functions of higher organisms, which they assert are irreducibly complex. I will return to the irreducible complexity claim in later chapters. What is at stake, however, is not whether the current form of evolutionary theory is entirely adequate to account for every complex aspect of life. It is whether there is a point at which it is legitimate to argue that the (presumed) incompleteness of a mechanistic theory requires that explicit homunculus arguments be included.

Evolutionary biology is still a work in progress, and so one should not necessarily expect it to be sufficiently developed to account for every complicated phenomenon. But evolutionary biologists and ID advocates treat this incomplete state of the art in radically different ways. From the point of view of ID, evolutionary theory suffers from a kind of incompleteness that amounts to an unsalvageable inadequacy. Although it is common for proponents of one scientific theory to prosecute their case by criticizing their major competitors, and thus by indirect implication add support to their own favored theory, there is something unusual about ID's particular variant of that strategy. It is in effect a metacriticism of scientific theory in general, because it attempts to define a point at which the ban against homuncular explanations in science must be lifted.

The Intelligent Designer is a permanently unopenable black box. The work of scientific explanation cannot, by assumption, penetrate beyond this to peer into its (His) mechanism and origins. So this argument is an implicit injunction to stop work when it comes to certain phenomena: to shift from an analytical logic of causes and effects to an ententional logic of reasons and purposes. In this respect, it has the same effect as would a scientist's claim that demonic interference caused a lab experiment to fail. This would hardly be considered a useful suggestion, much less a legitimate possibility. Taken literally, it undermines a basic assumption of the whole enterprise of experimental scientific research. Demons can, by definition, act on whim, interfering with things in such a way as to modify physical conditions without detection. Were demonic influence to be even considered, it would render experimentation pointless. What obtained one day would not necessarily obtain on another day, despite all other things being equal. There may indeed be unnoticed influences at work that cause currently unpredictable variations in experimental outcome, but that is taken as evidence that control of the experimental conditions is insufficient or consistency of analysis is wanting. Such problems motivate an effort to discover these additional intervening variables and thus bring them under control. If the scientists were instead to believe in demonic intervention, their response would be quite different. They might try to placate the demon with gifts or sacrifices, or ask a sorcerer or priest to perform an exorcism. Of course, I am deliberately creating an absurd juxtaposition. The culture of science and the culture of belief in this sort of supernatural being do not easily occupy the same intellectual space. As this fanciful example shows, engaging in the one interpretive stance requires abandoning the other.

This does not mean that they aren't ever juxtaposed in everyday life. People tend to be masters at believing incompatible things and acting from mutually exclusive motivations and points of view. Human cognition is fragmented, our concepts are often vague and fuzzy, and our use of logical inference seldom extends beyond the steps necessary to serve an immediate need. This provides an ample mental ecology in which incompatible ideas, emotions, and reasons can long co-exist, each in its own relatively isolated niche. Such a mix of causal paradigms may be invoked in myths and fairy tales, but even here such an extreme discontinuity is seldom tolerated. Science and philosophy compulsively avoid such discontinuities. More precisely, there is an implicit injunction woven into the very fabric of these enterprises to discover and resolve explanatory incompatibilities wherever possible, and otherwise to mark them as unfinished business. Making do with placeholders creates uneasiness, however, and the longer this is necessary, the more urgent theoretical debate or scientific exploration is likely to be.

So where homunculi show up in science and philosophy, they are almost always smuggled in unnoticed, even by the smuggler, and they are seldom proposed outright. In the cases where homunculi sneak in unnoticed, they aren't always full man-analogues either, but rather more general teleological faculties and dispositions that are otherwise left unanalyzed. Invoking only fragments of human capacity (rather than all features of intentionality and agency) makes it easier for homuncular assumptions to enter unnoticed. This may even contribute to the misleading impression that explanatory progress has been achieved. But even fractional homuncular capacities can end up carrying most of the explanatory load when invoked to identify intentional and teleological features of a system. Such minimalistic homunculi can still serve as cryptic stand-ins for quite complex functions.

This possibility of fractionating homuncular assumptions requires that we come up with a more general definition of what constitutes a homunculus than merely a "man-analogue." To serve this purpose, I will define a homunculus argument as one in which an ententional property is presumed to be "explained" by postulating the existence of a faculty, disposition, or module that produces it, and in which this property is not also fully understood in terms of non-ententional processes and relationships. This does not require that ententional properties be reduced to non-ententional properties, only that there be no residual incompatibility between them. Describing exactly how this differs from reductive explanation will be a topic for later chapters, but suffice it to say that this requires ententional properties to be constructible from (and thus emergent from) relationships among non-ententional properties.

One of my favorite parodies of this trick of providing a pseudo-explanation by simply postulating the existence of a locus for what needs explaining was provided by the great French playwright Molière in his Le Malade imaginaire. One scene depicts a medical student being examined by his mentors. One examiner asks if the student can explain why opium induces sleep. He replies that it is because opium contains a "soporific factor," and all the examiners praise his answer. Molière is, of course, lampooning the unscientific and pretentious physicians of his day. This is not an explanation but an evasion. Within the play, no one notices that the answer merely rephrases the point of the question as an unanalyzed essence. It just shifts the question one step back. Opium is a substance that causes sleep because it contains a substance that causes sleep. One cannot fail to notice the ironic double entendre in this example. This sort of answer exemplifies critical reason having fallen asleep!

PREFORMATION AND EPIGENESIS.

Historically, homuncular explanations have not always been associated with immaterial causality. In the early days of biology, a homuncular interpretation of organism development, on the model of Aristotle's entelechy but framed in explicitly material terms, was proposed as an alternative to a disembodied developmental principle. The preformationist theory of biological development was the paradigm materialist alternative to the then competing "epigenetic" view, which was criticized for its presumably mystical assumptions. Epigeneticists argued that the form of a developing plant or animal was not materially present at conception, and that an organism's form was acquired and shaped by some intangible vital influence as it grew and matured. Epigenesis was thus deemed comparatively unscientific because it postulated a non-physical source of form, often described in terms of a vital essence. Both the preformationists and the epigeneticists recognized that organisms followed definite trajectories of development whose form must in some sense be present from the beginning. But whereas preformationists argued that this form was present in reduced form in the fertilized egg (or just in the sperm), the epigeneticists argued that it was only present in some virtual or potential form, and that this potential was progressively imposed upon the developing embryo, like a lump of clay spinning on a potter's wheel slowly taking shape from the potter's guiding touch. So the idea of a pre-existing form guiding development was not so much the distinguishing character dividing preformation from epigenesis as a difference of opinion as to its locus. Preformationists imagined the locus to be a physical feature of the organism, just in need of expansion; epigeneticists imagined it to be non-material.

Two linked problems posed by the hypothesis of epigenesis made it harder to defend than preformationism. First, if the controlling form were not located within the gametes, the fertilized egg, or the developing embryo (presumably at all stages), where was it, and how did it impose itself on the matter of the developing body? If not physically present in material form, how could it shape a physical process? Indeed, what sense could be made of a Platonic disembodied form? Second, if the mature form is not intrinsic to its matter in some way, how is it that organisms of the same species always give rise to offspring with the same form? Though the analogy to an artifact shaped by human hands and human intentions was intuitively appealing and preserved a role for a designer, intelligent life principle, or vital force (élan vital), it inevitably appealed to non-material influences, and by the nineteenth century the appeal to mystical sources of influence was a perspective that many natural philosophers of the era were eager to avoid. Ultimately, preformationism became difficult to support in view of growing evidence from the new fields of microscopy and embryology showing that gametes, zygotes, and even the early stages of developing embryos were essentially without any trace of the form of the resulting organism.

This classic version of preformationism had another Achilles' heel. Even had embryology delivered different results, this would still have posed a serious logical problem: a version of the same problem that Skinner pointed out for mental homunculi. The need for a preformed antecedent to explain the generation of a new organism requires a prior preformed antecedent to that antecedent, and so on, and so on: a homuncular sperm within the body of each homuncular sperm. Or else the antecedent must be always already present, in which case it could not be in material form, because ultimately it would be reduced to invisibility. In light of this, it is not an explanation of how the form arose so much as a way of avoiding the origins question altogether.

A related theory called "sensationalism" also flourished at the beginning of the nineteenth century. Charles Darwin's grandfather, Erasmus Darwin, enthusiastically promoted this view. Like epigenesis, sensationalism argued that developing embryos incrementally acquired their distinctive forms, beginning from an undifferentiated fertilized egg. But whereas epigeneticists appealed to an intrinsic life force to explain the drive toward adult form, sensationalists argued that bodies were actively molded by fetal experience during development. In this way, analogous to empiricist theories of mind, the developing embryo began as a sort of undifferentiated lump of clay on which this experience would sculpt its influence. Sensationalists could explain the initial similarity of child to parent as a consequence of experiences imposed by the parent body, for example, on the embryo in the womb or on the gamete prior to fertilization, and then later as a consequence of being cast in the same ecological role in the world, with the same distinctive demands imposed by the material, spatial, and energetic demands of that niche. In other words, the form of an organism was thought to be the result of its sensitivity and responses to the world. Form was imposed from without, though the material of the organism played a constraining and biasing role. Unfortunately, this view also appeared contrary to biological observation. Diverse alterations of incubation and developmental conditions had little effect on the basic body plan. There was unquestionably something intrinsic to organisms that dictated body form.

Though theories of evolution were not directly invoked as deciding influences in these debates, the rise of evolutionary theories and the decline of the preformationist/epigeneticist debate ran in parallel, and influenced each other. Probably the first evolutionist to propose a comprehensive solution to this problem was Jean-Baptiste de Lamarck. He glimpsed a middle path with his theory of evolution in which body plans were intrinsically inherited, but could be subtly modified from generation to generation as organisms interacted with their environment. The evolutionist approach thus allowed materially embodied, preformed, inherited tendencies (even if not expressed in the early embryo) to be the determinant of body form without implicating an infinite regress of preformed antecedents. This, for the first time, offered a means to integrate a materialist-preformationist theory with an environmental source of structural influence. It replaced the creative contribution of the epigeneticists' immaterial influence with the concrete influence of organism-environment interactions.

Charles Darwin's theory of natural selection provided a further tweak to this middle path by potentially eliminating even the strivings and action of the organism. Lamarck's view implicitly held that strivings to seek pleasurable experiences, avoid pain, and so forth were critical means by which the organism-environment interaction could produce novel forms. But these strivings too had the scent of homuncular influence; wouldn't these predispositions need to precede evolution in some way? Darwin's insight into these matters was to recognize that the strivings of organisms were not necessary to explain the origins of their adaptive forms. Even if these traits were the consequence of chance mechanisms produced irrespective of any function potential, if they coincidentally just happened to aid reproduction they would be preferentially inherited in future generations.

In this way, Darwinism also dealt a blow to strict preformationism. Living forms were not always as they had been, and so nothing was ultimately preformed. What was preformed in one generation was a product of the interaction of something preformed in previous generations, modified by interaction with the environment. Thus, the form-generating principles of organism development could have arisen incrementally, and even if only cryptically embodied in the embryo, they could be understood as physically present. This completed the synthesis, preserving the materialist principles of preformationism while introducing a creative dynamic in the form of natural selection acting on accidental variation. The battle over the origins of living form therefore shifted from development to phylogenetic history. This appeared to remove the last hint of prior intelligence, so that no designing influence or even organism striving need be considered. Darwinism thus appeared to finally offer a homunculus-free theory of life.

Darwin's contribution did not, however, finally settle the debate over theories of the genesis of organism form. Indeed, the theory of natural selection made no specific predictions concerning the process of embryogenesis and was agnostic about the mechanisms of reproduction and development. The debate was reignited at the end of the nineteenth century by two developments. First was the proposal by the German evolutionist Ernst Haeckel that phylogenetic evolution was driven by the progressive addition of developmental stages to the end of a prior generation's developmental sequence.5 Second was the rediscovery of the genetic basis of biological inheritance. Haeckel's theory had an intrinsic directionality implicit in its additive logic, and suggested that evolution might be driven by a sort of developmental drive. The genetic theory of inheritance additionally suggested that the maturation of organic form might be embodied in a sort of chemical blueprint. Both ideas were influential throughout much of the early twentieth century. Only in recent decades have they been shown to be fallacious and oversimplified, respectively.

In hindsight, however, both preformation and vitalism have contributed something to our modern understanding of embryogenesis, even though their classic interpretations have been relegated to historical anachronism. The unfertilized egg begins as a relatively undifferentiated cell. The process of fertilization initiates a slight reorganization that polarizes the location of various molecular structures within that cell. Then this cell progressively divides until it becomes a ball of cells, each with slightly different molecular content depending on what part of the initial zygote gave rise to them. As they continue to divide, and depending on their relative locations and interactions with nearby cells, their cellular progeny progressively differentiate into the many thousands of cell types that characterize the different organs of the body. Although the features that distinguish these different, specialized cell types depend on gene expression, the geometry of the developing embryo plays a critical role in determining which genes in which cells will be expressed in which ways. This interdependency is best exemplified by the potential of embryonic stem cells at the blastula (initial ball-like) stage of development to become any cell type of the body. Taken from one blastula and inserted anywhere else in another, these cells will develop in accordance with their insertion point in the new host embryo. So at early stages of development, individual cells do not have preformed developmental fates. Their ultimate fates are determined collectively, so to speak. Thus, although this developmental potential reflects a kind of preformed organism plan attributable to the genetic code contained within each cell, this alone is insufficient to account for the details of development. The term epigenesis is retained to describe the contribution of such extragenomic influences as the cell-cell interactions that alter which genes are or are not active in a given cell line and the geometric relations between cells that determine which cells can interact.

So, although many popular accounts of DNA treat it as though it contains the equivalent of a blueprint for building the body, or else a program or set of construction instructions, this modern variant of preformationism is a considerable oversimplification. It ignores the information ultimately embodied in the elaborate patterns of interaction among cells and how these affect which genes get expressed. The vast majority of this structural information is generated anew in each organism as an emergent consequence of a kind of ecology of developing cells. The structural information passed on by the DNA nevertheless contributes sufficient constraints and biases, embodied in protein structure, to guarantee that in this information generation process, humans always give rise to humans and flies to flies. But it also guarantees that every fly and every human is unique in myriad ways, even identical twins. This is because genes aren't like assembly instructions or computer programs. There is no extrinsic interpreter or assembly mechanism. Patterns of gene expression depend on patterns of embryo geometry, and changes of gene expression influence embryo geometry in cycle upon cycle of interactions.
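The contrast with a blueprint can be made vivid with a deliberately crude sketch: a one-dimensional cellular automaton in Python. This is a toy, not a biological model, and the rule table and all names are illustrative assumptions. Every "cell" carries an identical rule, its shared "genome," yet what each cell does next depends on its neighbors, so the pattern is generated by cycles of local interaction rather than read off a stored plan:

```python
# Toy sketch, not a biological model: every "cell" carries the same rule
# table (a shared "genome"), but which entry fires depends on the states
# of its neighbors -- a crude stand-in for embryo geometry. The pattern is
# generated by iterated local interaction, not read off a stored blueprint.

RULE = {  # (left, self, right) -> next state; here, Wolfram's rule 110
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def develop(width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1  # one "polarized" cell, as after fertilization
    for _ in range(steps):
        print("".join("#" if s else "." for s in row))
        # each cell's next state depends on its local neighborhood
        row = [RULE[row[i - 1], row[i], row[(i + 1) % width]]
               for i in range(width)]

develop()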

MENTALESE.

A more seriously entertained, widely accepted, and hotly debated modern variant of a homuncular explanation is the concept of a language faculty, conceived as the repository of the principles that predetermine what sorts of forms any natural language is able to assume. This set of principles has been given the name "universal grammar," because it is postulated that they must be expressed in any naturally arising human language. Although contemporary proponents of this view will bristle at the comparison, I believe that it is not merely a caricature to call this a preformationist, and indeed a homuncular, theory.

The first modern form of this sort of theory was developed by the linguist Noam Chomsky in the late 1950s and early 1960s. His intention was to develop a systematic formal description of the principles necessary to predict all and only the class of sentences that a normal speaker of the language would recognize as well formed, or grammatically correct. This is complicated by the fact that language is generative and open-ended: there is no clear upper limit to the variety of possible grammatical sentences. So the faculty had to be finite, yet capable of infinite productivity. The remarkable success of this modeling strategy, though tweaked again and again over the intervening decades, demonstrates that to a great extent languages can be modeled as axiomatic systems derived from a modest set of core design principles. But although this approach has provided a very precise representation of the kind of rule-governed system that speakers would need to have available if they were in fact to use a rule-governed approach to producing and interpreting speech, it does not guarantee that this is the only way to produce and interpret speech.
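The "finite means, unbounded output" point can be illustrated with a toy rewrite grammar. The sketch below is purely hypothetical: the rules and vocabulary are invented for illustration and are not a fragment of any proposed grammar of English. A handful of recursive rules characterizes an unbounded set of sentences, since an NP may contain a PP, which in turn contains another NP, without limit:

```python
# A minimal sketch of generativity: a finite set of rewrite rules
# characterizes an unbounded set of sentences. The grammar is a toy.
import random

RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],   # NP may embed a PP...
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],                        # ...which embeds another NP
    "Det": [["the"], ["a"]],
    "N":   [["map"], ["brain"], ["homunculus"]],
    "V":   [["reads"], ["interprets"]],
    "P":   [["inside"], ["behind"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively choosing one rewrite rule."""
    if symbol not in RULES:          # terminal word
        return [symbol]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the homunculus reads a map behind the brain"
```

Each run prints one randomly derived sentence; because the rules can reinvoke themselves, no finite list could enumerate everything the grammar licenses.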

One reason not to expect a neat mapping between this formalism and the implementation of linguistic processes in the brain is that human brain architecture is the product of many millions of years of vertebrate evolution, adapting to physiological and behavioral challenges whose attributes had little or nothing in common with the otherwise anomalous structure of language. This neural architecture has changed only subtly in human evolution (producing no unprecedented human-specific anatomical structures). This suggests that the otherwise anomalous logic of linguistic communication is implemented by a system organized according to very ancient biological design principles, long preceding the evolution of language.

This mismatch with the evolutionary logic of neural function is not an insurmountable problem. But an effort to reify grammatical logic as existing in this algorithmic form in the mind ceases to use it as a placeholder for a neural-behavioral-intentional process and instead treats it as a preformed homunculus. A set of algorithms that has been explicitly devised by testing its predictions against the massive corpus of utterances judged to be grammatical should by design be predictive of those utterances. It is in this sense merely a representation of this behavioral predisposition in a more compact form, that is, a methodological stand-in for a human speaker's competence. Postulating that this algorithmic system is in the mind in this sense is a redescription relocated: a homuncular move that mistakes a map for the territory.

If such a rule system comes preformed in each human infant, no linguistic, social-psychological, or developmental explanation is required. The rules of sentence formation are known before any word is learned. In Steven Pinker's widely acclaimed book The Language Instinct, for example, he begins his analogy for explaining the relationship between thought and language by pointing out that People don't think in English or Chinese or Apache; they think in a language of thought. This language of thought probably looks a bit like all these languages. . . . But compared with any given language, mentalese [this hypothetical language of thought] must be richer in some ways and simpler in others.

Here the problem to be explained has found its way into the explanation. With an elaboration of this analogy, he makes this clear.

Knowing a language, then, is knowing how to translate mentalese into strings of words and vice versa. . . . Indeed, if babies did not have a mentalese to translate to and from English, it is not clear how learning English could take place, or even what learning English would mean.6 If thought equals language, then all that needs to be explained is the translation. But wait a minute! Exactly who or what is producing and comprehending mentalese? Modeling thought on language thus implicitly just pushes the problem back a level. In this sense, it is analogous to the caricature of perception offered by Koch and Crick earlier in the chapter. For a language to be "spoken" in the head, an unacknowledged homunculus must be interpreting it. Does this internal speaker of mentalese have to translate mentalese into homunculese? Is mentalese self-interpreted? If so, it is not like a language, and it is unclear what this would mean anyway.

The notion that words of a spoken language correspond to and are derived from something like words in a language of thought is not new. A similar idea was critical to the development of the logic of the late Middle Ages at the hands of such scholars as Roger Bacon and William of Occam. By dispensing with the messiness of reference to things in the world, inference could be modeled as formal operations involving discrete signs. This was a critical first step toward what today we understand as the theory of computation. In its modern reincarnation, pioneered by philosophers like Jerry Fodor, the mental language hypothesis turns this around and models cognition on computation. And by similarly ignoring problems of explaining representation and interpretation, it effectively shifts the problem of explaining the mysteries of language from psychology and linguistics to evolutionary biology and neuroscience. Of course, the analytical tools of evolutionary biology and neuroscience do not include propositions, meanings, or symbolic reference. Indeed, nearly all of the fundamental principles that must be included in this preformed language of thought have intentional properties, whereas brain structures do not. So it effectively asserts that the grammar homunculus is crucial to language, but irrelevant to explaining it.

MIND ALL THE WAY DOWN?

In A Brief History of Time, the physicist Stephen Hawking recounts an old anecdote (attributed to many different sources, from William James to Bertrand Russell) about an exchange that supposedly took place at the end of a lecture on cosmology (or a related subject). Hawking tells it like this: At the end of the lecture, a little old lady at the back of the room got up and said: "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise." The scientist gave a superior smile before replying, "What is the tortoise standing on?" "You're very clever, young man, very clever," said the old lady. "But it's turtles all the way down!"7 One way to avoid the challenge of explaining the intentional phenomena that characterize mental processes is to assume that they intrinsically constitute part of the physical substrate of the world. The suspicion that attenuated mental attributes inhere in every grain of sand, drop of water, or blade of grass has an ancient origin, tracing its roots to the animistic beliefs present in nearly all of the world's oldest spiritual traditions. As a systematic metaphysical perspective, it is known as panpsychism.

Panpsychism assumes that a vestige of mental phenomenology is present in every physical event, and therefore suffused throughout the cosmos. The brilliant Enlightenment thinkers Spinoza and Leibniz, among others, espoused this belief, and in nineteenth-century philosophy Gustav Fechner, Arthur Schopenhauer, William Clifford, Wilhelm Wundt, Ernst Haeckel, Josiah Royce, and William James, just to name a few, all promoted some form of it. Although panpsychism is not as influential today, and effectively plays no role in modern cognitive neuroscience, it still attracts a wide following, mostly because of a serendipitous compatibility with certain interpretations of quantum physics.

Although it is common for panpsychism to be framed in terms of a form of mentality that permeates the universe, panpsychist assumptions can be framed in more subtle and general ways. Using the broader conception of ententional relationships that we have been developing here-in which conscious states are only one relatively complex form of a more general class of phenomena constituted by their relationship to something explicitly absent and other-we can broaden the conception of panpsychism to include any perspective which assumes that ententional properties of some kind are ultimate constituents of the universe. This leaves open the possibility that most physical events have no conscious aspect, and yet may nonetheless exhibit properties like end-directedness, information, value, and so forth. In this view, physical events, taking place outside of organisms with brains, are treated as having a kind of unconscious mentality.

Many past panpsychic theories have espoused something less than the claim that all physical processes are also mental processes. For example, the evolutionary biologist Ernst Haeckel attributed variously attenuated forms of mentality to even the simplest life forms, but he apparently stopped short of attributing it to non-living processes, and William James, in a perspective dubbed "neutral monism," argued that a middle ground between full panpsychism and simple materialism offered a way out of Descartes' dualist dilemma. Neutral monism suggests that there is an intrinsic potential for mentality in all things, but that it is only expressed in certain special physical contexts, like the functioning of a brain. Of course, this begs the question of whether this property inheres in some ultimate substance of things or is merely an expression of these relationships alone-a question we will turn to below.

In quantum physics, many scientists are willing to argue that the quantum-to-classical transition-in which quantum probabilities become classical actualities-necessarily involves both information and observation, in the form of measurement. One famous interpretation of quantum mechanics, formulated in Copenhagen in the early twentieth century by Niels Bohr and Werner Heisenberg, specifically argued that events lack determinate existence until and unless they are observed. This has led to the popularity of many unfortunate overinterpretations that give human minds the status of determining what exists and what does not exist. Of course, a literal interpretation of this would lead to vicious regress, because if things aren't real until we observe them, then our own existence as things in the world is held hostage to our first being able to observe ourselves. My world may have come into existence with my birth and first mental awakening, but I feel pretty sure that no physicist imagines that the world only came into existence with the first observers. But what could it mean for the universe to be its own observer irrespective of us?

A less exotic homuncularization of quantum theory simply treats quantum events as intrinsically informational. Overinterpreted, this could lead to the proposal that the entire universe and its physical interactions are mindlike. Taking one step shy of this view, the quantum computation theorist Seth Lloyd describes the universe as an immense quantum computer. He argues that this should be an uncontroversial claim:

The fact that the universe is at bottom computing, or is processing information, was actually established in the scientific sense back in the late 19th century by Maxwell, Boltzmann, and Gibbs, who showed that all atoms register bits of information. When they bounce off each other, these bits flip. That's actually where the first measures of information came up, because Maxwell, Boltzmann, and Gibbs were trying to define entropy, which is the real measure of information.8

Although James Clerk Maxwell, Ludwig Boltzmann, and Josiah Gibbs were collectively responsible for developing thermodynamic theory in the nineteenth century, before the modern conceptions of computing and information were formulated, they did think of thermodynamic processes in terms of order and the information necessary to describe system states. But Lloyd writes almost as though they had modern conceptions of computing and information, and conceived of thermodynamic processes as intrinsically a form of information processing. Intended as a heuristic caricature, this way of telling the story nonetheless misrepresents the history of these ideas. Only after the mid-twentieth-century work of Alonzo Church and Alan Turing, among others, showed that most physical processes can be assigned an interpretation that treats them as performing a computation (understood in its most general sense) did it become common to describe thermodynamic processes as equivalent to information processing (though, importantly, it is equivalence under an interpretation).

But from this assumption it can be a slippery slope to a more radical claim. We get a hint of this when Lloyd says that "here is where life shows up. Because the universe is already computing from the very beginning when it starts, starting from the Big Bang, as soon as elementary particles show up."9 On the assumption that organisms and brains are also merely computers, it may seem at first glance that Lloyd views teleological processes as mere complications of the general information processing of the universe. But here Lloyd walks a tightrope between panpsychism and its opposite, the eliminative view that computation is only physics-a physics that has no room for ententional properties. This is because he deploys a special technical variant of the concept of information (to be discussed in detail in later chapters), which might better be described as the potential to convey information, or simply as order or difference. Being able to utilize one physical attribute or event as information about another requires some definite physical distinctions, and to the extent that the quantum-classical transition involves the production of definite physical distinctions, it also involves the necessary foundation for the possibility of conveying information. But this notion of information excludes any implication of these distinctions being about anything. Of course, this aboutness is the ententional property that distinguishes information from mere physical distinction.
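
The formal parallel Lloyd leans on can be stated precisely. What follows is a standard textbook comparison, not notation drawn from this book: Boltzmann's thermodynamic entropy for a system with W equally probable microstates, and Shannon's later measure of information for a choice among alternatives with probabilities p_i, are

\[ S = k_B \ln W, \qquad H = -\sum_i p_i \log_2 p_i . \]

When the W alternatives are equally likely, H = log_2 W, and hence S = (k_B ln 2) H: the two quantities are identical up to a constant factor. It is this formal identity that licenses describing thermodynamic processes as "information processing"; but, as argued above, the identity holds only under an interpretation that treats microstates as distinctions, and it says nothing about those distinctions being about anything.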

Minds are, indeed, interpreters of information about other things. They are not merely manipulators of physical tokens that might or might not be assigned an interpretation. But the bits of information being manipulated in a computing process are only potentially interpretable, and in the absence of an interpreting context (e.g., a humanly defined use), they are only physical electrical processes. Making sense of this added requirement without invoking homuncular assumptions will require considerable unpacking in later chapters of this book. For now, suffice it to say that potential information is not intrinsically about anything. So whether or not the conception of the universe as a computer is truly a homuncular theory turns on whether one thinks that minds are doing more than computing, and whether in making this assumption, mind is projected down into the quantum fabric of things or explained away as an illusory epiphenomenon of the merely physical process of computing.

Physicists struggling to make sense of the Big Bang theory of the origins of the universe have also flirted with a homuncular interpretation of the orderliness of things. It turns out that a handful of physical constants-for example, the fine-structure constant, the value of the strong nuclear force, the gravitational constant-must be in exquisite balance with one another in order to have produced a universe like ours, with its longevity, heavy atoms, complex molecules, and other features that lead to the possibility of life. In other words, because we are here to attest to this fact, precisely balanced fine-tuning of these constants is also a necessary fact. This is generally referred to as the anthropic principle.
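
To make "exquisite balance" concrete, consider one commonly cited example (standard physics, not drawn from the text itself): the fine-structure constant, which fixes the strength of the electromagnetic interaction,

\[ \alpha = \frac{e^2}{4\pi \varepsilon_0 \hbar c} \approx \frac{1}{137.04} . \]

Standard analyses of stellar nucleosynthesis are generally taken to imply that a shift in this value of even a few percent would disrupt the production of carbon in stars, and with it the chemistry on which life as we know it depends. It is this apparent sensitivity that gives the anthropic principle its force.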

Assuming that things could be otherwise and that no fact excludes the possibility of an infinity of "settings" of these constants, however, makes it either highly providential or extraordinarily lucky that the universe favors life as we know it. Some, who see it as providential, espouse a cosmic variant of the Intelligent Design theme, arguing that the astronomically improbable conditions of our existence must result from some ultimate design principle or designer, for which
