throughout, the theory must postulate some device to that effect. A similar demand seems to arise for much larger (and later) structures if the interpretive role of musical devices such as recapitulation, coda, and cadence (Katz and Pesetsky 2009) is to be explained by a theory. Let us assume that these demands are met when internal Merge generates (covert) copies as a last resort at the relevant positions.
Be that as it may, in an optimal design, the copies created by internal Merge to meet FMI conditions will also be used by the rhythmic system when required. Much of the rhythmic structure of music seems to act as a guide to memory, a set of "reminders" of past events; in Indian music, the return to the first beat (som) of a beat cycle typically coincides with a return to the tonic (or its neighbourhood) after meeting (some of) the conditions of the raaga during the earlier cycle. In that sense, the requirements of the raaga coincide with the rhythmic structure. Beyond this very general intuition, however, it is currently unclear whether the computational requirement met by operations of internal Merge coincides with the simultaneous satisfaction of the external requirements of rhythm and interpretation.
Assuming internal Merge to be in place, we will expect some economy conditions to govern computations in music as well. In the language case, as we saw, a small class of principles of efficient computation (PCEs) not only enforce optimal computation; (ideally) they also so constrain the operations of Merge that only interpretable structures meet the interface conditions, while the rest are filtered out. We also saw that PCEs are linguistically nonspecific, that is, PCEs are purely computational principles, in the sense outlined. If FM is optimally designed with Merge in place, the system will require some economy principles that enable Merge-generated structures to meet legibility conditions optimally. Could PCEs (of language) be those principles?
I have already used least-effort and last-resort considerations in a general way while speculating on the organization of music. Some version of the least-effort principle MLC seems to be operative in the fact that an unstable pitch tends to anchor on a proximate, more stable, and immediately subsequent pitch, as noted. The other least-effort principle, FI, is observed in facts such as that a pitch "in the cracks" between two legitimate pitches D♯ and E will be heard as out of tune (Jackendoff and Lerdahl 2006, 47). In general, the phenomena of "dissonance" and intonation seem to require FI, since no note by itself is either dissonant or out of tune. If these speculations make sense, then, as with Merge, we will expect these economy conditions to be available in an abstract manner across FL and FM. Specific resources internal to a domain will then be used to implement them: in FL, PCEs constrain feature movement; in FM, they control tonal motion. Is this view valid?
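To make the pitch-anchoring intuition concrete, here is a minimal sketch in Python; the stability values, the pitch-class encoding, and the helper names are illustrative assumptions for exposition, not part of Jackendoff and Lerdahl's formalism or of the minimalist theory.

```python
# Toy sketch only: an MLC-style "attract the closest" search over pitches.
# Stability values and the pitch-class encoding are invented for illustration.

STABILITY = {"C": 4, "E": 3, "G": 3, "D": 1, "F": 1, "A": 1, "B": 0}   # C major, toy values
PITCH_CLASS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pc_distance(a, b):
    """Pitch-class distance in semitones (wrapping around the octave)."""
    d = abs(PITCH_CLASS[a] - PITCH_CLASS[b]) % 12
    return min(d, 12 - d)

def anchor(pitch, subsequent_pitches):
    """Last resort: only an unstable pitch seeks an anchor.
    Least effort: among the more stable subsequent pitches, the closest wins."""
    if STABILITY[pitch] >= 3:
        return pitch                      # already stable; nothing happens
    candidates = [p for p in subsequent_pitches if STABILITY[p] > STABILITY[pitch]]
    return min(candidates, key=lambda p: pc_distance(pitch, p), default=None)

print(anchor("B", ["E", "G", "C"]))   # -> 'C': the leading tone resolves to the nearest stable pitch
```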
7.3 "Laws of Nature"
In a way, the answer is trivially in the positive. According to Chomsky, PCEs are "laws of nature" in that they are general properties of organisms, perhaps on par with physical principles such as least-energy requirement or minimal "wire length," as noted. PCEs thus apply to, say, music because they apply everywhere. Therefore, assuming that Merge applies to music, CHL satisfies SMH at once! Apparently, then, the Minimalist Program makes it rather easy for SMH to obtain.
The sweeping generality enforced by PCEs could be viewed as a ground for casting doubt on, hence an objection to, the substantive character of SMH. SMH is supposed to be a substantive proposal in that it attempts to capture (just) the computational convergence between language and music. Under Chomsky's proposal, the cherished restricted character of SMH collapses. Let C be any cognitive system of any organism. If PCEs cover all organisms, and if C contains Merge, then a strong C-language hypothesis (SCH) holds. Since we have not specified the scope of Merge so far, there is no reason why Merge cannot obtain widely. Insofar as it does, SCH also holds widely. SMH, then, is just an instance of SCH; there is nothing specifically "languagelike" about music that SMH promised to cover.
Unlike the issue of Merge, the issue is no longer whether some component of CHL, namely the PCEs, applies to music. That is apparently already granted under Chomsky's generalization. The issue is rather whether PCEs can be prevented from applying to systems outside the hominid set.
The discussion, therefore, will be concerned more with the general organization of cognitive systems of organisms than with (human) music and language.
7.3.1 Forms of Explanation
In his recent writings, Chomsky has drawn attention to two competing perspectives in biology: the "standard view" that biological systems are "messy," and an alternative view that biological systems are optimally designed. It seems that, currently, Chomsky's views on this topic are moving away from the standard view and towards the alternative. For example, in Chomsky 2006b, he actually criticized the British geneticist Gabriel Dover, who held that "biology is a strange and messy business, and 'perfection' is the last word one would use to describe how organisms work."
Interestingly, Chomsky himself held this view just a few years ago: "Biological systems usually are . . . bad solutions to certain design problems that are posed by nature – the best solution that evolution could achieve under existing circumstances, but perhaps a clumsy and messy solution" (Chomsky 2000a; also see 1995b, 2002).
From what I can follow, the shift in perspective was essentially motivated by an extremely plausible methodological idea which emerged when it became reasonably established that large parts of grammatical computation can be explained by PCEs alone; capturing the remaining parts, such as c-command and classical island constraints on extraction, then becomes a research problem (Chomsky 2006d). UG specifies the initial state of FL with linguistically specific items; that is, elements of UG belong to the faculty of language proper. Suppose now we want to attach an explanation of how FL evolved. Clearly, the more things UG contains, the more difficult it is to explain why things are specifically that way. It follows that "the less attributed to genetic information (in our case, the topic of UG) for determining the development of an organism, the more feasible the study of its evolution" (Chomsky 2006a). If so, then there is a need to reduce UG to the narrowest conception. What can we take away from the things just listed and plausibly assign elsewhere?
As we saw, what we cannot take away from UG, according to Chomsky, includes at least (some prominent parts of) the human lexicon and Merge. That leaves PCEs, the principles of efficient computation. If PCEs also belong to UG, a principled explanation has to be found as to why they are so specifically located. The explanation will not be needed if it can be suggested that they are available to the faculty of language in any case as part of the general endowment of organisms. From what I can follow, this suggestion has been advanced along the following steps.
The first step is methodological: the "Galilean style" of explanation in science begins by assuming that nature, or at least the aspects of nature we can fruitfully study, is perfect. The second step consists in showing that general principles such as the least-energy requirement (least-effort principles) have played a major role in the formulation of scientific theories, including reflections on biological phenomena. A variety of natural phenomena seems to require essentially the same form of explanation, to the effect that nature functions under optimal conditions. These include phenomena such as the structure of snowflakes, the icosahedral form of poliovirus shells, the dynamics of lattices in superconductors, minimal search operations in insect navigation, stripes on a zebra, the location of brains at the front of the body axis, and so on. The third step shows that PCEs are optimal conditions of nature, given that language is a natural object.
Although almost everything just listed is under vigorous discussion, I will simply assume that each of these steps has been successfully advanced.11 There is no doubt that the range of discoveries listed above has played a major role in drawing attention to the alternative view of biological forms. On the basis of this evidence, I will assume that nature, including the biological part of nature, is perfect; therefore, human language, also a part of nature, has a perfect design. I can afford to assume all this because it still will not follow that PCEs are general properties of organic systems such as insects, not to mention inorganic systems such as snowflakes. For that ultimate step, we need to shift from historical parallels and analogies, however plausible, to theory.
Basically, shifting PCEs to the third factor conflates the distinction between a (general) form of explanation and an explanation of a specific (range of) phenomena. Suppose the preferred form of explanation is the "Galilean style": mathematico-deductive theories exploiting symmetries, least-effort conditions, and so on. We may assume that physics since Galileo has adopted the Galilean style. But that did not prevent two of the most sophisticated and recent theories, namely relativity theory and quantum theory, from differing sharply about the principles operating in different parts of nature. The separation between these two theories is so fundamental that it divides nature into two parts, obeying different principles. As noted in section 1.3.2, this divide can only be bridged by unification, perhaps in a "new" physics, as Roger Penrose suggests. Until that happens, the two general theories of nature are best viewed as two separate bodies of doctrine, while both adopt the (same) Galilean style.
Faced with the overwhelming complexity and variety of the organic world, a Galilean form of explanation is harder to achieve in biology, which explains the wide prevalence of the standard view sketched above. Application of the Galilean style to this part of nature thus requires at least two broad steps. First, we show that the apparent diversity of forms in a given range of phenomena can in fact be given a generative account from some simple basis: the condition of explanatory adequacy. Next, we go beyond explanatory adequacy to show that the generative account can be formulated in terms of the symmetry and least-effort considerations already noted in the nonorganic part of nature. The distinction between form of explanation and specific explanation seems to apply to each of these steps.
Consider the idea that the "innate organizing principles [of UG] determine the class of possible languages just as the Urform of Goethe's biological theories defines the class of possible plants and animals" (Chomsky, cited in Jenkins 2000, 147). If the parallel between UG and the Urform of plants is intended to highlight the general scientific goal of looking for generative principles in each domain, it satisfies the first step.
It is totally implausible if the suggestion is that a given Urform applies across domains: UG does not determine the class of plants, just as Goethe's Urform fails to specify the class of languages. Similarly, the very interesting discovery of homeotic transformations of floral organs into one another in the weed Arabidopsis thaliana (Jenkins 2000, 150) does not have any effect on wh-fronting. To play a role in theoretical explanation of phenomena, the general conceptions of Urform and transformation need to be specifically formulated in terms of principles operating in distinct domains, pending unification.
Turning to the issue of whether a particular least-effort principle of language, say the Minimal Link Condition, might apply in other domains and organisms, consider Chomsky's (2000d, 27) general idea that "some other organism might, in principle, have the same I-language (brain state) as Peter, but embedded in performance systems that use it for locomotion." The thought is difficult to comprehend if "I-language" has a full-blooded sense that includes lexical features, Merge, and PCEs (plus PLD, if the I-language is not at the initial state). To proceed, let us assume that by "I-language" Chomsky principally had minimal search conditions in FL, that is, PCEs, in mind. To pursue the idea, Hauser, Chomsky, and Fitch (2002) suggest that "comparative studies might look for evidence of such computations outside of the domain of communication (e.g., number, navigation, social relations)." Elaborating, the authors observe that "elegant studies of insects, birds and primates reveal that individuals often search for food using an optimal strategy, one involving minimal distances, recall of locations searched and kinds of objects retrieved."
Following the Galilean assumption that nature is perfect, optimal search could well be a general property of every process in nature, including the functioning of organisms. As such, principles of optimal search could be present from collision of particles and flow of water to formation of syntactic structures in humans. However, it requires a giant leap of faith to assume that the same principles of optimal search hold everywhere. Plainly, we do not wish to ascribe "recall of locations searched" to colliding particles or to the trajectory of a comet. In the reverse direction, (currently) there is no meaningful sense in which principles of optimal water flow are involved in insect navigation, not to speak of syntactic structures in humans.
To emphasize, I am not denying that, say, foraging bees execute optimal search, as do singing humans and colliding particles. The problem is to show that there is a fundamental unity in these mechanisms. There could be an underlying mechanism of optimal search in nature that has "parametric" implementation across particles, bees, and humans. But the unearthing of this mechanism will require the solution of virtually all problems of unification.
Even in such a general theory of nature as Newtonian mechanics ("a theory of everything"), economy considerations are formulated in terms of principles specific to a domain. Newton's first law of motion has two parts: (i) "Every body perseveres in its state of rest or uniform motion in a straight line," and (ii) "except insofar as it is compelled to change that state by forces impressed on it." The first part states a least-effort principle, and the second a last-resort one, in terms of properties specific to a theoretically characterized domain. Clearly, this very general law of nature does not belong to CHL since, at the current state of knowledge, nothing in grammar moves in rectilinear or elliptical paths, and no forces act on SOs. From the perspective of physics, the language system is an abstract construct; laws of physics do not apply to it just as they do not apply to the vagaries of the soul.
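For reference, the two clauses of the first law quoted above can be displayed schematically; this is a standard textbook rendering of Newton's law, not notation drawn from the text under discussion.

```latex
% Least-effort clause: with no net impressed force, the state of motion persists.
\[
\mathbf{F}_{\text{net}} = \mathbf{0} \;\Longrightarrow\; \frac{d\mathbf{v}}{dt} = \mathbf{0}
\]
% Last-resort clause (contrapositive): the state changes only if some force compels it.
\[
\frac{d\mathbf{v}}{dt} \neq \mathbf{0} \;\Longrightarrow\; \mathbf{F}_{\text{net}} \neq \mathbf{0}
\]
```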
Yet, to return to the theme of chapter 1, there is no doubt that biolinguistics is a profound body of doctrine that has unearthed some of the principles underlying an aspect of nature. As Reinhart (2006, 22) observes, Chomsky's definition of "Attract" also combines ideas of last resort and least effort: K attracts F if F is the closest feature that can enter into a checking relation with a sublabel of K. This time the combination is implemented specifically for the aspect of nature under investigation, in a different scientific "continuum" obtaining since Pāṇini (section 1.3.2). It is certainly a law of nature if valid, but it does not apply to particles or planetary motions.
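To illustrate how the two ideas combine in the definition of Attract, here is a minimal sketch in Python; the feature encoding, the distance measure, and the helper names are invented for exposition and are not the theory's actual representations.

```python
# Toy sketch of "Attract": last resort (operate only to check an unchecked feature)
# plus least effort (attract the closest matching feature). The data structures
# below are illustrative assumptions, not the minimalist formalism itself.

def attract(K, candidates):
    """K attracts F only if K still has an unchecked feature (last resort),
    and then attracts the closest matching F (least effort / MLC).
    `candidates` is a list of (feature, distance) pairs, where distance
    stands in for structural closeness to K."""
    unchecked = [f for f in K["features"] if not f["checked"]]
    if not unchecked:
        return None                                  # nothing to check: no attraction
    target = unchecked[0]
    matching = [(f, d) for (f, d) in candidates if f["name"] == target["name"]]
    if not matching:
        return None
    closest, _ = min(matching, key=lambda pair: pair[1])
    closest["checked"] = True                        # checking relation established
    target["checked"] = True
    return closest

# A head K needing a wh-feature attracts the nearer of two wh-phrases.
K = {"features": [{"name": "wh", "checked": False}]}
near_wh = {"name": "wh", "checked": False}
far_wh = {"name": "wh", "checked": False}
print(attract(K, [(far_wh, 5), (near_wh, 2)]) is near_wh)   # -> True
```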
As far as we can see, this aspect of nature is somehow located in the human brain and not in the joints of the knee; it is also conceivable that laws of physics (ultimately) apply to the human brain. Thus, one is perfectly justified in using the bio in biolinguistics. Nonetheless, currently, there is nothing in the formulation of CHL that requires that CHL cannot be located in knee joints. Therefore, even if narrow physical channels have influenced the evolution of the brain, as with much else in nature (Cherniak 2005; Carroll 2005), it has not been shown that the influence extends to the design of the language faculty.
In fact, it is unclear what there is to show: "What do we mean for example when we say that the brain really does have rules of grammar in it? We do not know exactly what we mean when we say that. We do not think there is a neuron that corresponds to 'move alpha'" (Chomsky, Huybregts, and Riemsdijk 1982, 32). Chomsky made this remark over a quarter of a century ago, but the situation does not seem to have changed in the meantime. For example, over a decade later, Chomsky (1994b, 85) observed that "the belief that neurophysiology is even relevant to the functioning of the mind is just a hypothesis." Several years later, he continued to hold, after an extensive review of the literature, that "I suspect it may be fair to say that current understanding falls well short of laying the basis for the unification of the sciences of the brain and higher mental faculties, language among them, and that many surprises may lie along the way to what seems a distant goal" (Chomsky 2002, 61). In general, the problem of unification between "psychological studies" such as linguistics and biology is as unresolved today as it was two centuries ago (Chomsky 2001b). The "locus" of the problem continues to be on biology and the brain sciences (Chomsky 1995b, 2). To insist on some unknown biological basis for the actual operations and principles contained in CHL is to miss the fact that, with or without biology, the theory of CHL already uncovers an aspect of nature in its own terms.
To sum up, while the Galilean idea is a guide to science, nothing of empirical significance follows from the idea itself. We need to find out, for each specific system, how the idea is implemented there, if at all, because it cannot be assumed that every aspect of nature can be subjected to the Galilean form of inquiry. In that sense, it has been a groundbreaking discovery that the principles of CHL implement the Galilean idea in the human faculty of language. As far as I can see, the only empirical issue at this stage of inquiry is whether these principles specifically apply somewhere else. It is like asking if a principle of least effort witnessed in water flow extends, after suitable abstraction, to all fluids (liquids and gases).12 I have proposed that the economy principles postulated in the Minimalist Program to describe the functioning of CHL also apply to music under the suggested abstractions; perhaps they apply to the rest of the systems in the hominid set. In other words, the suggestion is that the economy principles of language ("water") extend to a restricted class of systems ("fluids").
7.3.2 Scope of Computationalism
What then are the restrictions on the domain(s) in which the economy principles of language apply? To recall, PCEs, as formulated in biolinguistics, are principles of computational efficiency. I will suggest that the notion of computation, and its intimate connection with the notion of a symbol system, enables us to generalize from language to a restricted class (the hominid set), and the notion of computation, strictly speaking, is restricted only to this class. The notion of computation thus characterizes a (new) aspect of nature.
Following some of the ideas in Turing (1950), the reigning doctrine in much of the cognitive sciences, for over half a century by now, is that cognitive systems are best viewed as computational systems (Pylyshyn 1984); the broad doctrine could be called "computationalism." I have no space here to trace the history of the doctrine and its current status (see Fodor 2000). Basically, once we have the mathematical theory of computation, any device whose input and output can be characterized by some or other mathematical function may be viewed as an instance of a Universal Turing machine, given the Church-Turing thesis (Churchland and Grush 1999, 155). Since brains no doubt are machines that establish relations between causes and effects (stimulus and behaviour), brains are Turing machines. However, as Churchland and Grush immediately point out, this abstract characterization of brains as computational systems, merely by virtue of the existence of some I/O function, holds little conceptual interest. Any system that functions at all could be viewed as a computational system: livers, stomachs, geysers, toasters, solar systems, and so on, and of course computers and brains.
Turing (1950) made a narrower and more substantive proposal. The proposal was articulated in terms of a thought experiment-the ''Turing Test.'' The test consists of an interrogator A, a man B, and a woman C.
On the basis of a question-answer session, A is to find out which one of B and C is the woman. Almost in passing, Turing imposed two crucial conditions: (i) A should be in a separate room from B and C, and (ii) a teleprinter is to facilitate communication between the players "in order that tones of voice may not help the interrogator." Turing asked: what happens when a Turing machine in the form of a digital computer takes the place of B? Turing's proposal was that if A's rate of success (or failure) in identifying the gender of B changes only to a statistically insignificant degree when the human is replaced by a computer, we should not hesitate to ascribe thinking to the computer.
As the conditions of the thought experiment show, the issue of whether a digital computer is "intelligent" was posed in terms of whether it can sustain a humanlike discourse (Michie 1999). To enforce this condition, Turing deliberately "screened off" the computer from its human interlocutor, and allowed the "discourse" to take place only with the help of a teleprinter. Turing's insight exploited a central aspect of Turing machines: Turing machines are symbol manipulators (and nothing else).13 The pointer of a Turing machine moves one step at a time on a tape of blank squares to either erase or print a symbol on designated locations. In doing so, it can compute all computable functions in the sense that, given some interpretation to the input symbols, it can generate an output where the output can be interpreted as a sequence with a truth value. Thus, after some operations, suppose the tape has a sequence of eight "j"s. In a machine with a different "hardware," these could be sequences of lights. When these "j"s are suitably spaced on the tape, the sequence can be given the arithmetic interpretation 2 + 2 = 4. Abstracting away from the particular design of the machine and the mode of interpretation enforced on it, the basic point is that computation takes place on things called "symbols," where a symbol is a representation that has an interpretation.
The notion of "interpretation" at issue is of course theory-internal (Chomsky 1995b, 10 n. 5). In the case of language and music, as noted, internal significance (satisfaction of legibility conditions for FLI and FMI systems, respectively) is that notion.
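As a minimal sketch of the point that a symbol is a representation plus an interpretation, the following toy code treats the same tape of "j" marks first as a bare representation and then under an imposed interpretation; the tape layout and the interpretation function are illustrative assumptions, not Turing's own construction.

```python
# A tape of 'j' marks is meaningless until an interpretation is imposed on it.
# Eight 'j's, suitably spaced by blanks ('_'), can be read as the equation 2 + 2 = 4.

TAPE = list("jj_jj_jjjj")        # the bare representation: eight 'j's in three blocks

def interpret_as_addition(tape):
    """Read blank-separated blocks of 'j's as unary numerals and interpret
    the three blocks as the equation 'a + b = c', returning its truth value."""
    blocks = "".join(tape).split("_")
    a, b, c = (len(block) for block in blocks)
    return f"{a} + {b} = {c}", (a + b == c)

print(interpret_as_addition(TAPE))   # -> ('2 + 2 = 4', True)
```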
The notions of computation and symbol systems are thus intimately related. From this narrower perspective, a system may be viewed as a computational system just in case it is a symbol system.14 If a computer is to be viewed as "intelligent," its symbolic operations ought to generate sequences that may be intuitively interpreted as signs of intelligence. It follows that, in this setup, intuition of intelligence is restricted to only those systems whose operations may be viewed as representable in terms of the symbolic operations of the system. Suppose that the ability to carry on a conversation in a human language, which is a symbol system, is a sign of intelligence. Insofar as human languages are computable functions, then in principle they are (effectively) representable in a Turing machine. Assume so. Under these conditions, we may think of the computer as participating in a humanlike discourse. Since the entire issue of "intelligence" hangs on the satisfaction of the preceding conditions, everything else that distinguishes humans from computers needs to be screened off.
In effect, Turing's project was not to examine whether computers are intelligent (a meaningless issue anyway); he was proposing a research program to investigate whether aspects of human cognition can be explained in computational terms. The proposed connection between computation and symbolism gave rise to the computational-representational view of the mind, in which a "central and fundamental role" was given to rules of computation, which are "rules for the manipulation and transformation of symbolic representations" (Horgan and Tienson 1999, 724–725).
By its very conception, the project is restricted to systems where the notion of symbolic representation applies, not just I/O systems.
In my view, a further restriction applies when we try to turn Turing's formal program into an empirical inquiry, because the connection between computation and symbolic representation places a stringent constraint on cognitive theories. Extremely sophisticated mathematical devices are routinely used in physics to describe the world. Thus, suppose a particular state of a system, such as some colliding particles, is described by a certain solution to a certain complex differential equation. As noted, the system can be described in computational terms. However, it is always possible to hold an "instrumentalist" view of the effort, such that we do not make the further suggestion that the colliding particles are solving differential equations, or that snowflakes have "internalized" fractal geometry.
For cognitive theories, the burden is greater. Particles do not "internalize" symbolic/mathematical systems; (natural) minds do. For example, we need to say that a human infant has internalized the rules of language.
I am not suggesting that, in internalizing the linguistic system, the human infant has internalized G-B theory. What I am suggesting is that the human infant has internalized, "cognized" (Chomsky 1980), a computational system which we describe in G-B terms, wrongly perhaps; that is the crucial distinction between a toaster/comet/snowflake and a human infant.
In recent years, Chomsky has argued with force that our theories of nature are restricted to what is intelligible to us, rather than to what nature is "really" like (Chomsky 2000d; Hinzen 2006; Mukherji, forthcoming b); within the restrictions of intelligibility we aim for the best theories. It does not follow that the notion of intelligibility applies in the same fashion in each domain of inquiry; what is intelligible in one domain may be unintelligible in the next. Thus, the motion of comets is captured in our best theory that uses differential equations to make it intelligible to us why comets move that way; the theory would not be intelligible if it required that comets solve those equations. For computational theories in cognitive domains, in contrast, our best theories make it intelligible to us why a child uses an expression in a certain way by requiring that the child is computing (explanatory adequacy); the theory will not be very intelligible otherwise since it would fail to distinguish between the child and snowflakes. It follows that we genuinely ascribe computational rules only to those systems to which we can intelligibly ascribe the ability to store and process symbolic representations.15 We need to distinguish, then, between symbol-processing systems per se and our ability to describe some system with symbols, the latter deriving obviously from the former. Mental systems are not only describable in computational terms; they are computational systems. And the only way to tell whether a system is computational is to see whether we can view the system as a genuine symbol manipulator. Once we make the distinction, we may not want to view stomachs and toasters as computational systems, since it is hard to tell that these are symbol-processing systems themselves. A large variety of systems thus falls outside the range of computational systems understood in this narrow sense: systems of interacting particles, assembly of DNA, chemical affinity, crystal structures, and so on. As noted, the list possibly includes the visual system, which is a "passive" system; it is not a system of symbols at all.
From this narrow perspective, it is not at all obvious that nonhuman systems, such as the system of insect navigation, also qualify as computational systems in the sense in which human language and music qualify.16 Charles Gallistel (1998) raised much the same problem, in my view. As Gallistel's illuminating review of the literature shows, sophisticated computational models have been developed to study the truly remarkable aspects of insect navigation, such as dead reckoning. How do we interpret these results? According to Gallistel, "A system that stores and retrieves the values of variables and uses those values in the elementary operations that define arithmetic and logic is a symbol-processing system" (p. 47).
"What processes enable the nervous system," Gallistel asks, "to store the value of a variable . . . to retrieve that value when it is needed in computation?" "(W)e do not know with any certainty," Gallistel observes, "how the nervous system implements even the operations assumed in neural net models, let alone the fuller set of operations taken for granted in any computational/symbolic model of the processes that mediate behaviour" (p. 46). As noted in chapter 1, Chomsky (2001b) mentioned these remarks to illustrate the general problem of unification between biology and psychology. In my opinion, Gallistel's concerns cover more.
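For orientation, dead reckoning (path integration) is the kind of computation at issue in such models; the following minimal sketch, with an invented vector representation, shows what storing and updating the value of a variable amounts to here. It is an illustrative assumption, not a model taken from Gallistel's review.

```python
# Minimal sketch of dead reckoning (path integration): the forager is modeled as
# summing successive displacement vectors so as to keep a running estimate of its
# position relative to the nest. Representation and conventions are illustrative.

import math

def dead_reckon(steps):
    """Each step is (heading_in_degrees, distance), with 0 degrees = east, 90 = north.
    Return the home vector: the heading and distance straight back to the start."""
    x = y = 0.0
    for heading, dist in steps:
        rad = math.radians(heading)
        x += dist * math.cos(rad)     # store and update the value of a variable...
        y += dist * math.sin(rad)     # ...exactly the ability Gallistel highlights
    home_heading = math.degrees(math.atan2(-y, -x)) % 360
    home_distance = math.hypot(x, y)
    return home_heading, home_distance

# A forager that goes 10 units east, then 10 units north, should head back
# roughly southwest (225 degrees) for about 14.1 units.
print(dead_reckon([(0, 10), (90, 10)]))
```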
As Gallistel observed, to say that a system consists of computational principles is to say that it is a symbol-processing system. It is of some interest that Gallistel mentioned arithmetic and logic to illustrate symbol-processing systems. No doubt, postulation of computational processes for, say, human language raises exactly the same problem with respect to the nervous system: we do not know with any certainty how the nervous system implements the operations assumed in any computational/symbolic model of the processes that mediate linguistic behavior. This is the familiar unification problem that arises in any case. But Gallistel does not seem to be raising (only) this problem; he seems to be particularly worried about insects. Gallistel may be questioning the intelligibility of the idea that insects are symbol processors because their cognitive systems simply do not fit the paradigms of arithmetic and logic, not to mention language and music.
How then do we decide that a certain system is not only describable in computational terms but may also be intelligibly viewed as a computational system? Keeping to organic systems, which aspects of an organism's behavior are likely to draw our intelligible computationalist attention? To recapitulate, we saw that the sole evidence for the existence of CHL is the unbounded character of a variety of articulated symbol systems used by humans. In particular, we saw that CHL is likely to be centrally involved in every system that (simultaneously) satisfies the three general properties of language: symbolic articulation, discrete infinity, and weak external control. Symbolic articulation and its structure indicate an inner capacity with the properties of unboundedness and freedom from external control.
As the French philosopher René Descartes put it, "All men, the most stupid and the most foolish, those even who are deprived of the organs of speech, make use of signs," and signs are the "only certain mark of the presence of thought hidden and wrapped up in the body" (cited in Chomsky 1966, 6).17 In other words, we look for CHL when we find these properties clustering in the behavior of some organism. From this perspective, it is not at all clear what sense may be made of the proposal that foraging behavior of animals displays properties of discrete infinity and weak external control, since these animals simply do not exhibit the required behavior in any domain. After an extensive review of literature on cognitive capacities of nonhuman systems, Penn, Holyoak, and Povinelli (2008) conclude that, for a significant range of human capacities including communication with symbols and navigation with maps, there is not only absence of evidence in nonhuman systems, but also evidence of absence. In this sense, the analogy between insects and humans is no more credible, for now, than that between humans and comets/snowflakes.