It seems that, in any case, we need to reach decisions about where to place viable limits on what counts as "language theory"; as Chomsky observes, a theory of language cannot be a theory of "everything." Some authors seem to hold the view that a theory of language is supposed to furnish "a complete algorithmic account" of how "understanding is reached from specific acoustic properties of utterance tokens" to the "speaker's communicative intentions" (Fodor 1989, 5). It seems to me that this conception of what a (general) theory of language ought to accomplish in the final analysis is shared subliminally by most people thinking about language; we are not really done until each of the components of the entire stretch somehow falls into place. I do not see what it means to satisfy this goal, not to speak of whether it is satisfiable. For one thing, grammatical theory does not even pretend to generate full tokens in any useful sense. Insofar as theories are concerned, this must be the case with any theory: grammar, truth conditions, speech acts, or whatever; none takes you from "specific acoustic properties of utterance tokens" to the "speaker's communicative intentions." In that sense, a speech-act theory is no more "complete" than grammatical theory. So the current suggestion is that, since a cut has to be made in any case, let us make it at the sharpest joint, namely, right after LF, and shift other notions of meaning and content beyond language theory.
At the current state of understanding, the suggestion lacks plausibility.
No doubt, as noted, the issue of whether there are significant grammatical distinctions between decide and try is an empirical one. Nonetheless, from what we know, it seems rather unlikely that enough grammatical distinctions will be found across languages to (uniquely) identify each verb in grammatical terms. Moreover, each of the subclasses noted above is certainly "open-ended" in that new verbal labels are introduced as speakers of English experience new processes and events; some of these labels might enlarge the subclasses just listed.
The problem obviously compounds for (John, Bill) and (college, church). We called them "proper nouns" and "common nouns" respectively. These are not even grammatical categories. For example, grammar will not distinguish between the proper noun The White House and the "ordinary" det-phrase (DP) the white house. In fact, John itself could be viewed as a DP with a null det; similarly for college and church in (88)–(89). Roughly, insofar as grammatical theory is concerned, these are all N items that occur in DPs, period. Thus, given a pair of grammatical representations ⟨PF, LF⟩ for the strings John tried to attend college and John decided to attend church, no further partitioning of representations is possible in grammatical theory. This much is pretty obvious (for more, see Marconi 1996, chapter 1).
What is not so obvious is the lesson to be drawn from this. Suppose we have a concept of meaning under which (88) and (89) are viewed as synonymous. I am surprised to learn that a similar view has sometimes been officially aired. Jackendoff (2002, 338) cites Grimshaw as follows:
"Linguistically speaking, pairs like [break and shatter] are synonyms, because they have the same structure. The differences between them are not visible to the language." Maybe break and shatter are too close in meaning for users to tell the difference; so maybe no significant linguistic difference is involved: compare elm and beech (Putnam 1975). Is it plausible to declare that even John and Bill, or college and church, are synonyms, linguistically speaking?
Suppose we extend Grimshaw's idea from the "thin" cases to "thick" cases as well wherever needed, such as John and Bill, church and college, etc. In effect, we view the pair (88)–(89) on a par with, say, a pair of active-passive sentences. Our current conception of meaning is such that the synonymy of a pair of active-passive sentences forms crucial data for linguistic theory.1 But then, by parity of conception, the pair (88)–(89) cannot be viewed as synonymous at the same time. In fact, if we are to give an account of the sameness of meaning for active-passive pairs, then giving an account of the difference in meaning between (88) and (89) becomes part of the agenda for linguistic theory, as noted.
To probe a bit, we saw that an active-passive pair is synonymous because of sameness of θ-roles. If we are admitting θ-roles in language theory to determine the sameness or difference in linguistic meaning, why can't we admit conceptual categories such as ±place, ±institution, ±religious, and so on to distinguish the meanings of college and church?
Roughly, college will carry the features +place, +institution, −religious, and church will have +place, +institution, +religious (ignore problems with church colleges). Such is the pressure of the current, largely common-sense concept of meaning. Are we willing to submit to it?
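To make the bookkeeping concrete, here is a minimal sketch of the kind of featural record such a proposal would require; the dictionary representation and the particular feature names are my own illustrative devices, not part of any proposed theory.

```python
# Illustrative sketch only: lexical items as bundles of binary conceptual
# features, in the spirit of the +/-place, +/-institution, +/-religious
# discussion above. The format and feature names are assumptions of mine.

LEXICON = {
    "college": {"place": True, "institution": True, "religious": False},
    "church":  {"place": True, "institution": True, "religious": True},
}

def distinguishing_features(word1, word2):
    """Return the features on which two entries carry different values."""
    f1, f2 = LEXICON[word1], LEXICON[word2]
    return {feat for feat in f1 if feat in f2 and f1[feat] != f2[feat]}

if __name__ == "__main__":
    # Grammar sees no difference between the two nouns; the conceptual
    # feature 'religious' is what would have to do the work.
    print(distinguishing_features("college", "church"))  # {'religious'}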
I do not think such problems arise in physics (anymore). In principle, if two things differ in the space-time framework, then, ceteris paribus, the magnitude of the forces acting on them will differ as well. Thus, there is a complete physical characterization of anything insofar as it has physical properties at all. No doubt, physics cannot fully characterize all the properties of those objects that happen to have physical properties as well. For example, physics cannot furnish a complete account of what makes something a tree or an elephant. The additional properties that are required to distinguish trees from elephants thus belong to other disciplines; biology, in this case. Therefore, unless one is a "reductionist" in the sense that one believes that all the properties of anything at all must have a physicalist account, no clear sense of incompleteness attaches to physics.
What I am driving at is the well-known point that, during the development of a science, a point comes when our pretheoretical expectations that led to the science in the first place have changed enough, and have been accommodated enough in the science, for the science to define its objects in a theory-internal fashion. At this point the science, viewed as a body of doctrines, becomes complete in carving out some specific aspect of nature. From that point on, only radical changes in the body of theory itself, not pressures from common sense, force further shifting of domains (Mukherji 2001). In the case of grammatical theory, either that point has not been reached or, as I believe, the point has been reached but not yet recognized.
The apparent problem of incompleteness of grammar thus leaves us with the following three options, as far as I can see. First, we can continue with the current thick concept of meaning, attempt to disentangle its parts, and attach accounts of these parts, arranged in some order, to grammatical theory to achieve growing completeness. In doing so, we have a variety of options that range from Jackendoff's apparently modest demand to Fodor's full-blooded one. Two further options seem to follow disjunctively if this first option fails.
As a second option, we can try to dissociate the scope of grammatical theory from the current putative scope of (broad) language theory.
Whether we continue to call grammatical theory "linguistic theory" becomes a verbal issue. We think of grammatical theory as defining, not unlike physics, its own domain internally. Some initial data, such as active-passive pairs, no doubt triggered the search for this theory. But once a study of some core data has enabled us to reach an internally coherent model that addresses Plato's problem, we simply kick the ladder.
This is routine practice even in linguistic research (Larson and Segal 1995, 89), not to speak of the more advanced sciences. I am suggesting that we push this practice to its logical end. Clearly, the major task at that point is to come up with some conception of the domain so defined.
(It is no longer a secret that I am pursuing this choice in this work.) Or, finally, as a third option, we reach the skeptical conclusion that grammatical theory is indeed an inherently incomplete theory, and we try to find some philosophical justification for this conclusion. I think a number of philosophers profess this option without really considering the second option. Notice that we could settle on the skeptical option immediately after the failure of the first option. That means we could have taken the current conception of language theory for granted and concluded that the theory cannot be realized. This is how the philosophical/skeptical literature usually works; despite its apparently radical stance, the skeptical literature is essentially conservative.
The second and third options arise only when the first fails. We have not seen that happen yet.
4.2 Lexical Data

Pursuing the first option, then, a necessary, but certainly not sufficient, condition for generating a representation of a token is to invoke enough nongrammatical types to capture specific meanings of words. A natural first step in that direction is to attach "selectional features" to lexical items. In Chomsky 1965, 85, lexical items belonging to the category [N] were assumed to have features such as ±Common, ±Count, ±Animate, ±Human, and so on, arranged in the order just stated. These features were then used to display more fully some of the subcategorization properties of verbs. For example, subcategorization frames were now supposed to mention types such as [±Abstract] Subject, [±Animate] Object, and the like. This generates an elaborate system of agreements in which subcategorization frames of verbs will be checked to see whether they match the selectional features of arguments. Clearly, resources such as these may now be used, in fairly obvious ways, to throw out "deviant" strings such as colorless green ideas sleep furiously, golf plays John, and misery loves company, while admitting strings such as revolutionary new ideas appear infrequently, John plays golf, and John loves company (Chomsky 1965, 149).
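As a rough illustration of how such an agreement system would operate, the sketch below matches a verb's stated requirements against the features of its arguments; the feature assignments and verb frames are my own simplifications, not Chomsky's actual rules.

```python
# Minimal sketch of Aspects-style selectional checking (illustrative only:
# the feature values and verb frames are my own rough assumptions).

NOUN_FEATURES = {
    "ideas":   {"abstract": True,  "animate": False},
    "John":    {"abstract": False, "animate": True},
    "golf":    {"abstract": True,  "animate": False},
    "company": {"abstract": True,  "animate": False},
    "misery":  {"abstract": True,  "animate": False},
}

# Each verb states what it selects for its subject and (optionally) object.
VERB_FRAMES = {
    "sleep": {"subject": {"animate": True}},
    "play":  {"subject": {"animate": True}, "object": {"abstract": True}},
    "love":  {"subject": {"animate": True}},
}

def satisfies(noun, requirement):
    feats = NOUN_FEATURES[noun]
    return all(feats.get(f) == v for f, v in requirement.items())

def check(subject, verb, obj=None):
    """Return True if the arguments match the verb's selectional frame."""
    frame = VERB_FRAMES[verb]
    if not satisfies(subject, frame.get("subject", {})):
        return False
    if obj is not None and not satisfies(obj, frame.get("object", {})):
        return False
    return True

if __name__ == "__main__":
    print(check("ideas", "sleep"))             # False: 'ideas' is not animate
    print(check("John", "play", "golf"))       # True
    print(check("golf", "play", "John"))       # False: 'golf' is not animate
    print(check("misery", "love", "company"))  # False, though the string is common
```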
An immediate problem with selectional features is that one does not know where to stop. Jackendoff (1990, 51–52) points out that the verb drink takes (names of) liquids as internal arguments. Should we, therefore, include [Liquid] in the subcategorization frame of drink, and as a selectional feature of whatever happens to be in the internal argument position? To see the beginning of a problem that will occupy us for much of this chapter, consider the Bengali verb khaawaa (eat). It takes any of solids, liquids, and gases, among various other things: khaabaar (food), bhaat (rice), jol (water), haawaa (air), cigarette, gaal (abuses), chumu (kisses), dhaakkaa (push/jolt), and so on. A number of these complements are shared by the verb taanaa (pull): khaabaar, bhaat, jol, cigarette. However, taanaa also takes other things that khaawaa does not: nisshaash (breath), khaat (bed), dori (rope), darjaa (door), naak (nose), kaan (ear), and so on. (Naak taanaa typically does not mean pulling one's own or someone else's nose; rather, it means drawing in nasal fluid; kaan taanaa, however, means pulling ears.) How are the subcategorization frames of khaawaa and taanaa structured?
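One way to see the difficulty is to list, without any featural analysis at all, the complements the two verbs admit; the sketch below does only that (the sets are transliterations taken from the text and are, of course, partial).

```python
# Illustrative only: the complements each verb is reported to admit,
# represented as bare sets. As the text stresses, the lists are open-ended.

KHAAWAA = {"khaabaar", "bhaat", "jol", "haawaa", "cigarette",
           "gaal", "chumu", "dhaakkaa"}                        # 'eat'
TAANAA = {"khaabaar", "bhaat", "jol", "cigarette", "nisshaash",
          "khaat", "dori", "darjaa", "naak", "kaan"}           # 'pull'

if __name__ == "__main__":
    # The overlap cuts across any obvious feature such as [Liquid] or
    # [Solid], which is just the structuring problem raised above.
    print(sorted(KHAAWAA & TAANAA))   # complements shared by both verbs
    print(sorted(TAANAA - KHAAWAA))   # complements only taanaa takes
```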
Selectional features, which attach to the category [N], do not explain other varieties of meaning relationships that can be traced to synonymy between pairs of verbs. Additional mechanisms are needed to account for these facts. Consider the following pairs of sentences (Chomsky 1965, 162).
(90) John strikes me as pompous / I regard John as pompous.
(91) John bought the book from Bill / Bill sold the book to John.
(92) John struck Bill / Bill received a blow from John.
In each case, roughly, the relation that the first verb establishes between two NPs, (John, I) and (John, Bill) respectively, is maintained by the second verb, although the relative positions of the NPs vary. It is natural to express these constancies in terms of the thematic roles of the NPs. Thus, in (92), the two sentences are related by the fact that, in each case, John is the agent and Bill the recipient/patient. Similarly for (90) and (91).
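The constancy can be pictured, very crudely, as sameness of a role-to-NP mapping; in the sketch below the role labels and assignments are my own rough guesses, offered only to make the idea concrete.

```python
# Illustrative sketch only: each member of the pairs in (90) and (92)
# reduced to an assignment of noun phrases to thematic roles. The role
# labels and the particular assignments are my own rough choices.

PAIRS = {
    "(90)": ({"experiencer": "I", "theme": "John"},    # John strikes me as pompous
             {"experiencer": "I", "theme": "John"}),   # I regard John as pompous
    "(92)": ({"agent": "John", "patient": "Bill"},     # John struck Bill
             {"agent": "John", "patient": "Bill"}),    # Bill received a blow from John
}

if __name__ == "__main__":
    for label, (first, second) in PAIRS.items():
        # The paraphrase relation holds just in case the role assignments
        # coincide, even though the surface positions of the NPs differ.
        print(label, first == second)
```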
A somewhat different "meaning" relationship, not of synonymy but of entailment, obtains between the verbs persuade and intend. Thus, Chomsky (1991b, 35) suggests that John persuaded Bill to attend college implies, apparently independently of world knowledge, that Bill decided or intended to attend college. Further, there is some sort of a presuppositional link between John is proud of what Bill did and John has some responsibility for Bill's actions that needs to be explained in terms of the universal concepts PRIDE and RESPONSIBILITY (Chomsky 1972a, 60) (I adopt the convention that words in uppercase mark concepts). These, then, form a small sample of the data that a putative lexical semantics needs to give an account of.
Chomsky did pay some attention to these facts and to the issues that arise from them in his Aspects of the Theory of Syntax (1965), as noted.
There is some discussion in this book regarding the form a semantic theory might take to account for the facts listed above. Still, Chomsky concluded that "selectional rules play a rather marginal role in the grammar." Thus, "One might propose . . . that selectional rules be dropped from the syntax and that their function be taken over by the semantic component" (p. 153).
Chomsky's writings on language, both technical and informal, have been torrential in the decades that followed Aspects. Excluding his technical monographs on grammatical theory, he has written to date at least a dozen books, and a very large number of papers of varying length, devoted primarily to traditional and informal issues in language theory.
Yet it will be surprising if his constructive remarks on the issues just raised exceed a few dozen pages in all.2 In these pages, he basically repeats the examples to suggest that a universal theory of concepts is urgently needed, without suggesting, as Jackendoff (2002, 275) also notes, how this theory is supposed to get off the ground.
Even though the scope of grammatical theory has been enlarged since Aspects to include semantics (LF), selectional rules still do not play any role in this theory. For example, it is routinely said (Chomsky 1993, 1995b) that lexical items, say book, carry semantic features, such as artifact, along with the usual phonological and formal features. It is not clear, however, that this feature is ever introduced in grammatical computation. As far as I can see, nothing happens to these features after lexical insertion except that they are simply carried over to the semantic component; this is one reason why systems such as Distributed Morphology divide lexical features into those that enter into computation to LF and those that do not.3 I touch on Distributed Morphology below.
It is reasonable to conclude then that problems of lexical semantics will probably be addressed, if at all, in a nonlinguistic theory of concepts, where the concepts that are in fact verbalized will belong to semantics proper. Chomsky's prolonged silence on how this can be done could be interpreted as his basic reservation about this enterprise.
4.2.1 Uncertain Intuitions

It seems that the data cited above are not as salient as the typical data for grammatical theory. As we saw for dozens of examples in chapter 2, the core data for grammatical theory carry a sense of immediacy and irrefutability. Confronted with paradigmatic cases of unacceptable strings, it is hard to see how the cases may be "saved." *John appeared to the boys to like each other is flat wrong, and tinkering with the meanings of appear or each other does not have any payoff; in fact, such tinkering is not even attempted by the native user since, by the time he reaches each other, all possible interpretations have collapsed.
More interestingly, even when judgments of acceptability are relatively uncertain, the uncertainty, typically, can neither be removed nor enhanced on reflection. Thus, in an attempt to extend their coverage of data, linguists depend not only on sharp judgments of acceptability, but also on uncertain judgments, where native judgments are uncertain, for example, with respect to whether a given string is okay. The data are listed in some order of increasing uncertainty, and an attempt is made to explain the uncertainties according to the order in which they arise. Consider the following (Chomsky 1986, 76–77).
(93) *the man to whom I wonder what to give
(94) *the man whom I wonder what to give to
(95) *the man to whom I wonder what he gave
(96) *the man whom I wonder what he gave to

Although each of these sentences is marked as unacceptable, it is clear that they are not all unacceptable to the same degree: (93) is perhaps the most acceptable, (96) the most hopeless, and the rest fall somewhere in between. Or, notice the subtle difference between the unacceptable expressions (97) and (98) (Chomsky 2006a).
(97) *which book did they wonder why I wrote
(98) *which author did they wonder why wrote that book

We will expect a grammatical theory to order these sentences as they occur because, although our judgments are uncertain, the degree to which a judgment is uncertain, for most speakers of a language, is largely invariant and is not likely to change with further thought. This property of grammatical intuitions enables "the theory of language" to "generate sound-meaning relations fully, whatever the status of an expression" (Chomsky 2006a).
Semantic judgments, in contrast, are typically open to further thought, even if such thoughts might, on occasion, confirm our initial intuitions.
As noted in passing before, "deviant" strings illustrate this point directly. In class, I need to shake my head rather vigorously when students attempt to attach coherent interpretations to the string colorless green ideas sleep furiously. The point is, I am never allowed to just mention the string and proceed. We can throw this string out by invoking selection restrictions, as noted. But we may as well get it in by relaxing some of them; that is what the students want. Prinz and Clark (2004, 60) suggest that even the most mundane "word salads," such as arrogant bananas and argue an earnest cake, "will summon ideas from beyond their boundaries." As for golf plays John, John's addiction to golf might reach the point where it is golf that takes over.4 Misery loves company sounds fine and is frequently in use since misery is infectious: it spreads. I am obviously stretching things here, but how could I allow myself to do so if the relations are supposed to highlight some aspect of my biological makeup?
Consider the relation between persuade and intend, Chomsky's favorite example, as noted. If I have persuaded X to do Y, does it always follow that X intends to do Y? Is it meaningless to say, "I have been persuading John to attend college, but so far he hasn't agreed to"? Persuasion seems to be an act that is stretched over time; John's intentions, on the other hand, are not actions but states of John, which he either has or does not have. It is not obvious that I have failed to act at all just because John failed to attain the relevant state. It will be said that my action was really not one of persuading John, but of trying to persuade John; the try goes through even if the persuasion fails. So the suggested entailment does hold, confirming Chomsky's intuitions. But it took some persuasion to make a fairly competent user of English agree.
I am not suggesting that the observed relation between persuade and intend is without theoretical interest. As Chomsky has shown recently in a (rare) extensive discussion of the issue (Chomsky 2003, "Reply to Horwich"), the lexical items under consideration occur in fairly restrictive syntactic contexts. For example, persuade typically occurs in the syntactic frame (99):

(99) Nominal - V - Nominal - [Infinitival Clause]
(100) John persuaded Mary to buy the book.
Chomsky points out that (100) entails something about Mary's intentions, namely that she intends to buy the book, but it entails nothing about John's intentions. Interestingly, as with a host of other verbal items, the lexical item expect also appears in the syntactic frame (99), without entailing anything about Mary's intentions.
(101) John expected Mary to buy the book.
The parallel breaks down further as persuade cannot appear in (102), but expect can appear in (103):

(102) *John persuaded there to be a hurricane tomorrow.
(103) John expected there to be a hurricane tomorrow.
These facts certainly show that specific aspects of the meanings of verbs apparently have syntactic consequences; that is, these aspects seem to enter into computation.5 Chomsky calls these aspects of meaning "I-meanings." The result reinforces the conclusion reached earlier that, insofar as the domain of grammatical computation is concerned, there is no sharp division between syntax and semantics.
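A crude way to register the contrast, purely for illustration, is to mark whether a verb taking frame (99) assigns a thematic role to the postverbal nominal; an expletive there is then possible only where no such role is assigned. The lexicon entries and the flag name below are my assumptions, not a piece of the theory.

```python
# Toy illustration, not a claim about the actual grammatical machinery:
# verbs taking the frame 'Nominal - V - Nominal - [Infinitival Clause]'
# are marked for whether they assign a thematic role to the postverbal
# nominal. The flag name is my own.

VERBS = {
    "persuade": {"theta_marks_object": True},   # Object-control verb
    "expect":   {"theta_marks_object": False},  # no role for the postverbal nominal
}

def licenses_expletive_object(verb):
    """An expletive like 'there' can occupy the postverbal position only
    if the verb assigns it no thematic role."""
    return not VERBS[verb]["theta_marks_object"]

if __name__ == "__main__":
    print(licenses_expletive_object("persuade"))  # False: hence *(102)
    print(licenses_expletive_object("expect"))    # True: hence (103)
```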
However, it is important to be clear about the domain covered by I-meanings. Chomsky (2003, 298–299) is explicit on the status of I-meanings in language theory: "Principles of FL may enrich and refine the semantic properties of [lexical items] and the image SEM determined by computation," as with central grammatical phenomena such as referential dependence, understood Subject, passivisation, direct causation, and the like. However, "there is no reason to expect that what is informally called 'the meaning of X' will be fully determined" by the faculty of language. In other words, the study of I-meanings falls under grammatical theory. So far, these effects are typically listed as properties of lexical items; for example, persuade is listed as an Object-control verb. Hopefully, a more principled and abstract grammatical description will be found in the future as the understanding of the structure of the lexicon improves.
Returning to the relationship between persuade and intend, the I-meaning properties just observed suggest that persuade is a causative, say, "make-intend." The fact remains that the preceding form of analysis tells us little about the individual concepts PERSUADE, INTEND, and EXPECT. As noted, all we know is that the meaning of persuade is such that persuade acts as a causative; we do not know what persuade means.
The point can be extended to cover examples (90)–(92) above. Thus, sell is another causative amounting to "make-buy," strike amounts to "make-receive-blow," and so on. The burden of explanation thus shifts to the individual concepts INTEND, BUY, BLOW-RECEIVE, and the like, plus some general cognitive account of the formation of causatives. Given Plato's problem, it was always obvious that we need an explanatory account of the system of concepts. The data cited above were expected to supply some point of entry into this complex system. The worry is that they do not. They simply state what we knew before, namely, that an account of individual concepts is needed.
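The shape of such a causative analysis can be sketched in a few lines; the representation below is only a toy of mine, using nested tuples over concept labels that are themselves left entirely unanalyzed, which is precisely the worry.

```python
# Illustrative sketch only: the causative paraphrases mentioned in the
# text ('persuade' as make-intend, 'sell' as make-buy) written as nested
# structures over unanalyzed concept labels. Nothing here analyzes the
# concepts INTEND or BUY themselves.

def cause(agent, event):
    return ("CAUSE", agent, event)

def persuade(x, y, action):
    return cause(x, ("INTEND", y, action))

def sell(x, y, thing):
    return cause(x, ("BUY", y, thing))

if __name__ == "__main__":
    # 'John persuaded Bill to attend college'
    print(persuade("John", "Bill", "attend college"))
    # ('CAUSE', 'John', ('INTEND', 'Bill', 'attend college'))
```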
In my opinion, the preceding (skeptical) remarks on lexical data also apply to some of the well-known examples in lexical semantics frequently cited by Chomsky in recent years. I will consider three of them (Chomsky 2006b).

(104) If John painted the house brown, then he put the paint on the exterior surface, though he could paint the house brown on the inside.
(105) When John climbed the mountain he went up although he can climb down the mountain.
(106) Books are in some sense simultaneously abstract and concrete as in John memorized and then burned the book.
It seems to me that there is some ambiguity about what Chomsky wants to accomplish with these examples. At some places, he uses them to cast doubt on specific semantic theories, sometimes even on the very possibility of semantic theory. For example, versions of (106) are used to show the limitations of formal semantics; to my knowledge, Chomsky uses (106), and similar examples, only for this negative purpose.
In the sentence John memorized the book and then burned it, the pronoun it is referentially dependent on the book, where the book is both memorized and burned; it is unclear how the denotation of the book is to be captured. Sometimes, examples (104)–(106) are also used to highlight the complexity of the innate knowledge associated with lexical items as well as the variety of their particular uses, creating serious organizational problems for lexical semantics. We will get a glimpse of this problem with respect to paint and climb below. I will study this aspect of the problem for lexical semantics more fully later in this chapter in connection with lexical decomposition.
At other places, in contrast, Chomsky (2006b) gives the impression that examples (104)–(106) form the core data for a theory of I-language, along with familiar grammatical data involving understood Subject, pronoun binding, and the like. To say that the I-language specifies the listed properties of paint, climb, and book is to say that these properties fall under the category of I-meanings, and hence under (extended) syntax. Earlier, we saw that some of the lexical properties of persuade, intend, expect, and so on fall under I-meanings since these properties enter into grammatical computation. It is unclear whether the cited properties of paint, climb, and book also fall under I-meanings in the same way. In fact, it is unclear to me whether these properties belong to the study of language at all in the sense in which understood Subject, pronoun binding, and causatives belong to it.