The Primacy of Grammar Part 8

(64) for every person x, x betrayed the woman he loved

(65) *for every person x, the woman he loved betrayed x

In (64), the r-expression/variable x may be coindexed with the pronoun he in accordance with Principle B of Binding theory. However, the variable cannot be coindexed with he in (65), since it will result in "weak crossover." The adopted notation thus enables a convergence of semantic facts with structural explanations. Also note that all and only elements of the respective LF-representations, including the empty elements, are given "logical" interpretation. Thus the LF-representations accord with the requirements of FI.

Next, we replace John in (60) and (61) with the WP who such that, after fronting and inversion where required, we get (66) and (67).

(66) Who_i betrayed the woman he_i loved

(67) *Who_i did the woman he_i loved betray

Clearly, the relevant facts match exactly those for the QP-constructions in (62) and (63). A wh-element, then, is better viewed as a quantifier like every than as a name like John. Further, a wh-trace has the properties of a QP-trace. We get the logical forms for (66) and (67) simply by replacing every with which in (64) and (65). The quantifier-variable notation thus naturally extends to WPs as well, resulting in a large unification of theory.[11]

Returning to the examples in (56) and (57), the ground is now prepared for assigning interpretations to the various multiple-wh and multiple-QP constructions. For instance, (56a), I wonder who gave the book to whom, has the logical form (68).

(68) I wonder (for which persons x, y (y gave the book to x))

The sentence has a natural pair-list interpretation; that is, it is about two persons, one of whom gave a book to the other. This interpretation is concealed in the phonetic form but is brought out in (68). (56b), who remembers where John read what, on the other hand, admits of multiple interpretations depending on the relative scopes of the embedded WPs.

Similarly, the first two examples in (57), repeated here as (69a) and (69b) respectively, whose LF-representations were given in (58), may be assigned dual logical forms as follows.

(69) a. Most Indians speak two languages
        LFI 1: For most Indians x, x speak two languages
        LFI 2: Two languages y are such that for most Indians x, x speak y
     b. Every boy danced with a girl
        LFI 1: For every boy x, x danced with a girl
        LFI 2: A girl y is such that for every boy x, x danced with y

We recall that, notwithstanding glaring problems, one of the reasons for the persistent use of logical notation is that it does seem to give a natural account of an aspect of language, namely, the crucial distinction between names and quantifiers. The quantifier-variable notation of first-order logic thus looked indispensable. We just saw that the structural aspects of this notation can be smoothly incorporated, and even extended (to wh-constructions, for example), within grammatical theory itself as a part of the solution to Plato's problem.
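For concreteness, the two readings of (69b) can also be written out in the quantifier-variable notation of first-order logic just mentioned. The rendering below is only an illustrative gloss, not part of the grammatical representation, and the predicate names boy, girl, and danced-with are my shorthand.

LFI 1 (subject wide scope): ∀x [boy(x) → ∃y (girl(y) ∧ danced-with(x, y))]
LFI 2 (object wide scope): ∃y [girl(y) ∧ ∀x (boy(x) → danced-with(x, y))]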

The present discussion of linguistic theory started as a challenge to provide an alternative to Russell's way of handling scope distinctions for QPs. So far I have been discussing grammatical theory basically within the G-B framework. I have mentioned some of the central features of the Minimalist Program (MP) only in passing. As noted, I will discuss MP more fully in chapter 5. However, it seems to me that MP can be used at this point to highlight the contrasts between Russell and grammatical theory even more dramatically. For this limited goal, I will describe very briefly how scope distinctions are represented in MP with an example.

For a fuller understanding of this example, readers might want to take a quick look at chapter 5 first.

A basic idea in MP is that phrases must get their uninterpretable features checked in syntax to meet FI, which is an economy condition. For example, noun phrases have (semantically) uninterpretable Case features that need to be checked. For checking, another instance of the feature must be found in a local domain, defined by, say, c-command. Sometimes, NPs need to displace from their original position to meet this requirement. Displacement, needless to say, must meet economy conditions such as (the MP version of) subjacency.
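Since the local domain is defined in terms of c-command, a minimal sketch may help fix the notion. The definition assumed below, that A c-commands B just in case neither dominates the other and the first branching node dominating A also dominates B, is a standard textbook formulation; the tree encoding, class, and function names are illustrative and not taken from the text.

# A minimal sketch of c-command over a toy constituent tree, assuming the
# standard definition: A c-commands B iff neither dominates the other and
# the first branching node dominating A also dominates B.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def dominates(self, other):
        # True if self properly dominates other.
        return any(child is other or child.dominates(other)
                   for child in self.children)

def first_branching_ancestor(root, node):
    # Return the lowest branching node properly dominating node, if any.
    path = []  # ancestors of node, collected lowest first

    def find(current):
        if current is node:
            return True
        for child in current.children:
            if find(child):
                path.append(current)
                return True
        return False

    find(root)
    for ancestor in path:
        if len(ancestor.children) > 1:
            return ancestor
    return None

def c_commands(root, a, b):
    if a.dominates(b) or b.dominates(a):
        return False
    branching = first_branching_ancestor(root, a)
    return branching is not None and branching.dominates(b)

# Schematic structure for "someone attended every seminar", discussed below:
# [TP someone [VP attended [DP every seminar]]]
obj = Node("DP: every seminar")
verb = Node("V: attended")
vp = Node("VP", [verb, obj])
subj = Node("DP: someone")
tp = Node("TP", [subj, vp])

print(c_commands(tp, subj, obj))  # True: the Subject c-commands the Object
print(c_commands(tp, obj, subj))  # False: the Object does not c-command the Subject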

Now, consider a simple multiple-quantifier sentence such as (70). I will follow the analysis in Hornstein 1995, 155; 1999. I wish to emphasize the character of Hornstein's analysis, rather than its internal details, which are controversial (Kennedy 1997).

(70) Someone attended every seminar.

It is obvious that the sentence is doubly ambiguous, and we will want the theory to generate the two (desired) interpretations of the sentence. As noted, Subjects such as someone are initially located inside the verb phrase itself: the VP-internal-Subject Hypothesis. For feature checking, particularly Case checking, the Subject someone and the Object every seminar must move outside the VP. Notice that this is the familiar NP movement, not QR, which Hornstein (1995, 153-155) rejects on minimalist grounds. Since movement is copying, a copy of these items is inserted in the higher, pre-VP positions. Ignoring details, this results in (71).

(71) [Someone [every seminar [VP someone [VP attended every seminar]]]]

A general economy condition on interpretation requires that no argument chain can contain more than one element at LF; this accords with the principle of Full Interpretation, noted above. Since (71) contains two such argument chains ((someone, someone); (every seminar, every seminar)), one member from each needs to be deleted. This generates the four options (72)-(75), where deleted elements are enclosed in angle brackets.

(72) [Someone [every seminar [VP <someone> [VP attended <every seminar>]]]]

(73) [Someone [<every seminar> [VP <someone> [VP attended every seminar]]]]

(74) [<Someone> [<every seminar> [VP someone [VP attended every seminar]]]]

(75) [<Someone> [every seminar [VP someone [VP attended <every seminar>]]]]

Another constraint on interpretations says that arguments with strong quantifiers must be interpreted outside the VP-shell: the Mapping Principle. The requirement is purely grammatical. Consider there [VP ensued [a/some fight(s) in Delhi]], in which the determiners a/some occur inside the VP. We cannot replace a/some with the/every/any ("strong" quantifiers).

This grammatical fact, noticed by Milsark (1974), is captured in the strong/weak classification of quantifiers (Reuland and ter Meulen 1987; Hinzen 2006, 5.5). In Diesing 1992, chapter 3, this fact is (ultimately) explained via a distinction between the IP-part and the VP-part of a syntactic tree (see section 3.3). Strong quantifiers are said to presuppose the existence of entities (in the discourse): they are discourse-linked (d-linked). D-linked quantifiers can be interpreted only when they occur in the IP-part. Following Hornstein, I am using only the grammatical part of the argument. Since strong quantifiers such as every are d-linked, (73) and (74), in which the strong quantifier occurs inside the VP-shell, have no interpretation. This leaves (72) and (75) as the only interpretable structures, just as desired.
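The combinatorics behind this result are small enough to spell out mechanically. The sketch below is only an illustration of the reasoning, not Hornstein's formalism: it enumerates the four ways of keeping one copy per argument chain in (71), and then discards any option in which the strong quantifier survives only inside the VP-shell, leaving the analogues of (72) and (75). The encoding and names are mine.

from itertools import product

# Each argument chain in (71) has a copy outside the VP-shell ("high") and a
# copy inside it ("low"); deleting one member per chain amounts to choosing
# which copy survives. The encoding below is illustrative, not Hornstein's.

chains = {
    "someone": ("high", "low"),
    "every seminar": ("high", "low"),
}
strong = {"every seminar"}  # strong (d-linked) quantifier phrases

# Enumerate the four deletion options, one surviving copy per chain.
options = [dict(zip(chains, kept))
           for kept in product(*(chains[arg] for arg in chains))]

# Mapping Principle (grammatical part only): a strong quantifier must be
# interpreted outside the VP-shell, i.e. its surviving copy must be "high".
def interpretable(option):
    return all(option[arg] == "high" for arg in strong)

for option in options:
    status = "interpretable" if interpretable(option) else "no interpretation"
    print(option, "->", status)

# Only the two options in which "every seminar" survives outside the VP remain,
# the analogues of (72) and (75): "someone" takes wide scope in one and narrow
# scope in the other.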

This is one way, Hornstein's, of accomplishing the task; there are others (Lasnik and Uriagereka 2005, 6.3-6.9). Also, as noted, we need not accept all the details of the argument. For example, we may doubt whether the phenomenon sought to be covered by the Mapping Principle requires an independent principle. Still, the analysis does show that there are purely grammar-internal grounds (feature checking and Full Interpretation) on which such displacements take place. However, in satisfying them, some requirements of the external C-I systems (for example, that d-linked items can only be interpreted at the edge of a clause) are also met. In this sense, conditions on meaning follow from the satisfaction of narrow computational properties of the system. In sum, all that Russell achieved by imposing a logical notation has been achieved without it. To emphasize, all of this is a bonus: the system is basically geared to solve Plato's problem.

Can we conclude already that grammatical theory itself qualifies as a theory of language? Traditionally, grammars are not viewed as capturing the notion of meaning; grammars are viewed as representing the "syntax" part of languages, where "syntax" contrasts with "semantics." For example, it would be said that even if (72) and (75) capture the structural aspects of the relevant scope distinction, we still need to say what these structures mean. As noted, Fodor and Lepore (1994, 155) hold that "the highest level of linguistic description is, as it might be, syntax or logical form: namely a level where the surface inventory of nonlogico-syntactic vocabulary is preserved"; that is, linguistic description up to LF is syntactic, not semantic, in character. Even biolinguists seem to waver on this point: thus, Fox (2003) holds that LF is a syntactic structure that is interpreted by the semantic component.[12] So the demand is that we need to enrich grammatical theory with a theory of meaning or a semantic theory to capture human semantic understanding; only then do we reach a proper language system.

A number of familiar questions arise. Does the notion of grammar capture some notion of meaning? Beyond grammar, is the (broader) notion of language empirically significant? What are the prospects of "enriching" grammatical theory with postgrammatical notions of meaning? I turn to these and related questions in the next two chapters.

3 Grammar and Logic

"Semantic understanding" is a catchall expression for whatever is involved in the large variety of functions served by language. The rough idea is that grammatical knowledge somehow interacts with our reasoning abilities concerning whatever we know about intentions, desires and goals, the world, cultures, and the like. It is assumed that, somewhere in this vast network of knowledge, there is a semantics-pragmatics distinction. Faced with such imprecision, we need some general plan to be able to raise specific questions on the issues facing us.

Given the vastness of the semantic enterprise and the limitations of space here, I need to state, before I proceed, what exactly my goals are.

As noted in various places, a central motivation for this work is to see if the concept of language embedded in grammatical theory stands on its own as an adequate theory of human languages. Keeping to the meaning side, in effect the issue is whether the scope of grammatical theory needs to be expanded at all by incorporating what are believed to be richer conceptions of meaning. I assume that two broad semantic programs are immediately relevant here: formal semantics and lexical semantics.

To urge the adequacy of grammatical theory, I will argue in this chapter that the sharp traditional distinction between syntax and semantics, explicitly incorporated in logical theories, does not plausibly apply to the organization of grammatical theory; in that sense, grammatical theory contains a semantic theory. The rest of the discussion attempts to "protect" this much semantics. Thus, next, with respect to some classical proposals in formal semantics, I will argue not only that some of the central motivations of formal semantics (for example, the need for a canonical representation of meaning) can be achieved in grammatical theory, but that it is unclear if formal semantics carries any additional empirical significance. In that sense, I will disagree with the very goals of formal semantics.


In the next chapter, in contrast, I will agree that, from what we can currently imagine, a theory of conceptual aspects of meaning is desperately needed; but, with respect to some of the influential moves in lexical semantics, there are principled reasons to doubt whether the data for lexical semantics is theoretically salient, and whether we can take even the first steps toward an abstract, explanatory theory in this domain at all.

As indicated in chapter 1 (sections 1.2, 1.3.3), I am indeed rather skeptical about all nongrammatical approaches in semantics that either work with constructs of logic or invoke concepts; it is hard to see what is left.

However, proving skepticism is not the agenda here. All I want to do within the space available is to bring out enough foundational difficulties in the current form of these approaches to motivate a parting of ways from traditional concerns of language theory, while the search for deeper theories in formal and lexical semantics continues.

3.1 Chinese Room

I wish to take a detour to develop an image that might help us secure a firmer perspective on how to conceptualize the semantic component of language. In a very influential essay, the philosopher John Searle (1980) proposed a thought experiment to evaluate the scope of the general idea that certain mental systems can be viewed as symbol-manipulating devices. We saw that this idea certainly guides biolinguistic research.

Searle invites us to imagine a room that contains a monolingual English speaker S, a number of baskets filled with Chinese symbols, and a "rulebook" that contains explicit instructions in English regarding how to match Chinese symbols with one another. Suppose S is handed a number of questions in Chinese. He is then instructed to consult the rulebook and hand out answers in Chinese. Suppose the Chinese speakers find that these answers are eminently plausible; hence, S passes the Turing test (Turing 1950). Yet, according to Searle, for all that S knows, he does not understand Chinese. He simply matched one unintelligible symbol with another and produced unintelligible strings on the basis of the rulebook. A symbol-manipulating device, therefore, cannot represent genuine understanding. Since Chinese speakers by definition understand Chinese, Chinese speakers cannot (just) be symbol-manipulating devices.
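To make vivid what pure symbol manipulation amounts to in this setting, the rulebook can be caricatured as a bare lookup table: the program below pairs input strings with output strings and represents nothing about what either string means. The table entries are invented placeholders, not Searle's own examples, and the whole sketch is only a caricature of the thought experiment.

# A caricature of the rulebook: pure symbol manipulation by table lookup.
# Nothing in the program represents what the symbols mean; the entries below
# are invented placeholders.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天星期几？": "今天星期二。",
}

def chinese_room(question: str) -> str:
    # Match the input string against the table and hand back the paired string.
    return RULEBOOK.get(question, "请再说一遍。")

print(chinese_room("你好吗？"))  # prints the paired answer string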

I will not enter into the internal merits of this argument insofar as it concerns the specific features of the Chinese room. Just too many details of the parable need to be clarified before one can begin to draw general lessons from the argument. For example, if the argument warns against taking computer simulations (whether computers can be programmed to mimic human understanding) too realistically, then we might readily agree with the spirit of the argument (Chomsky, Huybregts, and Riemsdijk 1982, 12). If, on the other hand, Searle's argument is designed to be a global refutation of computational theories of mind and language, then we would want to be clear about several details. To mention the most obvious of them: What is the content of the rulebook? What concept of S's understanding enters into his understanding the instructions of the rulebook? Why should we infer, from the activity of matching "unintelligible" symbols, a total lack of understanding on S's part? In any case, these questions have been extensively discussed in the literature (Dennett 1991; Block 1995, etc.).

Instead, I will be concerned directly with the general conclusion Searle draws, notwithstanding its source. Thus Searle (1990, 27) says, "Having the symbols by themselves – just having the syntax – is not sufficient for having the semantics. Merely manipulating symbols is not enough to guarantee knowledge of what they mean. . . . Syntax by itself is neither constitutive of nor sufficient for semantics." Searle offers what we needed: some concept of syntax and some way of distinguishing it from semantics.

Unfortunately, the concept of semantics continues to be uncomfortably thick: all we have been told is that semantics is not syntax. But, at least, we have been told what syntax is: it is mere manipulation of symbols. Although the statement is not exactly clear, it is a start.

3.2 PFR and SFR

Let us grant the (perhaps) obvious point that the person inside the Chinese room does not understand Chinese insofar as, ex hypothesi, he does not know what the various Chinese symbols "stand for." Until we are told more about the contents of the rulebook, this is the only sense in which S does not understand Chinese; no other sense of his lack of understanding of Chinese has been proposed. So, according to Searle, a linguistic system executes two functions: a syntactic function that establishes "purely formal relationships" (PFRs) between a collection of lexical items, and a semantic function that relates the collection to what it stands for (SFRs). According to Searle, S understands a language L just in case both the functions, in order, are executed. Let us examine what PFR and SFR mean, if anything, in the context of grammatical theory.

The conceptions of PFR and SFR are deeply ingrained in the logical tradition from where, presumably, Searle and others cull their conception of how language works. A logical theory is usually conceived of in two stages. The first stage is called "syntax," which states the rules of well-formedness defined over the primitive symbols of the system to execute PFRs, where a PFR is understood to be just an arrangement of "noise" or marks on paper.

A logical theory also contains a stage of "semantics," which is viewed as a scheme of interpretation that gives the satisfaction conditions (in a model) of the well-formed strings supplied by syntax. That is, SFRs are executed when the scheme of interpretation is applied to the well-formed strings. For example, syntax says that the string "P ∧ Q" is well formed, and semantics says that "P ∧ Q" is true just in case each of "P" and "Q" is true. In other words, (logical) syntax outputs a set of structures that are taken to be interpretable though uninterpreted, and (logical) semantics says, in general terms, what that interpretation is. As we will see, the general picture is routinely taken for granted in the formal semantics program.
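This two-stage picture can be made concrete with a toy propositional language. The sketch below is a minimal illustration of the standard textbook setup, not of any particular formal-semantics proposal: "syntax" certifies expressions as well formed (here, nested tuples standing in for parsed strings such as "P ∧ Q"), and "semantics" evaluates them in a model, here just an assignment of truth values to the atoms. All names are mine.

ATOMS = {"P", "Q", "R"}

def well_formed(expr):
    # Syntax (PFRs): an atom is well formed, and so is a conjunction of two
    # well-formed parts. Expressions are nested tuples standing in for parsed
    # strings such as "P ∧ Q".
    if isinstance(expr, str):
        return expr in ATOMS
    if isinstance(expr, tuple) and len(expr) == 3 and expr[1] == "∧":
        return well_formed(expr[0]) and well_formed(expr[2])
    return False

def evaluate(expr, model):
    # Semantics (SFRs): truth in a model; a conjunction is true just in case
    # each conjunct is true.
    if isinstance(expr, str):
        return model[expr]
    left, _, right = expr
    return evaluate(left, model) and evaluate(right, model)

conj = ("P", "∧", "Q")                      # the string "P ∧ Q", parsed
model = {"P": True, "Q": False, "R": True}

print(well_formed(conj))      # True: settled by syntax alone
print(evaluate(conj, model))  # False: requires the model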

I am not suggesting that Searle, or others who uphold the syntax-semantics divide, would automatically approve of the formal semantics program as it is typically pursued. In fact, from what I can make of Searle's position on the issue, he would want to go far beyond merely the logical conditions to capture what he takes to be genuine linguistic understanding: ultimately, it will involve communication intentions, illocutionary acts, and the like. So, Searle is likely to think of the formal semantics program as so much more of syntax. Nevertheless, there is no doubt that Searle's notion of semantics begins to get captured with the logical conditions themselves. In that sense, I am suggesting that a logic-motivated syntax-semantics divide has been seen to be necessary for launching any semantics program beyond LF; by parity, doubts about the divide lead to doubts about the entire program no matter where it culminates.

In this program, sound-meaning correlation is taken to be direct since, by stipulation, a logical "language" has no room for ambiguity: a given string has exactly one interpretation. So, there is no need either to correlate different interpretations with the same string or to attach the same interpretation to different strings (Moravcsik 1998). I feel, tentatively for the moment, that this idea of direct correlation between sound and meaning is part of the motivation for thinking that syntax and semantics are strictly distinct; it encourages a "syntax first, semantics next" picture.

This is one of the many reasons why a logical system is, at best, an artificial language. An artificial language is conceived of as an external object constructed for various ends: examples include computer languages, logistic systems and other formal languages, perhaps even systems such as Esperanto, and so on. When we design such languages, it is likely that we bring in certain pretheoretical expectations in their construction. One such expectation is a sharp syntax-semantics divide. Even natural languages could be viewed as external objects such as sets of sentences, and not as something internalized by the child. It is no wonder that similar expectations will be brought into the study of natural languages as well, even if we do not deliberately create them. This may have given rise to the idea that there is an absolute distinction between syntax and semantics in natural languages as well.

The sharp distinction between the well-formedness and interpretability of a sentence fails to accommodate the intuition that we expect these things to converge: that the form of a sentence is intrinsically connected to its interpretation even if actual convergence may not be uniformly available.[1] For a flavor of this very complex issue, consider the following.

The expression "who you met is John" is (immediately) interpretable without being grammatical (Chomsky 1965, 151); "there seems to a man that Bill left," on the other hand, is not immediately interpretable without being ungrammatical (Hornstein 1995, 70, following Chomsky 1993).

We would like to think that, at these points, the system "leaks" in that it allows generation of various grades of gibberish. Thus an empirically significant theory will include a subtheory of gibberish: "The task of the theory of language is to generate sound-meaning relations fully, whatever the status of an expression" (Chomsky 2006a). Logical theory, on the other hand, blocks all gibberish by stipulation.

We saw that a grammatical theory takes the lexicon as given and gives a generative account of a pair of representations, PF and LF, formed thereof. A PF-representation imposes grammatical constraints on how a string is going to sound, while an LF-representation captures the grammatically sensitive information that enters into the meaning of a string.

Is there a PFR-SFR distinction as separate stages in grammatical computation? Note that in order to say that the output of grammar itself is an arrangement of "noise" (PFRs), we should be able to say that both the outputs of grammar, PF and LF, represent noise; that is, PF and LF individually represent PFRs.

Could we say that the phonological part of the language system executes just PFRs? No doubt the phonetic properties of individual lexical items are distinct from their nonphonetic properties, that is, categorial and semantic properties. There is also no doubt that there is a distinct
