
Question 3: English Grammar and Syntax

English Syntax

Question: It has been said that for Chomsky, the domain of syntax is that of possible sentences, including such beauties as

“The man and woman whose children wanted to help them decide which design to adopt have already realised that the new kitchen will be much more difficult to keep clean than the old one was”

This focus has proved a stumbling block for many teachers, who have wondered whatever happened to the traditional concern with fluent, naturally occurring sentences. It was nonetheless powerfully strategic of Chomsky to do this in order to break away from the prevailing school of descriptivism in Linguistics. Discuss the reasons for Chomsky's focus on just this aspect of the human language faculty and the positions taken by other linguistic schools toward this focus over the last 40 years.


Answer

In the first sentence of Syntactic Structures (1957), a language is considered to be "a set ... of sentences" (Chomsky, 13), and for Chomsky the goal of linguistic theory is to provide a formal characterization of human language and to distinguish the grammatical processes that can occur in a language from those that cannot. So a complex sentence such as "The man and woman whose children wanted to help them decide which design to adopt have already realized that the new kitchen will be much more difficult to keep clean than the old one was" is as interesting to a linguist as a simple sentence such as "I like Syntax," or even a seemingly nonsensical, but still grammatically English, sentence such as "Colourless green ideas sleep furiously." Apart from length, these sentences are not really different; all of them are formed by the same set of rules that generate all English sentences. Therefore, if we can discover this set of rules and state them explicitly, then 1) we arrive at a new theory of language---one that will ideally explain what holds true across human languages while addressing their variations---and 2) we ultimately arrive at an understanding of the human mind (i.e. the brain). These theoretical assumptions are not without resistance, however. Other linguistic schools, especially those operating under the functionalist framework, focus more on the social aspects of language and on how speakers use language to accomplish everyday social life, criticising the Chomskyan framework for treating language as intrinsically asocial, as if language existed in a vacuum. Complex sentences that are unlikely to be used during human interaction are therefore of little interest to these schools.

In this paper, it is my purpose to show that Chomsky’s assumptions 1) and 2) above are valid for the study of language in general and especially Syntax, while at the same time, the social treatment of language can still be captured under his framework. I shall first discuss both assumptions in detail, taking Chomsky as the representative of the Generative Grammar framework (especially that of Principles and Parameters), and subsequently, I shall discuss why other frameworks think differently, using Systemic Functional Linguistics by Michael Halliday as the representative.

Before Syntactic Structures, descriptive linguistics in America followed the Bloomfieldian tradition, whose approach was to discover the grammar of a language by performing a set of operations on a corpus of data. A complete, successful grammatical description had to proceed in a fixed order: phonetic description --> morphological description --> syntactic description --> discourse. The order is important, and each level of description must not be mixed with the others. This is because a morpheme is made up of one or more phonemes, and a syntactic construction is made up of morphemes. It follows that a linguist has to extract the phonemes from the flow of speech first, then find out how they are arranged to form morphemes, and so on. If two phones exhibit a contrast in meaning in a minimal pair, they are separate phonemes, and morphemes are classified by a similar procedure. Syntax, in turn, is approached as the analysis of how these units are arranged in larger constructions and extended discourse. The obvious problems for this framework are, for example, that on the phonological level it is impossible to handle suprasegmental distinctions; on the morphological level it is troublesome to decide how many morphemes a form such as "went" contains; and on the syntactic level, relationships between clearly related sentences (e.g. between an active sentence and its passive counterpart) can only be shown in an ad hoc manner. Needless to say, all these problems contributed to the downfall of the Bloomfieldian tradition.
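
The discovery procedure just described (minimal pairs establishing phonemes) can be illustrated with a small sketch. This is a toy illustration of my own in Python, not a tool from the descriptivist literature; the helper name is hypothetical.

```python
# Toy sketch of the minimal-pair test (hypothetical helper, not from the
# Bloomfieldian literature): two forms that differ in exactly one segment
# and in meaning establish the differing phones as separate phonemes.

def minimal_pair_contrast(form_a, form_b, meaning_a, meaning_b):
    """Return the contrasting phone pair if the two forms are a minimal pair."""
    if len(form_a) != len(form_b) or meaning_a == meaning_b:
        return None
    differences = [(a, b) for a, b in zip(form_a, form_b) if a != b]
    return differences[0] if len(differences) == 1 else None

# 'pin' vs 'bin': one segment differs and the meanings differ,
# so /p/ and /b/ are treated as separate phonemes.
print(minimal_pair_contrast(list("pin"), list("bin"), "pin", "bin"))  # ('p', 'b')

# 'pin' vs 'ban' differ in two segments, so the test does not apply.
print(minimal_pair_contrast(list("pin"), list("ban"), "pin", "ban"))  # None
```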

With the new trend in Linguistics presented in Chomsky's Syntactic Structures (1957), linguistic description no longer needs to be performed mechanically in a step-by-step manner, with the phonemes discovered first, the morphemes second, and so on. In this framework, grammar became the focus of linguistics, and the success of grammatical analysis depends on two conditions. The first is called "external conditions of adequacy," which include, for example:

1. The sentences generated will have to be acceptable to the native speaker.
2. Every case of a "constructional homonymity" (the assignment of more than one structural description to a sentence) describes a real ambiguity, and every case of ambiguity is represented by constructional homonymity (see the sketch after this list).
3. Differential interpretations of superficially similar sentences are represented by different derivational histories.
4. Sentences understood in similar ways are represented in similar ways at one level of description. (quoted in Newmeyer (1986), 21)
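
To make condition 2 concrete, here is a minimal sketch of my own in Python (not Chomsky's formalism): two different structural descriptions, encoded as nested brackets, yield the same surface string, which is precisely a constructional homonymity.

```python
# Two structural descriptions ("derivational histories") for one string:
# the classic PP-attachment ambiguity. The tree encoding is my own toy format.

def yield_string(tree):
    """Flatten a bracketed tree (nested tuples) into its terminal words."""
    if isinstance(tree, str):
        return [tree]
    label, *children = tree
    words = []
    for child in children:
        words.extend(yield_string(child))
    return words

# Reading 1: the PP "with the telescope" attaches to the VP
# (the seeing was done with a telescope).
parse_instrument = (
    "S", ("NP", "I"),
    ("VP", ("V", "saw"),
           ("NP", ("Det", "the"), ("N", "man")),
           ("PP", ("P", "with"), ("NP", ("Det", "the"), ("N", "telescope")))))

# Reading 2: the PP attaches inside the object NP (the man has the telescope).
parse_modifier = (
    "S", ("NP", "I"),
    ("VP", ("V", "saw"),
           ("NP", ("Det", "the"), ("N", "man"),
                  ("PP", ("P", "with"), ("NP", ("Det", "the"), ("N", "telescope"))))))

# Same surface string, two structural descriptions: a constructional homonymity.
assert yield_string(parse_instrument) == yield_string(parse_modifier)
print(" ".join(yield_string(parse_instrument)))
```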

However, the external conditions alone are not enough; the grammar must also meet a generality requirement:

we require that the grammar of a given language be constructed in accordance with a specific theory of linguistic structure in which such terms as "phoneme" and "phrase" are defined independently of any particular language. If we drop either the external conditions or the generality requirement, there will be no way to choose among a vast number of totally different grammars. (Syntactic Structures, 50)

Operating under these two conditions, generative grammarians can solve the problems faced by the structural linguists. For example, transformational rules make explicit the structural parallels between an active sentence and its corresponding passive, handle verb inflection (affix hopping), and so on. Phrase structure rules, on the other hand, which divide a sentence into its component phrases, are found in every language and recur in every sentence. Deep structure, finally, captures the ambiguity of sentences that look the same on the surface but differ in interpretation.
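
As a rough illustration of these two rule types, the following fragment is a toy sketch of my own (far simpler than the 1957 system, with a hypothetical rule set and lexicon): phrase structure rules expand a start symbol top-down into a clause, and a passive "transformation" makes the active/passive parallel explicit.

```python
import random

# Toy phrase structure rules: each symbol rewrites as one of the listed
# expansions; words not listed as symbols are terminals.
PS_RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["woman"], ["kitchen"], ["design"]],
    "V":  [["cleans"], ["adopts"]],
}

def generate(symbol="S"):
    """Expand a symbol top-down, choosing one rule at random per step."""
    if symbol not in PS_RULES:          # terminal word
        return [symbol]
    words = []
    for sym in random.choice(PS_RULES[symbol]):
        words.extend(generate(sym))
    return words

PAST_PARTICIPLE = {"cleans": "cleaned", "adopts": "adopted"}

def passivize(words):
    """Toy transformation: map 'the N1 V the N2' onto 'the N2 is V-en by the N1'."""
    subject, verb, obj = words[:2], words[2], words[3:]
    return obj + ["is", PAST_PARTICIPLE[verb], "by"] + subject

active = generate()
print(" ".join(active))              # e.g. "the woman cleans the kitchen"
print(" ".join(passivize(active)))   # e.g. "the kitchen is cleaned by the woman"
```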

By the 1960s, with the advent of Chomsky's Aspects of the Theory of Syntax, the condition of generality mentioned above came to be equated with that which is true of language by biological necessity---i.e. that which is innate---and two new terms were coined: "performance" and "competence." The former is "the actual use of language in concrete situations," and the latter refers to "the speaker/hearer's knowledge of his language" (Aspects, 4), which is acquired in childhood by any normal individual. A grammar---the theory of a language---was now a theory of a speaker's competence, which is one of the many systems that contribute to performance. That is, the theory of competence is a subpart of the eventual theory of performance, so that a linguist has to understand what a native speaker knows about his language before he can study the effects of, say, slips of the tongue. In short, Chomsky's argument rested on the fact that a person "knows" his/her language, and "knowing" a language was interpreted as having a theory of a set of sentences. It is the linguist's task to analyse what this knowledge consists of, explain how it works, and finally explain how we come to have it. The primary data are drawn from observation and from native speakers' intuitions.

By the 1980s, the innateness hypothesis came to be called, alternatively, "Universal Grammar," as Chomsky puts it: "In many cases that have been carefully studied...it is a near certainty that fundamental properties" of the grammars that children attain "are radically underdetermined by evidence available" to them, and "must therefore be attributed to UG itself" (Lectures on GB, 3).

With time, although the initially posited rules such as phrase structure and transformational rules were modified in whole or in part (or even abandoned!), the basic principle still remains: a grammar is still made up of rules that generate possible sentences. These rules are currently known as Principles and Parameters. The former provide the tools needed to describe the grammar of any natural language adequately (that is, universal grammatical principles, which are invariant across languages), while the latter provide the aspects/values that belong to a particular language, hence distinguishing it from other languages.

An appealing aspect of Principles and Parameters is that it incorporates comparative syntax and typological considerations, making it possible to account for cross-linguistic variation. At one level, every human being who speaks a natural language has the same language (that is, Universal Grammar), while at another level, differences amongst human languages result from each language setting slightly different parameters, with the possible differences limited to binary options. For example, English and Japanese exhibit broad differences in terms of both sounds and structures. Each selects a different cluster of sounds drawn from the stock of humanly possible linguistic sounds, and the former is a head-initial language while the latter is a head-final one. Despite these differences, both the sounds that are available for use and the broad structural options are included in UG. The structural options are represented as the two possible settings of a head-first/head-last "switch" that children set early on, upon being exposed to their language.
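
The head-direction parameter can be sketched in a few lines (a hypothetical encoding of my own, not a formal Principles and Parameters implementation): one universal principle builds a phrase out of a head and its complement, and a single binary setting yields English-like or Japanese-like word order.

```python
# Universal principle: a phrase = head + complement; the parameter decides the order.
def build_phrase(head, complement, head_initial):
    return [head] + complement if head_initial else complement + [head]

# English-like setting (head-initial): verb before object, preposition before NP.
print(build_phrase("eat", ["sushi"], head_initial=True))        # ['eat', 'sushi']
print(build_phrase("in", ["Tokyo"], head_initial=True))         # ['in', 'Tokyo']

# Japanese-like setting (head-final): object before verb, postposition after NP.
print(build_phrase("taberu", ["sushi-o"], head_initial=False))  # ['sushi-o', 'taberu']
print(build_phrase("de", ["Tokyo"], head_initial=False))        # ['Tokyo', 'de']
```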

As explained above, one part of grammar is universal (i.e. the same across human languages) while the other part is specific to a particular language. Chomsky posits that these two parts of grammar must be lodged in a language faculty (i.e. a brain module) that is separate from other mental faculties. This faculty enables a speaker to acquire any natural language as his/her native language, providing him/her with a set of procedures (i.e. an algorithm) for developing a grammar on the basis of limited linguistic experience (i.e. the poverty of the stimulus). This acquisition process is unique to human beings and is different in kind from any other type of learning that human beings experience, so that learning a language involves mental processes entirely distinct from those involved in learning other social activities. This leads inescapably to discoveries about the human mind, as Chomsky (1972) puts it:
Whatever evidence we do have seems to me to support the view that the ability to acquire and use language is a species-specific human capacity, that there are very deep and restrictive principles that determine the nature of human language and are rooted in the specific character of the human mind. (Language and Mind, 102)

The most remarkable fact about human language, Chomsky says, is the discrepancy between its apparent complexity and the ease with which children acquire it. Learning other human activities, such as how to ride a bicycle or how to do mathematics, requires intensive instruction, whereas every normal child learns at least one natural language seemingly with ease, simply through exposure. Chomsky's explanation is that most of the complexity of languages does not have to be learned, because much of our linguistic knowledge (except, for example, vocabulary) is innate, and our brains (i.e. minds) are already hardwired for language learning.

Moreover, humans are creative, rather than imitative, in using language. We are capable of producing and understanding not only sentences we have previously heard, but also new sentences which we have never encountered before, and these sentences are understood by other speakers of the same language. Chomsky (1972) claims:

The normal use of language is innovative in the sense that much of what we say in the course of normal language use is entirely new, not a repetition of anything that we have heard before, and not even similar in pattern---in any useful sense of the terms "similar" and "pattern"---to sentences or discourse that we have heard in the past. (Language and Mind, 12)

However, it must be stressed that creativity in the Chomskyan sense is the mundane, everyday ability to create and understand novel sentences according to the established knowledge in the mind---novelty within the constraints of grammar: "Creativity is predicated on a system of rules and forms, in part determined by intrinsic human capacities. Without such constraints, we have arbitrary and random behavior, not creative acts" (Reflections on Language, 133).

It is therefore not surprising that Generative Grammar is interested in a well-formed sentence such as the lengthy one quoted at the beginning of this paper. Even though we are unlikely to hear such a sentence uttered in any context (except perhaps in a Syntax class), it is still a clearly possible and well-formed English sentence constructed out of human linguistic creativity. Research has shown that animals such as chimpanzees, although they seem to learn human languages to some extent, do not exhibit any comparable degree of creativity; they can only repeat what humans teach them to do. It follows that in studying the structure of human languages, we are investigating a central aspect of human nature.

Focusing on linguistic creativity, Generative Grammar appears to treat syntax as autonomous, seemingly ignoring the social aspects of human language. In other words, the Generative framework focuses on competence rather than on performance. Chomsky has made the separation of syntax from other areas of linguistics clear since Syntactic Structures: "I think that we are forced to conclude that grammar is autonomous and independent of meaning" (17). Without doubt, this is the point that triggers criticism from other schools of linguistics. Halliday, for example, who developed Systemic Functional Linguistics, claims that because language is a social semiotic, we need to look at the functions of language in making meanings within the social and cultural context.
The main questions he asks are: how do people use language, and how is language structured for use? (Compare these with Chomsky's underlying question: what are the rules that generate possible sentences?)

Instead of focusing on the language faculty as the source of creativity (and hypothetical, imaginary sentences), Halliday’s framework looks at text, because it is an instance of language in use. Suzanne Eggins (1994) makes this clear:
As soon as we ask functional questions such as "how do people use language?" (i.e. "what do people do with language?"), we realize we have to look at real examples of language in use. Intuition does not provide a sufficiently reliable source of data for doing functional linguistics. Thus, systemicists are interested in the authentic speech and writing of people interacting in naturally occurring social contexts. (3)

This framework sees the elements that make up a sentence as products of meaningful choices that a speaker makes against the background of other choices which could have been made, depending on the context of use. For example, by looking at transitivity (i.e. how participants, processes, and circumstances are placed in a sentence), it is possible to tell how a speaker encodes his/her experiential meaning. The position of this framework can no doubt be traced back to structuralism, in which language is viewed as a system of signs.

I do not think that Generative grammarians would deny that the form-meaning/function relation of a linguistic item can be captured, or that meaning is important, as Chomsky puts it in Syntactic Structures: "[w]e should like the syntactic framework of the language that is isolated and exhibited by the grammar to be able to support semantic description, and we shall naturally rate more highly a theory of formal structure that leads to grammars that meet this requirement more fully" (102). Nevertheless, the need for principles of competence governing the distribution of possible sentences, asocial as they are, should not be undermined by performance principles governing a speaker's choices. For example, whatever the configuration might be---"She carried the bomb onto the plane" or "The bomb was carried onto the plane by her"---the sentence still consists of two noun phrases, one prepositional phrase and one verb phrase. And whatever the differences might be between "I went to the bar and I got drunk" and "I got drunk and I went to the bar," there are still the same two propositions. Obviously, in some cases there may be motivations for a speaker to choose one sentence over another that is similar in meaning, but that also means that s/he must already know how many options s/he has and what each option looks like. And the available choices, needless to say, vary to a greater or lesser extent across languages.

The grammar that supports semantic description that Chomsky referred to above is possible at present if we adopt his modular concept of the human mind in describing a linguistic phenomenon. That is to say, autonomous though it is, grammar is just one “module” that interacts with other systems in giving language its overall character.

Chomsky had this idea about the interaction of a variety of factors as far back as the time the terms competence and performance were coined: "To study actual linguistic performance, we must consider the interaction of a variety of factors, of which the underlying competence of the speaker-hearer is only one. In this respect, study of language is no different from empirical investigation of other complex phenomena." (Aspects, 4)

Let me explain how this model works by quoting the sentence in the exam question again: "The man and woman whose children wanted to help them decide which design to adopt have already realized that the new kitchen will be much more difficult to keep clean than the old one was." For one thing, the fact that this sentence is unacceptable to many teachers cannot be attributed to ill-formedness, since it is interpretable (though not without effort): there is no movement violation involving the WH-constituents, subject-verb agreement is satisfied, and so on. The unacceptability can, however, easily be accounted for in terms of processing difficulty, due to the sentence's length and the complex embedded clauses involved, and it follows from the modular interaction between the autonomous syntax module and the perceptual module.
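
This division of labour can be caricatured in a few lines of Python. The clause encoding and the depth limit are arbitrary stand-ins of my own, not figures from Chomsky: a toy "syntax module" checks only well-formedness, while a separate "perceptual module" rejects structures whose embedding depth exceeds what it can comfortably process.

```python
# Grammaticality vs. acceptability: sentences are encoded as (possibly nested)
# lists of clause fragments; nesting depth stands in for embedding complexity.

def is_grammatical(clauses):
    """Toy syntax module: well-formed if it is a list of strings and sublists."""
    return isinstance(clauses, list) and all(
        isinstance(c, str) or is_grammatical(c) for c in clauses)

def embedding_depth(clauses):
    depths = [embedding_depth(c) + 1 for c in clauses if isinstance(c, list)]
    return max(depths, default=0)

def is_acceptable(clauses, max_depth=2):
    """Toy perceptual module: grammatical but deeply embedded = hard to accept."""
    return is_grammatical(clauses) and embedding_depth(clauses) <= max_depth

exam_sentence = [
    "The man and woman",
    ["whose children wanted",
     ["to help them decide", ["which design to adopt"]]],
    "have already realized",
    ["that the new kitchen will be much more difficult to keep clean"],
]

print(is_grammatical(exam_sentence))  # True  -- well-formed
print(is_acceptable(exam_sentence))   # False -- too deeply embedded to process comfortably
```
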
Finally, the modular model also makes it possible to relate forms and meanings (which form is chosen, and in which context). In most cases there is no one-to-one correspondence between forms and meanings: one form may serve many functions, and one function can be realised in many forms. For example, Givon (1993) suggests that contrast can be signalled by the following grammatical devices:

Device          Example                                    Contrasted element
Neutral         Joe will milk the goat                     none
Stress-focus    JOE will milk the goat                     subject
                Joe WILL milk the goat                     auxiliary
                Joe will MILK the goat                     verb
                Joe will milk THE GOAT                     object
Cleft           It's Joe who will milk the goat            subject
                It's the goat that Joe will milk           object
Pseudo-cleft    The one who will milk the goat is Joe      subject
                What Joe will do to the goat is milk it    verb
                What Joe will do is milk the goat          verb phrase
                What Joe will milk is the goat             object

(English Grammar, 177-178)

While grammar makes all these choices available to speakers for contrasting ideas in a sentence, it is up to the speaker which one to use, depending on other systems, such as acoustics and context, that interact with grammar. For example, the stress-focus contrastive device is typically used in spoken rather than written language, while "It's Joe who will milk the goat" presupposes that somebody will milk the goat and asserts that it is Joe, not somebody else.
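
Givon's point that one function can be realised in many forms can also be put mechanically. In the sketch below (the helper name and the clause frame are my own), a neutral subject-auxiliary-verb-object clause is mapped onto several contrastive realisations; the grammar makes them all available, and the choice among them is left to the speaker.

```python
# One function (contrastive focus), many forms: stress-focus, cleft and
# pseudo-cleft variants generated from one neutral clause frame.
def contrast_forms(subj, aux, verb, obj):
    return {
        "neutral":            f"{subj} {aux} {verb} {obj}",
        "stress on subject":  f"{subj.upper()} {aux} {verb} {obj}",
        "cleft (subject)":    f"It's {subj} who {aux} {verb} {obj}",
        "cleft (object)":     f"It's {obj} that {subj} {aux} {verb}",
        "pseudo-cleft (VP)":  f"What {subj} {aux} do is {verb} {obj}",
        "pseudo-cleft (obj)": f"What {subj} {aux} {verb} is {obj}",
    }

for label, sentence in contrast_forms("Joe", "will", "milk", "the goat").items():
    print(f"{label:>20}: {sentence}")
```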

In conclusion, I have argued in this paper for the reasons behind Chomsky's postulation of Universal Grammar (UG) and an innately endowed human language faculty, and for why the hypothesis need not be resisted by other schools of linguistics. The UG conception does not necessarily mean the disposal of semantics or of language in real use. Rather, the modular conception of grammar interacting with other systems to form a complex linguistic phenomenon can address the concerns that other schools of linguistics might have.

Works Cited

Chomsky, Noam. Syntactic Structures. The Hague: Mouton, 1957.

---. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press, 1965.

---. Language and Mind. New York: Harcourt Brace Jovanovich, 1972.

---. Reflections on Language. London: Temple Smith, 1976.

---. Lectures on Government and Binding. Dordrecht: Foris, 1981.

Eggins, Suzanne. An Introduction to Systemic Functional Linguistics. London: Pinter, 1994.

Givon, Talmy. English Grammar: A Function-Based Introduction. Vol. II. Amsterdam: John Benjamins, 1993.

Newmeyer, Frederick. Linguistic Theory in America. 2nd ed. San Diego: Academic Press, 1986.




 

Create Date : 7 December 2005

 

 
