Children’s Acquisition of Syntactic Knowledge
- Rosalind Thornton, Associate Professor, Department of Linguistics, Macquarie University
Children’s acquisition of language is an amazing feat. Children master the syntax, the sentence structure of their language, through exposure and interaction with caregivers and others but, notably, with no formal tuition. How children come to be in command of the syntax of their language has been a topic of vigorous debate since Chomsky argued against Skinner’s claim that language is ‘verbal behavior.’ Chomsky argued that knowledge of language cannot be learned through experience alone but is guided by a genetic component. This language component, known as ‘Universal Grammar,’ is composed of abstract linguistic knowledge and a computational system that is special to language. The computational mechanisms of Universal Grammar give even young children the capacity to form hierarchical syntactic representations for the sentences they hear and produce. The abstract knowledge of language guides children’s hypotheses as they interact with the language input in their environment, ensuring they progress toward the adult grammar. An alternative school of thought denies the existence of a dedicated language component, arguing that knowledge of syntax is learned entirely through interactions with speakers of the language. Such ‘usage-based’ linguistic theories assume that language learning employs the same learning mechanisms that are used by other cognitive systems. Usage-based accounts of language development view children’s earliest productions as rote-learned phrases that lack internal structure. Knowledge of linguistic structure emerges gradually and in a piecemeal fashion, with frequency playing a large role in the order of emergence for different syntactic structures.
1. Acquiring Syntax
All parents take it for granted that language will emerge in their developing child. All typically developing children pass through similar stages and in a short time become adult speakers of their local language (or languages). Children babble, pass through a single and multiword stage, and then start to produce entire sentences that increase in complexity. Exactly what knowledge base, if any, and what mechanisms drive this progression in the language acquisition process is a matter of controversy. The challenge for language acquisition researchers is to reveal how this process unfolds.
Two current approaches to the problem of language acquisition are introduced. One theory of language acquisition follows the theory of Universal Grammar advanced by Noam Chomsky (Chomsky, 1965, 1981, 1995). This is often called the generative approach to language acquisition. This theory takes as a basic assumption that children are ‘hardwired’ with linguistic knowledge that gives them access to structural representations in the absence of experience. The second approach is the usage-based account of language acquisition. Discussion will focus on one particular version of usage-based grammar that has been prominent in the acquisition literature. This is the constructivist approach promoted by Elena Lieven, Michael Tomasello, and others (see Ambridge & Lieven, 2011; Lieven & Tomasello, 2008; Tomasello, 2003). Language acquisition researchers working within this framework argue that children learn sentence structure through experience.
The discussion begins with a consideration of the goals of a linguistic theory and theory of acquisition. Each theory’s perspective on how children acquire syntactic representations is reviewed. The debate on whether children’s sentence representations are hierarchical structures or linear schemas is explored by addressing structure dependence in complex yes/no questions and errors in children’s wh-questions.
2. Goals and Assumptions
In Knowledge of Language, Chomsky proposed that in order to establish how language is represented in the mind/brain of speakers, three questions need to be addressed. The first question asks what constitutes knowledge of language. The second question asks how knowledge of language is acquired, and the third asks how knowledge of language is put to use (Chomsky, 1986). As will become clear, generative and usage-based linguistic theories have different ideas about what constitutes the representation of language, and syntax in particular, in the mind. The theories also depart in their perspective on whether acquisition of language is guided partly by innate knowledge or whether all knowledge of language is learned through experience. This is often known as the ‘nature’ versus ‘nurture’ controversy. Although it is of interest to record how language is used in context, this article restricts its inquiry to the first two questions.
A widely shared assumption is that exposure to language and interaction with speakers in a language community are essential for acquisition to proceed. Speakers of the language, that is, caretakers, siblings and so on, provide linguistic input to the child in the form of utterances and their corresponding meanings. This is known as ‘positive input.’ The fact that positive input is essential for language acquisition to proceed is not disputed. The dispute among language acquisition researchers is whether positive evidence alone is sufficient for children to achieve mastery of the adult grammar.
Putting individual idiosyncrasies or dialectal differences of speakers aside, convergence on the adult grammar means that children turn into speakers who have the same grammatical knowledge; they know its boundaries. That is, they generate the same set of syntactic structures, and share judgements about which structures are grammatical and which are ungrammatical. This raises a provocative question. While positive input informs language learners of the possible sentences and meanings, what linguistic evidence informs children of the outer limits of the grammar, that is, the sentence/meaning pairs that are not permitted in the language? The commonsense answer is that the adult speakers of the language provide this information by correcting children’s ungrammatical sentences.
Corrective feedback is known as ‘negative evidence’ in just those cases when the child is actually told that he or she said something ungrammatical. For example, if a parent were to label a child’s sentence as ungrammatical, by saying “Don’t say ‘I want he go’; say ‘I want him to go,’” and this kind of feedback was consistent, the child would have all the information needed to eliminate the ungrammatical syntactic structure. This would allow children to settle on the adult grammar in a relatively short period of time. However, research has revealed that parents do not provide this kind of explicit correction (Brown & Hanlon, 1970; Marcus, 1993; Morgan & Travis, 1989). Brown and Hanlon (1970) concluded that parents mostly correct their children for truth-value, that is, whether they have said something that is true or not. They found that while some feedback may be provided for mispronunciations and lexical errors, parents rarely correct the grammaticality or interpretation of their children’s utterances. For this reason, generative and usage-based researchers alike have reached a consensus that children do not receive negative evidence. The drawback is that this leaves us with no solution to the issue of how children come to know which sentences are ungrammatical in their language. Two proposals to resolve this problem will be considered.
One proposal offered by child language researchers, and accepted by constructivist language researchers, is to suggest that the information needed to throw out certain kinds of ungrammatical sentences is available in the positive input but not offered in the direct form (i.e., “Don’t say X; say Y”) investigated by Brown and Hanlon (1970). The proposal views children as able to monitor and interpret certain aspects of the positive input that lead them to reconsider their grammatical hypotheses.
It has been suggested that certain speech acts in the child-directed speech, such as expansions, repetitions, confirmation questions, and so on, alert children to their errors (e.g., Hirsh-Pasek, Treiman, & Schneiderman, 1984; Demetras, Post, & Snow, 1986). For example, if a child utters “Don’t put tape in” and the parent expands this with the question “Don’t put the tape in?” the child might realize that he or she had omitted the determiner (see Morgan & Travis, 1989). Several issues arise with the proposal that children are alert to feedback provided in speech acts in the positive input. First, children would need to know that particular speech acts, expansions, for example, are key speech acts to look out for because they contain corrective feedback. Second, children would need to be able to readily identify the different speech acts so that they could make use of the information therein. It is also the case that the parent would have to deliver the speech act consistently, so that the child could utilize the information with certainty. For example, if the parent consistently provided an expansion every time the child produced an ungrammatical utterance, it would be easy for the child to act on this information, and purge the error. Every time children heard an expansion, they would know they needed to fix an ungrammatical utterance. However, parents do not provide consistent feedback (Marcus, 1993; Morgan & Travis, 1989). Parents provide ‘noisy feedback,’ sometimes responding to children’s ungrammatical utterances with an expansion, but sometimes providing expansions (or whatever speech act is in question) to grammatical sentences (Marcus, 1993). The result is that it is difficult for children to interpret such speech acts and to know when to act on them and when to ignore them.
The constructivist literature has been more focused on constraining argument structure errors than on the ungrammaticality of sentences per se. According to constructivist researchers, the frequency of a construction in the positive input is one factor that the child weighs when assessing grammaticality. A construction that is frequent in the input will become ‘entrenched.’ This means that if the child is frequently exposed to a verb used in one argument structure pattern, the child is likely to think any other use is ungrammatical. For example, if a child has heard the verb laugh used only in intransitives, in sentences like Bart laughed, then he or she is likely to think that The clown laughed Bart is ungrammatical, as it has never been heard in this usage (Rowland, 2014). In some cases, this could be the wrong conclusion to draw, but this can be amended with further positive input. In certain cases, however, hearing an expression that is inconsistent with their grammar causes children to purge their own (presumably ungrammatical) use of an argument structure and replace it with the adult one. This is called ‘pre-emption’ (Ambridge & Lieven, 2011; Tomasello, 2003). According to Rowland (2014), pre-emption is relevant only when the two argument structures at issue have the same meaning. The often-discussed example concerns acquiring the argument structure for a verb like disappear, which, unlike many other verbs, cannot have a causative use when it is used in a transitive frame (*The magician disappeared the ball). Suppose the child expects the causative use, but this expectation is not met in the positive input. Instead, the child is exposed to the periphrastic causative The magician made the ball disappear. Exposure to the periphrastic causative would cause the child to adopt this structure and would inhibit use of the simple transitive; that is, the simple transitive frame would be ‘pre-empted’ by the periphrastic causative.
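The entrenchment idea can be made concrete with a toy simulation. The sketch below is illustrative only; the verbs, counts, frame labels, and threshold are invented for exposition and are not drawn from the constructivist literature. The learner tracks how often each verb is heard in each argument structure frame and rejects an unattested frame only once the verb itself is sufficiently frequent in the input.

```python
# Toy illustration of 'entrenchment': a verb's unattested frames are
# rejected only once the verb itself is frequent in the input.
# All names, counts, and the threshold are invented for exposition.
from collections import Counter

class EntrenchmentLearner:
    def __init__(self):
        self.verb_counts = Counter()   # how often each verb has been heard
        self.attested = set()          # (verb, frame) pairs heard in the input

    def hear(self, verb, frame):
        self.verb_counts[verb] += 1
        self.attested.add((verb, frame))

    def seems_grammatical(self, verb, frame, threshold=5):
        if (verb, frame) in self.attested:
            return True
        # Unattested frame: reject it only if the verb is entrenched.
        return self.verb_counts[verb] < threshold

learner = EntrenchmentLearner()
for _ in range(10):                      # 'Bart laughed', and so on
    learner.hear("laugh", "intransitive")

print(learner.seems_grammatical("laugh", "intransitive"))  # True: attested
print(learner.seems_grammatical("laugh", "transitive"))    # False: entrenched
print(learner.seems_grammatical("giggle", "transitive"))   # True: too rare to judge
```

On this toy model, the well-attested laugh blocks the unattested transitive frame (*The clown laughed Bart), while a rarely heard verb remains open to generalization, mirroring the claim that entrenchment strengthens with frequency.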
In sum, the constructivist proposal to reduce productivity of unattested argument structure patterns draws on a confluence of verb semantics, entrenchment, and pre-emption. How these mechanisms extend more generally to eliminate ungrammatical syntactic structures still requires some refinement. For further discussion, see Ambridge and Lieven (2011), Rowland (2014), and Saxton (2010).
Chomsky’s response to the lack of negative evidence in the child’s linguistic input took a different turn. First, he made the observation that children, and speakers of a language in general, seem to know more about their language than they have evidence for in the positive input. This, coupled with the fact that there seems to be no negative evidence, led Chomsky to argue that the child is biologically endowed with abstract linguistic knowledge, ‘Universal Grammar.’ This innate linguistic knowledge is what prevents children from producing certain kinds of ungrammatical sentences and from allowing certain prohibited sentence meanings. In particular, Universal Grammar contains ‘principles’ (also known as ‘constraints’) that limit children’s hypothesis space so that they do not attempt generalizations that would lead them to produce sentences excluded by the adult grammar. This in turn allows faster and more error-free convergence on the adult grammar. It is worth considering an example.
Suppose children knew from the positive input surrounding them that pronouns often substitute for another noun phrase, often a name, that has already been introduced in the sentence. That is, they have come to realize that in a sentence like (1a), the pronoun he can refer to the troll. The pronoun can, of course, also refer to some person who is not mentioned in the sentence but is perhaps salient in the context, but this interpretation is not our concern here. Let us suppose, further, that children’s linguistic experience has also provided evidence that the pronoun he can refer to the troll in sentences like (1b), where the name and the pronoun are in the reverse order. Linguistic input of this kind could lead the child to form the erroneous generalization that a pronoun can always refer to a name that is elsewhere in the sentence. This generalization would lead children to misinterpret a sentence like (1c). As in (1b), the pronoun he comes before the name the troll, but in this case, the pronoun and the name cannot ‘corefer’; they cannot both refer to the troll. However, based on input sentences like (1a) and (1b), logical children would assume that sentences like (1c) can mean that the troll said he himself cleared the obstacles cleanly. And there would be no reason to suppose that a child couldn’t also produce (1c) with this illicit meaning.
On Chomsky’s theory of Universal Grammar, children are endowed with a principle of Universal Grammar that prevents them coming up with the mistaken hypothesis that a pronoun can always refer to a name in the same sentence. The principle, known as Principle C, requires them to pay attention to the position of the pronoun and the name in the hierarchical structure of the sentence, not just to the ordering of the pronoun and the name in the sentence. The particular position of the pronoun relative to the name in the sentence hierarchy is what prevents coreference in (1c). Since the principle is a universal, it should constrain children’s generalizations no matter what language they are acquiring, provided that the language has pronouns, names, and so on.
The next section turns to children’s knowledge of syntax. The perspective of the generative linguistic theory is outlined first, followed by the constructivist perspective on early child representations of syntactic knowledge.
3. Children’s Sentence Representations
On the theory of Universal Grammar (UG), children are ‘language ready’ at birth. The language component, Universal Grammar, is ready to analyze the positive input available from speakers of the surrounding language and to start building the grammar of the local language (English, Mandarin, Hindi, etc.). In a sense, acquiring the syntax is easy, because UG contains a computational system that generates sentence structures. The computational system provides advance knowledge of the potential kinds of elements available in human languages (Noun, Verb, etc.) and the corresponding phrases (Noun Phrase, Verb Phrase, etc.) and combines these together to form sentence representations. Therefore, once the child has figured out what syntactic category a particular sound in the sound stream maps on to, the computational system can use the lexical items to build representations for phrases and sentences.
The representations for the phrases and sentences that children build are hierarchical structures. For example, the sentence Daddy want white milk might be represented by the child as in (2a). The finer details of the tree structure are not important—what is important is that both child and adult representations are hierarchical structures. The child has access to the range of syntactic categories. The sentence-level category is Inflection Phrase (IP) shown at the top of the tree. The child’s representation is not completely adult-like because the information representing a third-person subject and present tense is missing from ‘Infl,’ since the child’s production of the verb is want and not wants. The adult sentence representation with the tense and agreement information complete is shown in (2b). The information for tense and agreement is represented in the Inflection node, and eventually is pronounced on the main verb wants.
Since the theory of UG assumes that children are born with the capacity to represent structures using the same categories and phrase structure as adults, none of this has to be learned. In this sense, there is what is known as ‘continuity’ between the child and adult grammars (cf. Pinker, 1984; Crain & Pietroski, 2001, 2002).
The usage-based approach does not assume continuity between child and adult ‘constructions’ (Saxton, 2010). Early grammars have no abstract syntactic categories. Children have to learn the range of syntactic categories and possible constructions employed in their language from the caretaker input.
The constructivist approach to language acquisition views children’s earliest productions as having no internal structure; they are rote-learned holistic phrases (Lieven & Tomasello, 2008). A phrase like Whassat?, for example, is an unanalyzed word, and is not analyzed as a question with an inverted copula. Children gradually begin to produce multi-word utterances and after considerable exposure to frequently used constructions, start to form generalizations across similar utterances and form what are known as schemas (or templates). The early schemas are known as ‘lexically specific schemas’ because the schema is mostly full of lexical items. At first, there may be just one open position called a ‘slot,’ in which various words sharing the same function may be substituted. For example, children may have accumulated the knowledge that as well as Daddy want milk, other options such as Grandma want milk, or My baby want milk, and so on are also permitted. This list eventually is generalized to a schema: X want milk. At first, the slot may just be ‘X,’ and only later in the course of development does it become identified with the syntactic category ‘NP.’
With accumulating exposure to input, children’s schemas become more abstract, and the number of slots increases. The variable slots may be identified with a function such as THING or ACTION. Over time, the slots become identified with syntactic categories. A hypothetical development is shown in (3), where (3f) might represent the transitive construction in the adult grammar.
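The progression from stored utterances to a lexically specific schema can be illustrated with a minimal sketch. The utterances, the slot symbol ‘X,’ and the filler threshold below are invented for illustration: utterances that differ in exactly one position are collapsed into a frame with a slot, provided enough distinct fillers have been observed in that position.

```python
# Toy sketch of constructivist slot formation: collapse the one varying
# position across otherwise identical utterances into a slot 'X'.
# The utterances and the min_variants threshold are illustrative only.
from collections import defaultdict

def form_schemas(utterances, min_variants=2):
    """Return frames (with one slot 'X') that have been observed with
    at least min_variants distinct fillers in the slot position."""
    frames = defaultdict(set)
    for utt in utterances:
        words = utt.split()
        for i, w in enumerate(words):
            frame = " ".join(words[:i] + ["X"] + words[i + 1:])
            frames[frame].add(w)
    return {frame: sorted(fillers)
            for frame, fillers in frames.items()
            if len(fillers) >= min_variants}

heard = ["Daddy want milk", "Grandma want milk", "Baby want milk"]
print(form_schemas(heard))
# → {'X want milk': ['Baby', 'Daddy', 'Grandma']}
```

Only the subject position varies across the stored utterances, so only the schema X want milk is abstracted; on the constructivist account, the slot would only later be identified with the category NP.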
Although adult schemas incorporate syntactic categories like NP and VP, schemas are not shorthand for hierarchical representations. Schemas are linear representations of constructions in the language. As the usage-based linguist Goldberg (2003) notes, on this theory “A ‘what you see is what you get’ approach to syntactic form is adopted: no underlying levels of syntax or any phonologically empty elements are posited” (Goldberg, 2003, p. 219). The assumptions that have been outlined have a significant impact on the predictions each theory makes for children’s acquisition of syntax.
The next sections review empirical evidence from child language bearing on the nature of children’s sentence representations. Arguments from both theoretical perspectives on whether or not children adopt hierarchical sentence representations will be reviewed. The first topic, often discussed under the heading of ‘structure dependence,’ revolves around children’s acquisition of complex yes/no questions and has received considerable attention in the literature. The discussion of children’s sentence representations then continues with issues that arise in children’s acquisition of wh-questions.
The starting point for this discussion is Chomsky’s claim that children have an innate ‘Universal Grammar’ (UG) that guides language acquisition (Chomsky, 1965, 1981, 1986, 1995). This Universal Grammar endows children with the computational system that is engaged when children represent sentences in their minds. Furthermore, Chomsky argued that in cases when children need to hypothesize a rule to represent a process in the language they are acquiring, that rule must be formulated by referring to positions in the hierarchical syntactic representations provided by the computational system. That is, Chomsky claimed, children’s hypotheses are ‘structure-dependent’ (Chomsky, 1971). Chomsky claimed that structure dependence would drive children’s hypotheses even in cases where the positive input is consistent with alternative hypotheses that might be based on general cognitive mechanisms. To illustrate the claim, Chomsky discussed the case of yes/no questions, although the argument is not limited to yes/no question formation.
In generative linguistics, yes/no questions are derived from declarative sentences. The auxiliary verb or modal is moved in the hierarchical structure to a position higher than the subject NP. This movement is often called subject-aux inversion, but is more accurately termed ‘I to C movement’ in current linguistic theory. The tree in (4a) shows the sentence before I to C movement applies and the tree in (4b) shows that the auxiliary verb is has moved to the C position in the hierarchical structure.
Although Chomsky’s claim was that children would represent the rule in hierarchical terms, such as “Move the auxiliary verb or modal positioned in Infl in the main clause to C,” he pointed out that if children were to use general learning mechanisms to analyze the input sentences, they might well come up with a linear rule such as “Move the first auxiliary verb or modal that you find in the sentence string to the front of the sentence.” This rule is a linear rule because it refers to the order of words by terms such as ‘first’ and ‘front of the sentence’ and so on. This linear rule would, nevertheless, still give the correct result: Is the baby eating a banana? Because almost all of the yes/no questions young children hear in the input are simple ones (not multi-clause ones), the positive input is compatible with either the hierarchical rule or the linear one.
When it comes to more complex structures, the hierarchical hypothesis and the linear hypothesis diverge. When the subject NP is modified by a relative clause, the linear hypothesis yields the wrong result. Consider the sentence: The baby who is smiling is eating a banana, in which who is smiling is the relative clause modifying the subject NP. As the tree structures in (5) show, the structure-dependent rule works as before, moving the auxiliary verb in I to C to yield the question: Is the baby who is smiling eating a banana? But if we were to apply the linear rule to the sentence, the first auxiliary verb encountered in the linear string of words would be the is in the relative clause. If this were moved, the resulting question would be: Is the baby who smiling is eating a banana? Clearly, this is not a grammatical question.
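The divergence between the two hypotheses can be made explicit with a short sketch. The string-based representation and the small auxiliary inventory below are simplifying assumptions: a purely linear rule that fronts the first auxiliary in the word string happens to produce the correct simple question, but produces the ungrammatical string when the subject contains a relative clause.

```python
# Sketch of the structure-independent (linear) rule: "move the FIRST
# auxiliary verb in the word string to the front of the sentence."
# The auxiliary inventory is an illustrative simplification.
AUXILIARIES = {"is", "are", "can", "should"}

def linear_question(sentence):
    """Front the first auxiliary found scanning left to right."""
    words = sentence.rstrip(".").split()
    for i, w in enumerate(words):
        if w in AUXILIARIES:
            fronted = [w.capitalize()] + words[:i] + words[i + 1:]
            return " ".join(fronted) + "?"
    return None

# Simple subject: the linear rule happens to give the right question.
print(linear_question("the baby is eating a banana"))
# → Is the baby eating a banana?

# Complex subject: the first auxiliary is inside the relative clause,
# so the linear rule yields the ungrammatical string.
print(linear_question("the baby who is smiling is eating a banana"))
# → Is the baby who smiling is eating a banana?
```

Because simple yes/no questions dominate the input, both rules generate the same output there; only the complex case exposes the linear rule, which is precisely the question form children never produce.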
An experiment by Crain and Nakayama (1987) tested whether or not children’s hypothesis space is indeed constrained by structure dependence, as Chomsky had proposed. In order to test the structure-dependence proposal, Crain and Nakayama conducted an elicited production experiment, eliciting simple and complex yes/no questions. Thirty children between the ages of 3 and 6 years participated in the experiment. The task was to ask Jabba the Hutt, a creature from Star Wars, questions about ‘earth things.’ If he was able to answer the question correctly, children fed him a frog (his favorite food). The experimental finding was that the complex yes/no questions were quite challenging, especially for the younger group of children, who were 4 and a half years and under. This younger group of children asked adult-like complex yes/no questions 38% of the time. The older group, children over 4 and a half years, were successful in using the adult structure 80% of the time. Although the younger group found the complex yes/no question structure challenging, crucially, they did not ask any questions that suggested they were relying on a structure-independent rule. Children asked ‘restart’ questions like Is the boy who is running fast, is he tall? or ones with auxiliary doubling, such as Is the boy who is running fast is tall?, but never Is the boy who running is tall?, which would reflect the linear hypothesis on which the ‘first’ auxiliary verb moves. Overall, the results were taken to demonstrate adherence to the structure-dependence constraint.
As Crain and Nakayama (1987) pointed out, however, children’s auxiliary doubling questions do not offer data that decides between a structure-dependent rule and one based on linear order. In auxiliary doubling questions, it is not possible to tell which position the fronted auxiliary verb originated in, given that is appeared in both the relative clause and the main clause. In a follow-up experiment, Crain and Nakayama tested 10 children who had made the auxiliary doubling errors in the original experiment. This time, they asked children to form questions from statements such as The boy who can see Mickey Mouse is happy, in which the relative clause contained can and the main clause contained is. Now it would be easy to tell if children were using a linear hypothesis, as the can would be doubled, instead of is, as in Can the boy who can see Mickey Mouse is happy? No child produced questions with can doubled, thereby supporting the proposal that children base their hypotheses on hierarchical structure.
Constructivist language acquisition researchers have argued more recently that Chomsky’s argument is moot, because construction grammars do not represent questions using movement (Ambridge, Rowland, & Pine, 2008; Ambridge & Lieven, 2011). This renders the debate about whether movement rules are based on hierarchical structure or linear order irrelevant. According to Ambridge and Lieven (2011), children learn the complex yes/no question construction based on the input. This is not to say that children hear complex yes/no questions in the input. In fact, these complex questions containing relative clauses are almost entirely absent in child-directed speech. In a search of almost 3 million caretaker utterances in the CHILDES database, MacWhinney (2000, 2004) found only 1 instance of a complex yes/no question. Nevertheless, according to Ambridge et al. (2008) and Ambridge & Lieven (2011), children can learn to produce complex yes/no questions by building on simple ones. They propose that the first step would be to hear sufficient simple yes/no questions like Is the baby eating a banana? to enable construction of the abstract schema in (6).
The next step is to simply substitute a complex NP, such as the baby who is smiling for simple NPs like the baby. To do this, children need to notice that both simple and complex NPs have the same referent (i.e., the baby in this example) and the same distributional properties (Ambridge & Lieven, 2011; Ambridge et al., 2008). So far, there is no empirical data demonstrating that children can do this kind of distributional analysis, however.
In order to make the argument that children are capable of this kind of distributional analysis, Ambridge et al. (2008) and Ambridge and Lieven (2011) turn to findings from computational modeling studies. Lewis and Elman (2002) trained a simple recurrent network to model question formation. Even though they did not give the model complex yes/no questions with two auxiliary verbs, such as Is the baby who is smiling eating a banana?, the model predicted that strings such as “Is the baby who” should always be followed by an auxiliary verb. As Ambridge and Lieven (2011), and Gualmini and Crain (2005) before them, point out, it is possible that what the network learned was that local bigrams like who smiling are unacceptable. This bigram is a sub-string of the ungrammatical structure-independent question Is the baby who smiling is eating a banana? On this account, the fact that the who smiling bigram is not predicted by the model would mean that children would not attempt the ungrammatical complex question form.
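A rough sketch of this bigram-based explanation follows. The miniature corpus is invented, and a real simple recurrent network learns graded transition probabilities rather than a set of attested pairs, but the logic is the same: if a learner records which adjacent word pairs occur in the input, the structure-independent question is blocked because it contains the unattested pair who smiling.

```python
# Toy version of the bigram account: the ungrammatical question is
# rejected because it contains a word pair never seen in the input.
# The two-sentence 'corpus' is invented for illustration.
def attested_bigrams(corpus):
    """Collect all adjacent word pairs occurring in the corpus."""
    pairs = set()
    for sentence in corpus:
        words = sentence.lower().split()
        pairs.update(zip(words, words[1:]))
    return pairs

def all_bigrams_attested(sentence, pairs):
    """True if every adjacent word pair in the sentence was attested."""
    words = sentence.lower().split()
    return all(p in pairs for p in zip(words, words[1:]))

corpus = [
    "is the baby eating a banana",
    "the baby who is smiling is happy",
]
pairs = attested_bigrams(corpus)

# The structure-independent question contains the unattested pair
# ("who", "smiling"), so it fails the local well-formedness check.
print(all_bigrams_attested("is the baby who smiling is eating a banana", pairs))
# → False
```

The check operates purely on adjacent words, with no hierarchical structure, which is exactly the property the next paragraph turns against the proposal.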
But, as Gualmini and Crain (2005) note, if the relative clause is changed from a subject-gap relative clause to an object-gap relative clause, a sequence of words like who smiling can be grammatical. This is shown in (7), where the ‘_’ indicates the object gap in the relative clause. In the object-gap relative clause, smiling begins the subject NP of the relative clause. This suggests that the computational model would also, incorrectly, predict that children are unable to produce such object-gap relative clauses.
It remains to be seen whether or not children can learn the ‘correct’ complex yes/no question structure from distributional analysis. For one thing, this relies on having learned the abstract template in (6), but, in general, constructivist researchers claim that adult-like abstract schemas develop late. It is questionable whether this level of abstract schema would be in place by three to four years of age, when Crain and Nakayama showed children can produce complex questions. Thus far, the claim that children learn the form of complex yes/no questions by building on simple ones in the input has not been demonstrated empirically, so this is research for the future.
There are other experimental data in the literature showing that children manipulate hierarchical structure, rather than the well-formedness of local strings. If children attend to structure, and not to linear strings, then bigrams such as who smiling should not be what guides their acquisition of complex yes/no questions. A study by Gualmini and Crain (2005) presented children with sentences that contained an object gap in the relative clause, ones like (8).
The example in (8) contains negation in cannot and the operator ‘or.’ When negation stands in the structural relationship with ‘or’ in the hierarchical tree structure that is known as ‘c-command,’ a conjunctive entailment arises (cf. Crain, 2012). For example, if we take just the locally well-formed piece He cannot lift the honey or the doughnut, the sentence would mean that he cannot lift the honey and he cannot lift the doughnut. In other words, he can lift neither one. Children have been shown in multiple studies, in English and across languages, to access the conjunctive entailment (Crain, 2012). The conjunctive entailment would emerge if children did not pay attention to the hierarchical structure of the entire sentence in (8) and were to attend just to the restricted part He cannot lift the honey or the doughnut.
In the hierarchical structure for the sentence in (8), negation is inside the relative clause and therefore does not c-command the operator ‘or.’ The result is that the conjunctive entailment does not arise. The sentence means that the Karate Man gives the Pooh Bear he can’t lift (there are two Pooh Bears in the story) one or the other of the honey and the doughnut. However, suppose that children are carrying out distributional analysis and looking at locally well-formed units of words, as claimed by Ambridge et al. (2008). If so, then there is nothing to prevent children from assigning a meaning to the disjunction word or in (8) that combines disjunction with negation, so as to produce the ‘neither’ reading. In this case, children could easily interpret the sentence as meaning The Karate Man will give the Pooh Bear he cannot lift neither the honey nor the doughnut.
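The structural relation at issue can be stated over toy trees. The tree shapes below are simplified assumptions, not full syntactic analyses; the point is only the definition: x c-commands y if neither dominates the other and the first branching node above x dominates y. In the simple sentence, negation c-commands or; with negation embedded in a relative clause, it does not.

```python
# Minimal sketch of c-command on toy constituency trees.
# Labels and tree shapes are illustrative simplifications.
class Node:
    def __init__(self, label, *children):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

    def dominates(self, other):
        return any(c is other or c.dominates(other) for c in self.children)

def c_commands(x, y):
    """x c-commands y iff neither dominates the other and the first
    branching node above x dominates y."""
    if x is y or x.dominates(y) or y.dominates(x):
        return False
    node = x.parent
    while node is not None and len(node.children) < 2:
        node = node.parent
    return node is not None and node.dominates(y)

# 'He cannot lift the honey or the doughnut': negation c-commands 'or',
# so the conjunctive ('neither') entailment arises.
neg1, or1 = Node("not"), Node("or")
simple = Node("S", Node("he"),
              Node("NegP", neg1,
                   Node("VP", Node("lift"),
                        Node("NP", Node("honey"), or1, Node("doughnut")))))

# A structure in the spirit of (8): negation sits inside the relative
# clause of the subject NP, while 'or' is in the main-clause object,
# so negation does NOT c-command 'or' and no entailment arises.
neg2, or2 = Node("not"), Node("or")
relclause = Node("RC", Node("he"), Node("NegP", neg2, Node("lift")))
complex_s = Node("S",
                 Node("NP", Node("the"), Node("bear"), relclause),
                 Node("VP", Node("give"),
                      Node("NP", Node("honey"), or2, Node("doughnut"))))

print(c_commands(neg1, or1), c_commands(neg2, or2))
# → True False
```

The contrast shows why local strings are not enough: the same words cannot lift ... or appear in both sentences, but only the hierarchical configuration determines whether the ‘neither’ reading is available.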
The impossibility of combining the meanings of negation and disjunction in sentences like (8) is another example of structure-dependence. In their experimental study with 3- to 6-year-old children, Gualmini and Crain showed that children analyzed disjunction correctly in sentences like (8). They never assigned the meaning that is consistent with the locally well-formed string He cannot lift the honey or the doughnut. Thus, these results give support to the proposal that children’s sentence representations involve hierarchical syntactic structures. They do not support the idea that children attend to local distributional properties of sentences.
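The structural contrast at issue can be made concrete with toy trees and a simplified definition of c-command (X c-commands Y if X’s parent dominates Y and X does not dominate Y). The node labels and flattened tree shapes below are my own illustration, not a claim about the experimental materials:

```python
# Toy constituency trees with a simplified c-command check.
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def dominates(self, other):
        return other in self.children or any(
            c.dominates(other) for c in self.children)

def find_parent(node, root):
    for c in root.children:
        if c is node:
            return root
        p = find_parent(node, c)
        if p:
            return p
    return None

def c_commands(x, y, root):
    parent = find_parent(x, root)
    return (parent is not None and parent.dominates(y)
            and not x.dominates(y) and x is not y)

# "He cannot lift the honey or the doughnut": negation c-commands 'or',
# so the conjunctive ('neither') entailment arises.
neg, disj = Node("not"), Node("or")
matrix = Node("S", [Node("he"),
                    Node("NegP", [neg, Node("VP", [Node("lift"), disj])])])
print(c_commands(neg, disj, matrix))   # True

# (8): negation sits inside the relative clause while 'or' is in the
# matrix VP, so negation does not c-command 'or' -- no 'neither' reading.
neg2, disj2 = Node("not"), Node("or")
rel = Node("RelC", [Node("he"), Node("NegP", [neg2, Node("lift")])])
matrix2 = Node("S", [Node("NP", [Node("the Pooh Bear"), rel]),
                     Node("VP", [Node("give"), disj2])])
print(c_commands(neg2, disj2, matrix2))  # False
```

The two calls reproduce the two configurations in the text: negation scopes over ‘or’ only when it c-commands it in the tree, not merely when it precedes it in the string.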
The kinds of errors children make with wh-questions, and how generative and constructivist theories explain them, are the next topic of discussion.
One of the earliest child language researchers to use Chomsky’s linguistic theory to predict stages of language acquisition was Roger Brown, a developmental psychologist at Harvard University. Drawing on the transformational theory of syntax of the day, Brown predicted potential stages in children’s acquisition of questions (Brown, 1968). In that theory, wh-questions were derived from a base structure in which an indefinite such as something or someone was first changed to a question word, and then two transformational rules were applied. The first rule moved the question word to the appropriate position in the hierarchical tree structure, and the second rule accomplished subject-aux inversion (or Infl to C movement), as discussed previously. Brown anticipated that children might produce wh-questions that mirrored a partial syntactic derivation, in which one or both of the transformational rules failed to be carried out due to linguistic complexity. As it turned out, children do not produce erroneous wh-questions with the wh-phrase unmoved (e.g., He can ride in what?), but Brown discovered that children do sometimes produce wh-questions that appear to lack subject-aux inversion. These were questions such as What he can ride in?, in which the modal can has not been moved from Infl to the C position, higher than the subject NP he. The existence of such nonadult productions is now well documented, but at the time this was a radical finding, because it revealed that children can produce what Brown termed “ungrammatical creations,” questions that are not a reflection of the parental input to children (Brown, 1968).
Research findings in Stromswold (1990) have documented that, for the most part, children’s wh-questions are adult-like, with subject-aux inversion in place. Stromswold’s investigation examined spontaneous production data from 12 children in the CHILDES database, including the ‘Harvard children’ studied by Brown (Brown, 1973). The study revealed that, when children provided an auxiliary verb or modal, the correct inverted word order for questions was used over 90% of the time. The 10% or so of errors in which children fail to carry out subject-aux inversion (I to C movement) reveal a structure that is consistent with generative linguistic theory, although why children sometimes fail to invert is open to debate.
Since Brown’s seminal study, the rule of subject-aux inversion has also been used to explain another kind of nonadult production, namely the doubling of the auxiliary verb or modal, as observed in Crain and Nakayama’s (1987) study. This also occurs in wh-questions. These auxiliary-doubling wh-questions are ones like What can he can ride in? or What’s this is doing? Researchers working in the generative acquisition framework propose that children correctly carry out subject-aux inversion, moving the auxiliary verb or modal to the correct position in the hierarchical structure, but fail to make the auxiliary verb or modal in the original position silent (see Mayer, Erreich, & Valian, 1978; Guasti & Thornton, 1996; Stromswold, 1990). On this analysis, the child’s syntactic structure is adult-like; the error is one of pronunciation: the child fails to suppress the pronunciation of the modal or auxiliary verb in the unmoved position.
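On this analysis, movement leaves behind a copy that must go unpronounced. A minimal sketch in Python, using a flat word list as a stand-in for the hierarchical structure (my own simplification):

```python
# Toy sketch of the "copy and delete" view of subject-aux inversion:
# the auxiliary is copied to the position above the subject, and the
# copy in the original Infl position is normally left unpronounced.
# Failing to silence that copy yields the doubling error.
def wh_question(wh, subject, aux, vp, silence_original=True):
    original_copy = [] if silence_original else [aux]
    words = [wh, aux, subject] + original_copy + vp
    return " ".join(words).capitalize() + "?"

# adult-like: the original copy of 'can' is silenced
print(wh_question("what", "he", "can", ["ride", "in"]))
# -> What can he ride in?

# child error: both copies of 'can' are pronounced
print(wh_question("what", "he", "can", ["ride", "in"],
                  silence_original=False))
# -> What can he can ride in?
```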
Recall that usage-based accounts do not assume any movement: statements and wh-questions stand in no derivational relationship to each other. Declaratives and wh-questions are separate constructions that children learn from the input. Nothing in the way constructions are built up would lead us to expect children to produce the subject NP and auxiliary verb or modal in the non-inverted order (i.e., What he can ride in?). Instead, usage-based researchers propose that these nonadult wh-questions, which are absent from the adult input, stem from frequency effects.
According to Rowland and Pine (2000), a frame (i.e., ‘schema’) for each wh-word + aux combination must be learned piecemeal from the input. Each of these frames (e.g., what do, where can, why has, etc.) must be heard with sufficient frequency to be added to the inventory of wh-question frames. Lacking sufficient exposure to a specific wh-question frame, children cobble together a wh-question by drawing on constructions already in their grammar. According to Rowland (2007), a child who hasn’t learned the ‘what does’ combination and intends the meaning of the adult question What does he like? could take what and put it together with the declarative he likes to produce the non-inverted wh-question word order What he likes? Similarly, the two schema who can + he can see could be juxtaposed to yield a question with doubling of the auxiliary verb or modal, such as Who can he can see?
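The frame-based proposal can be sketched as a lookup with a fallback: produce from a learned wh-word + aux frame when one exists; otherwise juxtapose chunks already in the inventory, yielding non-inverted or auxiliary-doubled questions. The inventory and fallback rules below are my own illustration, not Rowland and Pine’s implementation:

```python
# Hypothetical inventory of learned wh-word + aux frames.
learned_frames = {("where", "can"), ("what", "do")}

def produce(wh, aux, subject, vp):
    if (wh, aux) in learned_frames:
        words = [wh, aux, subject] + vp   # adult-like inverted order
    else:
        words = [wh, subject, aux] + vp   # wh-word + declarative chunk
    return " ".join(words).capitalize() + "?"

def juxtapose(frame, chunk):
    # overlaying two schemas wholesale, e.g. 'who can' + 'he can see'
    return " ".join(frame + chunk).capitalize() + "?"

print(produce("where", "can", "he", ["go"]))           # Where can he go?
print(produce("what", "can", "he", ["ride", "in"]))    # What he can ride in?
print(juxtapose(["who", "can"], ["he", "can", "see"])) # Who can he can see?
```

The fallback branch and the `juxtapose` function reproduce the two error types discussed in the text; note that, as the next paragraph observes, nothing in this mechanism itself limits which chunks may be combined.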
The proposal that children overlay schema provides a neat account of the nonadult wh-questions children have been observed to produce in both spontaneous and experimental contexts. The challenge is to prevent the overlaying of schema from occurring any time the child is unsure of how to produce a construction. As the proposal stands, the mechanism is extremely powerful, predicting many errors that are not attested in children’s productions.
From a usage-based perspective, the generative proposal fails to account for differences in inversion rates across auxiliary verbs and modals. Proponents of the constructivist theory point out that differences in inversion rates are not expected if children have acquired a general subject-aux inversion rule (Rowland, 2007). Children should carry out subject-aux inversion 100% of the time, for all auxiliary verbs. Generative language researchers are less likely to attribute much importance to differences across auxiliary verbs given that children’s overall inversion rate in wh-questions is over 90% anyway, and it is usual to allow up to 10% errors to be attributed to performance factors (Brown, 1973). A generative researcher may claim that such differences are simply due to the fact that the meaning of individual auxiliary verbs must be learned separately. For usage-based researchers, however, this is evidence that the various wh+auxiliary verb combinations are learned piecemeal, and not by a hierarchically based movement rule.
6. Nature or Nurture Again
This article has introduced two theories of children’s acquisition of syntactic knowledge. On the one hand, Chomsky’s theory of Universal Grammar assumes that children have innate knowledge of the computational system, syntactic categories, and universal principles and parameters. Equipped with this knowledge, the child can compute hierarchical sentence representations and has little difficulty acquiring the syntactic structures of the local language. On this account, children acquire the grammar quickly and in a relatively error-free manner, partly because their hypotheses are constrained by universal principles. On the other hand, the usage-based constructivist theory assumes that the child has no specialized knowledge of language or syntax and must learn everything on the basis of positive input alone. This is a slow process, because children must gradually build up knowledge of the constructions permitted in the language. The constructions are initially lexically specific schema that become more abstract over time; they are linear representations of permissible constructions. The challenge is to demonstrate how children develop the local language without overgenerating and producing sentences that are not part of the adult grammar. Proponents of the constructivist language acquisition research program have been tackling this problem in recent research (cf. Ambridge, 2013; Ambridge, Pine, & Rowland, 2012a, b). The debate over whether child language acquisition is all ‘nurture’ or, in part, a gift from ‘nature’ continues.
- Ambridge, B. (2013). How do children restrict their linguistic generalizations? An (un-) grammaticality judgment study. Cognitive Science, 37, 508–543.
- Ambridge, B., & Lieven, E. (2011). Child language acquisition: Contrasting theoretical approaches. Cambridge, U.K.: Cambridge University Press.
- Ambridge, B., Pine, J., & Rowland, C. (2012a). Semantics versus statistics in the retreat from locative overgeneralisation errors. Cognition, 123, 260–279.
- Ambridge, B., Pine, J., & Rowland, C. (2012b). The roles of verb semantics, entrenchment and morphophonology in the retreat from dative argument structure overgeneralization errors. Language, 88, 45–81.
- Ambridge, B., Rowland, C., & Pine, J. (2008). Is structure dependence an innate constraint? Experimental evidence from children’s complex-question production. Cognitive Science, 32, 222–255.
- Brown, R. (1968). The development of wh questions in child speech. Journal of Verbal Learning and Verbal Behavior, 7, 279–290.
- Brown, R. (1973). A first language: The early stages. London: Allen and Unwin.
- Brown, R., & Hanlon, C. (1970). Derivational complexity and order of acquisition in child speech. In J. R. Hayes (Ed.), Cognition and the development of language (pp. 11–54). New York: Wiley.
- Chomsky, N. (1959). Review of B. F. Skinner’s verbal behavior. In L. Jakobovits & M. Miron (Eds.), Readings in the psychology of language. Englewood Cliffs, NJ: Prentice-Hall.
- Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
- Chomsky, N. (1971). Problems of knowledge and freedom. New York: Pantheon.
- Chomsky, N. (1981). Lectures on government and binding. Dordrecht, The Netherlands: Foris.
- Chomsky, N. (1986). Knowledge of language. New York: Praeger Publishers.
- Chomsky, N. (1995) The minimalist program. Cambridge, MA: MIT Press.
- Crain, S. (2012). The emergence of meaning. New York: Cambridge University Press.
- Crain, S., Gardner, A., Gualmini, A., & Rabbin, B. (2002). Children’s command of negation. In Y. Otsu (Ed.), Proceedings of the Third Tokyo Conference on Psycholinguistics (pp. 71–95). Tokyo, Japan: Hituzi Syobo.
- Crain, S., & Nakayama, M. (1987). Structure dependence in grammar formation. Language, 63, 522–543.
- Crain, S., & Pietroski, P. (2001). Nature, nurture and universal grammar. Linguistics and Philosophy, 24, 139–186.
- Crain, S., & Pietroski, P. (2002). Why language acquisition is a snap. Linguistic Review, 19, 163–183.
- Demetras, M., Post, K., & Snow, C. (1986). Feedback to first language learners: The role of repetitions and clarification questions. Journal of Child Language, 13, 275–292.
- Goldberg, A. (1995). Constructions: A construction grammar approach to argument structure. Chicago: University of Chicago Press.
- Goldberg, A. (2003). Constructions: A new theoretical approach to language. Trends in Cognitive Science, 7, 219–224.
- Gualmini, A., & Crain, S. (2005). The structure of children’s linguistic knowledge. Linguistic Inquiry, 36, 463–474.
- Guasti, M. T., & Thornton, R. (1996). Negation in children’s questions: The case of English. In D. MacLaughlin & S. McEwen (Eds.), Proceedings of the 19th Annual Boston University Conference on Child Language Development (pp. 228–239). Somerville, MA: Cascadilla Press.
- Hirsh-Pasek, K., Treiman, R., & Schneiderman, M. (1984). Brown and Hanlon revisited: Mothers’ sensitivity to ungrammatical forms. Journal of Child Language, 11, 81–88.
- Langacker, R. (1987). Foundations of cognitive grammar. Stanford, CA: Stanford University Press.
- Lewis, J., & Elman, J. (2002). Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. In B. Skarabela, S. Fish, & A. H.-J. Do (Eds.), BUCLD 26: Proceedings of the 26th annual Boston University Conference on Language Development (pp. 359–370). Somerville, MA: Cascadilla Press.
- Lieven, E., & Tomasello, M. (2008). Children’s first language acquisition from a usage-based perspective. In P. Robinson & N. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 168–196). New York: Routledge.
- MacWhinney, B. (2000). The CHILDES project: Tools for analyzing talk. Mahwah, NJ: Lawrence Erlbaum Associates.
- MacWhinney, B. (2004). A multiple process solution to the logical problem of language acquisition. Journal of Child Language, 31, 883–914.
- Marcus, G. (1993). Negative evidence in language acquisition. Cognition, 46, 53–85.
- Mayer, J., Erreich, A., & Valian, V. (1978). Transformations, basic operations and language acquisition. Cognition, 6, 1–13.
- Morgan, J., & Travis, L. (1989). Limits on negative information in language input. Journal of Child Language, 16, 531–552.
- Pinker, S. (1984). Language learnability and language development. Cambridge, MA: Harvard University Press.
- Rowland, C. (2007). Explaining errors in children’s questions. Cognition, 104, 106–134.
- Rowland, C., & Pine, J. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What do children know? Journal of Child Language, 27, 157–181.
- Saxton, M. (2010). Child language: Acquisition and development. London: SAGE.
- Skinner, B. F. (1957). Verbal behavior. Acton, MA: Copley Publishing Group.
- Stromswold, K. (1990). Learnability and the acquisition of auxiliaries (Unpublished doctoral dissertation). MIT, Cambridge, MA.
- Tomasello, M. (2000). Do young children have adult syntactic competence? Cognition, 74, 209–253.
- Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press.