Prosodic bootstrapping

Prosodic bootstrapping (also known as phonological bootstrapping) in linguistics refers to the hypothesis that learners of a primary language (L1) use prosodic features such as pitch, tempo, rhythm, amplitude, and other auditory aspects of the speech signal as cues to identify other properties of grammar, such as syntactic structure.[1] Acoustically signaled prosodic units in the stream of speech may provide critical perceptual cues by which infants initially discover syntactic phrases in their language.[1] Although these features by themselves are not enough for infants to learn the entire syntax of their native language, they provide cues to various grammatical properties of the language, such as the ordering of heads and complements (identified through stress prominence)[2] and the location of phrase and word boundaries.[3] The prosody of a language is argued to play an initial role in the acquisition of the first language, helping children uncover its syntax, mainly because children are sensitive to prosodic cues at a very young age.[4]

Argument for

The argument for prosodic bootstrapping was first introduced by Gleitman and Wanner (1982), who observed that infants might use prosodic cues (particularly acoustic cues) to discover underlying grammatical information about their native language. These cues (e.g., the intonation contour of a question, the lengthening of a final segment)[1] could help infants divide the speech input into lexical units and, furthermore, place those units into syntactic phrases appropriate to the language.[5]

Prosodic bootstrapping may also help explain how infants segment continuous input. Just like adult speakers, children are exposed to continuous speech. Hearing continuous speech poses a problem for children learning their native language because pauses in speech do not reliably align with word boundaries. As a result, children have to construct word representations from the speech that they hear.[6]

A study conducted by Christophe et al. (1994) showed that infants as young as three days old are sensitive to the acoustic properties of a language: three-day-olds were able to discriminate bisyllabic stimuli containing the same segments depending on whether the stimuli were extracted from within a word or across a word boundary. The durations of the word-initial consonant and the word-final vowel are cues to the existence of a word boundary, which infants may use to learn about syntactic structure.[6]

Another main source of support for the prosodic bootstrapping hypothesis is that infants use prosodic elements to segment speech at a very early age, as early as three days old,[4] at which point they can already differentiate languages on the basis of phonological characteristics alone, and that prosodic cues are used before lexical or syntactic information. This has led to the hypothesis of "bootstrapping from the signal"/"prosodic bootstrapping", which has three main elements:[7]

  1. The syntax of language is correlated with acoustic properties.
  2. Infants can detect and are sensitive to these acoustic properties.
  3. These acoustic properties can be used by infants when processing speech.

Phonological phrases

A phonological phrase boundary indicates how the continuous speech stream is broken up into smaller units, which infants use to pick out and more closely identify individual parts of the sentence.[8] A phonological phrase can contain between four and seven syllables, and can be detected by infants because the edges of phrases are strengthened or lengthened.[9] Various studies have tested whether prosody helps with the acquisition of syntax, morphology, and phonology.[6][10][11][12]

Another acoustic cue to a prosodic boundary is pause duration: pauses tend to be longer at clause boundaries than at word boundaries within a clause.[7] For example, the two sentences below, while superficially similar, have different prosodic structures, which correlate with their different syntactic structures ("..." marks a longer pause in speech):

  1. "The boy met the girl at the teach in" → [The boy]NP ... [met the girl]VP ... [at the teach in]PP
  2. "The boy met the girl and the teacher" → [The boy]NP ... [met the girl and the teacher]VP

By attending to these differences in pause duration, the listener can better distinguish the underlying syntactic structures.
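
The following sketch illustrates, in Python, how a listener (or a model) might exploit pause duration to recover prosodic units of the kind bracketed above. The word timings, the segmentation rule, and the 80 ms threshold are illustrative assumptions, not values taken from the studies cited in this article.

  # Minimal sketch: inferring prosodic-unit boundaries from pause durations.
  # The word timings and the 80 ms threshold are illustrative assumptions.
  from dataclasses import dataclass

  @dataclass
  class Word:
      text: str
      start: float  # seconds
      end: float    # seconds

  def segment_by_pauses(words, pause_threshold=0.08):
      """Group words into prosodic units wherever the silence that follows a
      word exceeds pause_threshold (in seconds)."""
      units, current = [], []
      for i, word in enumerate(words):
          current.append(word.text)
          is_last = i == len(words) - 1
          gap = 0.0 if is_last else words[i + 1].start - word.end
          if is_last or gap > pause_threshold:
              units.append(current)
              current = []
      return units

  # Hypothetical timings for "The boy met the girl at the teach-in", with
  # longer gaps after "boy" and "girl" marking prosodic boundaries.
  words = [
      Word("The", 0.00, 0.12), Word("boy", 0.13, 0.40),
      Word("met", 0.52, 0.70), Word("the", 0.71, 0.78), Word("girl", 0.79, 1.05),
      Word("at", 1.20, 1.28), Word("the", 1.29, 1.36), Word("teach-in", 1.37, 1.80),
  ]
  print(segment_by_pauses(words))
  # [['The', 'boy'], ['met', 'the', 'girl'], ['at', 'the', 'teach-in']]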

Acquiring lexicon

For infants who are learning their native language, it is difficult to extract words from the speech wave because pronounced words are not separated by silence. There are several proposals for lexical acquisition. The first is that children learn words heard in isolation: if an unknown stretch of speech appears between two known words, that stretch must be a new word. The second proposal is that certain cues in the speech signal, such as duration, pitch, and energy, signal the presence of a word boundary.[6]
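
A minimal sketch of the first proposal, under the simplifying assumption that the utterance is already divided into word-sized chunks and that the learner only has to isolate the unknown stretch between known words; the toy lexicon and utterance are invented for illustration.

  # Sketch of the "words in isolation" proposal: if an utterance starts and
  # ends with already-known words, the leftover stretch in between is treated
  # as a candidate new word. The toy lexicon and utterance are assumptions.
  def find_candidate_word(utterance, lexicon):
      """utterance: list of word-sized chunks (an idealization); returns the
      unknown stretch left after peeling known words off both edges."""
      chunks = list(utterance)
      while chunks and chunks[0] in lexicon:
          chunks.pop(0)        # peel a known word off the left edge
      while chunks and chunks[-1] in lexicon:
          chunks.pop()         # peel a known word off the right edge
      return " ".join(chunks) if chunks else None

  lexicon = {"the", "dog", "is", "here"}
  print(find_candidate_word(["the", "dog", "is", "grumpy", "here"], lexicon))
  # 'grumpy' -> hypothesized new word, flanked by known material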

The fact that speech is presented in a continuous stream without pauses only makes the task of acquiring a language more difficult for infants.[13] It has been proposed that prosodic features such as the strength of certain sounds, relative to their location in the word, can be used to break apart and identify fragments within the speech stream, in order to differentiate between potentially ambiguous sequences.[14] In English, for example, the final [d] in the word "bold" tends to be "weak", in that it is not fully released. By contrast, an initial [d] in a word such as "dime" is clearly released, as opposed to its word-final counterpart.[14] This difference between strong and weak sounds may help identify where a sound occurs in a word, whether at the beginning or the end.
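
The sketch below shows one way such an allophonic strength cue could be used: stops annotated as weak (unreleased) are treated as word-final, so a word boundary is hypothesized right after them. The released/unreleased annotations and the segmentation rule are illustrative assumptions, not the procedure of the cited study.

  # Sketch: using a (hypothetical) released/unreleased annotation on stops to
  # hypothesize word boundaries, on the idea that word-final stops tend to be
  # weak (unreleased) while word-initial stops are fully released.
  STOPS = {"p", "t", "k", "b", "d", "g"}

  def words_from_allophones(phones):
      """phones: list of (symbol, released) pairs; splits after any unreleased
      stop and returns the resulting hypothesized words."""
      words, current = [], []
      for symbol, released in phones:
          current.append(symbol)
          if symbol in STOPS and not released:
              words.append(current)   # weak stop -> likely word-final
              current = []
      if current:
          words.append(current)
      return words

  # "bold dime" with an unreleased word-final [d] in "bold" and a released
  # word-initial [d] in "dime" (the annotations are invented).
  phones = [("b", True), ("o", True), ("l", True), ("d", False),
            ("d", True), ("aɪ", True), ("m", True)]
  print(words_from_allophones(phones))
  # [['b', 'o', 'l', 'd'], ['d', 'aɪ', 'm']]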

Studies have shown that phonological boundaries can be interpreted as word boundaries, which further aids the child in the task of developing a lexicon.[8] For example, Millotte et al. (2010) tested 16-month-olds, observing how children use phonological phrase boundaries to constrain lexical access. When infants heard a prosodic boundary, they were able to detect the existence of a word boundary. The experiments used the conditioned head-turn procedure: infants trained to turn their heads for a bisyllabic word responded more often to sentences that contained that word than to sentences that contained both of its syllables separated by a phonological phrase boundary.[11]

Because prosodic boundaries never occur inside a word, they constrain how infants identify words in the speech signal. For example, children can differentiate between sequences such as "dice" and "red ice", even though they are phonologically similar, because a prosodic boundary will not appear in the middle of a word *(d][ice) but only around it ([dice]).[14]

Children use phonological phrase boundaries to constrain lexical access: they infer the existence of a word boundary given a prosodic boundary. If two sequences differ in prosody while being made up of identical segments (pay per vs. paper), children treat them as different sequences. Studies measuring prosodic cues to phonological phrases have been carried out in a variety of typologically different languages, supporting the idea that phonological phrases could aid lexical acquisition universally.[11]
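
A minimal sketch of boundary-constrained lexical access: a candidate word is accepted only if it does not straddle a phonological phrase boundary, so "paper" is ruled out when a boundary falls between "pay" and "per". The toy lexicon and boundary positions are assumptions for illustration.

  # Sketch: lexical access constrained by phonological phrase boundaries.
  # A candidate word is accepted only if it does not straddle a boundary.
  def possible_words(syllables, boundaries, lexicon):
      """syllables: list of syllables; boundaries: set of positions i, meaning
      a phrase boundary falls between syllables[i-1] and syllables[i].
      Returns (start_index, word) pairs for entries that fit without crossing
      a boundary."""
      hits = []
      for start in range(len(syllables)):
          for end in range(start + 1, len(syllables) + 1):
              if any(start < b < end for b in boundaries):
                  continue           # candidate would straddle a boundary
              candidate = tuple(syllables[start:end])
              if candidate in lexicon:
                  hits.append((start, lexicon[candidate]))
      return hits

  # Toy lexicon keyed by (idealized) syllable sequences.
  lexicon = {("pay",): "pay", ("per",): "per", ("pay", "per"): "paper"}
  syllables = ["pay", "per"]

  print(possible_words(syllables, set(), lexicon))   # no boundary
  # [(0, 'pay'), (0, 'paper'), (1, 'per')]
  print(possible_words(syllables, {1}, lexicon))     # boundary between syllables
  # [(0, 'pay'), (1, 'per')]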

Acquiring syntax

In addition to helping identify lexical items, a key element of prosodic bootstrapping is the use of prosodic cues to acquire syntactic knowledge about the language.[9] Because prosodic phrase boundaries are correlated with syntactic boundaries, listeners can determine the syntactic category of a word using only prosodic boundary information. Christophe et al. (2008) demonstrated that adults could use prosodic phrases to determine the syntactic category of ambiguous words: listeners were presented with two sentences containing the ambiguous word [mɔʀ], which could belong either to a verb category ("mord", "[it] bites") or to a noun category ("mort", the adjective "dead").[9]

Category           Sentence                     Translation
Verb               [le petit chien] [mord...]   [the little dog] [bites...]
Noun (adjective)   [le petit chien mort...]     [the little dead dog...]

The table above depicts the two sentences heard by French-speaking adults in Christophe et al. (2008), where [mɔʀ] ("mord"/"mort") is the phonetically ambiguous word and the brackets represent phonological phrase boundaries.[9] Using the position of the prosodic boundaries, adults were able to determine which category the ambiguous word [mɔʀ] belonged to, since the word is assigned to a different phonological phrase depending on its syntactic category and meaning in the sentence.
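
A much-simplified sketch of this disambiguation: if the ambiguous form begins its own phonological phrase it is read as the verb "mord", and if it stays inside the phrase containing the noun it is read as the adjective "mort". The rule and the continuations of the example sentences are illustrative assumptions, not the experimental procedure.

  # Simplified heuristic for the [mɔʀ] example: a phrase-initial "mor" is read
  # as the verb "mord"; a phrase-internal "mor" (after the noun) is read as
  # the adjective "mort". Phrasing is supplied by hand.
  def disambiguate_mor(phonological_phrases):
      """phonological_phrases: list of phrases, each a list of words;
      'mor' stands in for the ambiguous sound [mɔʀ]."""
      for phrase in phonological_phrases:
          if "mor" in phrase:
              if phrase[0] == "mor":
                  return "verb ('mord', it bites)"   # begins its own phrase
              return "adjective ('mort', dead)"      # inside the noun's phrase
      return "not found"

  # Hypothetical continuations of the two sentences from the table above.
  print(disambiguate_mor([["le", "petit", "chien"], ["mor", "la", "balle"]]))
  # verb ('mord', it bites)
  print(disambiguate_mor([["le", "petit", "chien", "mor"], ["est", "parti"]]))
  # adjective ('mort', dead)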

An important tool for acquiring syntax is the use of function words (e.g. articles, verbal morphemes, prepositions) to mark syntactic constituent boundaries.[9] Function words occur frequently in the language and generally appear at the edges of prosodic units. Because of their high frequency in the input, and because they tend to be only one or two syllables long, infants are able to pick out these function words when they occur at the edges of a prosodic unit. In turn, function words can help learners determine the syntactic category of neighboring words (e.g., learning that the word "the" [ðə] introduces a noun phrase, and that suffixes such as "-ed" attach to verbs).[9] For example, in the sentence "The turtle is eating a pigeon", the function words "the" and the auxiliary "is" give children a better sense of where prosodic boundaries fall, yielding a division such as [The turtle][is eating][a pigeon], where the brackets indicate boundaries. As a result, infants tend to look out for these words to better identify the beginnings and ends of prosodic units.[9] In English, articles like "the" or "a" can only introduce a noun phrase; one would never hear a sentence such as "The *destroy was widespread". Likewise, verbal morphemes (e.g. past tense "-ed" [d]/[t], progressive "-ing" [ɪŋ], auxiliary "is" [ɪz]) signal that the word they attach to or accompany must be a verb (e.g. *"I saw that he happyed yesterday").
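
A toy illustration of function words as signposts to neighboring categories, using a few hand-written rules (articles introduce nouns, auxiliaries and verbal suffixes mark verbs). The rule set and word lists are assumptions for illustration only; they are not a description of what infants actually compute.

  # Toy rules: articles introduce nouns, auxiliaries and verbal suffixes mark
  # verbs. The rule set and example sentence are assumptions for illustration.
  ARTICLES = {"the", "a", "an"}
  AUXILIARIES = {"is", "was", "are", "were"}

  def guess_categories(words):
      guesses = []
      for i, word in enumerate(words):
          prev = words[i - 1] if i > 0 else None
          if word in ARTICLES or word in AUXILIARIES:
              guesses.append((word, "function word"))
          elif prev in ARTICLES:
              guesses.append((word, "noun (follows an article)"))
          elif prev in AUXILIARIES or word.endswith(("ing", "ed")):
              guesses.append((word, "verb (auxiliary before it / verbal suffix)"))
          else:
              guesses.append((word, "unknown"))
      return guesses

  for word, guess in guess_categories("the turtle is eating a pigeon".split()):
      print(f"{word:8s} -> {guess}")
  # turtle -> noun, eating -> verb, pigeon -> noun; the/is/a -> function words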

In a study by de Carvalho et al. (2016), preschool children were tested, and it was shown that by the age of 4 prosody is used online, in real time, to constrain the syntactic analysis of sentences. The children in the experiments interpreted the target word as a noun when it appeared in a sentence with a prosodic structure typical of nouns, and as a verb when it appeared in a sentence with a prosodic structure typical of verbs, showing that by age 4 children use phrasal prosody to determine the syntactic structure of sentences.[12]

Linguistic rhythm

Stress

Rhythm, the timing and emphasis of syllables, is an important aspect of prosody and varies from language to language.[15] Languages are grouped into different categories based on their rhythm, primarily stress-timed, syllable-timed, and mora-timed.[16] Infants around 6 months of age have been shown to differentiate between languages solely on the basis of these rhythmic differences. More specifically, by 2 months of age infants form broad categories of rhythmic structure, distinguishing the native class from nonnative classes.[16] Before reaching 2 months, infants can distinguish between languages of any class, but by 2 months they can only distinguish languages across the native/nonnative divide. For example, English-learning infants will have a hard time differentiating English and Dutch (since both are stress-timed), but will be able to distinguish Russian (a stress-timed language) from Japanese (a mora-timed language).[15] By 2 months, however, an English-learning baby will group syllable-timed and mora-timed languages into one "nonnative" class, and will thus have a hard time differentiating languages such as French (syllable-timed) and Japanese (mora-timed).[15] This rhythmic variation is also a useful tool for bilingual infants, acting as a strong cue for differentiating the languages being learned.[17]
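
Ramus, Nespor and Mehler (1999), cited above, proposed that these rhythm classes correlate with simple duration statistics: %V, the proportion of utterance time occupied by vocalic intervals, and ΔC, the variability of consonantal intervals. The sketch below computes both from hand-annotated intervals; the durations are invented for illustration.

  # Duration statistics proposed by Ramus et al. (1999): %V (share of utterance
  # time in vocalic intervals) and ΔC (standard deviation of consonantal
  # interval durations). Stress-timed languages tend toward low %V and high ΔC.
  # The interval durations below are invented for illustration.
  from statistics import pstdev

  def rhythm_metrics(intervals):
      """intervals: list of (kind, duration) pairs, kind in {'V', 'C'},
      durations in seconds. Returns (%V, ΔC)."""
      vowel_time = sum(d for kind, d in intervals if kind == "V")
      total_time = sum(d for _, d in intervals)
      delta_c = pstdev([d for kind, d in intervals if kind == "C"])
      return 100 * vowel_time / total_time, delta_c

  # Hypothetical utterances: varied consonant clusters (English-like) versus
  # near-uniform CV alternation (Japanese-like).
  english_like = [("C", 0.12), ("V", 0.08), ("C", 0.21), ("V", 0.06),
                  ("C", 0.05), ("V", 0.10), ("C", 0.18), ("V", 0.07)]
  japanese_like = [("C", 0.07), ("V", 0.11), ("C", 0.08), ("V", 0.12),
                   ("C", 0.07), ("V", 0.10), ("C", 0.08), ("V", 0.11)]

  for name, utt in [("English-like", english_like), ("Japanese-like", japanese_like)]:
      pv, dc = rhythm_metrics(utt)
      print(f"{name}: %V = {pv:.1f}, ΔC = {dc:.3f}")
  # English-like: lower %V, higher ΔC; Japanese-like: higher %V, lower ΔC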

Detecting head direction

The question of whether the head-direction parameter can be detected using prosodic cues has been tested with French babies listening to Turkish sentences,[2] in order to determine whether 6- to 12-week-old babies are sensitive to prosodic prominence in speech. Setting the head-direction parameter allows infants to acquire the hierarchical branching structure of a particular language, which determines whether the language is left-headed (right-branching) or right-headed (left-branching).[1] In this experiment (Christophe et al. 2003), 6- to 12-week-old babies listened to modified "nonsense" sentences (the modified French and modified Turkish sentences in the table below) that were neither French nor Turkish, differing only in that the Turkish-based sentences were head-final and the French-based sentences were head-initial. The reasoning is that infants might be able to detect prominence within phonological phrases, since prominence follows a systematic pattern across languages: head-initial languages such as French have prominence on the right edge of the phonological phrase, while head-final languages such as Turkish have prominence on the left edge.[2]

These nonsense sentences were created to eliminate any non-prosodic interference (e.g. phonological differences, different numbers of syllables), so that babies could only differentiate between the two languages based on the prosodic prominence in the sentences.

Language           Sentence (two phonological phrases)
French             Le grand orang-outang était énervé
Turkish            Yeni kitabɪmɪ almak istiyor
Modified French    leplem peleplem epe pemelse
Modified Turkish   jeme pepepeme elmep espejel

The table above depicts the sentences heard by the French babies (the French sentence translates as "The large orangutan was nervous"); in the stimuli, stress and prominence fell at the right edge of each phonological phrase in the French and modified French sentences and at the left edge in the Turkish and modified Turkish sentences[2] (Christophe et al. 2003). As predicted, French babies tended to prefer the modified nonsense French phrases, which they could distinguish solely on the basis of the prosodic prominence associated with the setting of the head-direction parameter.
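
A minimal sketch of how the prominence pattern described above could, in principle, be turned into a guess about head direction: phrases with prominence near their right edge vote for a head-initial setting, phrases with prominence near their left edge vote for a head-final setting. The voting rule and the phrase annotations are illustrative assumptions, not the infants' actual computation.

  # Sketch: phrases with prominence near their right edge vote for a
  # head-initial grammar, phrases with prominence near their left edge vote
  # for a head-final grammar. Phrase annotations are invented.
  def guess_head_direction(phrases):
      """phrases: list of (syllable_count, prominent_index) pairs, where
      prominent_index counts from 0 at the left edge of the phrase."""
      votes_initial = votes_final = 0
      for n_syllables, prominent in phrases:
          if prominent >= n_syllables / 2:
              votes_initial += 1   # prominence on the right -> head-initial
          else:
              votes_final += 1     # prominence on the left  -> head-final
      return "head-initial" if votes_initial >= votes_final else "head-final"

  # French-like input: prominence near the end of each phonological phrase.
  print(guess_head_direction([(6, 5), (4, 3), (5, 4)]))   # head-initial
  # Turkish-like input: prominence near the beginning of each phrase.
  print(guess_head_direction([(5, 0), (4, 1), (6, 0)]))   # head-final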

Jusczyk et al. (1992) tested 9-month-olds and showed that infants are sensitive to acoustic correlates of major phrasal units present in the prosody of English sentences. The prosodic markers in the input are the lengthening of the syllable that precedes a major phrasal boundary and declines in fundamental frequency.[10]

Computational modeling

Several computational language models have been used to show that, in simulation, prosody can help children acquire syntax.[18][19]

In one study, Gutman et al. (2015) built a computational model that used prosodic structure and function words jointly to determine the syntactic categories of words. The model successfully assigned syntactic labels to prosodic phrases, using phrasal prosody to determine phrase boundaries and the function words at their edges for classification. The study presented a model of how early syntax acquisition is possible with the help of prosody: children access phrasal prosody and pay attention to words placed at the edges of prosodic boundaries. The idea behind the computational implementation is that prosodic boundaries signal syntactic boundaries, and function words are used to label the prosodic phrases. For example, the sentence "She's eating a cherry" has the prosodic structure [She's eating] [a cherry], whose syntactic skeleton is [VN NP] (VN stands for verbal nucleus, a phrase containing a verb and adjacent words such as auxiliaries and subject pronouns). Here, children may use their knowledge of function words and prosodic boundaries to create an approximation of the syntactic structure.[18]
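
A much-simplified sketch of the idea behind this model: prosodic boundaries delimit phrases, and the function word at a phrase's left edge is used to label the phrase (a determiner edge suggests NP, a pronoun or auxiliary edge suggests VN). The word lists and labeling rule are toy assumptions, not the actual probabilistic model of Gutman et al. (2015).

  # Much-simplified sketch of labeling prosodic phrases by their edge function
  # word: a determiner edge suggests NP, a pronoun/auxiliary edge suggests VN
  # (verbal nucleus). Word lists and rules are toy assumptions, not the
  # probabilistic model of Gutman et al. (2015).
  DETERMINERS = {"a", "an", "the", "his", "her"}
  PRONOUNS_AND_AUXILIARIES = {"she", "he", "she's", "he's", "is", "was", "can"}

  def label_prosodic_phrases(phrases):
      """phrases: list of prosodic phrases, each a list of words.
      Returns a skeleton of phrase labels based on edge function words."""
      labels = []
      for phrase in phrases:
          edge = phrase[0].lower()
          if edge in DETERMINERS:
              labels.append("NP")
          elif edge in PRONOUNS_AND_AUXILIARIES:
              labels.append("VN")
          else:
              labels.append("?")
      return labels

  # "She's eating a cherry" with the prosodic phrasing given in the text above.
  print(label_prosodic_phrases([["She's", "eating"], ["a", "cherry"]]))
  # ['VN', 'NP']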

In a study by Pate and Goldwater (2011), which presented a computational language model, it was shown that acoustic cues can help determine syntactic structure when they are used together with lexical information. Combining acoustic cues with lexical cues may provide children with useful initial information about the location of syntactic phrases, which supports the prosodic bootstrapping hypothesis.[19]

Criticism

A key criticism of bootstrapping theories in general is that these mechanisms (whether syntactic, semantic, or prosodic) serve mainly as a starting point for learning the language.[5] That is, bootstrapping mechanisms are useful only up to a certain point in infants' linguistic development, so some other mechanism may be needed later on, since bootstrapping mechanisms primarily rely on information that is not controlled for "cross-linguistic variation" (information that varies from language to language).[5]

Regarding prosodic bootstrapping in particular, there is doubt about how accurately prosodic phrases map onto syntactic structure.[5] That is, prosodic phrases do not always correspond to syntactic constituents, and phrases with identical syntactic structure can have different possible prosodic structures. In the sentence "The cat chased the rat that ate the cheese", the prosodic structure would resemble:

[The cat] [chased the rat] [that ate the cheese]

However, the prosodic unit [chased the rat] is not a syntactic constituent, demonstrating that not every prosodic unit is a syntactic unit. A language may therefore not provide a one-to-one mapping from prosodic information to linguistic units, and prosody does not give children direct and systematic information about linguistic structure.[1]

Jusczyk (1997) argued that most researchers who accept this theory assume that children draw on "a range of information available in the speech signal that extends beyond prosody",[20] further explaining that relying on prosodic information alone is not enough to learn the structure of the language.

References

  1. ^ a b c d e Lust, Barbara (2006). Child Language: Acquisition and Growth. Cambridge, United Kingdom: Cambridge University Press. p. 290. ISBN 978-0-521-44922-9.
  2. ^ a b c d Christophe, Anne; Nespor, Marina; Guasti, Maria; Ooyen, Brit (2003). "Prosodic structure and syntactic acquisition: the case of the head-direction parameter". Developmental Science. 6 (2): 211–220. doi:10.1111/1467-7687.00273.
  3. ^ Christophe, Anne; Guasti, Teresa; Nespor, Marina; Dupoux, Emmanuel; Ooyen, Brit V. (1997). "Reflections on Phonological Bootstrapping: Its Role for Lexical and Syntactic Acquisition". Language and Cognitive Processes. 12 (5–6): 585–612. CiteSeerX 10.1.1.554.9654. doi:10.1080/016909697386637.
  4. ^ a b Christophe, Anne; Mehler, Jacques; Sebastián-Gallés, Núria (2001). "Perception of Prosodic Boundary Correlates by Newborn Infants". Infancy. 2 (3): 385–394. CiteSeerX 10.1.1.535.5403. doi:10.1207/s15327078in0203_6.
  5. ^ a b c d Höhle, Barbara (2009). "Bootstrapping mechanisms in first language acquisition" (PDF). Linguistics. 47 (2): 359–382. doi:10.1515/ling.2009.013. Archived from the original (PDF) on 2014-10-28. Retrieved 2016-11-03.
  6. ^ a b c d Christophe, Anne; Dupoux, Emmanuel; Bertoncini, Josiane; Mehler, Jacques (1994-03-01). "Do infants perceive word boundaries? An empirical study of the bootstrapping of lexical acquisition". The Journal of the Acoustical Society of America. 95 (3): 1570–1580. doi:10.1121/1.408544. ISSN 0001-4966.
  7. ^ a b Soderstrom, Melanie; Seidl, Amanda; Kemler Nelson, Deborah; Jusczyk, Peter (2003). "The prosodic bootstrapping of phrases: Evidence from prelinguistic infants". Journal of Memory and Language. 49 (2): 249–267. doi:10.1016/s0749-596x(03)00024-x.
  8. ^ a b Gout, Ariel; Christophe, Anne; Morgan, James L. (2004). "Phonological phrase boundaries constrain lexical data access II. Infant data". Journal of Memory and Language. 51 (4): 548–567. doi:10.1016/j.jml.2004.07.002.
  9. ^ a b c d e f g Christophe, Anne; Millotte, Séverine; Bernal, Savita; Lidz, Jeffrey (2008). "Bootstrapping Lexical and Syntactic Acquisition". Language and Speech. 51 (1–2): 61–75. doi:10.1177/00238309080510010501. PMID 18561544.
  10. ^ a b Jusczyk, P. W.; Hirsh-Pasek, K.; Nelson, D. G.; Kennedy, L. J.; Woodward, A.; Piwoz, J. (1992-04-01). "Perception of acoustic correlates of major phrasal units by young infants". Cognitive Psychology. 24 (2): 252–293. doi:10.1016/0010-0285(92)90009-q. ISSN 0010-0285. PMID 1582173.
  11. ^ a b c Millotte, Séverine; Morgan, James; Margules, Sylvie; Bernal, Savita; Dutat, Michel; Christophe, Anne (2010-01-01). "Phrasal prosody constrains word segmentation in French 16-month-olds". Journal of Portuguese Linguistics. 10 (1): 67. doi:10.5334/jpl.101. ISSN 1645-4537.
  12. ^ a b de Carvalho, Alex; Dautriche, Isabelle; Christophe, Anne (2016-03-01). "Preschoolers use phrasal prosody online to constrain syntactic analysis". Developmental Science. 19 (2): 235–250. doi:10.1111/desc.12300. ISSN 1467-7687. PMID 25872796.
  13. ^ Christophe, Anne; Dupoux, Emmanuel (1996). "Bootstrapping lexical acquisition: The role of prosodic structure". The Linguistic Review. 13 (3–4): 383–412. doi:10.1515/tlir.1996.13.3-4.383.
  14. ^ a b c Mattys, Sven L.; Jusczyk, Peter W. (2001). "Do Infants Segment Words or Recurring Contiguous Patterns". Journal of Experimental Psychology. 27 (3): 644–655. CiteSeerX 10.1.1.527.9150. doi:10.1037/0096-1523.27.3.644.
  15. ^ a b c Ramus, Franck; Nespor, Marina; Mehler, Jacques (1999). "Correlates of linguistic rhythm in the speech signal" (PDF). Cognition. 73 (3): 265–292. doi:10.1016/s0010-0277(99)00058-x. PMID 10585517.
  16. ^ a b Mazuka, Reiko (2007). "The Rhythm-based Prosodic Bootstrapping Hypothesis of Early Language Acquisition: Does It Work for Learning for All Languages?". Gengo Kenkyu. 132: 1–13.
  17. ^ Bosch, Laura; Sebastián-Gallés, Núria (1997). "Native-language recognition abilities in 4-month-old infants from monolingual and bilingual environments". Cognition. 65 (1): 33–69. doi:10.1016/s0010-0277(97)00040-1. PMID 9455170.
  18. ^ a b Gutman, Ariel; Dautriche, Isabelle; Crabbé, Benoît; Christophe, Anne (2015-07-03). "Bootstrapping the Syntactic Bootstrapper: Probabilistic Labeling of Prosodic Phrases". Language Acquisition. 22 (3): 285–309. doi:10.1080/10489223.2014.971956. ISSN 1048-9223.
  19. ^ a b Pate, John K.; Goldwater, Sharon (2011-01-01). Unsupervised Syntactic Chunking with Acoustic Cues: Computational Models for Prosodic Bootstrapping. Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics. CMCL '11. Stroudsburg, PA, USA: Association for Computational Linguistics. pp. 20–29. ISBN 9781932432954.
  20. ^ Jusczyk, Peter W. (1997-01-10). The Discovery of Spoken Language. A Bradford Book.