Manual babbling is a linguistic phenomenon that has been observed in deaf children and in hearing children born to deaf parents who have been exposed to sign language. Manual babbles are characterized by repetitive movements confined to a limited area in front of the body, similar to the sign-phonetic space used in sign languages. In their 1991 paper, Petitto and Marentette concluded that between 40% and 70% of deaf children's manual activity can be classified as manual babbling, whereas manual babbling accounts for less than 10% of hearing children's manual activity. Manual babbling appears in both deaf and hearing children learning American Sign Language from 6 to 14 months old (Marschark, 2003).[1][2][3]
Manual babbling should not be confused with movements that are purely motor-driven or that serve common communicative purposes. Babbling occurs during the same period of development in which an infant is establishing a sense of spatial orientation and cognition, which results in arm and hand movements outside of what could be categorized as manual babbling. For example, when an infant waves an arm back and forth, they may be transitioning between uncoordinated behaviors and intentional, voluntary behaviors such as reaching. The frequency of these arm and hand gestures peaks between 5.5 and 9.5 months, around the same time that babbling begins (6 to 9 months).[4]
Babbling is an important step in infants' language acquisition (Chamberlain et al., 1998). Before an infant is able to form their first words, they produce phonological tokens that, while meaningless, conform to the broad rules of syllable structure. Children who have access to spoken language produce vocal babbles, while children who have access to signed language produce manual babbles. In other words, “vocal babbling is ‘triggered’ by the patterned input of a spoken linguistic environment while manual babbling is triggered by the patterned input of a signed linguistic environment” (Cormier et al., 1998, p. 55).
All infants are equipped to detect rhythmic patterns and properties of the linguistic input they receive. Non-hearing infants explore manual gestures (like those made in sign language) in the same way that a hearing child may explore the phonemes of a spoken language. Where hearing children are triggered by the sound patterns they hear, deaf children are more attentive to the movement patterns they see. In their studies, Petitto and Marentette compared the manual babbling of hearing and non-hearing infants and found that non-hearing babies produce more tokens of manual babbles than hearing infants do. However, they did not find a significant difference in the frequency of communicative gestures (such as waving, reaching, and pointing) between hearing and non-hearing infants.
In 1995, Meier and Willerman identified the three primary manual gestures as pointing, reaching, and waving. These common communicative gestures differ from babbles in that they carry meaning (whereas babbles are meaningless). Petitto and Marentette, referencing American Sign Language phonology, defined language-driven manual babbling as signed symbols that have a hand shape, a location, and a movement that must be realized as a change in location, hand shape, or palm orientation (Marentette, 1989).
Differences in the vocal behavior of deaf and hearing children do not appear in the first three stages of vocal development: the phonation stage (0–1 month), the GOO stage (2–3 months), and the expansion stage (4–6 months). The most significant differences begin to appear in the reduplicative babbling stage (7–10 months), during which a hearing infant begins producing marginal babbling and canonical babbling (repetitive consonant-vowel syllables). Deaf infants, like hearing infants, will begin producing marginal babbling, but they rarely proceed to canonical babbling. During the typical reduplicative babbling stage, the vocal activity of deaf children decreases dramatically, indicating that a lack of auditory feedback significantly inhibits deaf children's vocal development (Chamberlain, 1999).[5] It is important to note that while auditory feedback has been shown to be crucial to the development and self-monitoring of spoken language, the role of visual feedback in the development of signed language has yet to be explored (Cutler, 2017).[6]
Furthermore, there is evidence that manual babbling resembles the vocal babbling of hearing children and paves the way for further communication through sign. As a deaf infant develops, the cyclical nature of their manual babbling increases, leading to the production of different hand shapes used in sign language, such as the 5 hand, C hand, and S hand. These hand shapes become more significant if a caretaker receives the infant's babbling and reinforces it using child-directed signing (i.e., sign motherese), the signed counterpart of vocal motherese ("baby talk"). A caretaker's reinforcement conveys to a deaf child that they are producing meaningful signs, in a way analogous to the exchanges between hearing parents and infants learning spoken language.[7][8]
Petitto and Marentette found that children learning sign language may produce their first sign around 8 to 10 months old, while hearing children typically produce their first words around 12 to 13 months old.[9] Despite this slight difference in the onset of language, very few differences have been found between deaf and hearing children in the acquisition of vocabulary and language. Both hearing and deaf children produce babbles in rhythmic, temporally oscillating bundles that are syllabically organized and share phonological properties with the language of fluent adults. In fact, manual babbling is “characterized by identical timing, patterning, structure, and use” as that of vocal babbling (Chamberlain, 1999, p. 18).
Summary Table: Vocal Babbling vs. Manual Babbling

Feature             | Vocal babbling                           | Manual babbling
Trigger             | Patterned input of a spoken environment  | Patterned input of a signed environment
Typical onset       | 6 to 9 months                            | 6 to 14 months
First word or sign  | Around 12 to 13 months                   | Around 8 to 10 months
Organization        | Rhythmic, syllabically organized, meaningless | Rhythmic, syllabically organized, meaningless
In a study of babbles and first signs, Adrianne Cheek, Kearsy Cormier, Christian Rathmann, Ann Repp, and Richard Meier found similarities between the two. Their analysis of the properties of babbles and signs showed that all infants produced a relaxed hand with all fingers extended more often than any other hand shape; the same held for deaf infants' first signs. Infants also displayed downward movements more often than any other movement category for both babbles and signs.[10] Finally, babies demonstrated a preference for one-handed babbles over two-handed ones, and deaf babies maintained this preference by producing more one-handed than two-handed signs. For palm orientation, subjects predominantly babbled or signed with palms down.[10]
Kearsy Cormier, Claude Mauk, and Ann Repp conducted an observational study of the natural behaviors of hearing and deaf infants, using the global approach to coding manual babbling suggested by Meier and Willerman. The two goals of their study were: “(1) Specify the time course of manual babbling in deaf and hearing infants; and (2) Examine the relationship between manual babbling and the onset of communicative gestures” (Cormier, 1998, p. 57). The results support Meier and Willerman's earlier prediction that most early gestural behavior results from motor development in addition to linguistic influences. Moreover, deaf children usually produced more referential gestures, specifically referential pointing, than hearing children, which could be a result of their distinct linguistic environments: pointing is essential for deaf children learning sign language, whereas for hearing children it remains an added gesture not required by the spoken language. The study concludes that, while a child's early communicative gestures are partially determined by linguistic environment, manual babbling is mainly influenced by motor development, which occurs in both deaf and hearing children.[11] This finding differs from the initial research by Petitto and Marentette, which found that manual babbling is mainly influenced by linguistic development. Petitto addressed this in a subsequent study and concluded that the differences stemmed from the different coding methods used: Meier and Willerman's method used a more general definition of manual babbling than Petitto and Marentette had developed (Petitto, 2004).[12]