Cued speech
Creator: R. Orin Cornett
Created: 1966
Setting: Deaf or hard-of-hearing people
Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phoneme-based system that makes traditionally spoken languages accessible by using a small number of handshapes, known as cues (representing consonants), in different locations near the mouth (representing vowels) to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips, allowing people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.
Cued speech was invented in 1966 by R. Orin Cornett at Gallaudet College, Washington, D.C.[1] After discovering that children with prelingual and profound hearing impairments typically have poor reading comprehension, he developed the system with the aim of improving the reading abilities of such children through better comprehension of the phonemes of English. At the time, some were arguing that deaf children earned these lower marks because they had to learn two different systems: American Sign Language (ASL) for person-to-person communication and English for reading and writing.[2] As many sounds look identical on the lips (such as /p/ and /b/), the hand signals introduce a visual contrast in place of the formerly acoustic contrast. Cued speech may also help people who hear incomplete or distorted sound—according to the National Cued Speech Association at cuedspeech.org, "cochlear implants and Cued Speech are perfect partners".[3]
Since cued speech is based on making sounds visible to the hearing impaired, it is not limited to use in English-speaking nations. Because of the demand for use in other languages and countries, by 1994 Cornett had adapted cueing to 25 other languages and dialects.[1] Originally designed to represent American English, the system was adapted to French in 1977. Cued speech has since been adapted to approximately 60 languages and dialects, including six dialects of English. For tonal languages such as Thai, the tone is indicated by the inclination and movement of the hand. For English, cued speech uses eight different hand shapes and four different positions around the mouth.
Though to a hearing person, cued speech may look similar to signing, it is not a sign language; nor is it a manually coded system for a spoken language. Rather, it is a manual modality of communication for representing any language at the phonological level.
A manual cue in cued speech consists of two components: hand shape and hand position relative to the face. Hand shapes distinguish consonants and hand positions distinguish vowels. A hand shape and a hand position (a "cue"), together with the accompanying mouth shape, make up a CV unit—a basic syllable.[4]
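The pairing described above can be sketched in code. This is a conceptual illustration only: the consonant-to-handshape and vowel-to-position groupings below are hypothetical stand-ins, not the actual American English cue chart.

```python
# Illustrative sketch: a cue pairs a consonant handshape with a vowel
# position to encode one CV syllable. Groupings are hypothetical,
# NOT the real cue chart.
HANDSHAPE = {  # consonant phoneme -> handshape number (illustrative)
    "d": 1, "p": 1,
    "k": 2, "v": 2,
    "h": 3, "s": 3,
    "b": 4, "n": 4,
}

POSITION = {  # vowel phoneme -> placement near the mouth (illustrative)
    "ee": "mouth",
    "ah": "chin",
    "oo": "throat",
    "ay": "side",
}

def cue_syllable(consonant, vowel):
    """Return the (handshape, position) pair for one CV syllable."""
    return (HANDSHAPE[consonant], POSITION[vowel])

def cue_word(syllables):
    """Cue a word given as a list of (consonant, vowel) syllables."""
    return [cue_syllable(c, v) for c, v in syllables]

# A two-syllable word: /b/+/ah/, then /n/+/ee/
print(cue_word([("b", "ah"), ("n", "ee")]))  # [(4, 'chin'), (4, 'mouth')]
```

Note how phonemes that look different on the lips (here /d/ and /p/) may share a handshape: the mouth shape itself supplies the remaining contrast, which is why eight handshapes and four positions suffice.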
Cuedspeech.org lists 64 different dialects to which CS has been adapted.[5] CS is adapted to a language by surveying the language's phoneme inventory and identifying which phonemes look similar when pronounced, and thus need a hand cue to differentiate them.
Cued speech is based on the hypothesis that if all the sounds in the spoken language looked clearly different from each other on the lips of the speaker, people with a hearing loss would learn a language in much the same way as a hearing person, but through vision rather than audition.[6][7]
Literacy is the ability to read and write proficiently, which allows one to understand and communicate ideas so as to participate in a literate society.
Cued speech was designed to help eliminate the difficulties of English language acquisition and literacy development in children who are deaf or hard-of-hearing. Research shows that accurate and consistent cueing with a child can help in the development of language, communication and literacy, but its importance and use are debated. Studies address the issues behind literacy development,[8] traditional deaf education, and how using cued speech affects the lives of deaf and HOH children.
Cued speech does indeed achieve its goal of distinguishing phonemes received by the learner, but there is some question of whether it is as helpful to expression as it is to reception. An article by Jacqueline Leybaert and Jesús Alegría discusses how children who are introduced to cued speech before the age of one keep pace with their hearing peers in receptive vocabulary, though expressive vocabulary lags behind.[9] The writers suggest additional, separate training in oral expression if such is desired. More importantly, this reflects the purpose of cued speech—to adapt children who are deaf or hard-of-hearing to a hearing world—as such gaps between expression and reception are less commonly found in children with a hearing loss who are learning sign language.[9]
In her paper "The Relationship Between Phonological Coding And Reading Achievement In Deaf Children: Is Cued Speech A Special Case?" (1998), Ostrander notes, "Research has consistently shown a link between lack of phonological awareness and reading disorders (Jenkins & Bowen, 1994)" and discusses the research basis for teaching cued speech as an aid to phonological awareness and literacy.[10] Ostrander concludes that further research into these areas is needed and well justified.[11]
The editor of the Cued Speech Journal reports that "Research indicating that Cued Speech does greatly improve the reception of spoken language by profoundly deaf children was reported in 1979 by Gaye Nicholls, and in 1982 by Nicholls and Ling."[12]
In the book Choices in Deafness: A Parents' Guide to Communication Options, Sue Schwartz writes on how cued speech helps a deaf child recognize pronunciation. The child can learn how to pronounce words such as "hors d'oeuvre" or "tamale" or "Hermione" that have pronunciations different from how they are spelled. A child can learn about accents and dialects. In New York, coffee may be pronounced "caw fee"; in the South, the word friend ("fray-end") can be a two-syllable word.[13]
The topic of deaf education has long been filled with controversy. Two strategies exist for teaching the deaf: an aural/oral approach and a manual approach. Those who use aural-oralism believe that children who are deaf or hard of hearing should be taught through the use of residual hearing, speech and speechreading. Those promoting a manual approach believe the deaf should be taught through the use of signed languages, such as American Sign Language (ASL).[14]
Within the United States, proponents of cued speech often discuss the system as an alternative to ASL and similar sign languages, although others note that it can be learned in addition to such languages.[15] For the ASL-using community, cued speech is a unique potential component for learning English as a second language. Within bilingual-bicultural models, cued speech does not borrow or invent signs from ASL, nor does CS attempt to change ASL syntax or grammar. Rather, CS provides an unambiguous model for language learning that leaves ASL intact.[16]
Cued speech has been adapted to more than 50 languages and dialects. However, it is not clear how many of them are actually in use.[17]
Similar systems have been used for other languages, such as the Assisted Kinemes Alphabet in Belgium and the Baghcheban phonetic hand alphabet for Persian.[19]