Phonetics


Phonetics is a branch of linguistics that studies how humans produce and perceive sounds or, in the case of sign languages, the equivalent aspects of sign. Linguists who specialize in studying the physical properties of speech are phoneticians. The field of phonetics is traditionally divided into three sub-disciplines: articulatory phonetics, acoustic phonetics, and auditory phonetics. Traditionally, the minimal linguistic unit of phonetics is the phone—a speech sound in a language which differs from the phonological unit of the phoneme; the phoneme is an abstract categorization of phones and is also defined as the smallest unit that discerns meaning between sounds in any given language.
Phonetics deals with two aspects of human speech: production and perception. The communicative modality of a language describes the method by which a language is produced and perceived. Languages with an oral-aural modality, such as English, produce speech orally and perceive speech aurally. Sign languages, such as Australian Sign Language and American Sign Language, have a manual-visual modality, producing speech manually and perceiving speech visually. ASL and some other sign languages have in addition a manual-manual dialect for use in tactile signing by deafblind speakers, where signs are produced and perceived with the hands.

History

Antiquity

The first known study of phonetics was undertaken by Sanskrit grammarians as early as the 6th century BCE. The Hindu scholar Pāṇini is among the most well known of these early investigators. His four-part grammar is influential in modern linguistics and still represents "the most complete generative grammar of any language yet written". His grammar formed the basis of modern linguistics and described several important phonetic principles, including voicing. This early account described resonance as being produced either by tone, when the vocal folds are closed, or noise, when the vocal folds are open. The phonetic principles in the grammar are considered "primitives" in that they are the basis for his theoretical analysis rather than the objects of theoretical analysis themselves, and the principles can be inferred from his system of phonology.
The Sanskrit study of phonetics is called Shiksha, which the 1st-millennium BCE Taittiriya Upanishad defines as follows:
Om! We will explain the Shiksha.
Sounds and accentuation, Quantity and the expression,
Balancing and connection, So much about the study of Shiksha. || 1 ||
Taittiriya Upanishad 1.2, Shikshavalli, translated by Paul Deussen.

Modern

Advancements in phonetics after Pāṇini and his contemporaries were limited until the modern era, save some limited investigations by Greek and Roman grammarians. In the millennia between the Indic grammarians and modern phonetics, the focus shifted away from the difference between spoken and written language, which had been the driving force behind Pāṇini's account, to the physical properties of speech alone. Sustained interest in phonetics began again around 1800 CE, with the term "phonetics" being first used in the present sense in 1841. With new developments in medicine and the development of audio and visual recording devices, phoneticians were able to use and review new and more detailed data. This early period of modern phonetics included the development of an influential phonetic alphabet based on articulatory positions by Alexander Melville Bell. Known as visible speech, it gained prominence as a tool in the oral education of deaf children.
Before the widespread availability of audio recording equipment, phoneticians relied heavily on a tradition of practical phonetics to ensure that transcriptions and findings were consistent across phoneticians. This training involved both ear training—the recognition of speech sounds—and production training—the ability to produce sounds. Phoneticians were expected to learn to recognize by ear the various sounds of the International Phonetic Alphabet, and the International Phonetic Association still tests and certifies speakers on their ability to accurately produce the phonetic patterns of English. As a revision of his visible speech method, Melville Bell developed a description of vowels by height and backness, resulting in 9 cardinal vowels. As part of their training in practical phonetics, phoneticians were expected to learn to produce these cardinal vowels to anchor their perception and transcription of these phones during fieldwork. This approach was critiqued by Peter Ladefoged in the 1960s based on experimental evidence: he found that cardinal vowels were auditory rather than articulatory targets, challenging the claim that they represented articulatory anchors by which phoneticians could judge other articulations.

Production

Language production consists of several interdependent processes which transform a nonlinguistic message into a spoken or signed linguistic signal. Linguists debate whether the process of language production occurs in a series of stages or whether production processes occur in parallel. After identifying a message to be linguistically encoded, a speaker must select the individual words—known as lexical items—to represent that message in a process called lexical selection. The words are selected based on their meaning, which in linguistics is called semantic information. Lexical selection activates the word's lemma, which contains both semantic and grammatical information about the word.
After an utterance has been planned, it then goes through phonological encoding. In this stage of language production, the mental representations of the words are assigned their phonological content as a sequence of phonemes to be produced. The phonemes are specified for articulatory features which denote particular goals such as closed lips or the tongue in a particular location. These phonemes are then coordinated into a sequence of muscle commands that can be sent to the muscles, and when these commands are executed properly the intended sounds are produced. Thus the process of production from message to sound can be summarized as the following sequence:
  • Message planning
  • Lemma selection
  • Retrieval and assignment of phonological word forms
  • Articulatory specification
  • Muscle commands
  • Articulation
  • Speech sounds

Place of articulation

Sounds which are made by a full or partial constriction of the vocal tract are called consonants. Consonants are pronounced in the vocal tract, usually in the mouth, and the location of this constriction affects the resulting sound. Because of the close connection between the position of the tongue and the resulting sound, the place of articulation is an important concept in many subdisciplines of phonetics.
Sounds are partly categorized by the location of a constriction as well as the part of the body doing the constricting. For example, in English the words fought and thought are a minimal pair differing only in the organ making the constriction rather than the location of the constriction. The "f" in fought is a labiodental articulation made with the bottom lip against the teeth. The "th" in thought is a linguodental articulation made with the tongue against the teeth. Constrictions made by the lips are called labial, while those made with the tongue are called lingual.
Constrictions made with the tongue can be made in several parts of the vocal tract, broadly classified into coronal, dorsal, and radical places of articulation. Coronal articulations are made with the front of the tongue, dorsal articulations are made with the back of the tongue, and radical articulations are made in the pharynx. These divisions are not sufficient for distinguishing and describing all speech sounds. For example, in English the sounds [s] and [ʃ] are both coronal, but they are produced in different places of the mouth. To account for this, more detailed places of articulation are needed based upon the area of the mouth in which the constriction occurs.

Labial

Articulations involving the lips can be made in three different ways: with both lips (bilabial), with one lip and the teeth, in which case the lower lip is the active articulator and the upper teeth the passive articulator (labiodental), and with the tongue and the upper lip (linguolabial). Depending on the definition used, some or all of these kinds of articulations may be categorized into the class of labial articulations.

Bilabial consonants are made with both lips. In producing these sounds the lower lip moves farthest to meet the upper lip, which also moves down slightly, though in some cases the force from air moving through the aperture may cause the lips to separate faster than they can come together. Unlike most other articulations, both articulators are made from soft tissue, and so bilabial stops are more likely to be produced with incomplete closures than articulations involving hard surfaces like the teeth or palate. Bilabial stops are also unusual in that an articulator in the upper section of the vocal tract actively moves downward, as the upper lip shows some active downward movement.

Linguolabial consonants are made with the blade of the tongue approaching or contacting the upper lip. As in bilabial articulations, the upper lip moves slightly towards the more active articulator. Articulations in this group do not have their own symbols in the International Phonetic Alphabet; rather, they are formed by combining an apical symbol with a diacritic, implicitly placing them in the coronal category. They exist in a number of languages indigenous to Vanuatu such as Tangoa.
Labiodental consonants are made by the lower lip rising to the upper teeth. Labiodental consonants are most often fricatives while labiodental nasals are also typologically common. There is debate as to whether true labiodental plosives occur in any natural language, though a number of languages are reported to have labiodental plosives including Zulu, Tonga, and Shubi.