Lingual Articulatory Configurations of Japanese Devoiced Vowels

Name: Rion Iwasaki

Department: Speech-Language-Hearing Sciences

Project Title: Lingual Articulatory Configurations of Japanese Devoiced Vowels

My name is Rion Iwasaki, and I am a Level II student entering my third year of the Ph.D. Program in Speech-Language-Hearing Sciences at the Graduate Center, CUNY. I am from Tokyo, Japan. My name, Rion (理音), is written with Chinese characters meaning “reasoning” (理) and “sounds” (音), so I am naturally interested in speech sounds; I received an M.A. in phonology and phonetics before starting at the GC. I am particularly interested in the relationship between how the articulators move and the resulting speech sounds, and in what this relationship can tell us about how languages work and how they change over time.

Project

All languages change over time. A phenomenon in Tokyo Japanese may offer insight into a sound change in progress. Japanese typically has only open syllables, in which a single consonant precedes a vowel. Vowels normally involve phonation along with a specific lingual articulation. However, vowel “devoicing,” producing a vowel without phonation, frequently occurs in certain (but not all) phonetic contexts in Tokyo Japanese, typically when a high vowel (/i/ or /u/) appears between voiceless consonants. There is controversy as to whether devoiced vowels are merely unphonated or deleted altogether.

I am using ultrasound to see what the tongue is doing during the production of these vowels. If devoiced vowels are merely unphonated, the lingual articulation of unphonated and phonated vowels should be the same. Conversely, differences in lingual articulation may indicate that devoiced vowels are deleted. Vowel deletion would imply that a sound change is in progress in Tokyo Japanese: from a language with mostly open syllables to one that permits consonant clusters.


During the summer, I collected articulatory data from native speakers of Tokyo Japanese living in New York City. All data collection was conducted at the Speech Production, Acoustics and Perception Lab at the CUNY Graduate Center. Speakers produced word pairs in which the target vowels contrasted in voicing: voiced in one member of the pair and devoiced in the other. I used an ultrasound machine (Figure 1, the same kind you would see in a medical office) to measure the lingual articulation of these vowels in order to determine whether the vowels produced in devoiceable environments are merely unphonated or deleted. To record the lingual articulation, speakers rested their chin on an ultrasound transducer while producing the word pairs.

Figure 1: A diagnostic ultrasound device in the Speech Production, Acoustics and Perception Lab at the Graduate Center, with a tongue image. The white line in the middle represents the tongue surface in the midsagittal plane.

Currently, I am analyzing the articulatory data. The analysis consists mainly of two parts: (a) tracing tongue contours on the ultrasound images corresponding to both voiced and devoiced vowels, and (b) comparing the lingual articulation of devoiced and voiced vowels by quantifying the traced tongue contours. The next step is to continue analyzing the data in the upcoming fall semester and to prepare to present the findings at the 178th Meeting of the Acoustical Society of America, which will be held in San Diego, California, in December 2019. Thanks to this fellowship, I will be able to travel to the conference and present my initial findings.
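To give a rough sense of what step (b) could look like in practice, the sketch below compares two traced tongue contours by computing a symmetrized mean nearest-neighbor distance between them. This is only a minimal sketch, assuming the contours have already been traced and exported as two-column (x, y) coordinate files; the file names and the choice of distance metric are illustrative assumptions, not the finalized analysis pipeline of this project.

```python
import numpy as np

def mean_nearest_neighbor_distance(contour_a, contour_b):
    """Average distance from each point on contour_a to the nearest
    point on contour_b (same units as the input coordinates)."""
    # contour_a, contour_b: (n_points, 2) arrays of (x, y) midsagittal coordinates
    diffs = contour_a[:, None, :] - contour_b[None, :, :]  # pairwise coordinate differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))             # pairwise Euclidean distances
    return dists.min(axis=1).mean()

# Hypothetical file names: traced contours for one token pair, exported as
# two-column (x, y) text files by the contour-tracing software.
voiced = np.loadtxt("voiced_token_contour.txt")
devoiced = np.loadtxt("devoiced_token_contour.txt")

# Symmetrize so the result does not depend on which contour is listed first.
distance = 0.5 * (mean_nearest_neighbor_distance(voiced, devoiced)
                  + mean_nearest_neighbor_distance(devoiced, voiced))
print(f"Mean contour-to-contour distance: {distance:.2f}")
```

Under this kind of measure, small distances between the voiced and devoiced contours would be consistent with devoiced vowels being merely unphonated, whereas consistently larger distances would point toward deletion.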

My Ph.D. advisor, Dr. Douglas H. Whalen, is affiliated with both Haskins Laboratories and Yale University in New Haven, CT, where a number of established researchers in this field work. I thank the fellowship for funding multiple trips to New Haven to consult with researchers at Haskins Laboratories and Yale University about my research design and data analysis. I expect a few more trips to New Haven as I continue analyzing the data.