The workshop aims to bring together researchers from different backgrounds to discuss issues concerning language variation and change, especially in the context of language diversity in China. The workshop will take place on June 14-16, 2019 at The University of Chicago Center in Beijing. In addition to the invited research talks on June 14-15, we will present two research methods tutorials on June 16.
This workshop follows the success of the University of Chicago Language Research Forum that took place in March 2017 and March 2018 at the Hong Kong Jockey Club University of Chicago Academic Complex | The University of Chicago Francis and Rose Yuen Campus in Hong Kong.
Presentations (June 14th & 15th)
Invited Speakers
Aynur Abish & Éva Á. Csató Johanson (Minzu University of China & Uppsala University, Sweden)
Contact induced changes in Kazakh in Urumchi (abstract)
Yeshes Vodgsal Atshogs (Nankai University)
Language Contact or Cognate: New Evidence of Tibeto-Altaic grammatical drift (abstract)
Zhiming Bao (National University of Singapore)
How languages mix (abstract)
Angel Chan & Wenchun Yang (The Hong Kong Polytechnic University)
Documenting the language abilities of ‘left-behind’ ethnic minority children in China: evidence from Kam-Mandarin bilingual children’s comprehension of relative clauses (abstract)
Jie Dong (Tsinghua University)
Taste, discourse and middle-class identity: Ethnography of the Chinese Saabists (abstract)
Matt Faytak (University of California, Los Angeles)
Change and inertia in Jiangnan: an ultrasound study of fricative vowels in Suzhou Chinese (abstract)
Feng-fan Hsieh (National Tsing Hua University)
A Cross-dialectal Comparison of Er-suffixation in Beijing Mandarin and Northeastern Mandarin: An Electromagnetic Articulography Study (abstract)
Xuping Li (Zhejiang University)
Symmetric and asymmetric coordinated phrases in Gan-Qing Mandarin (abstract)
Hongyong Liu (University of Macau)
Split numeral classifiers in the Chinese variety spoken by the Lalo Yi people (abstract)
Andrew Simpson (University of Southern California)
Analyzing head-initiality, head-finality and mixed headedness: domains, language types, and patterns of change (abstract)
Bei Wang (Minzu University of China)
The distribution of post-focus-compression (PFC) in the Tibeto-Burman language family: Is PFC a genetic linguistic feature? (abstract)
William S-Y Wang (Hong Kong Polytechnic University)
Mode of Transmission and Language Change (abstract)
Xuan Wang (Hong Kong Polytechnic University)
Exploring the outcome of dialect contact across three generations in Hohhot, China (abstract)
Alan Yu (The University of Chicago)
Kristine Yu (University of Massachusetts Amherst)
Variation in the phonetics of creaky voice in Cantonese, Hmong, and Mandarin (abstract)
Methods Tutorials (June 16th)
Matt Faytak (University of California, Los Angeles)
Dimensionality reduction for linguists: a practical introduction
Linguistic data are often high-dimensional: quantitative analysis is frequently carried out on a handful of carefully selected, linguistically relevant features extracted from a complex signal that contains many more potentially analyzable features. In light of this property of language, processing linguistic data with dimensionality reduction is an attractive alternative in some cases. Dimensionality reduction methods algorithmically identify informative covariations among the many values of a data set and capture them with a smaller number of variables. Such an approach is "data-driven" rather than researcher-driven, and it is especially useful when the features for analysis are difficult to choose, when feature extraction is subject to undesirable amounts of inter-annotator variability, or when holistic representations of the data are required.
In this tutorial, we will focus on applying simple dimensionality reduction methods (using the Python programming language) to linguistic data of several types. We will focus on two methods. Principal component analysis maximizes the variance captured by each successive principal component, and as such is useful for creating lower-dimensional projections of high-dimensional data. We will also cover linear discriminant analysis, a related method that models differences among known categories using a recombination of the variables present in the data, yielding a smaller number of linear discriminants whose values are maximally predictive of class membership.
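The contrast between the two methods can be illustrated with a short sketch. The snippet below is not from the tutorial materials; it is a minimal example assuming scikit-learn and NumPy, with synthetic "acoustic measurement" data standing in for real linguistic features.

```python
# Illustrative sketch of PCA (unsupervised) vs. LDA (supervised)
# dimensionality reduction. Data are synthetic, not from the tutorial.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Simulate 150 tokens x 20 measurements, drawn from three categories
# (e.g., three vowel classes), each with its own mean profile.
n_per_class, n_features = 50, 20
class_means = rng.normal(0, 2, size=(3, n_features))
X = np.vstack([rng.normal(m, 1.0, size=(n_per_class, n_features))
               for m in class_means])
y = np.repeat([0, 1, 2], n_per_class)

# PCA: ignores labels; successive components maximize remaining variance.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print("variance explained:", pca.explained_variance_ratio_)

# LDA: uses labels; discriminants maximize between-class separability.
# With 3 classes, at most 2 discriminants are available.
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # both project 20 dims down to 2
```

Both projections reduce 20 variables to 2, but the two axes answer different questions: the PCA components summarize overall variation in the signal, while the LDA discriminants are oriented specifically to separate the known categories.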
Matt Wagers (University of California, Santa Cruz)
Sponsors
- Linguistics Department, University of Chicago
- University of Chicago Beijing Center
- Department of Foreign Languages and Literatures, Tsinghua University