Abstract
The use of parameters in the description of natural language syntax must balance the need to discriminate among (sometimes subtly different) languages, which can be seen as a cross-linguistic version of Chomsky's descriptive adequacy (Chomsky, 1964), against the complexity of the acquisition task that a large number of parameters would imply, which is a problem for explanatory adequacy. Here we first present a novel approach in which machine learning is used to detect hidden dependencies in a table of parameters. The result is a dependency graph in which some of the parameters can be fully predicted from others. These findings can then be subjected to linguistic analysis, which may either refute them by providing typological counter-examples of languages not included in the original dataset, dismiss them on theoretical grounds, or uphold them as tentative empirical laws worthy of further study. Machine learning is also used to explore the full sets of parameters that are sufficient to distinguish one historically established language family from others. These results provide a new type of empirical evidence about the historical adequacy of parameter theories.
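To make the dependency-detection idea concrete, here is a minimal sketch, not the authors' code: for each syntactic parameter, a classifier tries to predict its value from all the other parameters, and a parameter that is (near-)perfectly predictable is recorded as an edge in a dependency graph. The toy parameter table, the parameter names `P1`–`P3`, and the use of a scikit-learn decision tree are all assumptions made purely for illustration; the paper does not specify this particular setup.

```python
# Minimal sketch (not the authors' method) of detecting hidden parameter dependencies:
# for each parameter, try to predict it from the others; a (near-)perfectly
# predictable parameter is treated as dependent on the rest.
# The parameter table below is a made-up toy example, not real typological data.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Toy table: rows = languages, columns = binary parameter values (hypothetical).
params = pd.DataFrame(
    {"P1": [1, 1, 0, 0, 1, 0],
     "P2": [1, 0, 0, 1, 1, 0],
     "P3": [1, 1, 0, 0, 1, 0]},   # P3 happens to duplicate P1 in this toy data
    index=["lang_a", "lang_b", "lang_c", "lang_d", "lang_e", "lang_f"],
)

dependencies = []  # edges of the dependency graph: (predictors, predicted)
for target in params.columns:
    X = params.drop(columns=[target])
    y = params[target]
    # Cross-validated check with a shallow decision tree as a stand-in learner.
    scores = cross_val_score(DecisionTreeClassifier(max_depth=3), X, y, cv=3)
    if scores.mean() >= 0.99:  # fully predictable from the remaining parameters
        dependencies.append((tuple(X.columns), target))

for predictors, predicted in dependencies:
    print(f"{predicted} is predictable from {predictors}")
```

On this toy data, `P3` (and symmetrically `P1`) would be flagged as fully predictable, which is the kind of candidate dependency the abstract proposes to hand over to linguistic analysis for confirmation or refutation.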
| Original language | English |
| --- | --- |
| Title of host publication | The Evolution of Language: Proceedings of the 12th International Conference (EVOLANGXII) |
| Editors | C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, T. Verhoef |
| Place of Publication | Torun, Poland |
| Publisher | Online at http://evolang.org/torun/proceedings/papertemplate.html?p=176 |
| Number of pages | 10 |
| Publication status | Published - 29 Jan 2018 |