Machine Learning Models of Universal Grammar Parameter Dependencies

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Publication details

Title of host publication: Proceedings of The Knowledge Resources for the Socio-Economic Sciences and Humanities Workshop
Date: Published - 1 Sep 2017
Pages: 31-37
Number of pages: 7
Original language: English

Abstract

The use of parameters in the description of natural language syntax has to balance the need to discriminate among (sometimes subtly different) languages, which can be seen as a cross-linguistic version of Chomsky’s (1964) descriptive adequacy, against the complexity of the acquisition task that a large number of parameters would imply, which is a problem for explanatory adequacy. Here we present a novel approach in which a machine learning algorithm is used to find dependencies in a table of parameters. The result is a dependency graph in which some of the parameters can be fully predicted from others. These empirical findings can then be subjected to linguistic analysis, which may either refute them by providing typological counter-examples from languages not included in the original dataset, dismiss them on theoretical grounds, or uphold them as tentative empirical laws worthy of further study.
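The abstract does not specify the learning algorithm, but the core idea of extracting a dependency graph from a parameter table can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example, not the authors' method: it uses invented binary parameter settings (P1-P4) for four invented languages and reports every pairwise functional dependency, i.e. every ordered pair (a, b) where b's value is fully predictable from a's.

```python
from itertools import permutations

# Toy table of hypothetical binary UG parameter settings (illustrative only,
# not data from the paper). Rows are languages, keys are parameters.
PARAMS = ["P1", "P2", "P3", "P4"]
TABLE = [
    {"P1": 1, "P2": 0, "P3": 1, "P4": 0},
    {"P1": 1, "P2": 0, "P3": 1, "P4": 1},
    {"P1": 0, "P2": 1, "P3": 0, "P4": 1},
    {"P1": 0, "P2": 1, "P3": 0, "P4": 0},
]

def pairwise_dependencies(table, params):
    """Return edges (a, b) such that b's value is a function of a's value
    in every row of the table; these edges form a dependency graph."""
    edges = []
    for a, b in permutations(params, 2):
        mapping = {}       # observed value of a -> value of b
        functional = True
        for row in table:
            # If a's value was seen before with a different b, no dependency.
            if mapping.setdefault(row[a], row[b]) != row[b]:
                functional = False
                break
        if functional:
            edges.append((a, b))
    return edges

print(pairwise_dependencies(TABLE, PARAMS))
# In this toy table P1, P2 and P3 mutually determine one another,
# while P4 is independent of all of them.
```

A real system would additionally have to handle multi-valued parameters, dependencies on sets of parameters rather than single ones, and the risk of spurious dependencies in a small language sample, which is exactly why the paper treats the discovered dependencies as tentative laws to be tested against further typological data.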

Bibliographical note

This is an author-produced version of the published paper, uploaded with permission of the publisher/copyright holder. Further copying may not be permitted; contact the publisher for details.
