DS-2017-08: Computational Modelling of Artificial Language Learning: Retention, Recognition & Recurrence

DS-2017-08: Alhama, Raquel Garrido (2017) Computational Modelling of Artificial Language Learning: Retention, Recognition & Recurrence. Doctoral thesis, University of Amsterdam.

Text (Full Text): DS-2017-08.text.pdf, Download (1MB)
Text (Samenvatting): DS-2017-08.samenvatting.txt, Download (5kB)

Abstract

Artificial Language Learning has, over the last 20 years, become a key paradigm for studying the nature of learning biases in speech segmentation and rule generalization. In experiments in this paradigm, participants are exposed to a sequence of stimuli that have certain statistical properties and may follow a specific pattern. The design is intended to mimic particular aspects of speech and language, and participants are tested on whether, and under which conditions, they can segment the input and/or discover the underlying pattern.

In this dissertation, I use computational modelling to interpret results from Artificial Language Learning experiments with infants, adults and even non-human animals, with the goal of understanding the most basic mechanisms of language learning. I conceptualize the process as consisting of three steps: (i) memorization of sequence segments, (ii) computing the propensity to generalize, and (iii) generalization. Over the course of the dissertation, I propose an account of each of these steps with a computational model.

Step (i) is relevant to understanding how individuals segment a speech stream. In chapter 3, I propose R&R, a processing model that explains segmentation as the result of retention and recognition. I show that this model can account for a range of empirical results on humans and rats (Peña et al., 2002; Toro and Trobalón, 2005; Frank et al., 2010). R&R offers an intuitive explanation of the segmentation process, and it also prompted the discovery that the memorization of segments tends to produce skewed and overlapping distributions of words and partwords.
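As a rough illustration only (the thesis's actual R&R equations are not reproduced here), the sketch below shows how a retention-recognition loop over a syllable stream could be implemented. The saturating recognition curve, the parameters p_retain and recog_gain, and the toy stream are all my own assumptions:

```python
import random
from collections import defaultdict

def rr_segment(stream, p_retain=0.3, recog_gain=0.5, max_len=4, seed=0):
    """Toy retention-recognition loop over a syllable stream."""
    rng = random.Random(seed)
    memory = defaultdict(int)  # segment -> activation count
    for start in range(len(stream)):
        for length in range(1, max_len + 1):
            seg = tuple(stream[start:start + length])
            if len(seg) < length:          # ran off the end of the stream
                break
            act = memory[seg]
            # Recognition becomes more likely as the trace gets stronger;
            # this saturating curve is an assumption, not the R&R formula.
            p_recognize = act * recog_gain / (1.0 + act * recog_gain)
            if rng.random() < p_recognize:
                memory[seg] += 1           # recognition reinforces the trace
            elif rng.random() < p_retain:
                memory[seg] += 1           # retention stores a weak new trace
    return memory

stream = "pa bi ku ti bu do pa bi ku go la tu ti bu do pa bi ku".split()
memory = rr_segment(stream)
for seg, act in sorted(memory.items(), key=lambda kv: -kv[1])[:5]:
    print(" ".join(seg), act)
```

Even this toy version exhibits the property mentioned above: frequently recurring segments accumulate strong traces while their substrings and straddling partwords accumulate weaker but non-zero ones, yielding skewed, overlapping distributions.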

Identifying step (ii) as a separate step is itself a contribution of this dissertation (explained in chapter 5). I propose that Simple Good-Turing (SGT; Good, 1953), an existing smoothing model used in Natural Language Processing to account for unseen words in corpora, can be taken as a rational model for step (ii), since the principle on which it is based can explain the responses of individuals in the experiments.
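For concreteness, here is a minimal sketch of the Good-Turing principle underlying SGT. It computes the classic Turing estimates directly from the frequencies of frequencies; the full Simple Good-Turing method (Gale and Sampson, 1995) additionally smooths those counts with a log-log regression before applying the same formula. The function name and toy counts below are illustrative, not taken from the thesis:

```python
from collections import Counter

def turing_estimates(counts):
    """Basic Good-Turing estimates from an item -> count mapping."""
    n_r = Counter(counts.values())               # frequency of frequencies
    total = sum(counts.values())
    p_unseen = n_r[1] / total if total else 0.0  # mass reserved for unseen items
    # Adjusted count r* = (r + 1) * N_{r+1} / N_r for every observed r.
    adjusted = {r: (r + 1) * n_r.get(r + 1, 0) / n_r[r] for r in n_r}
    return p_unseen, adjusted

freqs = {"a": 3, "b": 2, "c": 2, "d": 1, "e": 1, "f": 1}  # toy counts
p0, adjusted = turing_estimates(freqs)
print(f"probability mass reserved for unseen items: {p0:.2f}")
print(adjusted)  # r* = 0 for the largest r: the artifact SGT's regression repairs
```

The key intuition, and the reason it can serve as a rational model of the propensity to generalize, is that the number of item types seen exactly once estimates how much probability should be assigned to items never seen at all.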

As for step (iii), I first present an extensive critical review of existing models (chapter 6), in order to identify the state of the art and the critical issues that remain unresolved. After listing desiderata for future research on generalization, I present a neural network model that addresses some of them. Concretely, the model accounts for the results of one influential experiment (Marcus et al., 1999) by incorporating two core ideas: pre-wiring the connections of the network to provide the model with an additional type of memory, and pre-training to account for the relevant experience that influences generalization (specifically, incremental presentation of novelty).
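The sketch below is emphatically not the dissertation's architecture; it only illustrates the pre-wiring idea in miniature. A hand-wired, untrained component compares adjacent syllables, and a small trainable readout on top of those comparisons learns an ABB-vs-ABA distinction that transfers to syllables never seen in training, as in the Marcus et al. (1999) design. All encodings, parameters and syllables are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoding: each syllable is a fixed random vector.
VOCAB = {s: rng.normal(size=8) for s in
         ["ga", "ti", "na", "li", "wo", "fe"]}

def features(triple):
    """Pre-wired memory: similarities between adjacent syllables."""
    a, b, c = (VOCAB[s] / np.linalg.norm(VOCAB[s]) for s in triple)
    return np.array([a @ b, b @ c, 1.0])   # last entry is a bias term

def train_readout(examples, labels, lr=0.5, epochs=200):
    """Train only the logistic readout; the feature wiring stays fixed."""
    w = np.zeros(3)
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            p = 1 / (1 + np.exp(-w @ x))
            w += lr * (y - p) * x            # logistic-regression update
    return w

# Familiarization triples: ABB (label 1) vs ABA (label 0).
train = [("ga", "ti", "ti"), ("na", "li", "li"),
         ("ga", "ti", "ga"), ("na", "li", "na")]
w = train_readout([features(t) for t in train], [1, 1, 0, 0])

# Test on entirely novel syllables, as in Marcus et al. (1999).
for t in [("wo", "fe", "fe"), ("wo", "fe", "wo")]:
    p = 1 / (1 + np.exp(-w @ features(t)))
    print(t, f"P(ABB) = {p:.2f}")
```

Because the pre-wired comparison abstracts away from syllable identity, the trained readout generalizes the identity pattern to novel items; the dissertation's model develops this idea, together with pre-training on incrementally novel input, in a full network.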

Throughout the dissertation, I also reflect on methodological issues in computational modelling. After introducing Marr's levels of analysis (Marr, 1982) and discussing the implications of top-down and bottom-up approaches, I explore the strengths and weaknesses of each level of analysis with each of the proposed models, and conclude that the best choice depends on the particular research question. Finally, I discuss issues with model evaluation. Through a model comparison study (chapter 4), I explore alternative evaluation procedures (model parallelisation vs. model sequencing, internal representations vs. external output). This study illustrates the need for complementary types of analysis of empirical results, and it provides the basis for devising stricter evaluation criteria.
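As a toy illustration of how the two evaluation targets can diverge, the snippet below correlates behavioural scores with a model's external output and, separately, with an internal quantity. All numbers are invented for illustration and are not results from the study:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two score vectors."""
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Purely hypothetical numbers for four test items.
human_scores   = np.array([0.82, 0.74, 0.55, 0.40])  # behavioural preference
model_output   = np.array([0.90, 0.70, 0.60, 0.35])  # external output (choices)
model_internal = np.array([12.0, 9.0, 9.5, 3.0])     # internal activations

print("fit on external output:        ", round(pearson(human_scores, model_output), 3))
print("fit on internal representation:", round(pearson(human_scores, model_internal), 3))
```

A model can fit one target well and the other poorly, which is why complementary analyses, and stricter criteria built on them, matter.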

This dissertation thus provides an integrated account of segmentation and generalization, based on computational models that have been shown to reproduce empirical results, produce testable predictions, contribute to unresolved theoretical questions and, overall, increase our understanding of the basic processes of language learning.

Item Type: Thesis (Doctoral)
Report Nr: DS-2017-08
Series Name: ILLC Dissertation (DS) Series
Year: 2017
Subjects: Cognition; Computation; Language; Logic
Depositing User: Dr Marco Vervoort
Date Deposited: 14 Jun 2022 15:17
Last Modified: 14 Jun 2022 15:17
URI: https://eprints.illc.uva.nl/id/eprint/2148
