MoL-2017-28: Understanding Generalization: Learning Quantifiers and Negation with Neural Tensor Networks

MoL-2017-28: Repplinger, Michael (2017) Understanding Generalization: Learning Quantifiers and Negation with Neural Tensor Networks. [Report]




The following investigation focuses on the intersection of symbolic and distributional approaches to natural language semantics. Broadly speaking, we analyze symbolic approaches to semantics in order to learn how the performance of distributional models applied to compositional language tasks can be improved. Specifically, we set out to find empirical justification for the often-claimed advantage of employing higher-order tensors in semantic vector space models.
Building on an earlier natural language inference experiment, adjusted to provide a more stringent test of generalization performance, we find clear evidence for the conjectured advantage of tensor models. In our experiments, a clear difference in generalization performance emerges between a conventional matrix-based tree-structured neural network and a tensor-based variant of the architecture.
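The contrast between the two architectures can be illustrated with a minimal sketch. The code below is an assumption for illustration, not the thesis implementation: it shows the standard composition step of a tree-structured network, where two child vectors are combined into a parent vector, first with a plain matrix layer and then with the additional bilinear (tensor) term that characterizes neural tensor networks.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy word-vector dimensionality (hypothetical)

# Matrix-based composition: p = tanh(W [a; b] + bias)
W = rng.standard_normal((d, 2 * d))
bias = rng.standard_normal(d)

def compose_matrix(a, b):
    """Combine two child vectors with a single affine layer."""
    return np.tanh(W @ np.concatenate([a, b]) + bias)

# Tensor-based composition adds a bilinear term per output unit:
# p_k = tanh(a^T T_k b + (W [a; b])_k + bias_k), T a d x d x d tensor.
T = rng.standard_normal((d, d, d))

def compose_tensor(a, b):
    """Combine two child vectors with bilinear plus affine terms."""
    bilinear = np.einsum('i,kij,j->k', a, T, b)
    return np.tanh(bilinear + W @ np.concatenate([a, b]) + bias)

a, b = rng.standard_normal(d), rng.standard_normal(d)
p_mat = compose_matrix(a, b)
p_ten = compose_tensor(a, b)
```

The bilinear term lets every pair of input dimensions interact multiplicatively, which is the extra capacity the tensor model brings to composition.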
To identify the cause of this performance difference, we visualize sentence representations of models trained on the semantic task. We find evidence of linearization of complex data in the tensor model, such as hyperplane separation of negated expressions and the systematic organization of quantified expressions. We then suggest an explanation of our prediction results by linking the observed properties of the model representations with the logical requirements of the inference task.

Item Type: Report
Report Nr: MoL-2017-28
Series Name: Master of Logic Thesis (MoL) Series
Year: 2017
Subjects: Language
Depositing User: Dr Marco Vervoort
Date Deposited: 19 Oct 2017 15:51
Last Modified: 19 Oct 2017 15:51
