Exemplar-Based Linguistics: How to Get Productivity from Examples

Rens Bod

Abstract: Exemplar-based models of language propose that human language production and understanding operate with a store of concrete linguistic experiences rather than with abstract linguistic rules. While exemplar-based models are well established in areas like phonology and morphology, common wisdom has it that they are intrinsically flawed for syntax, where infinite generative capacity is needed. This paper shows that this common wisdom is wrong. We start out by reviewing an exemplar-based syntactic model, known as Data-Oriented Parsing, or DOP, which operates on a corpus of phrase-structure trees. While this model is productive, it is inadequate from the point of view of grammaticality. We therefore extend it to the more sophisticated linguistic representations proposed by Lexical-Functional Grammar theory, resulting in the model known as LFG-DOP, which does allow for meta-linguistic judgments of acceptability. We show how DOP deals with first language acquisition, suggesting a unified model for language learning and language use, and discuss a number of syntactic phenomena that can be explained by DOP but that challenge rule-based models. We argue that if there is anything innate in language cognition, it is not Universal Grammar but 'Universal Representation'.

Keywords: exemplar-based linguistics; data-oriented parsing
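
Before turning to the main text, the mechanism summarized above, deriving new utterances by recombining fragments of stored phrase-structure trees, can be made concrete with a small sketch. The Python code below is an illustration only, not the paper's or Bod's implementation, and it ignores DOP's probability model; the toy corpus, the tuple encoding of trees, and all function names (frags_rooted, compose, and so on) are assumptions introduced here for exposition. It decomposes two stored trees into fragments and recombines two of those fragments by leftmost substitution, yielding a sentence that does not occur in the corpus.

from itertools import product

# Toy treebank. Trees are nested tuples (label, child, ...); leaves are words.
CORPUS = [
    ("S", ("NP", "Mary"),
          ("VP", ("V", "saw"), ("NP", ("Det", "the"), ("N", "dog")))),
    ("S", ("NP", ("Det", "the"), ("N", "cat")),
          ("VP", ("V", "slept"))),
]

def frags_rooted(node):
    # All fragments rooted at `node`: keep the node, and for each child either
    # cut it off at its category label (an open substitution site, written as a
    # one-element tuple) or expand it with one of its own rooted fragments.
    options = []
    for child in node[1:]:
        if isinstance(child, str):                 # words are always kept
            options.append([child])
        else:
            options.append([(child[0],)] + frags_rooted(child))
    return [(node[0],) + combo for combo in product(*options)]

def all_fragments(tree):
    # Fragment memory: the fragments rooted at every node of the tree.
    frags = set(frags_rooted(tree))
    for child in tree[1:]:
        if not isinstance(child, str):
            frags |= all_fragments(child)
    return frags

def leftmost_open(node, path=()):
    # Path (sequence of child indices) to the leftmost open substitution site.
    if isinstance(node, str):
        return None
    if len(node) == 1:
        return path
    for i, child in enumerate(node[1:]):
        found = leftmost_open(child, path + (i,))
        if found is not None:
            return found
    return None

def node_at(node, path):
    for i in path:
        node = node[1 + i]
    return node

def replace_at(node, path, filler):
    if not path:
        return filler
    children = list(node[1:])
    children[path[0]] = replace_at(children[path[0]], path[1:], filler)
    return (node[0],) + tuple(children)

def compose(fragment, filler):
    # Leftmost substitution: plug `filler` into the leftmost open site of
    # `fragment`, provided the site's label matches the filler's root label.
    path = leftmost_open(fragment)
    if path is None or node_at(fragment, path)[0] != filler[0]:
        return None
    return replace_at(fragment, path, filler)

def words(tree):
    # Left-to-right yield of the terminal words of a tree.
    return [tree] if isinstance(tree, str) else [w for c in tree[1:] for w in words(c)]

memory = set()
for tree in CORPUS:
    memory |= all_fragments(tree)

sentence_frame = ("S", ("NP",), ("VP", ("V", "slept")))   # fragment of the second tree
noun_phrase    = ("NP", ("Det", "the"), ("N", "dog"))     # fragment of the first tree
assert sentence_frame in memory and noun_phrase in memory

new_tree = compose(sentence_frame, noun_phrase)
print(" ".join(words(new_tree)))   # "the dog slept": a sentence not in the corpus

The only point of the sketch is that productivity can arise from stored exemplars plus a single composition operation; the full DOP model discussed in the paper additionally weighs fragments and derivations probabilistically, which the sketch leaves out.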