MoL-2025-31: Meaning and Agency in Large Language Models

MoL-2025-31: Huespe, Mayra (2025) Meaning and Agency in Large Language Models. [Report]

Text: MoL-2025-31.text.pdf - Published Version (860 kB)

Abstract

Since the emergence of large language models (LLMs), there has been growing interest in the question of whether they produce meaningful outputs (henceforth, the Problem of Meaning in LLMs). In the literature, “producing meaningful outputs” has been understood either as a manifestation of genuine understanding or as successful use of language. Existing approaches often adapt philosophical theories of meaning, originally developed for human speakers, to define criteria under which LLMs can be said to produce meaning. This strategy, however, creates a tension: we can either endorse the intuition that LLMs engage in basic linguistic interactions, at the cost of assuming that they simulate human processes of meaning production, or deny this intuition and thereby lose the conceptual space needed to account for their distinctive forms of linguistic engagement.
I resolve this tension by proposing a framework that explains how LLMs participate in linguistic interactions without presupposing human-like processes of meaning production. To this end, I first situate the Problem of Meaning within the broader debate on AI agency, adopting Floridi’s distinction between cognitive and non-cognitive views, and show that existing accounts presuppose either a cognitive or a soft-cognitive notion of agency. This opens space for a non-cognitive approach. I then develop the first such account of the Problem of Meaning in LLMs, shifting the focus from asking whether LLMs produce meaningful outputs to conceptualizing their distinct non-cognitive modes of linguistic engagement within the environment. Philosophical theories of meaning thus serve not to define criteria for successful meaning production in LLMs, but rather to conceptualize the linguistic environments in which they operate. In this way, although I still address the Problem of Meaning in LLMs by adopting a philosophical theory of meaning, in particular Putnam’s semantic externalism, its application is now guided by a non-cognitive conception of AI agency. Hence, instead of asking whether these models engage in linguistic interactions by simulating a cognitive process, the non-cognitive perspective, together with Putnam’s theory of meaning, shifts the challenge to describing their specific modes of engagement within the linguistic community.

Item Type: Report
Report Nr: MoL-2025-31
Series Name: Master of Logic Thesis (MoL) Series
Year: 2025
Uncontrolled Keywords: AI Agency, LLMs, Non-cognitive Agency, Semantic Externalism, Language Technologies
Subjects: Logic
Mathematics
Depositing User: Dr Marco Vervoort
Date Deposited: 12 Jan 2026 13:27
Last Modified: 12 Jan 2026 13:27
URI: https://eprints.illc.uva.nl/id/eprint/2407
