Laboratoire Parole et Langage
Conference paper, Year: 2023

Comparing Children and Large Language Models in Word Sense Disambiguation: Insights and Challenges

Abstract

Understanding how children process ambiguous words is a challenge because sense disambiguation depends on both bottom-up aspects of sentence context and top-down aspects. Here, we seek insight into this phenomenon by investigating how such a competence might arise in large distributional learners (Transformers) that purport to acquire sense representations from language input in a largely unsupervised fashion. We investigated how sense disambiguation might be achieved using model representations derived from naturalistic child-directed speech. We tested a large pool of Transformer models, varying in the size and nature of their pretraining input as well as the size of their parameter space. Tested across three behavioral experiments from the developmental literature, we found that these models capture some essential properties of child sense disambiguation, although most still struggle in the more challenging tasks with contrastive cues. We discuss implications both for theories of word learning and for using Transformers to capture child language processing.
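The abstract does not spell out the models' decision procedure. As a minimal, hypothetical sketch (not the authors' actual method), sense disambiguation from distributional representations is often operationalized as choosing the sense whose prototype embedding lies closest, in cosine similarity, to the contextual embedding of the ambiguous word; all vectors and names below are illustrative toys rather than real Transformer states:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def disambiguate(context_embedding, sense_prototypes):
    """Return the sense label whose prototype embedding is most
    similar (by cosine) to the word's contextual embedding."""
    return max(sense_prototypes,
               key=lambda sense: cosine(context_embedding, sense_prototypes[sense]))

# Toy 3-d vectors; a real setup would use Transformer hidden states.
prototypes = {
    "bat_animal": [1.0, 0.1, 0.0],
    "bat_sports": [0.0, 0.2, 1.0],
}
ctx = [0.9, 0.2, 0.1]  # contextual embedding of "bat" in "the bat flew at night"
print(disambiguate(ctx, prototypes))  # → bat_animal
```

Under this toy setup, the animal sense wins because the context vector shares most of its mass with that prototype; how well such similarity-based choices track children's behavior is exactly what the paper's three experiments probe.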
Main file: cogsci23a-sub2080-cam-i9-2.pdf (2.66 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04101847, version 1 (21-05-2023)

Identifiers

  • HAL Id: hal-04101847, version 1

Cite

Francesco Cabiddu, Mitja Nikolaus, Abdellah Fourtassi. Comparing Children and Large Language Models in Word Sense Disambiguation: Insights and Challenges. Proceedings of the 45th Annual Meeting of the Cognitive Science Society, Jul 2023, Sydney, Australia. ⟨hal-04101847⟩
