Abstract
Chung, Wonil, & Park, Myung-Kwan. (2022). Neural network language models as psycholinguistic subjects: Focusing on reflexive dependency. The Linguistic Association of Korea Journal, 30(4), 169-190.

The purpose of this study is to investigate how reflexive-antecedent dependencies are resolved alongside wh-filler-gap dependencies in the sentence processing of neural network language models (LMs), comparing the LMs' processing results with those of humans. To do so, we adopt the psycholinguistic methodology that Frazier et al. (2015) used with human subjects. Four neural LMs are employed: a Long Short-Term Memory (LSTM) network trained on large datasets, a Generative Pre-trained Transformer-2 (GPT-2) trained on large datasets, an LSTM trained on small (L2) datasets, and a GPT-2 trained on small (L2) datasets. We found that only the LMs trained on large datasets were sensitive to the gender match between a reflexive and its antecedent, but all four neural LMs failed to learn the reflexive-antecedent dependency when it co-occurs with a wh-filler-gap dependency. Furthermore, we found that the neural LMs show a learning bias with respect to gender mismatch.
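
The abstract does not reproduce the stimuli or evaluation code. As an illustration of the kind of surprisal comparison this paradigm relies on, the sketch below (assuming the Hugging Face transformers GPT-2 checkpoint and an invented match/mismatch sentence pair, not the authors' actual materials or trained models) checks whether GPT-2 assigns lower surprisal to a gender-matching reflexive than to a mismatching one.

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def surprisal(prefix: str, target: str) -> float:
        """Summed surprisal (in bits) of `target` given `prefix` under GPT-2."""
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        # Leading space so GPT-2's BPE treats the target as a separate word.
        target_ids = tokenizer(" " + target, return_tensors="pt").input_ids
        input_ids = torch.cat([prefix_ids, target_ids], dim=1)
        with torch.no_grad():
            log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
        n = prefix_ids.shape[1]
        # The token at position j is predicted by the logits at position j - 1;
        # sum surprisal over the target word's subword tokens.
        nats = -sum(log_probs[0, n + i - 1, tok].item()
                    for i, tok in enumerate(target_ids[0]))
        return nats / math.log(2)

    # Illustrative pair (hypothetical, not the paper's stimuli): the reflexive
    # should agree in gender with its local antecedent "the actress".
    prefix = "The actress who the director praised had injured"
    print("match   :", round(surprisal(prefix, "herself"), 2), "bits")
    print("mismatch:", round(surprisal(prefix, "himself"), 2), "bits")

A gender-sensitive LM should assign lower surprisal to the matching reflexive; the study's wh-filler-gap conditions would extend this comparison to sentences where the antecedent is a displaced wh-filler.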