´ëÇѾð¾îÇÐȸThe Linguistic Association of Korea

Title: Is c-command Machine-learnable?
Authors: Unsub Shin · Myung-Kwan Park · Sanghoun Song
Volume/Issue: Vol. 29, No. 1
Pages: 183-204
Published: 2021.03.31
Abstract: Shin, Unsub; Park, Myung-Kwan & Song, Sanghoun. (2021). Is c-command machine-learnable? The Linguistic Association of Korea Journal, 29(1), 183-204. Many psycholinguistic studies have tested whether pronouns and polarity items elicit additional processing cost when they are not c-commanded. These studies claim that the c-command constraint regulates the distribution of the relevant syntactic objects, so the syntactic effects of the c-command relation depend heavily on the type of licensing (e.g., quantificational binding) and on subjects' reading-comprehension patterns (e.g., linguistic illusions). The present study investigates the reading behavior of the language model BERT when syntactic processing of relational information (i.e., X c-commands Y) is required. Specifically, our two experiments contrasted BERT's comprehension of c-commanding versus non-c-commanding licensors of reflexive anaphors and negative polarity items. An analysis based on the information-theoretic measure of surprisal suggests that violations of the c-command constraint are unexpected under BERT's representations. We conclude that deep learning models like BERT can learn the syntactic c-command restriction, at least with respect to reflexive anaphors and negative polarity items. At the same time, BERT showed limited flexibility in applying compensatory pragmatic reasoning when a non-c-commanding licensor intruded into the dependency structure.
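
For concreteness, the surprisal of a word w in context c is the information-theoretic quantity -log2 P(w | c). The sketch below is not the authors' code; it shows one common way to estimate such surprisals with BERT as a masked language model, via the Hugging Face transformers library. The NPI minimal pair and the choice of "ever" as the target word are illustrative assumptions only, modeled on the licensor configurations the abstract describes.

import math
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def surprisal_bits(sentence: str, target: str) -> float:
    # Surprisal of `target` at its masked position: -log2 P(target | context).
    # Assumes `target` is a single WordPiece token in BERT's vocabulary.
    masked = sentence.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_pos.item()], dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target)
    return -log_probs[target_id].item() / math.log(2)

# Hypothetical NPI minimal pair: the licensor "no" c-commands "ever"
# only in the second sentence, so a model that has learned the
# c-command restriction should assign lower surprisal there.
print(surprisal_bits("the senator that no critic liked has ever won", "ever"))
print(surprisal_bits("no senator that the critic liked has ever won", "ever"))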