A Latent-Variable Model for Intrinsic Probing
Research output: Journal article (peer-reviewed)
Documents
- Fulltext: submitted manuscript, PDF, 602 KB
Abstract

The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information. Indeed, it is natural to assume that these pre-trained representations encode some level of linguistic knowledge: they have brought about large empirical improvements on a wide variety of NLP tasks, which suggests they are learning true linguistic generalizations. In this work, we focus on intrinsic probing, an analysis technique whose goal is not only to identify whether a representation encodes a linguistic attribute but also to pinpoint where this attribute is encoded. We propose a novel latent-variable formulation for constructing intrinsic probes and derive a tractable variational approximation to the log-likelihood. Our results show that our model is versatile and yields tighter mutual information estimates than two intrinsic probes previously proposed in the literature. Finally, we find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
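For intuition, the kind of bound the abstract refers to can be sketched as follows. This is a generic illustration, not the paper's exact derivation: the symbols here, a linguistic attribute $y$, a representation $\mathbf{h}$ with restriction $\mathbf{h}_C$ to a latent subset of dimensions $C$, and a variational distribution $q(C)$, are assumptions made for exposition. Marginalizing over $C$ and applying Jensen's inequality yields a tractable lower bound on the log-likelihood:

```latex
\log p(y \mid \mathbf{h})
  = \log \sum_{C} p\left(y \mid \mathbf{h}_{C}\right) p(C)
  \;\geq\; \mathbb{E}_{q(C)}\!\left[\log p\left(y \mid \mathbf{h}_{C}\right)\right]
  - \mathrm{KL}\!\left(q(C) \,\middle\|\, p(C)\right)
```

Maximizing the right-hand side jointly fits the probe $p(y \mid \mathbf{h}_{C})$ and the distribution $q(C)$ over dimension subsets; under these illustrative assumptions, the mass that $q$ places on particular dimensions is what would let an intrinsic probe pinpoint where an attribute is encoded rather than merely detect its presence.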
| Field | Value |
| --- | --- |
| Original language | English |
| Journal | AAAI Conference on Artificial Intelligence |
| Volume | 37 |
| Issue number | 11 |
| Pages (from-to) | 13591-13599 |
| ISSN | 2159-5399 |
| DOIs | |
| Publication status | Published - 2023 |