How Well can We Learn Interpretable Entity Types from Text?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

  • Dirk Hovy
We investigate a largely unsupervised approach to learning interpretable, domain-specific entity types from unlabeled text. It assumes that any common noun in a domain can function as a potential entity type, and uses those nouns as hidden variables in an HMM. To constrain training, it extracts co-occurrence dictionaries of entities and common nouns from the data. We evaluate the learned types by measuring their prediction accuracy for verb arguments in several domains. The results suggest that it is possible to learn domain-specific entity types from unlabeled data. We show significant improvements over an informed baseline, reducing the error rate by 56%.
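
The constraint idea in the abstract can be illustrated compactly. The Python sketch below is a toy reconstruction under simplifying assumptions: hidden "types" are common nouns, each entity's candidate types come from a sentence-level co-occurrence dictionary, and a hard-EM loop stands in for full HMM training. All names and data here (NOUNS, the ENT: prefix, cooccurrence_dict, hard_em, the example sentences) are hypothetical illustrations, not the paper's actual implementation.

```python
from collections import Counter, defaultdict
import random

# Toy corpus: plain tokens plus entity mentions marked with a hypothetical
# "ENT:" prefix. The set of common nouns doubles as the inventory of
# candidate entity types (assumed, for illustration only).
NOUNS = {"senator", "president", "drug"}

sentences = [
    ["the", "senator", "ENT:Obama", "spoke"],
    ["ENT:Obama", "met", "the", "president"],
    ["the", "drug", "ENT:aspirin", "was", "tested"],
    ["ENT:Pfizer", "sells", "the", "drug"],
]

def cooccurrence_dict(sents):
    """Candidate types per entity: the common nouns appearing in the same
    sentence. This dictionary is the constraint extracted from raw text."""
    cand = defaultdict(set)
    for sent in sents:
        sent_nouns = {t for t in sent if t in NOUNS}
        for tok in sent:
            if tok.startswith("ENT:") and sent_nouns:
                cand[tok].update(sent_nouns)
    return cand

def hard_em(cand, iters=10, seed=0):
    """Simplified hard EM: start each entity on a random candidate type,
    then repeatedly reassign it the most frequent of its candidates until
    the labeling stabilizes. The paper's model instead trains a full HMM
    (transitions and emissions); only the constraint idea is kept here."""
    rng = random.Random(seed)
    label = {e: rng.choice(sorted(c)) for e, c in cand.items()}
    for _ in range(iters):
        freq = Counter(label.values())
        new = {e: max(sorted(c), key=lambda t: freq[t])
               for e, c in cand.items()}
        if new == label:
            break
        label = new
    return label

if __name__ == "__main__":
    print(hard_em(cooccurrence_dict(sentences)))
    # e.g. {'ENT:Obama': 'senator', 'ENT:aspirin': 'drug', 'ENT:Pfizer': 'drug'}
```

On this toy input, entities that share noun contexts converge to the same type label; the model described in the abstract additionally exploits sequence structure through HMM transitions, which this sketch omits.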
Original language: English
Title of host publication: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Place of publication: Baltimore, Maryland
Publisher: Association for Computational Linguistics
Publication date: 2014
Pages: 482-487
Publication status: Published - 2014
