Relation Extraction Datasets in the Digital Humanities Domain and Their Evaluation with Word Embeddings

Abstract

In this research, we manually create high-quality datasets in the digital humanities domain for the evaluation of language models, specifically word embedding models. The first step comprises the creation of unigram and n-gram datasets for two fantasy novel book series, covering two task types each: analogy and doesn't-match. This is followed by training models on the two book series with several popular word embedding model types, such as word2vec, GloVe, fastText, and LexVec. Finally, we evaluate the suitability of word embedding models for such specific relation extraction tasks given the comparably small corpus sizes. In the evaluations, we also investigate and analyze particular aspects such as the impact of corpus term frequencies and task difficulty on accuracy. The datasets, the underlying system, and the word embedding models are available on GitHub and can easily be extended with new datasets and tasks, used to reproduce the presented results, or transferred to other domains.
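To make the described evaluation setup concrete, the following is a minimal sketch of how analogy and doesn't-match tasks can be run against a word2vec model trained on a small book corpus using gensim. The toy sentences, hyperparameters, and task items are illustrative placeholders, not the authors' actual data or configuration.

```python
# Minimal sketch (assumed setup): train word2vec on a tokenized corpus with
# gensim, then evaluate analogy and doesn't-match style tasks. With toy data
# the outputs are meaningless; a real run would use the full preprocessed
# novel text of one book series.
from gensim.models import Word2Vec

# Placeholder tokenized sentences standing in for a preprocessed novel corpus.
sentences = [
    ["king", "rules", "the", "northern", "kingdom"],
    ["queen", "rules", "the", "southern", "kingdom"],
    ["sword", "and", "bow", "are", "weapons"],
]

# Small vector size and min_count=1 only because the toy corpus is tiny.
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=50)

# Doesn't-match task: pick the term that does not belong to the group.
odd_one_out = model.wv.doesnt_match(["king", "queen", "sword"])

# Analogy task via the vector-offset method: king is to northern as queen is to ?
analogy = model.wv.most_similar(positive=["queen", "northern"],
                                negative=["king"], topn=1)

print(odd_one_out, analogy)
```

Accuracy on such tasks is then computed by counting how often the model's top-ranked answer matches the gold answer from the manually created dataset.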

Citation (APA)

Wohlgenannt, G., Chernyak, E., Ilvovsky, D., Barinova, A., & Mouromtsev, D. (2023). Relation Extraction Datasets in the Digital Humanities Domain and Their Evaluation with Word Embeddings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13396 LNCS, pp. 207–219). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-23793-5_18
