On Applicability of Neural Language Models for Readability Assessment in Filipino


Abstract

In the field of automatic readability assessment (ARA), the current trend in the research community is the use of large neural language models such as BERT, as evidenced by their high performance on other downstream NLP tasks. In this study, we dissect the BERT model and apply it to readability assessment in a low-resource setting using a dataset in the Filipino language. Results show that extracting embeddings separately from various layers of BERT yields performance comparable to models trained on a diverse set of handcrafted features, and substantially better than a conventional transfer learning approach.
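The layer-wise embedding extraction described above can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: random arrays stand in for the hidden states a real BERT forward pass would produce (e.g. via HuggingFace transformers with `output_hidden_states=True`), and the shapes, pooling choice (mean over tokens), and downstream classifier are assumptions.

```python
# Sketch: per-layer BERT embedding extraction for readability assessment.
# Hypothetical shapes and random arrays stand in for real BERT hidden
# states so the example is self-contained.
import numpy as np

NUM_LAYERS, SEQ_LEN, HIDDEN = 12, 16, 768  # bert-base dimensions

rng = np.random.default_rng(0)
# hidden_states[i] ~ output of BERT layer i for one tokenized passage
hidden_states = rng.normal(size=(NUM_LAYERS, SEQ_LEN, HIDDEN))

def layer_embedding(hidden_states, layer):
    """Mean-pool token vectors of one layer into a fixed-size passage embedding."""
    return hidden_states[layer].mean(axis=0)

# One feature vector per layer; each set can then be fed to a separate
# traditional classifier (e.g. logistic regression or random forest)
# and compared against models trained on handcrafted features.
features = {i: layer_embedding(hidden_states, i) for i in range(NUM_LAYERS)}
print(features[0].shape)  # (768,)
```

In practice each layer's pooled embeddings would be computed for every passage in the corpus, producing one feature matrix per layer to train and evaluate independently.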

Citation (APA)

Ibañez, M., Reyes, L. L. A., Sapinit, R., Hussien, M. A., & Imperial, J. M. (2022). On Applicability of Neural Language Models for Readability Assessment in Filipino. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13356 LNCS, pp. 573–576). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-11647-6_118
