Inter-rater agreement for the annotation of neurologic signs and symptoms in electronic health records

Abstract

The extraction of patient signs and symptoms recorded as free text in electronic health records is critical for precision medicine. Once extracted, signs and symptoms can be made computable by mapping them to corresponding concepts in an ontology. Extracting signs and symptoms from free text is tedious and time-consuming, and prior studies have suggested that inter-rater agreement for clinical concept extraction is low. We examined inter-rater agreement for annotating neurologic concepts in clinical notes from electronic health records. After training on the annotation process, the annotation tool, and the supporting neuro-ontology, three raters annotated 15 clinical notes in three rounds. Inter-rater agreement among the three annotators was high for both text span and category label. A machine annotator based on a convolutional neural network agreed with the human annotators at a high level, although below the level of human inter-rater agreement. We conclude that high levels of agreement between human annotators are achievable with appropriate training and annotation tools. Furthermore, more training examples, combined with improvements in neural networks and natural language processing, should make machine annotators capable of high-throughput, automated clinical concept extraction with high levels of agreement with human annotators.
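
The abstract reports high agreement on text span and category label but does not name the agreement statistic used. As an illustrative sketch only, the Python snippet below computes Fleiss' kappa, a standard chance-corrected measure of agreement among three or more raters assigning categorical labels; the rating matrix and the neuro-ontology category names are hypothetical and not taken from the study.

# Illustrative sketch (not the paper's actual analysis): Fleiss' kappa
# for chance-corrected agreement among three annotators on category labels.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters who assigned item i to category j."""
    n_items, _ = counts.shape
    n_raters = counts[0].sum()  # raters per item (assumed constant)
    # Overall proportion of assignments falling in each category
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    # Per-item observed agreement
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()            # mean observed agreement
    p_e = np.square(p_j).sum()    # expected agreement by chance
    return (p_bar - p_e) / (1.0 - p_e)

# Hypothetical example: 5 annotated text spans, 3 raters, 3 candidate
# neuro-ontology categories (e.g., "weakness", "ataxia", "aphasia").
ratings = np.array([
    [3, 0, 0],  # all three raters agree on category 1
    [0, 3, 0],
    [2, 1, 0],  # one rater disagrees
    [0, 0, 3],
    [3, 0, 0],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")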

Citation (APA)

Oommen, C., Howlett-Prieto, Q., Carrithers, M. D., & Hier, D. B. (2023). Inter-rater agreement for the annotation of neurologic signs and symptoms in electronic health records. Frontiers in Digital Health, 5. https://doi.org/10.3389/fdgth.2023.1075771
