Automatic detection of hateful comments in online discussion


Abstract

Making violent threats towards minorities such as immigrants or homosexuals is increasingly common on the Internet. We present a method to automatically detect threats of violence using machine learning. A dataset of 24,840 sentences from YouTube was manually annotated as containing a violent threat or not, and was used to train and test the machine learning model. Detecting threats of violence works quite well: the error rate for classifying a violent sentence as non-violent is about 10% when the error rate for classifying a non-violent sentence as violent is adjusted to 5%. The best classification performance is achieved by including features that combine specially chosen important words and the distance between those words in the sentence.
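To make the feature design concrete, the following is a minimal illustrative sketch, not the paper's implementation. It assumes a small hand-picked list of important words, scikit-learn's logistic regression as the classifier, and a coarse bucketing of token distances; the word list, distance buckets, and example sentences are all assumptions for illustration.

# Sketch (assumptions, not the paper's code): sentence-level threat detection
# using features that pair selected "important" words with the token distance
# between them, as described in the abstract.
from itertools import combinations

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed example word list; the paper's actual list is not reproduced here.
IMPORTANT_WORDS = {"kill", "shoot", "shot", "die", "should", "them"}

def word_distance_features(sentence):
    """Map a sentence to word-pair features tagged with a bucketed token distance."""
    tokens = sentence.lower().split()
    positions = [(i, t) for i, t in enumerate(tokens) if t in IMPORTANT_WORDS]
    feats = {}
    # Presence of each important word.
    for _, t in positions:
        feats[f"word={t}"] = 1.0
    # Pairs of important words combined with how far apart they occur.
    for (i, a), (j, b) in combinations(positions, 2):
        bucket = "near" if abs(j - i) <= 3 else "far"  # coarse distance bucket (assumption)
        feats[f"pair={a}|{b}|{bucket}"] = 1.0
    return feats

# Tiny toy training set (illustrative only); 1 = violent threat, 0 = not.
sentences = ["they should all be shot", "I watched a video yesterday"]
labels = [1, 0]

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit([word_distance_features(s) for s in sentences], labels)

# In practice the decision threshold on the predicted probability would be tuned
# so the false-positive rate on non-violent sentences is about 5%, and the miss
# rate on violent sentences is then read off (roughly 10% in the paper).
print(model.predict_proba([word_distance_features("they should all be shot")])[0, 1])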

Citation (APA)

Hammer, H. L. (2017). Automatic detection of hateful comments in online discussion. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST (Vol. 188, pp. 164–173). Springer Verlag. https://doi.org/10.1007/978-3-319-52569-3_15
