Tactile facial action units toward enriching social interactions for individuals who are blind

Abstract

Social interactions mediate our communication with others, enable the development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter carry the majority of the information (~65%). Given their inherently visual nature, non-verbal cues are largely inaccessible to individuals who are blind, placing this population at a social disadvantage relative to their sighted peers. For individuals who are blind, embarrassing social situations caused by miscommunication are not uncommon and can lead to social avoidance and isolation. In this paper, we propose a mapping from visual facial expressions, represented as facial action units that may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, enabling discreet, real-time perception of facial expressions during social interactions by individuals who are blind.
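
To make the proposed idea concrete, below is a minimal, hypothetical sketch in Python of an action-unit-to-vibrotactile lookup. It assumes FACS-style AU codes as input (e.g., as produced by a computer vision tool) and an imagined wearable array of vibration motors; the motor indices, durations, and intensities are illustrative placeholders, not the mapping designed by the authors.

from dataclasses import dataclass

@dataclass
class VibrotactileCue:
    motor: int          # index into a hypothetical wearable motor array
    duration_ms: int    # how long the motor vibrates
    intensity: float    # normalized drive strength, 0.0 to 1.0

# Illustrative AU-to-cue table. AU numbers follow the Facial Action
# Coding System (FACS); the cue parameters are invented for this sketch.
AU_TO_CUE = {
    1:  VibrotactileCue(motor=3, duration_ms=250, intensity=0.5),  # inner brow raiser
    4:  VibrotactileCue(motor=2, duration_ms=400, intensity=0.7),  # brow lowerer (frown)
    6:  VibrotactileCue(motor=0, duration_ms=300, intensity=0.6),  # cheek raiser
    12: VibrotactileCue(motor=1, duration_ms=300, intensity=0.8),  # lip corner puller (smile)
}

def cues_for_active_aus(active_aus):
    """Return the vibrotactile cues for the AUs detected in the current frame."""
    return [AU_TO_CUE[au] for au in active_aus if au in AU_TO_CUE]

if __name__ == "__main__":
    # AU6 + AU12 together typically accompany a Duchenne smile.
    for cue in cues_for_active_aus([6, 12]):
        print(f"drive motor {cue.motor} at {cue.intensity:.1f} for {cue.duration_ms} ms")

In a real-time pipeline of this kind, a vision front end would emit the set of active AUs per video frame, and a lookup like the one above would drive the actuators, keeping the cue vocabulary small enough to be learned and perceived discreetly.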

Cite (APA)

McDaniel, T., Devkota, S., Tadayon, R., Duarte, B., Fakhri, B., & Panchanathan, S. (2018). Tactile facial action units toward enriching social interactions for individuals who are blind. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11010 LNCS, pp. 3–14). Springer Verlag. https://doi.org/10.1007/978-3-030-04375-9_1
