Deep learning based gesture recognition and its application in interactive control of intelligent wheelchair

Abstract

With the development of robotics technology, new human-robot interaction technologies have gradually received more and more attention. Bioelectric-signal-based gesture recognition, the subject of this article, has become a frontier topic in new human-robot interaction because of its natural and intuitive information representation and because it is not constrained by complex background conditions. A deep neural network based on the AlexNet architecture is used for gesture recognition from sEMG (surface electromyography) and inertial information. Data is collected with a sliding-window method, and a recognition thread loads the trained model to perform online recognition in real time. Moreover, to improve the robustness of the algorithm to its input data, a verification model based on a Siamese (twin) neural network checks whether the input data belongs to one of the recognized gesture classes. The proposed human-robot interaction method is verified on an omnidirectional intelligent wheelchair, where it achieves effective control.
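The sliding-window data collection mentioned above can be sketched as follows. This is an illustrative example only: the window length and step size are assumed values for demonstration, since the abstract does not specify the parameters used in the paper.

```python
# Sketch of sliding-window segmentation over a sampled signal stream,
# as used to feed fixed-size inputs to a gesture classifier.
# Window length and step size here are illustrative assumptions.

def sliding_windows(samples, window_len, step):
    """Yield fixed-length, possibly overlapping windows from a sample sequence."""
    for start in range(0, len(samples) - window_len + 1, step):
        yield samples[start:start + window_len]

# Example: a stream of 10 samples, windows of 4 with step 2
stream = list(range(10))
windows = list(sliding_windows(stream, window_len=4, step=2))
# Windows start at samples 0, 2, 4, and 6
```

With an overlapping step (step < window length), consecutive windows share samples, which lets the recognition thread produce predictions more frequently than once per full window.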

CITATION STYLE

APA

Zhou, X., Wang, F., Wang, J., Wang, Y., Yan, J., & Zhou, G. (2019). Deep learning based gesture recognition and its application in interactive control of intelligent wheelchair. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11740 LNAI, pp. 547–557). Springer Verlag. https://doi.org/10.1007/978-3-030-27526-6_48
