Spatio-Temporal Action Instance Segmentation and Localisation

Abstract

Current state-of-the-art human action recognition is focused on the classification of temporally trimmed videos in which only one action occurs per frame. In this work we address the problem of action localisation and instance segmentation in which multiple concurrent actions of the same class may be segmented out of an image sequence. We cast the action tube extraction as an energy maximisation problem in which configurations of region proposals in each frame are assigned a cost and the best action tubes are selected via two passes of dynamic programming. One pass associates region proposals in space and time for each action category, and another pass is used to solve for the tube’s temporal extent and to enforce a smooth label sequence through the video. In addition, by taking advantage of recent work on action foreground-background segmentation, we are able to associate each tube with class-specific segmentations. We demonstrate the performance of our algorithm on the challenging LIRIS-HARL dataset and achieve a new state-of-the-art result which is 14.3 times better than previous methods.
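To make the first dynamic-programming pass more concrete, the sketch below links per-frame region proposals into a single action tube by maximising each proposal's class-specific score plus the spatial overlap with the proposal chosen in the previous frame. This is an illustrative Viterbi-style reconstruction under assumptions, not the authors' implementation: the `iou` helper, the `overlap_weight` term, and the function names are hypothetical.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_proposals(boxes, scores, overlap_weight=1.0):
    """boxes[t]: (N_t, 4) array of proposals in frame t;
    scores[t]: (N_t,) array of class-specific scores.
    Returns one proposal index per frame forming the highest-scoring tube."""
    T = len(boxes)
    energy = [scores[0].astype(float)]   # accumulated energy per proposal
    back = []                            # back-pointers for the trace-back
    for t in range(1, T):
        # Pairwise overlap between every proposal in frame t and frame t-1.
        pair = np.array([[iou(p, q) for p in boxes[t - 1]] for q in boxes[t]])
        # Energy of each proposal: its own score plus the best predecessor
        # energy, rewarded by spatial overlap with that predecessor.
        trans = energy[t - 1][None, :] + overlap_weight * pair
        back.append(trans.argmax(axis=1))
        energy.append(scores[t] + trans.max(axis=1))
    # Trace back the optimal path, one proposal per frame.
    path = [int(energy[-1].argmax())]
    for t in range(T - 2, -1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In the full method a second pass over the linked tube would then decide its temporal extent, i.e. trim frames whose labels break a smooth label sequence; that step is omitted here for brevity.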

Citation (APA)

Saha, S., Singh, G., Sapienza, M., Torr, P. H. S., & Cuzzolin, F. (2020). Spatio-Temporal Action Instance Segmentation and Localisation. In Modelling Human Motion: From Human Perception to Robot Design (pp. 141–161). Springer International Publishing. https://doi.org/10.1007/978-3-030-46732-6_8
