Moving Objects Segmentation Based on DeepSphere in Video Surveillance

Abstract

Segmentation of moving objects from video sequences plays an important role in many computer vision applications. In this paper, we present a background subtraction approach based on deep neural networks. More specifically, we propose to employ and validate an unsupervised anomaly discovery framework called “DeepSphere” to perform foreground object detection and segmentation in video sequences. DeepSphere combines deep autoencoders with hypersphere learning to isolate anomaly pollution and reconstruct normal behaviors in spatial and temporal context. We exploit the power of this framework and adapt it to perform foreground object segmentation. We evaluate the performance of our proposed method on 9 surveillance videos from the Background Models Challenge (BMC 2012) dataset, and compare it with a standard subspace learning technique, Robust Principal Component Analysis (RPCA), as well as with a Deep Probabilistic Background Model (DeepPBM). Experimental results show that our approach achieves better results than these existing methods.
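The core idea described above, reconstructing "normal" (background) content and labeling pixels with large reconstruction error as foreground, can be illustrated independently of DeepSphere's specific architecture. The sketch below is a minimal NumPy illustration of the reconstruction-error thresholding step only; the function name `segment_foreground` and the fixed threshold are hypothetical, and in the actual method the reconstruction would come from the trained deep autoencoder rather than being supplied directly.

```python
import numpy as np

def segment_foreground(frames, reconstructed, threshold=0.1):
    """Label pixels whose reconstruction error exceeds a threshold
    as foreground.

    frames, reconstructed: arrays of shape (T, H, W) with values in
    [0, 1]; `reconstructed` stands in for the autoencoder's output,
    which should closely match the background but not moving objects.
    Returns a boolean array of per-pixel foreground masks.
    """
    error = np.abs(frames - reconstructed)
    return error > threshold

# Toy example: a static dark background with one bright "moving" pixel
# in the middle frame, which the background model fails to reconstruct.
background = np.zeros((3, 4, 4))
frames = background.copy()
frames[1, 2, 2] = 1.0  # anomalous pixel in frame 1

masks = segment_foreground(frames, background)
```

In the full approach, the autoencoder is trained so that background regions reconstruct well while moving objects produce large errors, so this simple threshold already yields a usable segmentation mask per frame.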

Citation (APA)

Ammar, S., Bouwmans, T., Zaghden, N., & Neji, M. (2019). Moving Objects Segmentation Based on DeepSphere in Video Surveillance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11845 LNCS, pp. 307–319). Springer. https://doi.org/10.1007/978-3-030-33723-0_25
