Fusion of Depth and Thermal Imaging for People Detection

Abstract

The methodology presented in this paper covers the automatic detection of humans based on two types of images that do not rely on the visible light spectrum, namely thermal and depth images. Various scenarios are considered with the use of deep neural networks that extend Faster R-CNN models. Apart from detecting people independently in depth and in thermal images, we propose two data fusion methods. The first approach is early fusion with a 2-channel compound input. As it turned out, its performance surpassed that of all other methods tested. However, this approach requires that the model be trained on a dataset containing both types of spatially and temporally synchronized imaging sources. If such a training environment cannot be set up, or if the available dataset is not sufficiently large, we recommend the late fusion scenario, i.e. the other approach explored in this paper. Late fusion models can be trained with single-source data. We introduce the dual-NMS method for fusing the depth and thermal detections, as its results are better than those achieved by common NMS.
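
The abstract outlines two fusion strategies but no implementation details. As a rough illustration only, the sketch below shows (a) stacking registered depth and thermal frames into a 2-channel input for a detector with a modified first layer, and (b) a late-fusion baseline that pools detections from two single-source detectors and suppresses duplicates. All function names, thresholds, and data formats here are assumptions, not taken from the paper; the dual-NMS procedure itself is not specified in the abstract, so the late-fusion step falls back to ordinary NMS over the pooled boxes.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np


def early_fusion_input(depth: np.ndarray, thermal: np.ndarray) -> np.ndarray:
    """Stack spatially and temporally synchronized depth and thermal frames
    into one (H, W, 2) array, suitable for a detector whose input layer
    accepts two channels (e.g. a Faster R-CNN backbone with a modified stem)."""
    assert depth.shape == thermal.shape, "frames must be registered and equally sized"
    return np.stack([depth, thermal], axis=-1)


def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    """Standard non-maximum suppression; boxes are (N, 4) as [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        if rest.size == 0:
            break
        # Intersection-over-union of the kept box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou <= iou_thr]
    return keep


def late_fusion(depth_dets: dict, thermal_dets: dict, iou_thr: float = 0.5):
    """Late-fusion baseline: pool detections from the depth-only and
    thermal-only detectors and run one NMS pass over the combined set.
    The paper's dual-NMS refines this cross-source suppression step."""
    boxes = np.vstack([depth_dets["boxes"], thermal_dets["boxes"]])
    scores = np.concatenate([depth_dets["scores"], thermal_dets["scores"]])
    keep = nms(boxes, scores, iou_thr)
    return boxes[keep], scores[keep]
```

The early-fusion path requires training data in which both modalities are aligned, whereas the late-fusion path only needs each detector to have been trained on its own modality, which matches the trade-off described in the abstract.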

Citation (APA)

Gutfeter, W., & Pacut, A. (2021). Fusion of Depth and Thermal Imaging for People Detection. Journal of Telecommunications and Information Technology, (4), 53–60. https://doi.org/10.26636/jtit.2021.155521
