Encoding Stereoscopic Depth Features for Scene Understanding in off-Road Environments

Abstract

Scene understanding for autonomous vehicles is a challenging computer vision task, with recent advances in convolutional neural networks (CNNs) achieving results that notably surpass prior traditional feature-driven approaches. However, limited work investigates the application of such methods either within the highly unstructured off-road environment or to RGBD input data. In this work, we take an existing CNN architecture designed to perform semantic segmentation of RGB images of urban road scenes, then adapt and retrain it to perform the same task with multi-channel RGBD images obtained under a range of challenging off-road conditions. We compare two different stereo matching algorithms and five different methods of encoding depth information, including disparity, local normal orientation and HHA (horizontal disparity, height above ground plane, angle with gravity), to create a total of ten experimental variations of our dataset. Each variation is used to train and test a CNN so that classification performance can be evaluated against a CNN trained using standard RGB input.
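The depth encodings compared in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation; it shows, under simplifying assumptions (a level camera at a fixed height above a flat ground plane, normals approximated from depth gradients), how a disparity map might be converted to an HHA-style three-channel encoding. All function names and parameters are illustrative.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (metres)."""
    return focal_px * baseline_m / np.maximum(disparity, eps)

def encode_hha(disparity, focal_px, baseline_m, camera_height_m=1.5):
    """Sketch of a three-channel HHA-style encoding:
    horizontal disparity, height above an assumed flat ground plane,
    and the angle each local surface normal makes with gravity."""
    depth = disparity_to_depth(disparity, focal_px, baseline_m)
    h, w = depth.shape

    # Height above ground: back-project each pixel's vertical image
    # coordinate, assuming the optical axis is level with the ground.
    v = np.arange(h).reshape(-1, 1) - h / 2.0
    height = camera_height_m - (v / focal_px) * depth

    # Approximate surface normals from depth gradients, then take the
    # angle between each normal and the vertical (gravity) axis.
    dz_dv, dz_du = np.gradient(depth)
    normals = np.dstack([-dz_du, -dz_dv, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    angle = np.degrees(np.arccos(np.clip(normals[..., 1], -1.0, 1.0)))

    # Stack into an H-by-W-by-3 image, analogous to the RGBD input
    # channels described above.
    return np.dstack([disparity, height, angle]).astype(np.float32)
```

In practice each encoding (raw disparity, normals, HHA) would simply replace or augment the input channels fed to the segmentation CNN, which is what allows the ten dataset variations to be trained and evaluated with the same architecture.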

Citation (APA)

Holder, C. J., & Breckon, T. P. (2018). Encoding Stereoscopic Depth Features for Scene Understanding in off-Road Environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10882 LNCS, pp. 427–434). Springer Verlag. https://doi.org/10.1007/978-3-319-93000-8_48
