SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

Abstract

We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network, and a final pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full-input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory-versus-accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient in both memory and computational time during inference. It also has significantly fewer trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road-scene and SUN RGB-D indoor-scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and is the most memory-efficient at inference compared to the other architectures.
We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
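The index-based upsampling described in the abstract can be illustrated in a few lines of NumPy. This is a single-channel sketch under simplifying assumptions (2×2 non-overlapping windows, no padding), not the authors' Caffe implementation: the encoder's max-pooling step records the flat position of each maximum, and the decoder scatters the pooled values back to those positions, producing the sparse map that trainable convolutions then densify.

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """2x2 max pooling that also returns the flat index of each maximum
    (the 'pooling indices' that SegNet's encoder stores)."""
    h, w = x.shape
    out = np.zeros((h // k, w // k), dtype=x.dtype)
    idx = np.zeros((h // k, w // k), dtype=np.int64)
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            out[i, j] = win[r, c]
            idx[i, j] = (i * k + r) * w + (j * k + c)  # flat position in x
    return out, idx

def max_unpool(pooled, idx, out_shape):
    """SegNet-style non-linear upsampling: scatter each pooled value back
    to the encoder-recorded location, leaving zeros elsewhere. The sparse
    result is then convolved with trainable filters to become dense."""
    up = np.zeros(out_shape, dtype=pooled.dtype)
    up.flat[idx.ravel()] = pooled.ravel()
    return up

# Toy 4x4 feature map (hypothetical values, for illustration only).
x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 1.],
              [0., 0., 5., 6.],
              [1., 2., 7., 8.]])
pooled, idx = max_pool_with_indices(x)   # 2x2 map of window maxima
up = max_unpool(pooled, idx, x.shape)    # sparse 4x4 map, maxima restored in place
```

Because only indices (not full feature maps) are stored, this scheme needs far less memory than caching encoder feature maps for decoder concatenation, which is the trade-off the abstract highlights.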


Citation (APA)

Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615
