Semantic segmentation of large-scale point cloud scenes via dual neighborhood feature and global spatial-aware

3 citations · 8 Mendeley readers · This article is free to access.
Abstract

As a core task in 3D scene information extraction, point cloud semantic segmentation is crucial for 3D scene understanding and environmental perception. While extracting local geometric structural features from point clouds, existing methods often overlook the long-range dependencies present in a scene, making it difficult to fully exploit the long-range contextual features hidden in point clouds. To address this, we propose DG-Net, a segmentation algorithm that integrates dual neighborhood features with global spatial awareness. First, a local structure information encoding module learns local geometric shapes by encoding spatial position and directional features, thereby supplementing structural information. Next, a dual neighborhood feature complementation module merges the geometric structural and semantic features within local neighborhoods, learning local dependencies and capturing distinguishable local contextual features. Finally, these features are passed to a global spatial-aware module equipped with a gated unit, which dynamically adjusts the weights of features from different stages, effectively modeling long-range dependencies between local structures and extracting fine-grained long-range contextual features. Experiments on benchmark point cloud scene datasets show, both quantitatively and qualitatively, that our algorithm accurately identifies small-scale objects with complex geometric structures and surpasses other mainstream networks in segmentation performance, achieving mIoU scores of 71.9%, 82.1%, and 59.8% on the S3DIS, Toronto3D, and SensatUrban datasets, respectively.
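The gated, stage-weighted fusion described in the abstract can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration of the general idea (per-point features from several encoder stages re-weighted by learned sigmoid gates before fusion); the module name GatedStageFusion and all dimensions are hypothetical and do not reproduce the authors' published DG-Net code.

```python
# Minimal, hypothetical sketch of gated multi-stage feature fusion.
# Names and dimensions are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class GatedStageFusion(nn.Module):
    """Fuse per-point features from several encoder stages with learned gates.

    Each stage's features are projected to a common width, a sigmoid gate
    re-weights them point-wise, and the gated features are summed so that
    local and long-range (global) cues can be balanced dynamically.
    """

    def __init__(self, stage_dims, fused_dim=256):
        super().__init__()
        self.projs = nn.ModuleList([nn.Linear(d, fused_dim) for d in stage_dims])
        self.gates = nn.ModuleList([nn.Linear(d, fused_dim) for d in stage_dims])

    def forward(self, stage_feats):
        # stage_feats: list of (B, N, C_i) per-point feature tensors, one per stage
        fused = 0
        for proj, gate, feats in zip(self.projs, self.gates, stage_feats):
            fused = fused + torch.sigmoid(gate(feats)) * proj(feats)
        return fused  # (B, N, fused_dim)


if __name__ == "__main__":
    # Three hypothetical stages with 64-, 128-, and 256-dim features for 1024 points
    feats = [torch.randn(2, 1024, c) for c in (64, 128, 256)]
    out = GatedStageFusion([64, 128, 256])(feats)
    print(out.shape)  # torch.Size([2, 1024, 256])
```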




Citation (APA)

Liu, T., Ma, T., Du, P., & Li, D. (2024). Semantic segmentation of large-scale point cloud scenes via dual neighborhood feature and global spatial-aware. International Journal of Applied Earth Observation and Geoinformation, 129. https://doi.org/10.1016/j.jag.2024.103862


Readers' Seniority

PhD / Postgrad / Masters / Doc: 4 (80%)
Researcher: 1 (20%)

Readers' Discipline

Engineering: 2 (50%)
Linguistics: 1 (25%)
Earth and Planetary Sciences: 1 (25%)
