A dual-field sensing scheme for a guidance system for the blind

6 citations · 16 Mendeley readers

Abstract

An electronic guidance system can substantially improve blind people's perception of their local environment. In our previous work (Lin, Q.; Han, Y. A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model. Sensors 2014, 14, 18670–18700), we proposed a context-aware guidance system that combines a laser scanner and a camera. Using a near-field graphical model, that system could interpret a near-field scene at very high resolution. In this paper, we extend the work by adding a far-field graphical model; the integration of the near-field and far-field models constitutes a dual-field sensing scheme. In the near-field range, reliable inference of the ground and object status is obtained by fusing range data and image data with the near-field graphical model. In the far-field range, which only the camera covers, the far-field graphical model interprets image data using appearance and spatial prototypes built from the near-field interpreted data. The dual-field sensing scheme thus lets a guidance system maximise its scene-interpretation capability with a simple sensor configuration. Experiments under various local conditions demonstrate the efficiency of the proposed scheme in improving blind people's perception of urban environments.
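As a rough illustration of the far-field half of this scheme, prototype matching can be sketched as follows. This is a minimal sketch under stated assumptions: the function names, the flat feature vectors, and the nearest-prototype (squared-L2) rule are illustrative placeholders, not the paper's actual graphical-model implementation.

```python
# Hypothetical sketch of the dual-field pipeline: near-field fusion yields
# labeled blocks (ground/object); far-field blocks, seen only by the camera,
# are labeled by matching against appearance prototypes built from them.
# All names and the feature/distance choices are illustrative assumptions.

def build_prototypes(near_blocks):
    """Average the feature vectors of near-field blocks per label."""
    sums, counts = {}, {}
    for label, feat in near_blocks:
        acc = sums.setdefault(label, [0.0] * len(feat))
        for i, v in enumerate(feat):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify_far_block(feat, prototypes):
    """Label a far-field block by its nearest prototype (squared L2)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda lab: dist(feat, prototypes[lab]))
```

For example, near-field blocks labeled "ground" and "object" by the range/image fusion stage would first be averaged into per-class prototypes, after which each far-field block is assigned the label of its closest prototype.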

Figures

  • Figure 1. Dual-field sensing scheme. (a) Dual-field sensing scheme in 2D side view; (b) Dual-field sensing scheme in 3D frame.
  • Figure 2. Guidance system model.
  • Figure 3. Near-field sensing scenario. (a) Near-field sensing result in the image domain; (b) Near-field sensing result in the range data domain.
  • Figure 4. Cross-field appearance interpretation. (a) Block sampling ROI in the range data domain; (b) Block sampling ROI in the original image domain; (c) Appearance prototype building and matching on top-view image domain.
  • Figure 5. Perspective mapping and top-view mapping.
  • Figure 6. Adaptive appearance prototype building.
  • Figure 7. (a) Rotation-invariant CS-LBP descriptor; (b) Rotation-invariant CS-LBP block descriptor.
  • Figure 8. (a) Spatial distribution of the ground and object region in the original image domain; (b) Spatial distribution of the ground and object region in the top-view domain.
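The rotation-invariant CS-LBP descriptor of Figure 7 is a standard center-symmetric texture code: opposite pixel pairs on the 8-neighborhood ring are compared, and rotation invariance is obtained by taking the minimum over cyclic bit rotations. A minimal sketch, assuming a 3×3 neighborhood and a small fixed threshold (the paper's exact sampling radius and threshold are not reproduced here), is:

```python
# Sketch of a rotation-invariant CS-LBP code for one 3x3 neighborhood.
# CS-LBP compares the 4 center-symmetric pixel pairs of the 8-neighborhood,
# giving a 4-bit code; the threshold value here is an illustrative assumption.

def cs_lbp_code(patch, threshold=0.01):
    """patch: 3x3 nested list of grayscale values in [0, 1]."""
    # 8-neighborhood in circular order (clockwise from top-left corner)
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):  # 4 center-symmetric comparisons -> 4-bit code
        if ring[i] - ring[i + 4] > threshold:
            code |= 1 << i
    return code

def rotation_invariant(code, bits=4):
    """Map a code to the minimum over its cyclic bit rotations."""
    mask = (1 << bits) - 1
    best = code
    for r in range(1, bits):
        rotated = ((code >> r) | (code << (bits - r))) & mask
        best = min(best, rotated)
    return best
```

Since only 4 comparisons are made per pixel (versus 8 for plain LBP), the resulting histograms are much shorter, which is why CS-LBP is attractive for block-level appearance prototypes.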


Cited by (via Scopus)

  • Unifying terrain awareness for the visually impaired through real-time semantic segmentation (94 citations)
  • Object detection and recognition: using deep learning to assist the visually impaired (23 citations)
  • Context-aware assistive indoor navigation of visually impaired persons (13 citations)


Citation (APA)

Lin, Q., & Han, Y. (2016). A dual-field sensing scheme for a guidance system for the blind. Sensors (Switzerland), 16(5). https://doi.org/10.3390/s16050667

Readers over time

[Bar chart omitted: yearly Mendeley reader counts, ’16–’25]

Readers' Seniority

  • PhD / Postgrad / Masters / Doc: 5 (63%)
  • Professor / Associate Prof.: 3 (38%)

Readers' Discipline

  • Computer Science: 6 (55%)
  • Engineering: 3 (27%)
  • Medicine and Dentistry: 1 (9%)
  • Nursing and Health Professions: 1 (9%)
