Applications of science and technology have made human life much easier. Vision plays a very important role in one's life. Due to disease, accidents or other reasons, people may lose their vision. Navigation becomes a major problem for people with complete or partial blindness. This paper aims to provide navigation guidance for the visually impaired. Here we have designed a model which provides instructions that help visually impaired people navigate freely. A NoIR camera is used to capture the scene around the person and identify the objects in it. Voice output describing the detected objects is provided through earphones. The model includes a Raspberry Pi 3 processor, which detects the objects in the surroundings and converts them into a voice message; the NoIR camera captures the scene, a power bank supplies power, and earphones deliver the output message. The TensorFlow API, an open-source software library, is used for object detection and classification. Using the TensorFlow API, multiple objects can be detected in a single frame. eSpeak, a text-to-speech (TTS) synthesizer, is used to convert the text labels of the detected objects into speech. Hence, the video captured by the NoIR camera is converted into voice output that guides the user about nearby objects. Using the COCO model, 90 common object classes such as person, table and book are identified.
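The abstract describes a capture-detect-speak pipeline: frames from the NoIR camera are run through a COCO-trained TensorFlow detector, and the resulting labels are spoken via eSpeak. The following is a minimal sketch of such a loop, not the authors' implementation; the model path, the label subset, the confidence threshold and the use of OpenCV for camera access are assumptions made for illustration.

```python
# Sketch of a detect-and-speak loop on a Raspberry Pi, assuming a COCO-trained
# TensorFlow object-detection SavedModel (e.g. an SSD MobileNet export), the
# NoIR camera exposed as a video device, and the eSpeak command-line tool
# installed. Paths and labels below are illustrative only.
import subprocess

import cv2
import numpy as np
import tensorflow as tf

MODEL_DIR = "ssd_mobilenet_v2_coco/saved_model"        # hypothetical model path
COCO_LABELS = {1: "person", 62: "chair", 84: "book"}   # small subset of the 90 COCO classes

detect_fn = tf.saved_model.load(MODEL_DIR)
camera = cv2.VideoCapture(0)  # NoIR camera exposed as the default video device

try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break

        # The detection model expects a batched uint8 RGB tensor.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
        detections = detect_fn(batch)

        classes = detections["detection_classes"][0].numpy().astype(int)
        scores = detections["detection_scores"][0].numpy()

        # Keep confidently detected objects whose labels we know.
        names = {COCO_LABELS[c] for c, s in zip(classes, scores)
                 if s > 0.5 and c in COCO_LABELS}
        if names:
            message = "I see " + ", ".join(sorted(names))
            # eSpeak converts the detected-object text into speech on the earphones.
            subprocess.run(["espeak", message])
finally:
    camera.release()
```

In this sketch, detection and speech run sequentially per frame; a real deployment would likely throttle the announcements so repeated detections of the same object do not flood the earphones.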
CITATION STYLE
H M, N. … B S, M. (2020). Navigation Aid for the Blind and the Visually Impaired People using eSpeak and Tensor Flow. International Journal of Recent Technology and Engineering (IJRTE), 8(6), 2924–2927. https://doi.org/10.35940/ijrte.f8327.038620