IoT Assistant for People with Visual Impairment in Edge Computing
Abstract
We argue that technology makes it possible to develop devices capable of recognizing people and objects, mainly with the aid of machine learning, computer vision, and cloud computing. Such devices can be used in the daily life of a visually impaired person, providing valuable information to guide their steps and improving their quality of life. This paper proposes an architecture that uses computer vision and applies deep learning techniques in an Internet of Things (IoT) assistant for people with visual impairment. Because an IoT device has limited resources, the proposed architecture relies on edge computing so that the device can be updated over time. The recognized object is converted into speech with Text-To-Speech (TTS), allowing the user to hear what has been recognized as well as the distance from the user to the object. Unrecognized objects are sent to the cloud, and the device receives a re-trained network in return. The proposed architecture has been implemented using well-known and proven technologies such as the Raspberry Pi 3, a USB camera, an ultrasonic sensor module, the You Only Look Once (YOLO) algorithm, Google-TTS, and Python. Experimental results demonstrate that the architecture is feasible and promising.
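To make the pipeline summarized above concrete, the sketch below shows one possible device-side loop in Python. It is an illustrative assumption, not the authors' implementation: it assumes a YOLO model loaded through OpenCV's DNN module, the gTTS package for Google-TTS, and a hypothetical helper read_ultrasonic_cm() standing in for the ultrasonic sensor reading; all file paths and names are placeholders.

```python
"""Sketch: capture a frame, detect an object with YOLO, speak label and distance."""
import cv2
from gtts import gTTS

# Placeholder model files; the actual weights/config depend on the deployment.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
labels = open("coco.names").read().splitlines()

def read_ultrasonic_cm():
    # Hypothetical stub: on the Raspberry Pi 3 this would time the echo pulse
    # of the ultrasonic sensor module (e.g. via RPi.GPIO) and convert it to cm.
    return 0.0

def announce(label, distance_cm):
    # Convert the recognition result and distance to speech with Google-TTS.
    gTTS(text=f"{label}, about {distance_cm:.0f} centimeters ahead",
         lang="en").save("/tmp/msg.mp3")
    # Playback (e.g. mpg123 /tmp/msg.mp3) depends on the audio setup.

cap = cv2.VideoCapture(0)                       # USB camera
ok, frame = cap.read()
if ok:
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    # ... select the highest-confidence detection from `outputs` (omitted) ...
    announce(labels[0], read_ultrasonic_cm())
```

In the architecture described in the paper, a frame whose objects are not recognized would additionally be uploaded to the cloud, and the device would later download a re-trained network to replace the local weights.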