Real-Time Object Detection Using OpenCV for Visually Impaired People

Ahmed Alturki
Apr 8, 2021


Overview

Computer vision has recently been an active area of research. In this project, we build a device around one of its applications, object detection, working in real time.

Raspberry-Pi 4 with camera and sensor installed

This project was made as a final-year project named “Smart Cane” in March 2021. Since the device is aimed at visually impaired people, it is installed on a cane to be used by the targeted users. It detects and classifies objects using a camera and a deep learning model, measures distance, identifies threats to the user, and alerts the user of any danger through a feedback mechanism.

Hardware:

Device: The device on which the program will run. Since it will be installed on a cane and must work in real time, a smartphone or a Raspberry-Pi could be used. In this implementation, a Raspberry-Pi 4 and this case are used.

Camera: The camera opens an image stream (video), and its frames are processed to identify objects. In this implementation, the Raspberry-Pi 4 Camera is used.

Sensor for Distance: Infrared or ultrasonic sensors can be used to measure distance. In this implementation, an ultrasonic sensor is used; its circuit can be built by following this guide.
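As a sketch of how the ultrasonic reading could be taken: the helper below converts an echo pulse duration into centimeters, and the second function shows one way to time that pulse on the Pi. The HC-SR04-style trigger/echo protocol and the pin numbers are assumptions, not details from the guide linked above.

```python
# Sketch: turning an ultrasonic echo pulse into a distance reading.
# Assumes an HC-SR04-style sensor; the pin numbers are hypothetical.

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air, cm/s

def pulse_to_distance_cm(pulse_duration_s: float) -> float:
    """The echo pulse covers the round trip, so halve the travelled distance."""
    return pulse_duration_s * SPEED_OF_SOUND_CM_S / 2

def measure_distance_cm(trigger_pin: int = 23, echo_pin: int = 24) -> float:
    """Runs only on the Pi: triggers the sensor and times the echo pulse."""
    import time
    import RPi.GPIO as GPIO  # imported here so the helper above works off-device

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(trigger_pin, GPIO.OUT)
    GPIO.setup(echo_pin, GPIO.IN)

    # A 10-microsecond pulse on the trigger pin starts a measurement.
    GPIO.output(trigger_pin, True)
    time.sleep(0.00001)
    GPIO.output(trigger_pin, False)

    start = time.time()
    while GPIO.input(echo_pin) == 0:   # wait for the echo pulse to begin
        start = time.time()
    stop = start
    while GPIO.input(echo_pin) == 1:   # wait for it to end
        stop = time.time()

    return pulse_to_distance_cm(stop - start)
```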

Feedback Component: Various feedback mechanisms can be used, such as text-to-speech with earphones or a vibration motor. In this implementation, a vibration motor is used.
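As one way the vibration motor could encode urgency, the sketch below maps a danger level to a pulse pattern and drives the motor through a GPIO pin. The pattern durations and the pin number are hypothetical choices, not the project's actual values.

```python
# Sketch: mapping a danger level to a vibration pattern.
# The (on_s, off_s) durations and the GPIO pin are illustrative assumptions.

PATTERNS = {
    "high": [(0.2, 0.1)] * 5,    # five short, urgent pulses
    "medium": [(0.3, 0.3)] * 2,  # two slower pulses
    "low": [(0.5, 0.0)],         # one gentle pulse
}

def vibration_pattern(danger_level: str):
    """Return the (on, off) pulse sequence for a danger level."""
    return PATTERNS.get(danger_level, [])

def vibrate(danger_level: str, motor_pin: int = 18) -> None:
    """Runs only on the Pi: pulses the motor according to the pattern."""
    import time
    import RPi.GPIO as GPIO  # imported here so the helper above works off-device

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(motor_pin, GPIO.OUT)
    for on_s, off_s in vibration_pattern(danger_level):
        GPIO.output(motor_pin, True)
        time.sleep(on_s)
        GPIO.output(motor_pin, False)
        time.sleep(off_s)
```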

(Optional) Battery and a cane for complete implementation.

Software:

This project is developed in the Python programming language with the OpenCV library. OpenCV is an open-source library of programming functions mainly aimed at real-time computer vision; it can be installed on a Raspberry-Pi by following the steps mentioned in this link.

Threading is used so that separate parts of the code can run concurrently; this allows, for example, the distance sensor to keep working while object detection runs.
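A minimal sketch of that threading idea: a background thread keeps refreshing the latest distance reading while the main loop (object detection) reads it whenever needed. The `DistanceMonitor` class and its polling interval are illustrative, not the project's actual code.

```python
# Sketch: a sensor thread updates a shared reading while the main loop keeps
# running. The sensor is passed in as a callable so this works off-device.
import threading
import time

class DistanceMonitor:
    def __init__(self, read_sensor):
        self._read_sensor = read_sensor   # callable returning distance in cm
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self.latest_cm = None

    def _loop(self):
        while not self._stop.is_set():
            reading = self._read_sensor()
            with self._lock:              # protect the shared value
                self.latest_cm = reading
            time.sleep(0.05)              # poll roughly 20 times per second

    def start(self):
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def distance_cm(self):
        with self._lock:
            return self.latest_cm
```

In use, the detection loop would call `monitor.distance_cm()` on each frame instead of blocking on the sensor itself.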

Object Detection Model:

Since the device has limited processing power, we need lightweight object detection models to be able to detect in real time. Available light models include TinyYOLO and SSD MobileNet; in this implementation, we use SSD MobileNet.

The model takes every frame and tries to classify the objects in it. It can classify up to 100 object classes and returns each object's name together with the model's confidence in the classification.
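As a sketch of how SSD MobileNet can be driven through OpenCV's DNN module (one common way to run this model, not necessarily this project's exact code): the model file names, label file, and the 0.5 confidence threshold below are assumptions.

```python
# Sketch: SSD MobileNet via OpenCV's DNN module. File names are assumptions;
# use the frozen graph, config, and label files that ship with your model.

def filter_detections(class_ids, confidences, labels, threshold=0.5):
    """Keep only detections the model is reasonably confident about."""
    return [
        (labels[int(cid) - 1], float(conf))  # COCO class ids start at 1
        for cid, conf in zip(class_ids, confidences)
        if conf >= threshold
    ]

def run_detection():
    """Runs where opencv-python is installed (e.g. on the Pi)."""
    import cv2  # imported here so the helper above works without OpenCV

    with open("coco.names") as f:
        labels = f.read().strip().split("\n")

    net = cv2.dnn_DetectionModel("frozen_inference_graph.pb",
                                 "ssd_mobilenet_v3_large_coco.pbtxt")
    net.setInputSize(320, 320)
    net.setInputScale(1.0 / 127.5)
    net.setInputMean((127.5, 127.5, 127.5))
    net.setInputSwapRB(True)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        class_ids, confidences, boxes = net.detect(frame, confThreshold=0.5)
        for name, conf in filter_detections(class_ids, confidences, labels):
            print(name, round(conf, 2))
```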

Demo 1

Implementation:

To make use of the information returned by the model, the machine continuously measures the danger level to the user. This is done by defining three danger classes (high, medium, and low); the level changes based on the object detected. For example, if a car is detected, the danger level is higher than if a cell phone is detected. The danger level can also change with distance: if the sensor detects an object within 1 meter, the danger level rises.
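The danger rules above can be sketched as a small function. The object groupings and the one-step escalation within 1 meter are illustrative assumptions; only the three levels and the car-vs-cell-phone ordering come from the description above.

```python
# Sketch: combining the detected object class and the measured distance into
# a danger level. The object sets and escalation rule are illustrative.

HIGH_DANGER_OBJECTS = {"car", "bus", "truck", "motorcycle", "bicycle"}
MEDIUM_DANGER_OBJECTS = {"person", "dog", "chair"}

def danger_level(object_name: str, distance_cm: float) -> str:
    """Return 'high', 'medium', or 'low' for one detection."""
    if object_name in HIGH_DANGER_OBJECTS:
        level = "high"
    elif object_name in MEDIUM_DANGER_OBJECTS:
        level = "medium"
    else:
        level = "low"
    # Anything within one meter escalates the danger by one step.
    if distance_cm is not None and distance_cm < 100:
        level = {"low": "medium", "medium": "high", "high": "high"}[level]
    return level
```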

Possible Improvements:

This project uses recent lightweight object detection models, as mentioned above; these models are fast with reasonable accuracy. With the rapid development and research taking place in computer vision, models as fast as the one used here (or even faster) with higher accuracy will hopefully emerge, and using them would improve the implementation. Additionally, although the current computing power produces good results, greater computing power would give a smoother experience and a better camera frame rate.

A mobile app could be developed to work with the machine. It could provide information about the device, such as battery health, to users (or their guardians), and would allow further functionality.

The feedback mechanism could be changed to a wireless method, such as wireless bands controlled over Bluetooth, or any other reasonable idea.

Also, the implementation code could be made more efficient, and the threading could be better structured.

Note: This is a team project, and it was done with my teammate and friend Thabet.

References:

The complete code for this project can be found in this GitHub repository:

Our Supervisor’s LinkedIn.

My Teammate’s LinkedIn.

My LinkedIn Page.

Mr. Murtaza Hassan has done much work on computer vision and object detection applications; his work was helpful for this project.

Learn more about SSD MobileNet model here.

Learn more about YOLO model here.
