Computer Vision based Auditory Guidance for Navigation in Indoor Semi-controlled Environment


Executing Organization: Department of Computer Science & IT, University of Malakand
Funding Organization: ICT R&D Fund, Ministry of Information Technology, Government of Pakistan
Approved Budget: 7.99153 million
Project duration: 21 months

 

Project Team

  1. Dr. Sehat Ullah
    Project Director/Principal Investigator
    Assistant Professor
    sehatullah@uom.edu.pk
    sehatullah@hotmail.com

  2. Dr. Engr. Sami ur Rahman
    Deputy Director/Co-Principal Investigator
    Assistant Professor
    srahman@uom.edu.pk
    softrays@hotmail.com

  3. Mr. Zia ur Rahman
    Developer
    BS in Computer Science
    Zia.khan110@gmail.com

  4. Mr. Sayed Nabi
    Developer
    BS in Computer Science
    syednabi.uom@gmail.com

  5. Mr. Aurang Zeb
    Research Assistant
    (MS/MPhil Level)

  6. Mr. Raees Khan ShahSani
    Research Assistant
    (PhD Level)

  7. Mr. Farman Ullah
    Accountant
    (Additional responsibility)

 

Executive Summary

The objective of this project is to develop a vision-based software system that enables visually impaired people to navigate independently in indoor semi-controlled environments. The system will also provide help/guidance to tourists or visitors in large unfamiliar places such as museums and historical buildings. Here, guidance includes not only navigational assistance but also automatic provision of information about various objects.

Fiducial markers augmented with audio information are placed in the environment. The user holds a camera-equipped mobile phone and navigates through the environment. Audio assistance is provided whenever a particular marker is detected by the camera, enabling the user to navigate independently or to automatically obtain appropriate information about objects in the environment.
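The detect-then-announce cycle described above can be sketched as follows. The marker detector itself is stubbed out (in a real build it would be backed by a computer-vision library such as OpenCV's ArUco module), and the marker IDs, messages, and function names are invented for illustration only:

```python
# Minimal sketch of the detect-and-announce loop. The detector is stubbed:
# a real implementation would analyse camera frames with a CV library
# (e.g. OpenCV's ArUco module). All names and data here are illustrative.

# Hypothetical mapping from marker IDs to the audio message attached to them.
MARKER_AUDIO = {
    1: "You are at the main entrance.",
    2: "Library to your right.",
    3: "You have reached the seminar hall.",
}

def detect_marker_id(frame):
    """Stub: return the ID of a fiducial marker found in the frame, or None.
    Frames are simulated as plain dicts for this sketch."""
    return frame.get("marker_id")

def guidance_message(frame):
    """Return the audio text for the marker visible in the frame, if any."""
    marker_id = detect_marker_id(frame)
    if marker_id is None:
        return None
    return MARKER_AUDIO.get(marker_id, "Unknown marker.")

# Simulated video stream: most frames contain no marker.
stream = [{"marker_id": None}, {"marker_id": 2}, {"marker_id": None}]
messages = [m for f in stream if (m := guidance_message(f)) is not None]
print(messages)  # ['Library to your right.']
```

In the deployed system the returned text would be passed to a text-to-speech engine or a pre-recorded audio clip rather than printed.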

 

Scope of the Project

To develop a vision-based software system that will enable visually impaired people to navigate independently in indoor semi-controlled environments. The user will be given audio information to assist in navigation. Fiducial markers (special patterns printed on paper) will be placed at optimal positions in the environment (e.g., on ceilings and doors). Each marker will be augmented with audio information based on its position in the environment. The user will be required to hold a mobile phone (with a camera) while navigating. The system will scan the video stream captured by the phone's camera for fiducial markers. Once a marker is detected, the next step will be to identify it within the set of all markers deployed in the environment. The system will then play the audio information corresponding to the identified marker. The audio information will help the user:

  1. To know his/her current position/location in the environment
  2. To know about his/her surroundings (e.g., office X to the right, room Y to the left, Lab Z, etc.)
  3. To locate his/her destination
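The three kinds of information above could be carried by a per-marker record along these lines. The field names and sample data are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative record of what each deployed marker encodes: the user's
# current location plus spoken descriptions of the surroundings.
@dataclass
class MarkerInfo:
    marker_id: int
    location: str                                      # current position
    surroundings: list = field(default_factory=list)   # nearby rooms/offices

def describe(info: MarkerInfo) -> str:
    """Compose the audio text spoken when this marker is recognised."""
    parts = [f"You are at {info.location}."]
    parts.extend(info.surroundings)
    return " ".join(parts)

m = MarkerInfo(7, "the second-floor corridor",
               ["Office X is to your right.", "Room Y is to your left."])
print(describe(m))
# "You are at the second-floor corridor. Office X is to your right. Room Y is to your left."
```

Keeping the marker's payload as structured data rather than a single audio file would let the same marker serve both navigation and object-information use cases.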

In addition, the system will help the user reach a particular destination in the environment through auditory guidance. Similarly, the system will help/guide a new visitor (via his/her mobile phone) to locate his/her destination in a large unfamiliar building. In this case, the local area network will detect the new person's entry into the environment and send him/her a message prompting installation of the software. Once the software is installed, the user will be able to use the system's guidance.
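Auditory guidance to a destination can be sketched as a shortest-path search over a graph whose nodes are the deployed markers and whose edges carry a spoken direction. The building layout, node names, and directions below are invented for illustration:

```python
from collections import deque

# Hypothetical marker graph: GRAPH[node] maps each reachable neighbour
# to the spoken cue that takes the user there.
GRAPH = {
    "entrance": {"corridor": "Walk straight ahead."},
    "corridor": {"entrance": "Walk back.",
                 "lab": "Turn left.",
                 "office": "Turn right."},
    "lab": {},
    "office": {},
}

def route_audio(start, goal):
    """Breadth-first search from start to goal; returns the list of
    spoken directions, or None if the goal is unreachable."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, cues = queue.popleft()
        if node == goal:
            return cues
        for nxt, cue in GRAPH.get(node, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, cues + [cue]))
    return None

print(route_audio("entrance", "lab"))  # ['Walk straight ahead.', 'Turn left.']
```

As the user walks, each newly detected marker would re-anchor the search at the user's actual position, so a wrong turn simply triggers a fresh route from the current marker.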



University of Malakand - Pakistan