Driver Support System

Introduction:

The main objective of the project we are working on is to develop a smart autonomous vehicle that can navigate through an environment and, through a sensor suite, collect data about that environment. These data feed into an on-board intelligent system that interprets the surroundings and performs certain tasks of interest. One goal of our system is to provide a Driver Support System (DSS) that can also be employed in a pedestrian-aiding system. In such applications, road sign detection and recognition (RSR) is very important, since road signs carry much of the information necessary for successful, safe, and easy driving and navigation.

Goal:

The main goal of the project is to develop a smart autonomous vehicle that can navigate through an environment and perform certain tasks of interest.

Methods:

We have developed a multistage approach for road sign detection and recognition. In the first stage, a Bayes classifier detects road signs in the captured image based on their color content. The classifier does not merely label candidate regions in the captured image; it also assigns each label to the appropriate road sign category. A minimal sketch of such a color-based Bayes classifier is shown below.
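The following is a minimal sketch of how a per-pixel Bayes color classifier of this kind could be implemented. The Gaussian class-conditional color model, the class names, and the training interface are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

class PixelBayesClassifier:
    """Illustrative per-pixel Bayes classifier over color values.

    Each sign-color category (e.g. "red", "blue", "yellow", "background")
    is modelled with a Gaussian class-conditional density; class names and
    training data are placeholders, not the project's actual data.
    """

    def fit(self, samples_by_class):
        """samples_by_class: dict mapping class name -> (N, 3) color samples."""
        self.classes = list(samples_by_class)
        self.means, self.inv_covs, self.log_norms, self.log_priors = {}, {}, {}, {}
        n_total = sum(len(s) for s in samples_by_class.values())
        for c, s in samples_by_class.items():
            s = np.asarray(s, dtype=float)
            mu = s.mean(axis=0)
            cov = np.cov(s, rowvar=False) + 1e-6 * np.eye(3)  # regularized covariance
            self.means[c] = mu
            self.inv_covs[c] = np.linalg.inv(cov)
            self.log_norms[c] = -0.5 * np.log(np.linalg.det(cov))
            self.log_priors[c] = np.log(len(s) / n_total)
        return self

    def predict(self, image):
        """image: (H, W, 3) array; returns (H, W) map of class indices."""
        pixels = image.reshape(-1, 3).astype(float)
        scores = np.empty((pixels.shape[0], len(self.classes)))
        for k, c in enumerate(self.classes):
            d = pixels - self.means[c]
            maha = np.einsum('ij,jk,ik->i', d, self.inv_covs[c], d)
            # Log-posterior up to a constant: prior + Gaussian log-likelihood.
            scores[:, k] = self.log_priors[c] + self.log_norms[c] - 0.5 * maha
        return scores.argmax(axis=1).reshape(image.shape[:2])
```

Pixels labeled with a sign-color class can then be grouped into candidate regions, each already carrying a category hint (for example, red for prohibition signs, blue for information signs) that narrows the templates considered in the next stage.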

In the second stage, based on the results obtained by the Bayes classifier, an invariant feature transform, namely the Scale Invariant Feature Transform (SIFT), is used to match the detected labels with the corresponding road signs. Using SIFT for the matching process offers several advantages over previous work in RSR. For example, it avoids the slowness of template-matching-based techniques and the need for the large number of real sign images required to train neural-network-based approaches.
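A hedged sketch of how such SIFT matching could look using OpenCV follows. The file paths, the ratio-test threshold, and the choice to rank templates by match count are assumptions made for illustration, not the exact procedure of the published method.

```python
import cv2

def match_sign(detected_region_path, template_paths, ratio=0.75):
    """Match a detected sign region against a library of template sign images.

    Paths are placeholders; returns the best-matching template path and the
    number of good SIFT correspondences.
    """
    sift = cv2.SIFT_create()
    query = cv2.imread(detected_region_path, cv2.IMREAD_GRAYSCALE)
    kp_q, des_q = sift.detectAndCompute(query, None)
    if des_q is None:
        return None, 0

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    best_template, best_count = None, 0
    for path in template_paths:
        template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        kp_t, des_t = sift.detectAndCompute(template, None)
        if des_t is None:
            continue
        # Lowe's ratio test keeps only distinctive correspondences.
        knn = matcher.knnMatch(des_q, des_t, k=2)
        good = [p[0] for p in knn
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_count:
            best_template, best_count = path, len(good)
    return best_template, best_count
```

Because the Bayes stage already restricts each detected region to a color category, the template library searched here can be limited to signs of that category, keeping the matching step fast.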

Results:

Figures (not shown): road sign images, Bayes classification results, detected signs, and SIFT matching results.

Research Team:

Publications:

  1. Aly Farag and Alaa E. Abdel-Hakim, "Detection, Categorization and Recognition of Road Signs for Autonomous Navigation," Proc. Advanced Concepts in Intelligent Vision Systems (ACIVS 2004), Brussels, Belgium, August-September 2004, pp. 125-130.

Acknowledgement:

We would like to thank the U.S. Army for its sponsorship.
