Autonomous Ground Vehicle


The autonomous ground vehicle project at the CVIP lab has two major components: indoor navigation and outdoor navigation. For indoor navigation, an experiment based on optical flow was conducted. In this experiment, the robot moves on a flat floor, so motion along the vertical direction is negligible. The algorithm captures images from the left and right cameras simultaneously and smooths them with Gaussian filters. Optical flow is then calculated for both the left and right images, and the difference between the average flow on each side is used to turn the robot to the left or to the right. For outdoor navigation, the CVIP lab developed a software package to demonstrate outdoor navigation using three well-known architecture paradigms: the reactive paradigm (Sense-Act), the hierarchical paradigm (Sense-Plan-Act), and the hybrid reactive paradigm (Plan, Sense-Act).


The aim of this project is to develop robust methods for indoor and outdoor navigation of an Autonomous Ground Vehicle.


For both indoor and outdoor navigation, the platform used was an ATRV-2 iRobot. The ATRV-2 is equipped with sonar, LADAR, two stereo cameras, and a GPS receiver, and carries two PCs running Linux. The indoor and outdoor navigation algorithms are implemented in C++ with the Mobility Robot Integration Software. We are now upgrading the ATRV-2's software by rewriting the code with Player/Stage, an open-source software project built by the academic robotics community to enable research in robot and sensor systems. The hardware will also be upgraded: the robot's two onboard PCs will be replaced with newer ones, and high-frame-rate cameras will replace the current cameras. The research team for this project is studying algorithms for dense motion estimation and stereo correspondence. Moreover, Kalman filters, particle filters, camera models, sensor fusion, and sensor networks will be used to improve the indoor and outdoor navigation algorithms in structured and unstructured environments. We will also study the case where motion along the vertical direction is not negligible. The CVIP lab uses another platform for navigation, the ATRVmini iRobot, on which the navigation algorithms are implemented with Player/Stage; this is discussed in detail in the following section.

The ATRVmini autonomous mobile robot project has been working toward developing a platform for in-lab research and experimentation, as well as for conducting autonomous tours of the projects and facilities in the CVIP lab. The project began as a hardware refurbishment of an outdated ATRVmini robot. First, a new motherboard, PC power supply, hard drive, USB-to-serial converters, and a Wi-Fi adapter were purchased, installed, and interfaced to the robot hardware. Then a fresh installation of the Linux OS, along with the Player/Stage robot server system, was installed on the robot's PC.

Software refurbishment of the ATRVmini robot has involved work with Linux systems, Player/Stage, C++, OpenCV, speech synthesis, and related tools. The main research has involved accessing, controlling, and reading the robot's sensor suite with Player/Stage so that the robot can be controlled autonomously from C++ code. Currently the robot has a working SICK laser scanner, a pan-tilt-zoom camera unit, a color video camera, and speed/rotation control of the motors and wheels. Control code for blob tracking/following, basic obstacle avoidance, face detection, and speech has been demonstrated. Future work intends to port more advanced control programs for face detection/object recognition and navigation/path planning from other CVIP projects to the ATRVmini platform.


The ATRV-2 robot

Research Team:


Publications:
  1. Aly Farag and Alaa E. Abdel-Hakim, "Detection, Categorization and Recognition of Road Signs for Autonomous Navigation," Proc. Advanced Concepts for Intelligent Vision Systems (ACIVS 2004), Brussels, Belgium, August-September 2004, pp. 125-130.
  2. Aly A. Farag, M. Sabry Hassouna, and Alaa E. Abdel-Hakim, "PDE-Based Robust Robotic Navigation," Proc. 2nd Canadian Conference on Computer and Robot Vision (CRV 2005), Victoria, British Columbia, Canada, May 2005.
  3. M. Sabry Hassouna, A. A. Farag, and Alaa Abdel-Hakim, "Robust Robotic Path Planning Using Level Sets," Proc. IEEE International Conference on Image Processing (ICIP 2005), Genova, Italy, September 11-14, 2005.