Sensor Planning in Smart Vision Systems

Introduction:

Camera planning can be viewed as the problem of assigning proper values to camera parameters in order to satisfy a set of visual requirements. The number of camera parameters that a planning algorithm can generate depends on the number of degrees of freedom of the vision system. For example, the degrees of freedom of a vision system with passive lenses are its position and orientation parameters; in systems with active lenses, these extend to include zoom and focus settings. The visual requirements vary with the application: in visual tracking, the requirement is to keep a target object in the camera's field of view, while in 3D modeling with stereo or multi-camera acquisition systems, the planning process aims to maximize the overlap of the relevant object(s) in the acquired images.
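To make the two parameter sets concrete, here is a minimal Python sketch of a camera's controllable degrees of freedom; all names are illustrative, not taken from the project's code:

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Controllable degrees of freedom of one camera (illustrative names)."""
    # Position and orientation: the DOF of a passive-lens system
    x: float
    y: float
    z: float
    pan: float
    tilt: float
    roll: float
    # Additional DOF available only in an active-lens system
    zoom: float = 1.0
    focus: float = 1.0
```

A planner for a passive-lens system would output only the first six fields; an active-lens planner may also adjust `zoom` and `focus`.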

Goal:

The main goal of this project is to plan the cameras of a vision system by assigning the proper camera parameters so that an object of interest is brought to the most suitable area of the field of view.

Methods:

The work we developed in [1] was restricted to the design of the CardEye. As a generalization, we have presented a novel and robust model for camera planning in any smart vision system [3]. This approach uses virtual forces to adjust the camera parameters (pan and tilt) to the values most appropriate for the application. The proposed model employs the information in the acquired image, together with some of the intrinsic camera parameters, to estimate the pan and tilt displacements required to bring a target object to a specific location of interest in the image. The model is a general framework, and any vision system can be easily adapted to use it. This approach has several advantages over previous work in camera planning: it is portable, expandable, robust, and flexible, and it requires no complicated calibration of the cameras or their pan-tilt heads. The results show that our approach is efficient even with poor system initialization and robust against possible weaknesses in the auxiliary algorithms it uses.
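The virtual-forces formulation itself is given in [3]. As a rough illustration of the underlying idea only, the following pinhole-model sketch shows how an image offset and one intrinsic parameter (the focal length in pixels) yield pan and tilt displacement estimates; the function and variable names are hypothetical and this is not the paper's algorithm:

```python
import math

def pan_tilt_displacement(target_px, desired_px, focal_px):
    """Estimate pan/tilt corrections (radians) that move a target's image
    location toward a desired location, under a pinhole camera model.

    target_px, desired_px: (x, y) pixel coordinates in the image.
    focal_px: focal length expressed in pixels (an intrinsic parameter).
    """
    dx = target_px[0] - desired_px[0]
    dy = target_px[1] - desired_px[1]
    # Pinhole geometry: a pixel offset d at focal length f subtends
    # an angle atan(d / f) about the camera center.
    d_pan = math.atan2(dx, focal_px)
    d_tilt = math.atan2(dy, focal_px)
    return d_pan, d_tilt
```

For example, a target detected 80 pixels to the right of the desired image location, with an 800-pixel focal length, calls for a pan correction of about `atan(80/800) ≈ 0.0997` rad; no calibration beyond a rough focal-length estimate is needed for this kind of update.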

Results:

[Figures: views of the target object before planning and after planning]

Research Team:

Publications:

  1. Aly Farag and Alaa E. Abdel-Hakim, “Image Content-Based Active Sensor Planning for a Mobile Trinocular Active Vision System,” Proc. IEEE International Conference on Image Processing (ICIP’2004), Singapore, October 2004, Vol. II, pp. 193-196.

  2. Aly Farag and Alaa E. Abdel-Hakim, “Scale Invariant Features for Camera Planning in a Mobile Trinocular Active Vision System,” Proc. Advanced Concepts in Intelligent Vision Systems (ACIVS’2004), Brussels, Belgium, August-September 2004, pp. 169-176.

  3. Aly A. Farag and Alaa E. Abdel-Hakim, “Virtual Forces for Camera Planning in Smart Vision Systems,” Proc. IEEE Workshop on Applications of

Acknowledgement:

 

