Cognitive Perception within the 4D-Approach

The main goal of CoTeSys Project 117 is the enhancement of detection, tracking, and recognition algorithms for objects in an off-road scenario by incorporating various feedback loops into the image processing chain.

Traditional computer vision approaches are organized in a hierarchical structure with unidirectional image transformations. Such algorithms operate from the elementary pixel level via image features up to full objects, passing the accumulated knowledge about a scene only upwards to the higher levels of the image processing chain.

In contrast to this computer-based approach, biological systems commonly have feedback channels from higher levels of knowledge to lower levels. These channels allow the sensitivity of lower levels to be controlled by higher-level prior knowledge and expectations ("cognitive perception"). Our principal goal is therefore the realization of cognitive perception at the lower levels of the visual perception process, which we expect to enhance the performance of state-of-the-art computer vision approaches.

The innovation of this project lies in the design, implementation, and testing of feedback loops within the image processing chain. We intend to use such feedback loops for dynamic parameter adaptation, enhanced feature clustering, feedback of uncertainty, improved image acquisition, and feedback of computational load. The developed algorithms will first be tested and evaluated on the demonstrator MuCAR-3, but in principle they will also be applicable to other demonstrators with visual perception.
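One of these feedback paths, dynamic parameter adaptation, can be illustrated with a minimal sketch: a higher-level prior (here, the expected fraction of object pixels in the image) steers a lower-level binarisation threshold. The function, parameter names, and the fill-rate prior are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def run_with_feedback(image, threshold=0.9, expected_fill=0.05, gain=0.5, steps=40):
    """Close the loop: a high-level prior (expected fraction of object pixels)
    steers the low-level binarisation threshold.

    image: 2-D array with values in [0, 1].
    """
    for _ in range(steps):
        mask = image > threshold              # lower level: feature extraction
        fill = mask.mean()                    # higher level: compare with the prior
        # Feedback channel: too few candidate pixels -> become more sensitive,
        # too many -> become more selective.
        threshold = float(np.clip(threshold + gain * (fill - expected_fill), 0.0, 1.0))
    return image > threshold, threshold
```

On a synthetic intensity ramp the loop settles at the threshold that passes roughly the expected fraction of pixels, without any hand tuning of the threshold itself.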

To demonstrate the potential of feedback loops, we developed a detection algorithm for Emergency Response Intervention Cards (ERI-Cards), which was evaluated at C-ELROB 2007. Thanks to a biologically inspired feedback loop for dynamic threshold adaptation, this color-based algorithm is able to detect an ERI-Card even under extreme lighting fluctuations. We currently aim to improve the clustering of features by statistical scene analysis and segmentation.
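A minimal sketch of such a dynamic threshold adaptation, assuming a simple redness cue and a statistics-based set point (the actual ERI-Card algorithm, its colour cue, and all parameters here are illustrative assumptions): each frame, the threshold is pulled towards a set point derived from the current image statistics, so it tracks global lighting changes.

```python
import numpy as np

def redness(rgb):
    """Simple colour cue for a red card; its magnitude scales with illumination."""
    return rgb[..., 0] - np.maximum(rgb[..., 1], rgb[..., 2])

def adapt_threshold(cue, thr, alpha=0.7, k=2.0):
    """One feedback step: pull the threshold towards a set point derived from
    the current frame's statistics, so it follows lighting fluctuations."""
    set_point = cue.mean() + k * cue.std()
    return (1.0 - alpha) * thr + alpha * set_point

def detect_card(frame, thr):
    """Adapt the threshold on the current frame, then extract candidate pixels."""
    cue = redness(frame)
    thr = adapt_threshold(cue, thr)
    return cue > thr, thr
```

A fixed threshold tuned for a bright scene would miss the card in a dim frame; the feedback step lowers the threshold automatically as the cue's statistics shrink.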


Left: Original grey image with marked region (red). Right: Grey image representing the statistical similarity of each pixel to the marked region of the left image.
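The right-hand similarity image can be sketched as a per-pixel comparison with the marked region's grey-value statistics. A minimal illustration follows; the Gaussian similarity measure and the use of only mean and standard deviation are assumptions, not the project's actual statistical scene analysis.

```python
import numpy as np

def similarity_image(grey, region_mask):
    """Map each pixel to its statistical similarity with a marked region.

    grey: 2-D grey image; region_mask: boolean mask of the marked region.
    Returns values in (0, 1]; 1.0 means the pixel matches the region's statistics.
    """
    mu = grey[region_mask].mean()
    sigma = grey[region_mask].std() + 1e-6   # avoid division by zero
    d = (grey - mu) / sigma                  # normalised deviation from the region
    return np.exp(-0.5 * d * d)              # Gaussian similarity measure
```

Pixels whose grey value lies within the marked region's distribution appear bright in the result; statistically dissimilar pixels appear dark, which is the behaviour shown in the figure.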