MarVEye and its control system

Camera configuration

Closed perception-action cycles, their control, and the coordination with other objects/subjects in the encountered situation have been the key elements of the system design. To perceive a situation sufficiently well, a complex Multi-focal active/reactive Vehicle Eye (MarVEye) has been developed from conventional miniaturized video cameras [Dic 95b; PeD 00]. It combines a wide field of view (f.o.v.) nearby (>100°, peripheral part) with central areas of high resolution: a 3-chip color camera (MT) with a f.o.v. of 23° and a high-sensitivity b/w camera (ST) with a f.o.v. of 5.5° (foveal part). At L0.05 a single pixel in the image corresponds to 5 cm in the real world normal to the optical axis.
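The relation between field of view, sensor resolution, and metric resolution at distance can be sketched with elementary geometry. The 5.5° f.o.v. of the strong tele camera is from the text; the horizontal pixel count of 768 is an illustrative assumption, not a value stated here.

```python
import math

def pixel_footprint_m(fov_deg: float, n_pixels: int, distance_m: float) -> float:
    """Approximate size (in metres, normal to the optical axis) covered by one
    pixel at the given distance, for a camera with the given horizontal
    field of view and pixel count."""
    width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return width_m / n_pixels

# Strong tele camera (ST): 5.5 deg f.o.v.; 768 px sensor width is assumed.
# Solve for the distance at which one pixel covers 5 cm:
d = 0.05 * 768 / (2.0 * math.tan(math.radians(5.5) / 2.0))
print(round(d, 1), pixel_footprint_m(5.5, 768, d))  # ~0.05 m by construction
```

Under these assumptions the 5 cm-per-pixel condition is reached at a look-ahead distance of roughly 400 m; the wider MT camera reaches it correspondingly closer.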


Figure 11: MarVEye camera configuration

Pan-tilt camera head

The MarVEye camera configuration is mounted on a pan-tilt camera head (TaCC). The viewing direction of the TaCC can be controlled in pan over ±70°, giving good horizontal coverage; in tilt, the control range is much smaller (or even absent for passenger cars with small focal lengths). Figures 12 and 13 show the TaCCs of the test vehicles VaMoRs and VaMP.
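A minimal sketch of saturating commanded gaze angles to the head's mechanical range: the ±70° pan range is from the text, while the tilt range used here is an assumed placeholder (the text only says it is much smaller and platform-dependent).

```python
PAN_LIMIT_DEG = 70.0    # from the text: pan controllable by +/-70 deg
TILT_LIMIT_DEG = 20.0   # assumed placeholder; actual tilt range is platform-specific

def clamp_gaze_command(pan_deg: float, tilt_deg: float) -> tuple[float, float]:
    """Saturate a commanded viewing direction to the mechanical range of the head."""
    def clamp(value: float, limit: float) -> float:
        return max(-limit, min(limit, value))
    return clamp(pan_deg, PAN_LIMIT_DEG), clamp(tilt_deg, TILT_LIMIT_DEG)

print(clamp_gaze_command(95.0, -35.0))  # (70.0, -20.0)
```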


Figure 12: TaCC of VaMoRs


Figure 13: TaCC of VaMP

Gaze Control

A gaze control unit determines the viewing direction of the TaCC not only ad hoc for the present moment; it plans and optimizes viewing behavior in advance for a certain period of time. Human beings change their viewing direction in a very quick and complex manner: periods of smooth pursuit are interrupted by quick changes of viewing direction, so-called saccades. Saccades are triggered by optical stimuli or by intention. EMS-Vision transfers such complex viewing behavior to a technical system. An essential advantage of this approach is that the sensors are used and adapted depending on internal knowledge, optimizing the information input by means of the actuators. Real-time capability is the central requirement for the gaze-control optimization algorithm [PeD 00].

The EMS-Vision system architecture includes three behavior decision modules for the different aspects of behavior (see figure 14): Central Decision (CD), Behavior Decision for Gaze & Attention (BDGA) and Behavior Decision for Locomotion (BDL). CD, BDGA and BDL work on a uniform model for behavior and a common scene representation. CD determines the perception tasks and sends these to BDGA.


Figure 14: Modules for behavior decision and gaze control

The module BDGA contains the planning part of the gaze control system; the server process Gaze Control (GC) realizes the executive part. If the optimization algorithm in BDGA finds an optimal viewing behavior in the form of a sequence of gaze maneuvers, this sequence is sent to the GC process. If it finds no viewing behavior that satisfies all perception tasks, a conflict message is sent to CD. To resolve such conflicts, CD prioritizes the perception tasks.
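The planning/conflict loop between CD and BDGA can be sketched as follows. All names, the gaze-time budget, and the rule "drop the lowest-priority task on conflict" are illustrative assumptions; the text only states that BDGA reports a conflict and CD prioritizes the perception tasks.

```python
from dataclasses import dataclass

@dataclass
class PerceptionTask:
    name: str
    priority: int   # higher = more important
    cost: float     # abstract gaze-time demand per planning horizon (assumed)

BUDGET = 1.0  # available gaze-time per planning horizon (assumed)

def plan_gaze(tasks):
    """BDGA planning sketch: succeed if all tasks fit into the budget,
    otherwise return None as a conflict message to CD."""
    if sum(t.cost for t in tasks) <= BUDGET:
        return [f"maneuver:{t.name}" for t in tasks]  # sequence sent to GC
    return None

def central_decision(tasks):
    """CD sketch: on conflict, shed the lowest-priority task and re-plan."""
    tasks = sorted(tasks, key=lambda t: t.priority, reverse=True)
    while tasks:
        plan = plan_gaze(tasks)
        if plan is not None:
            return plan
        tasks = tasks[:-1]  # drop the least important perception task
    return []

tasks = [PerceptionTask("track_lead_vehicle", 3, 0.6),
         PerceptionTask("road_curvature", 2, 0.3),
         PerceptionTask("search_landmarks", 1, 0.4)]
print(central_decision(tasks))
```

With these numbers the three tasks exceed the budget, so CD sheds the landmark search and the remaining two maneuvers are handed to GC.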

The GC process communicates with the TaCC's embedded controller system and connects it to the PC net. GC offers and performs the following gaze maneuvers:

  • With a saccade, an arbitrary camera can be oriented towards a physical object or a point given in object coordinates.
  • With smooth pursuit, a moving object can be kept centered in a camera image. If the tracking error in the image exceeds a certain threshold, an intermediate saccade is started to re-center the object.
  • With search paths, a certain part of the environment can be scanned with a high-resolution sensor to detect new objects.
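The pursuit/saccade switching rule from the list above can be sketched in one control step. The threshold value, the pursuit gain, and the 1-D image-plane error model are assumptions made for illustration.

```python
SACCADE_THRESHOLD_PX = 40.0  # assumed threshold on the image-plane error

def gaze_update(error_px: float, pursuit_gain: float = 0.2) -> tuple[str, float]:
    """One control step: smooth pursuit gradually reduces the image-plane
    error; if the error grows beyond the threshold, an intermediate saccade
    re-centers the object at once."""
    if abs(error_px) > SACCADE_THRESHOLD_PX:
        return "saccade", 0.0               # jump: object re-centered
    return "pursuit", error_px * (1.0 - pursuit_gain)  # error shrinks each step

print(gaze_update(12.0))   # small error: pursuit step
print(gaze_update(55.0))   # ('saccade', 0.0)
```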

Moreover, the GC process monitors the performance of active gaze maneuvers and writes the state and status of the TaCC into the Dynamic Object Database (DOB) [BDPR 99; PeD 00].
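A status entry of the kind GC writes into the DOB could look like the following sketch; the field names and the dictionary stand-in for the DOB are illustrative assumptions, not the EMS-Vision record layout.

```python
from dataclasses import dataclass, asdict

@dataclass
class TaCCStatus:
    """Entry the GC process might publish for the TaCC (illustrative fields)."""
    pan_deg: float
    tilt_deg: float
    active_maneuver: str   # e.g. "saccade", "smooth_pursuit", "search_path"
    maneuver_ok: bool      # result of the performance monitoring

dob = {}  # stand-in for the DOB shared scene representation

def publish_status(dob: dict, status: TaCCStatus) -> None:
    dob["TaCC"] = asdict(status)

publish_status(dob, TaCCStatus(12.5, -3.0, "smooth_pursuit", True))
print(dob["TaCC"]["active_maneuver"])  # smooth_pursuit
```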