Areas of application

  • Balancing of an inverted pendulum on an electro-cart was achieved in 1982 with the BVV-1 system, containing up to five 8-bit microprocessors and based on observer techniques [Mei 82; MeD 83]. Recursive estimation techniques for this two-degree-of-freedom (2-d.o.f.) task were developed in 1983/84 [Wün 87].


Figure 1: Inverted pendulum
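The observer-based balancing mentioned above can be illustrated with a minimal sketch (not the BVV-1 implementation): a linearized cart-pendulum model, a Luenberger observer that reconstructs the full state from the cart position alone, and state feedback acting on the estimated state. All parameters, pole locations, and the position-only measurement are illustrative assumptions.

```python
import numpy as np

# Linearized cart-pendulum about the upright position (illustrative
# parameters; not those of the original BVV-1 experiment).
M, m, l, g = 1.0, 0.2, 0.5, 9.81               # masses, length, gravity (assumed)
A = np.array([[0., 1.,            0., 0.],      # state: [pos, vel, angle, rate]
              [0., 0.,       -m*g/M, 0.],
              [0., 0.,            0., 1.],
              [0., 0., (M+m)*g/(M*l), 0.]])
B = np.array([0., 1./M, 0., -1./(M*l)])
C = np.array([[1., 0., 0., 0.]])                # only cart position is measured

def acker(A, b, poles):
    """Ackermann's formula: SISO state-feedback gain placing the given poles."""
    n = A.shape[0]
    ctrb = np.column_stack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    coeffs = np.real(np.poly(poles))            # desired characteristic polynomial
    phi = sum(c*np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    return np.eye(n)[-1] @ np.linalg.inv(ctrb) @ phi

K  = acker(A, B, [-2., -3., -4., -5.])              # controller poles (assumed)
Lg = acker(A.T, C.ravel(), [-8., -9., -10., -11.])  # observer gain, by duality

dt, steps = 1e-3, 10_000                        # 10 s of simulated time
x  = np.array([0., 0., 0.15, 0.])               # pendulum tilted ~8.6 degrees
xh = np.zeros(4)                                # observer starts from scratch
for _ in range(steps):
    u  = -float(K @ xh)                         # feedback on the *estimated* state
    y  = float(C @ x)                           # measurement: cart position
    x  = x  + dt*(A @ x  + B*u)                 # plant, Euler integration
    xh = xh + dt*(A @ xh + B*u + Lg*(y - float(C @ xh)))  # observer
print(f"final angle {x[2]:.2e} rad, estimation error {abs(xh[2]-x[2]):.2e}")
```

The separation principle lets the controller and observer poles be placed independently; the observer poles are chosen faster so the estimate converges before the control transient settles.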

  • 2-D satellite docking with an air cushion vehicle was the second real-time demonstration of the approach, achieved in 1986 with the BVV-1 system [Wün 87]. This reaction-control vehicle had three d.o.f., two translational and one rotational, each of second order. The shapes of the docking partners were confined to convex polyhedral ones [Wün 88].


Figure 2: Satellite model

  • Road vehicle guidance was picked up initially because the step to aerospace vehicles (the proper domain of the Department for Aerospace Engineering (LRT)) was considered too large for the beginning. This application domain proved so successful that it attracted by far the largest share of research funding from industry and ministries over the following one and a half decades. A survey may be found in [Dic 95a], containing references to the various fields of application of computer vision.


Figure 3: VaMP      Figure 4: VaMoRs

  • The field of on-board autonomous landing approaches of aircraft was selected early as a target area for air-traffic applications. From 1982 to 1987 the field was surveyed in a first dissertation concentrating on basic formulations and first results in unperturbed environments, all confined to numerical simulation with the BVV-2 as real hardware for image-sequence processing in the simulation loop [Ebe 87; Dic 88].
From 1992 to 1997, joint evaluation of inertial measurement data and image sequences made it possible to handle on-board autonomous landing approaches even under strong perturbations from winds and gusts; the perception part was flight-validated in 1991 with a single camera on a two-axis gaze platform in a Do 128 twin-turboprop aircraft of the University of Braunschweig [ScD 89; DiS 92; Scl 92a; Scl 92b; ScD 92].


Figure 5: Do 128
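The benefit of jointly evaluating inertial and visual data can be sketched with a toy one-channel Kalman filter: a high-rate accelerometer drives the prediction, while low-rate, noisy "vision" fixes of altitude correct the drift. All rates, noise levels, and the reduction to a single vertical axis are assumptions for illustration, not the flight system's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                              # 100 Hz inertial rate (assumed)
F  = np.array([[1., dt], [0., 1.]])    # state: [altitude, vertical speed]
G  = np.array([0.5*dt*dt, dt])         # how acceleration enters the state
H  = np.array([[1., 0.]])              # vision measures altitude only
Q  = np.diag([1e-6, 1e-4])             # process noise covariance (assumed)
R  = np.array([[0.25]])                # vision noise, sigma = 0.5 m (assumed)

x_true = np.array([100., -2.0])        # true altitude [m] and sink rate [m/s]
x, P   = np.array([90., 0.]), np.diag([100., 10.])  # poor initial guess
for k in range(2000):                  # 20 s of approach
    a      = 0.5*np.sin(0.8*k*dt) + 0.2*rng.standard_normal()  # gusts
    a_meas = a + 0.05*rng.standard_normal()                    # IMU noise
    x_true = F @ x_true + G*a          # true motion under the gusty input
    x = F @ x + G*a_meas               # predict at the inertial rate
    P = F @ P @ F.T + Q
    if k % 10 == 0:                    # 10 Hz vision fix (assumed)
        z  = x_true[0] + 0.5*rng.standard_normal()
        S  = H @ P @ H.T + R
        Kk = P @ H.T @ np.linalg.inv(S)
        x  = x + (Kk @ (z - H @ x)).ravel()
        P  = (np.eye(2) - Kk @ H) @ P
print(f"altitude error {abs(x[0]-x_true[0]):.2f} m, "
      f"sink-rate error {abs(x[1]-x_true[1]):.2f} m/s")
```

The inertial channel carries the state through the intervals between (and through dropouts of) visual fixes, while vision keeps the integrated inertial drift bounded; this complementarity is what makes the combination robust to wind and gust perturbations.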

In the following years the approach was extended to a bifocal active camera arrangement; the image-processing hardware base was changed to transputers, the scene tree according to [DDi 97] was adopted for scene interpretation [FüD 98; FWD 98], and a new multi-scale model of the runway and nearby taxiways, including the markings on them, was introduced. Flight experiments for visual perception with the new system took place in 1993 (same aircraft as above) and in 2000 with the VFW 614 twin-jet aircraft ATTAS of DLR, the German Aerospace Center.


Figure 6: ATTAS

  • The same approach, with modifications for vehicle dynamics, made it possible to tackle the problem of landmark navigation for helicopters in nap-of-the-earth flight [WBD 95; Wer 97]. Brief helicopter missions around the airport of Braunschweig were performed in hardware-in-the-loop real-time simulations with the three-axis motion simulator (DSB) of UBM/ISF. Runway markings, taxiways, and the helicopter landing spot (a capital letter H) were used by the vision system for highly accurate navigation, with position errors below 10 cm during the final landing approach.


Figure 7: Bifocal landmark navigation of a helicopter:
left with a mild, right with a stronger telephoto lens

  • Autonomously guided vehicles on the factory floor (AGVs) were investigated in the early 1990s together with an industrial research company and the Institut fuer Messtechnik (UBM) [Hoc 91; HoD 92; Hoc 94].
  • A unique experiment was the first grasping of a free-floating object in space, in cooperation with DLR (Prof. Hirzinger) and the Fakultaet fuer Informatik of UBM. The robot arm and the vision sensors were in a cabin on board the Space Shuttle "Columbia", while all computing devices were located in mission control in Oberpfaffenhofen (near Munich). The time delay introduced by data transfer via geostationary satellites and ground cable networks with many computers was about 3 seconds each way; temporal modeling made it possible to compensate these time delays varying around 6 seconds [DDi 97; Fag 96; FDD 94].



Figure 8: ROTEX Freeflyer
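The delay-compensation idea can be sketched in a few lines. This is an illustrative constant-velocity reduction, not the actual ROTEX estimator (which used full recursive 3-D state estimation): two delayed position fixes yield a velocity estimate, and predicting the motion forward over the known round-trip delay tells the controller where the free-floating object is now, rather than where it was seen. All positions, velocities, and noise levels are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
DELAY = 6.0                          # round-trip delay [s], as in the text
p0 = np.array([1.0, -0.5, 2.0])      # object position at first image [m] (assumed)
v  = np.array([0.02, 0.01, -0.005])  # constant drift velocity [m/s] (assumed)

# Two noisy position fixes, taken 1 s apart, both arriving DELAY seconds late:
z1 = p0 + 0.002*rng.standard_normal(3)
z2 = p0 + v*1.0 + 0.002*rng.standard_normal(3)

v_est   = (z2 - z1) / 1.0            # velocity estimated from the 1-s baseline
p_pred  = z2 + v_est*DELAY           # temporal model: predict over the delay
p_truth = p0 + v*(1.0 + DELAY)       # where the object actually is *now*

err_pred  = np.linalg.norm(p_truth - p_pred)
err_naive = np.linalg.norm(p_truth - z2)   # acting on the stale fix directly
print(f"stale-data error: {err_naive:.3f} m, predicted error: {err_pred:.3f} m")
```

Acting directly on the stale measurement leaves an error of roughly the drift accumulated over the delay, while the forward-predicted position is limited only by measurement noise amplified over the prediction horizon.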

  • Recognizing humans has been addressed in two areas:

  • In the framework of the "PROMETHEUS" project, a basic study was performed modeling human motion in traffic situations (standing, walking, running, waving an arm) [Kin 94a, b]. At that time, computing power at affordable cost, volume, and power consumption turned out to be insufficient for a real-time solution to this complex task.
  • The second area was determining the gaze and area of attention of a human pilot in the cockpit while flying. In the project "Crew Assistant Military Aircraft" (CAMA, Prof. Onken, ISF), the knowledge-based assistant system should know which flight display or outside region the pilot is actually attending to. First results may be found in [ScD 98; Sct 00].


Figure 9: Gaze determination of human eyes