Overall Cognitive System Architecture

Based on spatio-temporal models of the motion processes observed (the 4-D approach) and on the general scheme of prediction-error feedback for continuous perception, the overall system architecture shown in figure 19 below has been developed.
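The prediction-error feedback scheme mentioned above can be sketched as a minimal recursive tracker: a dynamic model predicts the next state, the predicted measurement is compared with the observed feature position, and the prediction error is fed back into the state estimate. The scalar alpha-beta form and all gains below are illustrative assumptions, not the EMS-Vision implementation.

```python
def alpha_beta_track(measurements, dt=0.02, alpha=0.5, beta=0.1):
    """Scalar alpha-beta tracker illustrating prediction-error feedback.

    Gains alpha/beta are assumed values chosen only for the sketch.
    """
    x, v = measurements[0], 0.0          # initial state: position, velocity
    for z in measurements[1:]:
        x_pred = x + v * dt              # prediction with the temporal (dynamic) model
        err = z - x_pred                 # prediction error (innovation)
        x = x_pred + alpha * err         # feed the error back into the position estimate
        v = v + (beta / dt) * err        # ... and into the velocity estimate
    return x, v
```

For a target moving at constant speed, the estimated velocity converges to the true value even though only positions are measured, which is the essence of recovering spatio-temporal state from image features.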



Figure 19: Overall cognitive system architecture in EMS-Vision (4 layers)
RDT = Road Detection and Tracking; ODT = Obstacle Detection and Tracking
LDT = Landmark Detection and Tracking; 3DS = 3D Surface Recognition
IBES = Inertially-Based Ego State; NN& = Future additional capabilities

The lowest (signal) level contains all the direct data-processing activities that can be done without reference to temporal models; very high data rates are common here (e.g., 500 Hz gaze control and pixel-to-feature image-sequence processing).

The second level introduces spatio-temporal models both for perception (center and left part) and for the realization of behavior (right part); it is termed the 4-D level. The perception part contains the knowledge base of generic models for object and subject classes (center) and the capability for hypothesis generation based on aggregations of observed features (second ring of radial regions). The specialists for tracking objects/subjects of certain classes form the outer regions (see also the legend).
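The hypothesis-generation step described above can be sketched as matching aggregated features against generic class models and instantiating an object hypothesis for each match; the class names, feature keys, and matching rules below are purely illustrative assumptions.

```python
# Hypothetical generic class models: each maps an aggregation of observed
# features to a yes/no match. The predicates are invented for illustration.
CLASS_MODELS = {
    "road": lambda f: f["shape"] == "ribbon" and f["elongated"],
    "obstacle": lambda f: f["shape"] == "blob" and f["on_road"],
}

def generate_hypotheses(feature_groups):
    """For each feature aggregation, propose object-class hypotheses;
    each hypothesis would then be handed to a tracking specialist."""
    hypotheses = []
    for features in feature_groups:
        for cls, matches in CLASS_MODELS.items():
            if matches(features):
                hypotheses.append((cls, features))
    return hypotheses
```

Each accepted hypothesis corresponds to spawning one of the class-specific tracking specialists (RDT, ODT, LDT, ...) in the outer ring of the figure.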

The curved arrow through the specialists' segments to the right indicates that any one (or several) of the objects tracked may serve as reference(s) for feedback control. The decision as to which objects are actually relevant is taken on the top level after situation assessment.

The processes store their recursive-estimation results in the Dynamic Object Data Base (DOB), shown on the left-hand side of the knowledge-representation level (third from bottom). This symbolic representation, with "state variables" for the relative position and orientation of objects and "actual parameters" for object shape and other properties, needs orders of magnitude less data yet keeps (hopefully) most of the information relevant for situation assessment. Combined with proper procedures and background knowledge, this information is sufficient for "imagining" dynamic scenes (independent of the lower sensory part!).
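A minimal sketch of such a DOB entry and of "imagining" from it alone: a few state variables and shape parameters per object replace the raw image data, and a straight-line extrapolation predicts poses without any access to the sensory layer. The field names, record layout, and extrapolation rule are illustrative assumptions, not the EMS-Vision data structures.

```python
from dataclasses import dataclass

@dataclass
class DOBEntry:
    object_class: str      # generic class, e.g. "road vehicle"
    state: dict            # relative position/orientation plus their rates ("_dot")
    parameters: dict       # shape and other slowly varying properties
    timestamp: float       # time of the last recursive-estimation update

dob = {}                   # the Dynamic Object Data Base, keyed by object id

def publish(obj_id, entry):
    """A tracking specialist stores its latest estimate in the DOB."""
    dob[obj_id] = entry

def imagine(obj_id, t):
    """Predict an object's pose at time t from the symbolic entry alone
    (no sensory data) -- here by simple constant-rate extrapolation."""
    e = dob[obj_id]
    dt_ = t - e.timestamp
    return {k: v + dt_ * e.state.get(k + "_dot", 0.0)
            for k, v in e.state.items() if not k.endswith("_dot")}
```

Publishing, say, a vehicle 20 m ahead closing at 2 m/s lets the upper levels predict its position one second later without consulting the cameras, which is exactly the decoupling the text emphasizes.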

Situation assessment (upper level, left) is done by observing the evolution of processes over time. With reference to general performance criteria (values) or a mission plan, behavior decisions are taken on an abstract level (upper level), taking the vehicle's own state and capabilities (second level from top, right) into account. Changes are communicated to the implementation level on another processor (arrow downward to the right).
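One way to picture this coupling of situation assessment (process evolution over time) and capability-aware behavior decision is the sketch below: the time history of the gap to a relevant object yields a closing speed, and the required deceleration is checked against the vehicle's own braking capability. Thresholds, mode names, and the decision rule are illustrative assumptions only.

```python
def assess_and_decide(gap_history, dt, max_decel=6.0):
    """gap_history: recent distances [m] to the relevant object, one per cycle.

    max_decel is an assumed capability limit [m/s^2] of the own vehicle.
    """
    gap = gap_history[-1]
    closing = (gap_history[-2] - gap_history[-1]) / dt   # > 0 if closing in
    if closing <= 0:
        return "cruise"                                  # gap opening: keep going
    # deceleration needed to stop within the remaining gap: v^2 / (2 * gap)
    needed = closing ** 2 / (2.0 * gap)
    if needed > max_decel:
        return "emergency_brake"                         # beyond own capabilities
    return "follow"                                      # regulate gap by feedback control
```

The chosen mode is what would be communicated downward to the implementation level, where the corresponding feedback controller is parameterized.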

Implementation is finally achieved by making use of the most recent data on the 4-D level (second from bottom, right); in the currently running system, this is done by transputers in the vehicle subsystem (figure 10).