Both longitudinal and lateral degrees of freedom are estimated,
with proper extension in the lateral direction. The lowest edge is considered to be the line where the object touches the ground.
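The text leaves the step from contact line to range implicit; a minimal sketch of the standard flat-ground relation can make it concrete. The function name and parameters below are hypothetical, and an untilted pinhole camera at known height above a planar road is assumed, which may differ from the system's actual ranging model.

```python
def range_from_contact_line(v_contact, v_horizon, cam_height_m, focal_px):
    """Flat-ground ranging sketch (standard relation, assumed here):
    a ground-contact point imaged dv pixel rows below the horizon
    lies at range Z = H * f / dv for an untilted pinhole camera at
    height H above a planar road."""
    dv = v_contact - v_horizon           # pixel rows below the horizon
    if dv <= 0:
        raise ValueError("contact line must lie below the horizon")
    return cam_height_m * focal_px / dv  # range in metres
```

For example, with focal_px = 750, cam_height_m = 1.3, and a contact line 25 rows below the horizon, the estimated range is 39 m.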
Next, the lateral boundaries of the object are searched for above the lowest group of features. In each pixel row, the locations of extremal intensity gradients are determined; note that in Fig. 28.21 the vehicle is darker than the road in some regions (bottom part) and brighter in others (upper part). The histogram of the positions of extremal gradient magnitudes counts both polarities (Fig. 28.21). The width of the vehicle thus follows from the difference between the positions of the two histogram peaks; the estimated range, in conjunction with the camera mapping parameters, converts this pixel width into an absolute width.
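A minimal sketch of this width measurement, assuming the object region is already available as a grayscale array: the function and parameter names, the gradient threshold, and the use of SciPy peak detection are illustrative choices, not the original implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def vehicle_width_m(roi, range_m, focal_px, grad_thresh=10.0):
    """Histogram the column positions of extremal horizontal intensity
    gradients over all rows of the object region; the two dominant
    peaks mark the lateral boundaries of the vehicle."""
    rows, cols = roi.shape
    grad = np.abs(np.gradient(roi.astype(float), axis=1))

    # Accumulate, row by row, the columns where |gradient| is locally
    # extremal and strong; both polarities contribute, so it does not
    # matter whether the vehicle is darker or brighter than the road.
    hist = np.zeros(cols)
    for r in range(rows):
        peaks, _ = find_peaks(grad[r], height=grad_thresh)
        hist[peaks] += 1.0

    # The two highest histogram peaks give the vehicle width in pixels.
    candidates, _ = find_peaks(hist, distance=5)
    if len(candidates) < 2:
        return None                      # no stable boundary pair found
    top2 = candidates[np.argsort(hist[candidates])[-2:]]
    width_px = float(abs(top2[1] - top2[0]))

    # Pinhole mapping (assumption): pixel width -> absolute width via
    # the estimated range, W = w_px * Z / f.
    return width_px * range_m / focal_px
```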
Because only black-and-white video signals have been evaluated with edge-based feature extraction algorithms, construction sites with yellow markings painted on top of the white ones could not be handled. Passing vehicles cutting into the ego-vehicle's lane closely ahead also posed problems: they could not be picked up early enough because the simultaneous field of view was too narrow, and monocular range estimation took too long to converge to a stable interpretation. For these reasons, the system is now being improved with a wide field of view from two divergently oriented wide-angle cameras whose central region of overlap allows stereo interpretation; in addition, a high-resolution (3-chip) color camera covers the central part of the stereo field of view. This allows trinocular stereo and area-based object recognition.
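The gain from the central stereo overlap can be made concrete with the standard rectified-pair relation; the function below is a sketch under a rectified pinhole-pair assumption, not the system's actual code, and the sample numbers are hypothetical.

```python
def stereo_range_m(disparity_px, baseline_m, focal_px):
    """Rectified-pair sketch (assumption): disparity d between the two
    wide-angle views in the overlap region yields an instantaneous
    range Z = f * B / d, whereas monocular range estimation must
    converge over many frames before a cut-in vehicle is ranged."""
    if disparity_px <= 0:
        raise ValueError("valid stereo targets have positive disparity")
    return focal_px * baseline_m / disparity_px
```

With focal_px = 400, baseline_m = 0.3, and a disparity of 6 pixels, this gives a range of 20 m immediately, without waiting for a recursive monocular estimate to settle.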