Collision Avoidance System Field Operational Test
9 DATA FUSION (Task C1)
9.1 Requirements Definition and Architecture Development (Task C1A)
The objective of this task is to develop performance and interface requirements and the architecture for the data fusion subsystem.
The approach is to gather information on each sensor subsystem (data provided, performance specifications, and confidence measures) and on the requirements of the subsystems that consume the output of the data fusion subsystem, and to use this information to develop performance and interface requirements. This information will also be used to select the fusion algorithms and to set requirements on the data fusion architecture.
Milestones and Deliverables
The initial data fusion architecture and performance requirements definition was completed and presented at a meeting at HRL on 9/16/99.
HRL developed performance and interface requirements for the data fusion subsystem, which have been incorporated into the Data Fusion requirements.
The main research finding of this task is that the data fusion subsystem must be robust and able to detect and handle situations when there is missing or invalid data.
Plans through December 2000
This task has been completed.
9.2 Initial Algorithm Development (Task C1B)
The objective of this task is to develop fusion algorithms to fuse radar, lane tracking, GPS/map, and host vehicle sensors to produce a robust estimate of the host lane geometry, host state, driver distraction level, and environmental state.
The data fusion subsystem can be divided into four main functional subunits: host lane geometry estimation, host state estimation, driver distraction level estimation, and environment state estimation.
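The four subunits can be thought of as producing a single fused output record. The sketch below shows one possible container for that output; the type and field names are illustrative assumptions, not taken from the program's actual interface definitions.

```python
from dataclasses import dataclass

# Hypothetical container types for the four data fusion outputs.
# Field names and units are illustrative assumptions only.

@dataclass
class HostLaneGeometry:
    c0: float              # road curvature (1/m)
    c1: float              # curvature rate (1/m^2)
    lateral_offset: float  # host offset from lane center (m)

@dataclass
class HostState:
    speed: float           # m/s
    yaw_rate: float        # rad/s

@dataclass
class FusionOutput:
    lane_geometry: HostLaneGeometry
    host_state: HostState
    driver_distraction: str  # e.g., "NONE", "LOW", "MED", "HIGH"
    road_condition: str      # e.g., "DRY", "DRY-ICY", "WET", "ICY"

# Example record for a host vehicle on a gentle curve.
sample = FusionOutput(
    lane_geometry=HostLaneGeometry(c0=1e-3, c1=1e-5, lateral_offset=0.2),
    host_state=HostState(speed=20.0, yaw_rate=0.01),
    driver_distraction="LOW",
    road_condition="DRY",
)
```

A structure like this would let downstream consumers (tracking, CW, ACC) take all four estimates from one synchronized record.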
Milestones and Deliverables
The first milestone for this task is the Preliminary Data Fusion Algorithm Demonstration. This demonstration, which is scheduled for December 2000, will be an offline (i.e., non-real-time) demonstration of all four parts of the data fusion subsystem: host lane geometry estimation, host state estimation, driver distraction level estimation, and environment state estimation.
Although not part of the official list of program deliverables, a preliminary version of the data fusion software was delivered to GM for insertion into the EDV in September 2000. Also, a model of the data fusion subsystem was provided to PATH for use in the PATH simulator.
HRL has developed and implemented initial versions of algorithms for host lane geometry, host state, driver distraction and environment state estimation. These algorithms were chosen and developed after extensive literature survey and testing of several competitive and promising approaches. For example, as discussed above, HRL tested several different commonly used road models and compared errors in estimating road geometry in both a recursive (Kalman) and a non-recursive (least-squares) framework. This performance evaluation demonstrated that conventional "single-clothoid" road models have estimation errors that would not meet the system performance requirements. This motivated us to develop a higher-order road model that was amenable to state-space representation in a Kalman filter framework.
We have completed the development and implementation of this novel road model and evaluated its performance. Results show that this model is superior to a conventional "single clothoid" road model, with smaller road geometry estimation errors, especially during sharp transitions in road curvature. Fig. 9.1 shows a simulation scenario used to evaluate these road models. The simulated road geometry is shown in the left half of the figure, and the clothoid coefficients c0 and c1 in the right half. The transition points (changes in the c1 coefficient) are also shown in Fig. 9.1. A host vehicle is simulated traversing the road at a speed of 20 m/s with a look-ahead distance of 100 m. Road geometry information is provided as lateral offsets of 10 points along the road, spaced 10 m apart, starting in front of the host vehicle. The sampling rate is 10 Hz. Kalman filters based on the single-clothoid and new road models use these offsets as measurements to estimate road geometry. The estimated road geometry is compared to the simulated road geometry and the errors are computed. Figure 9.2 shows the mean and maximum estimation errors of the single-clothoid and new road models as a function of time.
Figure 9.1 Simulation Scenario Used to Evaluate These Road Models
Figure 9.2 Mean and Maximum Estimation Errors
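As an illustration of the non-recursive (least-squares) framework mentioned above, the sketch below fits single-clothoid coefficients c0 and c1 to lateral offsets at the ten 10 m-spaced look-ahead points of the simulation scenario. The offset model y(x) = c0*x^2/2 + c1*x^3/6 is the standard clothoid approximation; the zero host heading and zero lateral offset are simplifying assumptions of this sketch, not details taken from HRL's implementation.

```python
import numpy as np

def fit_clothoid(xs, ys):
    """Least-squares fit of clothoid coefficients c0 (curvature) and
    c1 (curvature rate) to lateral offsets y(x) ~ c0*x^2/2 + c1*x^3/6.
    Assumes zero host heading and zero lateral offset for simplicity."""
    A = np.column_stack([xs**2 / 2.0, xs**3 / 6.0])
    (c0, c1), *_ = np.linalg.lstsq(A, ys, rcond=None)
    return c0, c1

# Ten look-ahead points spaced 10 m apart, as in the simulation scenario.
xs = np.arange(10.0, 110.0, 10.0)
c0_true, c1_true = 1e-3, 1e-5   # a gentle curve with slowly growing curvature
ys = c0_true * xs**2 / 2.0 + c1_true * xs**3 / 6.0

c0_est, c1_est = fit_clothoid(xs, ys)
```

The recursive (Kalman) framework discussed in the text uses the same offsets as measurements but propagates the coefficient estimates over time instead of refitting each frame.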
The performance of this model is currently being evaluated on roads obtained from the NavTech database.
HRL has also developed an adaptive Kalman filter approach for road geometry and host state estimation. The adaptive filter performs better than a conventional Kalman filter during sharp transitions in road geometry. Performance evaluation using real data is in progress.
We developed a fuzzy-rule-based algorithm to estimate driver distraction. The data fusion subsystem estimates driver distraction by monitoring whether the driver is performing a secondary task. In our working model, there are two major categories of secondary task that may affect driver situation awareness. The first is a simple task that requires only one glance to complete the necessary visual aspect. The second is complex and requires many short sampling glances away from the forward view.

For the first category, once the control is activated, the amount of distraction left to predict is insignificant; the activation of the control essentially follows the single-glance distraction time. In complex secondary tasks, the driver's vision is time-shared with the primary driving task: the driver cyclically samples the task, activates the control, and returns to the forward view for as many glance cycles as are needed to complete the task (adjusting the radio, perhaps, or turning on the air conditioning).

The domain knowledge assumes that the first activation of any of the controls (FACT) for such tasks follows the first glance time and predicts a high degree of distraction for the next 8-10 seconds. The elapsed time since the FACT, denoted 1stAct and defined as the difference between the current time and the time of the FACT, is used to predict the coming level of driver distraction for a given complex task such as radio knob adjustments. The longer the 1stAct, the less predictable the driver distraction level for the remaining glance time; in other words, the strength of the 1stAct is inversely proportional to its length. To predict the driver distraction level, fuzzy rules are based on the strength of 1stAct and on Duration, as depicted by the matrix shown in Table 9.1.
"Duration" refers to the current cycle of control activation and is quantized as long, normal, short, or off; the strength of 1stAct is quantized as off, weak, medium, or strong. Performance evaluation of the driver distraction estimation algorithm is in progress using simulated inputs.
Table 9.1 Driver Distraction Level
|Radio, HVAC & DVI with knob adjustments|
|1stAct \ Duration|long|normal|short|off|fault|
|off|LOW|LOW|LOW|NONE|NONE|
|weak|MED|MED|LOW|NONE|NONE|
|medium|MED|HIGH|MED|LOW|NONE|
|strong|MED|HIGH|HIGH|MED|NONE|
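The rule matrix in Table 9.1 can be implemented as a direct lookup. The sketch below encodes the table exactly as given; the fuzzification of raw timing measurements into the discrete Duration and 1stAct-strength labels is omitted.

```python
# Distraction level lookup encoding Table 9.1 directly.
# Outer key: strength of 1stAct; inner key: Duration of the activation.
DISTRACTION = {
    "off":    {"long": "LOW", "normal": "LOW",  "short": "LOW",  "off": "NONE", "fault": "NONE"},
    "weak":   {"long": "MED", "normal": "MED",  "short": "LOW",  "off": "NONE", "fault": "NONE"},
    "medium": {"long": "MED", "normal": "HIGH", "short": "MED",  "off": "LOW",  "fault": "NONE"},
    "strong": {"long": "MED", "normal": "HIGH", "short": "HIGH", "off": "MED",  "fault": "NONE"},
}

def distraction_level(first_act_strength: str, duration: str) -> str:
    """Return the predicted driver distraction level from Table 9.1."""
    return DISTRACTION[first_act_strength][duration]
```

For example, a strong 1stAct (the driver just began the task) combined with a normal-length activation predicts HIGH distraction, while any fault condition maps to NONE.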
The environment state estimation algorithm detects and reports conditions indicative of slippery road surfaces. This information is used to modify the expected braking intensity the driver will achieve when responding to an alert, which in turn affects the timing of the alerts. HRL defined the road conditions as dry, dry-icy, wet, or icy, each reported with a confidence level of none, low, medium, or high. Both the road condition and its associated confidence level are derived first from windshield wiper activity and then refined using outside temperature measurements, as shown by the matrix in Table 9.2. Performance evaluation of the environment state estimation algorithm is in progress using simulated inputs.
Table 9.2 Environment State Estimation
|Road condition based on wiper activity and temperature|
| |wiper not active|wiper active|
|Road surface condition|DRY / DRY-ICY|WET / ICY|
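The logic of Table 9.2 can be sketched as a small classifier: wiper activity selects the wet/dry branch, and temperature selects the icy variant within it. The 0 °C threshold below is an assumption for illustration; the report does not state the actual temperature breakpoints or the confidence-level assignments.

```python
def road_condition(wiper_active: bool, outside_temp_c: float) -> str:
    """Classify the road surface from wiper activity, refined by outside
    temperature, in the spirit of Table 9.2. The 0 degrees C freezing
    threshold is an illustrative assumption, as are the exact branch
    assignments of the icy variants."""
    if wiper_active:
        return "ICY" if outside_temp_c <= 0.0 else "WET"
    return "DRY-ICY" if outside_temp_c <= 0.0 else "DRY"
```

A production version would also emit the associated confidence level (none, low, medium, or high) and would likely debounce the wiper signal over time.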
Plans through December 2000
Plans for the next six months are to work with GM to collect synchronized data from all of the sensor subsystems so that we can test the performance of the fusion algorithms on real data. The real data will also be used to refine the fusion algorithms to improve performance.
9.3 Real-time Algorithm Development (Task C1C)
The objective of this task is to develop real-time versions of the algorithms developed in Task C1B for integration into pilot and deployment vehicles.
To develop real-time versions of the algorithms developed in Task C1B, our approach is first to port the algorithms onto the real-time hardware platform specified by GM for the data fusion subsystem. After porting the algorithms, we will evaluate algorithm real-time performance to determine if there are portions of the fusion algorithm that must be tuned or modified to meet real-time processing requirements.
Milestones and Deliverables
The first milestone for this task is the Data Fusion Algorithm Demonstration, which is scheduled for the end of April 2001.
This task has not yet started.
Plans Through December 2000
This task is scheduled to start in October 2000. We will begin porting of the data fusion algorithms to hardware specified by GM and begin real-time performance evaluation.
9.4 Task C1 Schedule