Our work on Automated Driving Systems (ADS) has led the ERATO MMSD group to pursue several lines of research in the area of Search-Based Testing (SBT) (e.g. Frenetic). However, when applying our techniques to the Autonomoose ADS, we faced a new research challenge: non-determinism.
Existing techniques to counteract such noisy execution behaviour typically require estimating the noise or developing dedicated noise models. Our work aims to discover and develop new approaches to overcome this challenge. In the process, we developed the k-nearest-neighbours averaging (kNN-Avg) method, which uses an SBT algorithm's previous execution history to make the optimisation more robust to noise.
Our ongoing research shows promising results and has resulted in multiple publications.
This paper presents the first evaluation of k-nearest-neighbours averaging (kNN-Avg) on a real-world case study. kNN-Avg is a novel technique that tackles the challenges of noisy multi-objective optimisation (MOO). Existing studies suggest the use of repetition to overcome noise. In contrast, kNN-Avg approximates these repetitions by exploiting previous executions, thereby avoiding the cost of re-running. We use kNN-Avg for scenario generation for a real-world autonomous driving system (ADS) and show that it performs better than the noisy baseline. Furthermore, we compare it to the repetition method and outline indicators for choosing which approach to use in which situations.
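For contrast, the repetition-based baseline mentioned above re-executes each candidate solution several times and averages the samples. The following is a minimal, hypothetical Python sketch of that baseline; the function names and the repetition count are illustrative assumptions, not values used in the study:

```python
def repeated_fitness(noisy_fitness, solution, repetitions=10):
    """Baseline noise handling: re-run the same solution several times and
    average the sampled objective vectors (costly when each execution is an
    expensive simulation)."""
    samples = [noisy_fitness(solution) for _ in range(repetitions)]
    n_objectives = len(samples[0])
    return [sum(s[i] for s in samples) / repetitions for i in range(n_objectives)]
```

kNN-Avg aims to approximate the effect of such averaging while keeping the number of executions per solution at one.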
KNN-Averaging for Noisy Multi-Objective Optimisation
Multi-objective optimisation is a popular approach for finding solutions to complex problems with large search spaces and reliably yields good optimisation results. However, with the rise of cyber-physical systems, a new challenge emerges: noisy fitness functions, whose objective values for a given configuration are non-deterministic and produce varying results on each execution. This leads to an optimisation process that is based on stochastically sampled information and ultimately favours solutions whose fitness values coincidentally benefit from high outlier noise. In turn, the results are unfaithful due to the large discrepancies between sampled and expected objective values. Motivated by our work on noisy automated driving systems, we present the results of our ongoing research to counteract the effect of noisy fitness functions without requiring repeated executions of each solution. Our method kNN-Avg identifies the k-nearest neighbours of a solution point and uses their weighted average fitness value as a surrogate for the solution's actually sampled fitness. We demonstrate the viability of kNN-Avg on common benchmark problems and show that it produces comparably good solutions whose fitness values are closer to the expected values.
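To make the mechanism described in this abstract concrete, the following is a minimal Python sketch of a kNN-Avg-style surrogate evaluation. It is not the published implementation: the class name, the 1/(1 + distance) weighting, and the flat archive lookup are illustrative assumptions.

```python
import math

class KNNAveraging:
    """Sketch of kNN-Avg: smooth a noisy fitness sample by averaging it with
    the samples of the k nearest previously evaluated solutions."""

    def __init__(self, noisy_fitness, k=5):
        self.noisy_fitness = noisy_fitness  # callable: solution -> list of objective values
        self.k = k
        self.archive = []  # (solution, sampled objectives) from earlier evaluations

    def evaluate(self, solution):
        sampled = self.noisy_fitness(solution)  # single noisy execution, never repeated
        neighbours = sorted(self.archive, key=lambda e: math.dist(solution, e[0]))[:self.k]
        self.archive.append((solution, sampled))
        if not neighbours:
            return sampled
        # distance-based weights: closer neighbours contribute more to the surrogate
        entries = [(sampled, 0.0)] + [(obj, math.dist(solution, s)) for s, obj in neighbours]
        weights = [1.0 / (1.0 + d) for _, d in entries]
        total = sum(weights)
        return [sum(w * obj[i] for (obj, _), w in zip(entries, weights)) / total
                for i in range(len(sampled))]
```

An MOO algorithm would call evaluate instead of the raw noisy fitness function, so each candidate is still executed only once while its fitness estimate is stabilised by the execution history.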