Abstract for HONS 01/22
Exposure Control for Visual Odometry
Cheng-Nan Lee
Department of Computer Science and Software Engineering
University of Canterbury
Abstract
We propose an experiment to verify the relationship between various image quality metrics and visual odometry (VO) accuracy. Many automatic exposure (AE) algorithms assume that optimising a predetermined image quality metric leads to improved VO, an assumption evidenced only indirectly by their superior performance over other AE algorithms. Accepting this assumption at first, we explored image enhancement techniques to better understand the chosen metrics.
We first implemented an AE algorithm that selects the exposure time that maximises the weighted sum of gradient magnitudes over the whole image. While this AE handles lighting changes robustly, when most of the texture is concentrated in one region the selected exposure time can neglect features found elsewhere in the frame. We therefore extended it by partitioning the image into patches, finding, for each patch, the exposure time that maximises the locally weighted sum of gradient magnitudes, and re-exposing the entire frame with the median of those exposure times. Our algorithm produced a larger total gradient magnitude in a high-contrast scene and, more importantly, offered some insight into how saturated regions can be isolated and addressed.
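The following is a minimal Python sketch of the patch-based scheme, under stated assumptions: the candidate exposure times and the greyscale frames captured (or simulated) at each of them are given in advance, the patch grid is fixed, and the gradient weighting function shown is an illustrative choice rather than the exact form used in our implementation.

import numpy as np

def weighted_gradient_metric(patch, lam=1e3, delta=0.06):
    # Weighted sum of gradient magnitudes: magnitudes above the threshold
    # delta contribute on a logarithmic scale, weaker ones contribute zero.
    # (The weighting form and parameter values here are assumptions.)
    gy, gx = np.gradient(patch.astype(np.float64))
    m = np.hypot(gx, gy)
    m /= m.max() + 1e-12                       # normalise magnitudes to [0, 1]
    x = np.clip(lam * (m - delta) + 1.0, 1.0, None)
    w = np.log(x) / np.log(lam * (1.0 - delta) + 1.0)
    return float(np.sum(w * m))

def patch_median_exposure(frames_by_exposure, patch_grid=(4, 4)):
    # frames_by_exposure: dict mapping a candidate exposure time (s) to the
    # greyscale frame observed at that exposure. For each patch, pick the
    # exposure time with the highest local metric, then return the median.
    times = sorted(frames_by_exposure)
    h, w = frames_by_exposure[times[0]].shape
    rows, cols = patch_grid
    best_times = []
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            scores = [weighted_gradient_metric(frames_by_exposure[t][ys, xs])
                      for t in times]
            best_times.append(times[int(np.argmax(scores))])
    return float(np.median(best_times))

Taking the median rather than the mean keeps the chosen exposure time robust to a few patches whose best exposure is extreme, such as fully saturated or textureless regions.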
However, the link between VO performance and the optimised image metrics still needs to be established. Our experiment therefore measures the absolute trajectory error (ATE) of several VO algorithms over two scenes in a simulation environment at different exposure values. Our analysis indicates that gradient-based metrics, image entropy, the number of features, and feature strength have no discernible relationship with ATE. This finding is significant because it implies that adjusting exposure values to optimise such image metrics is unlikely to improve VO accuracy.
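For reference, a common definition of ATE (e.g., the TUM RGB-D benchmark formulation; the exact variant used in the experiment may differ) is the root-mean-square translational error after rigid alignment of the estimated and ground-truth trajectories:

\[ \mathrm{ATE} = \left( \frac{1}{N} \sum_{i=1}^{N} \left\lVert \operatorname{trans}\!\left( Q_i^{-1} \, S \, P_i \right) \right\rVert^{2} \right)^{1/2} \]

where \(P_i\) are the estimated poses, \(Q_i\) the ground-truth poses, and \(S\) the rigid-body transform that best aligns the estimated trajectory to the ground truth.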