Abstract: We present a system for multi-sensor fusion that learns from experience, i.e., from training data, and we propose that learning methods are the most appropriate approaches to real-world fusion problems: they are largely model-free and therefore suited to a variety of tasks, even where the underlying processes are not known with sufficient precision or are too complex to treat analytically. To back this claim, we investigate two simulated fusion problems that are representative of real-world problems and that exhibit a variety of underlying probabilistic models and noise distributions. For a fair comparison, we study two other ways of performing optimal fusion for these problems: empirical estimation of joint probability distributions and direct analytical calculation using Bayesian inference. We demonstrate that near-optimal fusion can indeed be learned, and that learning is by far the most generic and resource-efficient alternative. In addition, we show that the generative learning approach we use can improve its performance far beyond the Bayesian optimum by detecting and rejecting outliers, and that it can detect systematic changes in the input statistics.
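As a point of reference for the "Bayesian optimum" the abstract compares against, the sketch below shows the textbook case of optimally fusing two noisy readings of the same quantity under independent Gaussian noise with known variances. This is an illustrative assumption, not the paper's actual sensor models or method: the fused estimate is simply the precision-weighted average of the readings.

```python
# Illustrative sketch (hypothetical, not the paper's models): optimal
# Bayesian fusion of two sensor readings of the same scalar quantity,
# assuming independent zero-mean Gaussian noise with known variances.

def fuse_gaussian(x1, var1, x2, var2):
    """Precision-weighted fusion of two Gaussian measurements.

    Returns the posterior mean and variance of the fused estimate.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2      # precisions (inverse variances)
    fused_var = 1.0 / (w1 + w2)          # fused variance is always smaller
    fused_mean = fused_var * (w1 * x1 + w2 * x2)
    return fused_mean, fused_var

# Example: the more precise sensor (var=1.0) pulls the estimate
# toward its reading.
mean, var = fuse_gaussian(10.0, 1.0, 12.0, 4.0)
print(mean, var)  # -> 10.4 0.8
```

Note that this closed form exists only because the noise model is known exactly; the paper's point is that a learned fusion system needs no such analytical model and still approaches this optimum.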

PDF: http://mandargogate.github.io/papers/CogComp2016-generative-learning-approach-to-sensor-fusion-and-change-detection.pdf


@article{DBLP:journals/cogcom/GepperthHG16,
  author    = {Alexander R. T. Gepperth and
               Thomas Hecht and
               Mandar Gogate},
  title     = {A Generative Learning Approach to Sensor Fusion and Change Detection},
  journal   = {Cognitive Computation},
  volume    = {8},
  number    = {5},
  pages     = {806--817},
  year      = {2016},
  url       = {http://doi.org/10.1007/s12559-016-9390-z},
  doi       = {10.1007/s12559-016-9390-z},
  timestamp = {Thu, 18 May 2017 09:53:51 +0200},
  biburl    = {http://dblp.org/rec/bib/journals/cogcom/GepperthHG16},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}