RMIT University

A note on the reward function for PHD filters with sensor control

journal contribution posted on 2024-11-01, 23:09, authored by Branko Ristic, Ba-Ngu Vo, Daniel Clark
The context is sensor control for multi-object Bayes filtering in the framework of partially observed Markov decision processes (POMDPs). The current information state is represented by the multi-object probability density function (pdf), while the reward associated with each sensor control (action) is the information gain measured by the Rényi (alpha) divergence. Assuming that both the predicted and updated states can be represented by independent identically distributed (IID) cluster random finite sets (RFSs) or, as a special case, Poisson RFSs, this work derives analytic expressions for the corresponding Rényi-divergence-based information gains. An implementation of the Rényi divergence via the sequential Monte Carlo (SMC) method is also presented. The performance of the proposed reward function is demonstrated by a numerical example in which a moving range-only sensor is controlled to estimate the number and states of several moving objects using the PHD filter.
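As a rough illustration of the Poisson-RFS special case, the Rényi divergence between two Poisson RFSs reduces to an integral over their intensity functions, which an SMC (particle) implementation can approximate when the predicted and updated intensities share a common particle support and differ only in their weights. The sketch below is a minimal, hypothetical implementation under those assumptions; the function name and the shared-support simplification are illustrative, not taken from the paper.

```python
import numpy as np

def renyi_divergence_poisson(w_pred, w_upd, alpha=0.5):
    """Approximate Renyi divergence between two Poisson RFSs.

    Assumes the predicted and updated intensity functions are
    represented on the SAME particle support, so that w_pred[i]
    and w_upd[i] are the weights attached to a common particle i.
    For intensities lam0 (predicted) and lam1 (updated):

        D_alpha = 1/(alpha-1) * int( lam1^alpha * lam0^(1-alpha)
                                     - alpha*lam1 - (1-alpha)*lam0 ) dx
    """
    w0 = np.asarray(w_pred, dtype=float)  # predicted-intensity weights
    w1 = np.asarray(w_upd, dtype=float)   # updated-intensity weights
    # Particle approximation of the cross term int lam1^a lam0^(1-a) dx
    cross = np.sum(w1 ** alpha * w0 ** (1.0 - alpha))
    # Expected cardinalities int lam1 dx and int lam0 dx
    return (cross - alpha * w1.sum() - (1.0 - alpha) * w0.sum()) / (alpha - 1.0)
```

With identical weight sets the divergence is zero (no information gain), and for alpha in (0, 1) the weighted AM-GM inequality makes it non-negative, so an informative update (weights reshaped by a measurement) yields a strictly positive reward.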

History

Related Materials

  1. DOI - Is published in 10.1109/TAES.2011.5751278
  2. ISSN - Is published in 0018-9251

Journal: IEEE Transactions on Aerospace and Electronic Systems
Volume: 47
Issue: 2
Pages: 1521–1529 (9 pages)
Publisher: Institute of Electrical and Electronics Engineers
Place published: United States
Language: English
Copyright: © 2011 IEEE
Former identifier: 2006057434
Esploro creation date: 2020-06-22
Fedora creation date: 2015-12-22
