RMIT University

Interpretable parallel recurrent neural networks with convolutional attentions for multi-modality activity modeling

Conference contribution
Posted on 2024-10-31, 22:07. Authored by Kaixuan Chen, Lina Yao, Xianzhi Wang, Dalin Zhang, Tao Gu, Zhiwen Yu, Zheng Yang.
Multimodal features play a key role in wearable-sensor-based human activity recognition (HAR). Adaptively selecting the most salient features is a promising way to maximize the effectiveness of multimodal sensor data. To this end, we propose a "collect fully and select wisely" principle together with an interpretable parallel recurrent model with convolutional attentions to improve recognition performance. We first collect modality features and the relations between each pair of features to generate activity frames, then introduce an attention mechanism to precisely select the most prominent regions of those frames. The selected frames not only maximize the use of valid features but also effectively reduce the number of features to be computed. We further analyze the accuracy and interpretability of the proposed model through extensive experiments. The results show that our model achieves competitive performance on two benchmark datasets and works well in real-life scenarios.
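To make the abstract's pipeline concrete, here is a minimal sketch of the "collect fully and select wisely" idea, assuming PyTorch. The class name `ConvAttentionParallelRNN`, the outer-product construction of activity frames, the two-branch GRU/LSTM layout, and all shapes are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn as nn

class ConvAttentionParallelRNN(nn.Module):
    """Toy parallel-recurrent model with a convolutional attention over
    'activity frames' (pairwise relations between modality features)."""
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 6):
        super().__init__()
        # Convolutional attention: scores each cell of an F x F activity frame.
        self.attn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),
        )
        # Two parallel recurrent branches over the attended frames.
        self.rnn_a = nn.GRU(n_features * n_features, hidden, batch_first=True)
        self.rnn_b = nn.LSTM(n_features * n_features, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) raw multimodal sensor readings.
        B, T, F = x.shape
        # "Collect fully": outer products encode pairwise feature relations,
        # giving one F x F activity frame per time step (an assumption here).
        frames = torch.einsum("bti,btj->btij", x, x)           # (B, T, F, F)
        # "Select wisely": convolutional attention weights over each frame.
        flat = frames.reshape(B * T, 1, F, F)
        weights = torch.sigmoid(self.attn(flat)).reshape(B, T, F, F)
        attended = (frames * weights).reshape(B, T, F * F)
        # Parallel recurrent branches, fused for classification.
        ha, _ = self.rnn_a(attended)
        hb, _ = self.rnn_b(attended)
        fused = torch.cat([ha[:, -1], hb[:, -1]], dim=-1)
        return self.head(fused)

# Example usage with random data (batch=4, 100 time steps, 9 sensor channels).
model = ConvAttentionParallelRNN(n_features=9)
logits = model(torch.randn(4, 100, 9))
print(logits.shape)  # torch.Size([4, 6])
```

The outer product is just one plausible way to realize "relations between each pair of features"; the paper's actual frame construction and fusion strategy may differ, but the sketch shows how attention weights computed on the frames can both emphasize salient regions and shrink the effective feature set before the recurrent layers.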


Related Materials

  1. DOI - Is published in: 10.1109/IJCNN.2018.8489767
  2. ISBN - Is published in: 9781509060153

Start page

2082

End page

2089

Total pages

8

Outlet

Proceedings of the International Joint Conference on Neural Networks (IJCNN 2018)

Name of conference

IJCNN 2018

Publisher

IEEE

Place published

United States

Start date

2018-07-08

End date

2018-07-13

Language

English

Copyright

© 2018 IEEE

