RMIT University

Multimodal data fusion for cyber-physical-human systems

thesis
posted on 2024-11-24, 04:10 authored by Lars PLANKE
<p>With increasingly higher levels of automation in aerospace Cyber-Physical Systems (CPS), it is imperative that human operators maintain the required level of Situational Awareness (SA) in order to contribute effectively to both strategic and tactical decision-making processes. At the same time, it is essential that the cognitive ability of human operators is constantly monitored to assess their capacity to perform effectively in a closed-loop human-machine system environment. To ensure that the performance of the system and the Mental Workload (MWL) of the human operator are maintained at an acceptable level, one possible approach is to introduce real-time adaptation in the Human-Machine Interfaces and Interactions (HMI2). While current aerospace systems and interfaces are limited in adaptability, a Cognitive Human Machine System (CHMS) addresses these limitations with a cyber-physical-human design that provides dynamic real-time system adaptation. Nevertheless, to reliably drive adaptation of current and emerging aerospace systems, there is a need to estimate cognitive states, in particular MWL, accurately and repeatably in real time. Hence, this research has studied methods for sensing physiological and behavioural responses associated with MWL and has used the corresponding measures to provide a real-time multimodal inference of MWL.</p> <p>As part of this research, three experimental activities were conducted. Experimental Activity 1 was an exploratory study that implemented and analysed an Electroencephalogram (EEG) index, as well as a straightforward data fusion method, during a complex One-to-Many (OTM) Unmanned Aerial Vehicle (UAV) wildfire detection scenario. The EEG index incorporated previously validated features of MWL and, as assessed with the Correlation Coefficient (CC), proved sensitive to changes in MWL in the complex task scenario.
Moreover, the straightforward data fusion approach showed that fusing the EEG index with an eye activity feature gave the highest correlation with a secondary task performance measure (CC = 0.73 ± 0.14).</p> <p>Experimental Activities 2 and 3 were more comprehensive and involved offline and online testing of a multimodal inference model of MWL during the Multi-Attribute Task Battery (MATB) scenario. This involved a rigorous analysis of a subject-specific EEG model and an Adaptive Neuro-Fuzzy Inference System (ANFIS) model for fusing the multimodal physiological and behavioural features. The results of the offline calibration and validation conducted in Experimental Activity 2 showed that the best-performing subject-specific feature combinations gave, on average, the lowest error relative to the task level, with an average Mean Absolute Error (MAE) = 0.28. Nonetheless, combining all seven features produced comparably good results, with an average MAE = 0.36.</p> <p>The final Experimental Activity 3 comprised the online validation (over two rounds) of 11 ANFIS models selected in the previous activity. The first five ANFIS models (containing different combinations of eye activity and control input features) all demonstrated good online performance, with MAE values around the 0.68 mark and the best-performing model achieving an average MAE = 0.67 and CC = 0.71. This was similarly reflected in the cross-session validation results. The remaining multimodal models of MWL showed a larger error, as the online inference from the EEG model carried an arbitrary offset that produced an equivalent offset in the output of the multimodal ANFIS model.
The efficacy of these models could nevertheless be seen in the normalised pairwise correlation with the target value, which showed good results, with ANFIS model 11 demonstrating the highest average correlation across the models tested (CC = 0.77). Hence, this study has demonstrated the ability of multimodal data fusion, using features extracted from EEG, eye activity and control inputs, to produce an accurate and repeatable inference of MWL. The investigation of multimodal fusion for MWL inference has helped corroborate the viability of real-time system adaptation in future aerospace Cyber-Physical-Human Systems (CPHS) architectures.</p>
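The two metrics quoted throughout the abstract, Mean Absolute Error (MAE) and the Pearson Correlation Coefficient (CC), score an inferred MWL signal against a ground-truth task level. The sketch below illustrates how such a score might be computed for a simple weighted-average fusion of two normalised feature streams; it is a toy stand-in for the thesis's ANFIS fusion, and all feature values, names, and the fusion weight are hypothetical, not taken from the thesis.

```python
# Illustrative sketch (not the thesis's method): fuse two hypothetical
# normalised workload features and score the fused estimate against a
# known task level with MAE and the Pearson Correlation Coefficient.
from math import sqrt

def mae(pred, target):
    """Mean Absolute Error between two equal-length sequences."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def pearson_cc(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fuse(eeg_index, eye_activity, w_eeg=0.6):
    """Weighted-average fusion of two normalised feature streams
    (a toy placeholder for an ANFIS-style fusion model)."""
    return [w_eeg * e + (1 - w_eeg) * a
            for e, a in zip(eeg_index, eye_activity)]

# Hypothetical normalised feature streams over six task epochs.
eeg_index    = [0.20, 0.35, 0.55, 0.70, 0.80, 0.60]
eye_activity = [0.25, 0.30, 0.50, 0.75, 0.85, 0.55]
task_level   = [0.20, 0.40, 0.60, 0.80, 0.80, 0.60]  # ground-truth MWL proxy

fused = fuse(eeg_index, eye_activity)
print(f"MAE = {mae(fused, task_level):.2f}, "
      f"CC = {pearson_cc(fused, task_level):.2f}")
```

Note that MAE penalises any constant offset in the inferred signal, whereas CC does not; this is why, in Experimental Activity 3, models whose EEG input carried an arbitrary offset could still show a high normalised pairwise correlation despite a larger MAE.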

History

Degree Type

Masters by Research

Imprint Date

2021-01-01

School name

School of Engineering, RMIT University

Former Identifier

9922018206501341

Open access

  • Yes
