Posted on 2025-12-02, 21:37, authored by Bernardo Moreira Coelho
<p dir="ltr">The aerospace industry's adoption of autonomous systems demands reliable, safe, and efficient solutions within deterministic, safety-critical workflows. This thesis investigates the challenges inherent in certifying AI-based autonomous systems for Aerospace and Defence. </p><p dir="ltr">The research is motivated by the need for Aerospace and Defence systems that operate autonomously while adhering to stringent safety standards. The methodology integrates state-of-the-art AI techniques, particularly reinforcement learning, into a comprehensive autonomous-systems stack. </p><p dir="ltr">Reinforcement learning has been shown to generalise across large state spaces, where deterministic methods scale exponentially, but an open question remains: can it also function when those state spaces are high-dimensional in the action sense, with a large number of possible actions? </p><p dir="ltr">A sizeable portion of this research is dedicated to applying novel policy-based reinforcement learning algorithms and testing their efficiency in real-world implementations. It further investigates the implications of sparse rewards and intrinsic curiosity-driven modules within the policy-based approach, and how well the approach generalises to more complex applications. </p><p dir="ltr">The incorporation of FlightGoggles, a photorealistic UAV simulation platform, and the SpaceX Crew Dragon simulator enables the practical implementation and demonstration of the developed autonomous systems in high-dimensional computer-vision processing scenarios. </p><p dir="ltr">The thesis also provides a novel implementation of an Autonomous Analyst, embedding human expert knowledge into reinforcement learning training workflows via Propositional Logic Networks, while leveraging an Actor-Checker architecture to build a reasonable safety case for the certification of non-deterministic agents. 
</p><p dir="ltr">This multi-domain approach provides the base knowledge needed to formulate and propose a novel solution to one of the dominant challenges in Artificial Intelligence: the disproportionate difficulty of certifying non-deterministic systems through traditional certification workflows such as DO-178C, which imposes the most demanding requirements on traceability and success rates for mission-critical applications. </p><p dir="ltr">By combining several of the most widely used artificial intelligence techniques to execute cohesively in critical mission scenarios, a practical application is constructed that provides realistic metrics and workflows for analysing and defining a safety-case argument for the use of reinforcement learning agents in safety-critical settings. In doing so, this research seeks to answer a question not yet resolved by industry or academia and to contribute to the advance of Artificial Intelligence into safety-critical applications.</p>
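<p dir="ltr">The sparse-reward setting mentioned above is commonly addressed with curiosity-driven intrinsic rewards. The following is a minimal sketch of that idea, not the thesis's actual module: it assumes a linear forward model (a stand-in for a learned network) whose prediction error serves as a curiosity bonus, with all dimensions chosen purely for illustration.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 8, 4  # hypothetical sizes, for illustration only

# Linear forward model: predicts the next state from (state, action).
W = np.zeros((STATE_DIM + ACTION_DIM, STATE_DIM))

def intrinsic_reward(state, action, next_state):
    """Curiosity bonus = squared prediction error of the forward model."""
    x = np.concatenate([state, action])
    return float(np.sum((next_state - x @ W) ** 2))

def update_forward_model(state, action, next_state):
    """One gradient step that makes this transition more predictable."""
    global W
    x = np.concatenate([state, action])
    error = x @ W - next_state
    lr = 0.5 / float(x @ x)  # step size scaled for stable convergence
    W -= lr * np.outer(x, error)

# A transition seen repeatedly becomes predictable, so its curiosity
# bonus decays toward zero -- the mechanism that pushes exploration
# toward novel states when extrinsic rewards are sparse.
s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
s_next = rng.normal(size=STATE_DIM)

first = intrinsic_reward(s, a, s_next)
for _ in range(100):
    update_forward_model(s, a, s_next)
later = intrinsic_reward(s, a, s_next)
```

<p dir="ltr">The decaying bonus is the essential behaviour: novel transitions pay a large exploration reward, familiar ones pay almost nothing.</p>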
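<p dir="ltr">The Actor-Checker arrangement referred to above can be illustrated with a small sketch. Everything here is an assumption for exposition, not the thesis's implementation: a stochastic actor (standing in for a trained RL policy) proposes actions, and a deterministic, auditable checker enforces propositional-style safety rules, clamping any proposal that violates them.</p>

```python
from dataclasses import dataclass
import random

@dataclass
class State:
    altitude_m: float
    descent_rate_mps: float

def actor(state: State, rng: random.Random) -> float:
    """Non-deterministic policy: proposes a descent-rate command (m/s)."""
    return rng.uniform(-10.0, 10.0)

def checker(state: State, proposed: float) -> float:
    """Deterministic rule set in propositional-logic style.

    Each rule is an auditable implication (condition -> bound), which is
    what makes the checker certifiable even though the actor is not.
    """
    # Rule 1 (illustrative): below 100 m, descent rate must not exceed 2 m/s.
    if state.altitude_m < 100.0 and proposed > 2.0:
        return 2.0
    # Rule 2 (illustrative): never command a climb faster than 5 m/s.
    if proposed < -5.0:
        return -5.0
    return proposed  # proposal satisfies every rule: pass it through

rng = random.Random(42)
state = State(altitude_m=50.0, descent_rate_mps=1.0)
commands = [checker(state, actor(state, rng)) for _ in range(1000)]
```

<p dir="ltr">Every executed command respects the safety envelope regardless of what the stochastic actor proposed, which is the core of the safety-case argument: certification effort concentrates on the small deterministic checker rather than the non-deterministic agent.</p>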