Social media platforms have become major sources of news creation, dissemination and consumption for online users. However, these platforms can easily be misused to circulate rumours and misinformation. The spread of false rumours can be rapid and unpredictable, resulting in severe negative impacts on the news ecosystem, individuals and society. Therefore, studying rumour detection and mitigation on online social networks (OSNs) is extremely important. This research aims to model how rumour propagation evolves, and then to propose novel deep learning algorithms to detect rumours and strategies to mitigate their spread on Twitter.
In the first part of this thesis, we investigate the dynamics of how rumours propagate over time. To do so, we employ two novel generative models: 1) a Multivariate Hawkes process, used to derive a new measure of user influence, namely the user influence rate, from the historical events (e.g., retweets) generated by users, and 2) a Marked Hawkes process that incorporates the user influence rate as marks to model how rumours propagate differently from non-rumours in terms of temporal pattern and user influence. Despite the obvious differences between the temporal patterns of rumour and non-rumour propagation, encoding this effect into models for rumour detection has not been previously considered.
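For illustration, the event intensity we have in mind takes the standard marked Hawkes form with an exponential kernel (the symbols below are generic notation, not necessarily those used in the thesis):
\[
\lambda(t) \;=\; \mu \;+\; \sum_{t_i < t} \kappa(m_i)\,\alpha\, e^{-\beta (t - t_i)},
\]
where \(t_i\) are the timestamps of earlier events (e.g., retweets), \(m_i\) is the mark attached to event \(i\) (here, the influence rate of the user who generated it), \(\mu\) is a background rate, and \(\alpha, \beta\) control how strongly and how quickly past events excite new ones.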
In the second part, for the detection of rumours we propose a novel graph convolutional network (GCN) based model, namely the temporal bidirectional GCN, or tBi-GCN. tBi-GCN incorporates temporal features into the GCN and considers users' posts along with both the top-down and bottom-up directions of rumour propagation. The tBi-GCN model effectively distinguishes rumours from non-rumours. Once a rumour has been detected, mitigation strategies can be employed to combat its spread (especially that of a false rumour) before it causes severe negative impacts on individuals and society.
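To make the idea concrete, the following is a minimal sketch of a bidirectional GCN over a propagation tree, written with PyTorch Geometric; it is not the exact tBi-GCN architecture from the thesis, and the class and variable names are illustrative assumptions.

```python
# Illustrative sketch (not the exact tBi-GCN implementation): a bidirectional
# GCN that encodes a rumour propagation tree in both directions and combines
# the two graph-level representations for classification.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class BiDirectionalGCN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.td_conv = GCNConv(in_dim, hidden_dim)   # top-down: source post -> replies
        self.bu_conv = GCNConv(in_dim, hidden_dim)   # bottom-up: replies -> source post
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        # x: node features built from post content plus (assumed) temporal
        #    features, e.g. each post's time offset from the source tweet.
        td = torch.relu(self.td_conv(x, edge_index))          # top-down pass
        bu = torch.relu(self.bu_conv(x, edge_index.flip(0)))  # bottom-up pass (reversed edges)
        h = torch.cat([global_mean_pool(td, batch),
                       global_mean_pool(bu, batch)], dim=-1)
        return self.classifier(h)                             # rumour / non-rumour logits
```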
In the third part of this thesis, we propose a deep reinforcement learning-based mitigation model to combat rumour spread on OSNs. Our proposed model, Epidemic-RL, simulates information propagation in a social network environment, using an epidemic model to capture changes in users' beliefs towards the information. An agent is trained to learn a multi-stage policy for selecting debunkers that inject truthful information at multiple stages under a budget constraint. The overall objective is to maximize the number of users who recover from infection by misinformation. Our extensive experiments on synthetic and real-world social networks demonstrate that Epidemic-RL can effectively minimize the spread of rumours on OSNs.
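As a rough, self-contained sketch of the multi-stage mitigation loop (not the Epidemic-RL implementation itself; the cascade model, the parameters, and the degree-based stand-in policy below are assumptions made purely for illustration):

```python
# Illustrative sketch: rumour spreads over a synthetic network, then at each
# stage a small set of debunker nodes is chosen under a total budget and
# truthful information spreads from them; the reward of interest is the
# number of users who recover from the rumour.
import random
import networkx as nx

def spread(graph, seeds, active, prob=0.1):
    """One independent-cascade-style diffusion starting from `seeds`."""
    frontier = set(seeds) - active
    active = active | frontier
    while frontier:
        nxt = set()
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in active and random.random() < prob:
                    nxt.add(v)
        active |= nxt
        frontier = nxt
    return active

random.seed(0)
G = nx.barabasi_albert_graph(500, 3)            # synthetic social network
infected = spread(G, seeds=[0], active=set())   # users who believe the rumour

budget, stages, recovered = 9, 3, set()
for stage in range(stages):
    k = budget // stages
    # A trained RL agent would pick the debunkers here; this sketch uses a
    # degree-based heuristic as a stand-in policy.
    debunkers = sorted(infected - recovered, key=G.degree, reverse=True)[:k]
    recovered = spread(G, seeds=debunkers, active=recovered)

print(f"infected: {len(infected)}, recovered: {len(recovered & infected)}")
```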