Posted on 2020-07-16, 15:23, authored by Bello Shehu Bello
Online social media platforms such as Twitter offer flexible and effective means of communication on a large scale. The use of Twitter by political figures and government officials has increased the wider acceptance of the medium. However, the use of automated accounts, known as bots, in election campaigns raises concerns that the medium is being used to disseminate political propaganda aimed at manipulating public opinion. Recent research has shown significant success in detecting bots. Yet while there are approaches that distinguish automated accounts from regular user accounts, information about their masters, targets, strategies, biases and antagonisms remains harder to obtain. Uncovering this information is a challenging task, but it can lead to a better understanding of how bots are used in political campaigns. In this thesis, we propose an approach to reverse engineering the behaviour of Twitter bots in order to create a visual model that explains their actions. We use machine learning to infer a set of understandable rules governing a bot's behaviour, and a visual notation to make these rules accessible to a non-technical audience. We also propose the notion of differential sentiment as a means of understanding a bot's behaviour with respect to the topics on its network, in relation to both its sources of information (friends) and its target audience (followers); these two perspectives provide insight into the bot's bias and its antagonism towards its audience, respectively. We evaluated the approach using prototype bots that we created as well as selected real Twitter bots. The results show that the approach correctly describes the behaviour of these bots and can help to explain their role and impact.
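To illustrate how a differential sentiment measure of this kind might be computed, the sketch below compares a bot's mean sentiment on a topic with that of a reference group (friends or followers). This is a minimal, assumed formulation for illustration only; the function names, the (topic, score) data format and the [-1, 1] scoring scheme are not taken from the thesis.

```python
# Minimal sketch (assumed formulation): differential sentiment on a topic as
# the bot's mean sentiment minus a reference group's mean sentiment, where a
# reference group is either the bot's friends (sources) or followers (audience).
from statistics import mean

def mean_sentiment(posts, topic):
    """Mean sentiment a set of posts expresses on a topic; posts are (topic, score) pairs."""
    scores = [score for t, score in posts if t == topic]
    return mean(scores) if scores else 0.0

def differential_sentiment(bot_posts, reference_posts, topic):
    """Bot's sentiment on a topic relative to a reference group (friends or followers)."""
    return mean_sentiment(bot_posts, topic) - mean_sentiment(reference_posts, topic)

# Toy example: a bot that is far more positive about "candidate_x" than its
# followers yields a large positive differential, hinting at bias towards the topic.
bot = [("candidate_x", 0.8), ("candidate_x", 0.9), ("candidate_y", -0.6)]
followers = [("candidate_x", -0.2), ("candidate_x", 0.1), ("candidate_y", 0.3)]
print(differential_sentiment(bot, followers, "candidate_x"))  # 0.9
```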
This thesis contributes to the understanding of the behaviour and strategies of Twitter bots. As two case studies demonstrate, the approach can help to monitor the use of bots to manipulate public opinion and to create transparency in public debate.