Authors
Fatemeh Alizadeh, Aikaterini Mniestri, Gunnar Stevens
Publication date
2022
Conference
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
Description
Disposing of bad actors on social media is a daunting task, particularly in the face of “engineered social tampering” [4], the term Ferrara et al. [6] have used for the rise of social bots, and large platform owners are struggling to mitigate the harmful effects caused by such malicious software. It is therefore no surprise that platform owners like Meta are tightening their security controls, and that the popular press has tracked the efficacy of these measures. Specifically, Meta has been implementing what Forbes’ Lance Eliot has called the ‘Upside Down Turing Test’ [26]. Unlike the original Turing test, which tasked a human participant with distinguishing a human from a machine conversational partner, this version uses a software program to detect non-human activity on the platform. In this work, we discuss the complications introduced by this reversal from the human user’s perspective. On the one hand, we recognize the necessity of fraud detection and defense against web-automated attacks. On the other hand, we find it necessary to uplift the voices of users who, in minor or major ways, are wrongfully victimized as a result. At the same time, we offer alternatives to these invisible Reverse Turing Tests (RTTs) that expand the scope for distinguishing between human and non-human actors, while keeping humanity at the forefront of this inquiry.
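To make the idea of an invisible RTT concrete, the following is a minimal, hypothetical sketch of how a platform might score behavioral signals server-side to separate human from automated activity. It is not the paper's method or any platform's actual system; the Session fields, weights, and threshold are illustrative assumptions chosen only to show where false positives against human users can arise.

# Minimal, hypothetical sketch of an "invisible" Reverse Turing Test:
# the platform scores behavioral signals without prompting the user.
# All signals, weights, and the threshold are illustrative assumptions,
# not any platform's actual detection logic or the paper's method.
from dataclasses import dataclass

@dataclass
class Session:
    requests_per_minute: float    # sustained request rate
    mouse_events: int             # pointer activity observed client-side
    account_age_days: int         # age of the acting account
    posts_identical_ratio: float  # share of near-duplicate posts, 0..1

def bot_score(s: Session) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if s.requests_per_minute > 60:       # faster than plausible human browsing
        score += 0.35
    if s.mouse_events == 0:              # no pointer activity at all
        score += 0.25
    if s.account_age_days < 2:           # freshly created account
        score += 0.15
    score += 0.25 * s.posts_identical_ratio  # copy-paste posting behavior
    return min(score, 1.0)

def classify(s: Session, threshold: float = 0.6) -> str:
    # The false-positive risk the paper highlights lives exactly here:
    # a human who browses quickly, uses keyboard navigation, or has a
    # new account can cross the threshold and be wrongfully flagged.
    return "suspected bot" if bot_score(s) >= threshold else "human"

if __name__ == "__main__":
    human = Session(requests_per_minute=8, mouse_events=120,
                    account_age_days=400, posts_identical_ratio=0.05)
    scripted = Session(requests_per_minute=300, mouse_events=0,
                       account_age_days=1, posts_identical_ratio=0.9)
    print(classify(human))      # human
    print(classify(scripted))   # suspected bot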