Jiang and Vetter Publish Article on Wikipedia Bots and Problematic Information

Posted on August 24, 2019

Jialei Jiang (PhD candidate, English Composition & Applied Linguistics) and Matt Vetter (Faculty, English Composition & Applied Linguistics) have published an article in the journal Postdigital Science and Education, in a special issue titled "Lies, Bullshit and Fake News: Some Epistemological Concerns."

"The Good, the Bot, and the Ugly: Problematic Information and Critical Media Literacy in the Postdigital Era" explores Wikipedia bots and problematic information in order to consider implications for cultivating students’ critical media literacy.

To understand bots and other algorithms as more than mere tools, the authors turn toward a postdigital theorization of bots as ‘agents’ that co-produce knowledge in conjunction with human editors and actors. The article presents case studies of three Wikipedia bots, ClueBot NG, AAlertbot, and COIBot, each of which engages in some form of information validation in the encyclopedia. These case studies ultimately support the argument that information validation processes in Wikipedia are complicated by their distribution across multiple human-computer relations and agencies. Although these bots are programmed to combat problematic information, their efficacy is challenged by social, cultural, and technical issues related to misogyny, systemic bias, and conflict of interest.

Ultimately, studying the function of Wikipedia bots makes space for extending educational models of critical media literacy. In the postdigital era of problematic information, students should be alert to how the human and the nonhuman, the digital and the nondigital, interfere and exert agency in Wikipedia’s complex and highly volatile processes of information validation.

Read the special issue’s call for papers commentary.

Access the article online.

Department of English