University of Bern, Switzerland, 19th–21st October 2023
Supported by the Swiss National Science Foundation
In a datafied society, living with and through big data technologies raises many issues: How are bodies and biographies (not) being taken into account in the process of standardization, and what space is left for ‘failing’ bodies and lives unfolding ‘otherwise’? How do individuals, institutions, and societies navigate sense-making amidst often conflicting forms of knowledge? What can practices of implementation, circumvention, and adaptation of diverse technologies tell us about the ways in which people and societies craft their futures?
Scholars studying the role of sensors, big data, and AI technologies in the fields of health, policing, and ecology will gather in Bern, Switzerland, to discuss these questions and the broader implications of datafication in a dynamic, collaborative environment. The Big Data Lives Symposium engages with technologically mediated practices of control and care and with the imaginaries present in sociotechnical assemblages, and explores how futures are being imagined, and imagined otherwise, in relation to data-driven practices that are already and continuously shaping our lives and the worlds we inhabit. We will engage with a series of thinking pieces and works in progress that will ground in-depth discussion and dialogue. We intend the symposium to lead to a publication.
The symposium will begin on Thursday, 19th October 2023 with a public lecture by Veronica Barassi.
On Friday 20th October and Saturday 21st October 2023 panels with invited scholars will take place. Please register for the panels via the registration form.
Thursday, 19th October 2023 – Public Lecture by Veronica Barassi (University of St. Gallen)
The Everyday Life of AI Failures: Conflicts, Experiences and Future Imaginaries
The rapid proliferation of data-driven and AI technologies in everyday life has led to the rise of important research in anthropology which investigates the relationship between big data and meaning construction (see Boellstorff and Maurer, 2015), analyses algorithms as culture (Dourish, 2016; Seaver, 2017) or explores the multiple ways in which people negotiate with processes of datafication (Pink et al., 2018; Dourish and Cruz, 2018). Yet little research has focused on AI failure and on how it is articulated, experienced and understood. Drawing on the findings of The Human Error Project, this keynote will show that we need to understand AI failure as a complex social reality that is defined by the interconnection between our data, technological design, and structural inequalities (Benjamin, 2019; Broussard, 2023), by political economic forces (Appadurai & Alexander, 2019), and by everyday practices and social conflicts (Aradau & Blanke, 2021). To make sense of the complexity of AI failure, we need a theory of AI errors. Bringing philosophical approaches to error theory together with anthropological perspectives, I argue that a theory of error is essential because it sheds light on the fact that the errors in our systems result from processes of erroneous knowledge production, from mischaracterisations and flawed cognitive relations. A theory of AI errors, therefore, ultimately confronts us with the question of knowledge production in our AI technologies. What types of cognitive relations and moral judgements define our AI technologies? How are these erroneous, misguided or deluded? As I will show in this lecture, when we pose these questions we come face-to-face with the extent of the fallacy of our models and their inability to understand the complexity of our worlds, cultures, and experiences. We also realise the fundamental role that anthropological knowledge can play in AI research.
Panels
Panel 1: Sensing Insecurities in Urban Policing
Participants: Daniel Marciniak, University of Hull; Florent Castagnino, Institut Mines-Télécom; Lucien Schönenberg, University of Bern
This panel critically engages with the field of urban security, focusing on data-driven policing practices in the UK, the US, Canada, and France. Sensors and algorithmic machines that expand the human sensorium often come with the promise of efficiency and objectivity overcoming human error. The proliferation of sensing technologies in policing has opened up an opportunity to think about the knowledge this human-machine hybridity (Suchman 2021) produces. Data accumulation for predictive cartographies, heat maps, and automated object recognition marks a shift from reactive to more proactive and predictive forms of policing (Brayne 2021). In this panel, we closely investigate the combined energies of humans, sensory devices, software, servers, and interfaces that produce visual representations for policing processes. Drawing on research on predictive policing software in the UK and the US and on the work of watching in video surveillance control rooms in Canada and France, we ask about technologized ways of sensing insecurities in urban spaces and their underlying epistemic regimes. The panel invites participants to resist algorithmic fetishism (Monahan 2018) by discussing ethnographically informed accounts of sensing technologies in policing, and to critically engage with theories of algorithmic governmentality/governance (Rouvroy and Berns 2013; Katzenbach and Ulbricht 2019; Issar and Aneesh 2022), data-driven managerialism (Benbouzid 2019), and the marketization of urban security in the era of digital capitalism.
Panel 2: Feeling Good? Caring for Everyday Relations with Algorithms
Participants: Minna Ruckenstein, University of Helsinki; Jeannette Pols, University of Amsterdam; Sophie Wagner, University of Bern
This panel addresses the ways we intimately relate to data-gathering and data-processing technologies, focusing on the affective dimensions that result from routine encounters with algorithmic technologies in the context of health and care.
Data technologies – sensors on bodies, cameras in homes, applications on smartphones and online platforms – are intimately entangled in the fabric of everyday lives, where desires, fears, and competing notions of truth and objectivity emerge. How do people experience health and illness as mediated through algorithmic technologies? When and how do we trust algorithmic decision-making? And what can we learn from the moments when skepticism and irritation disrupt the “anticipatory sensation of trust” (Pink, 2021) towards our co-evolving technological companions? Following Puig de la Bellacasa (2011), we care for future relations with algorithmic infrastructures by turning to neglected perspectives of living with algorithms – those moments when algorithmic relations don’t feel right. If articulations of “bad” algorithmic encounters are treated not merely as ambivalent personal reflections, but as intrinsic to “affective atmospheres of data” (Lupton, 2017) or to a patterned “algorithmic culture” (Ruckenstein, 2023), the epistemological value of affects and emotions in knowledge formation becomes visible. Caring, in this sense, means taking seriously practices such as repair work (Pink et al., 2018; Schwennesen, 2019) and tinkering and doctoring (Mol, 2006; Mol et al., 2010), which citizens, users, and patients perform as a way of mending relations with technologies.
Panel 3: Ecologies of Care and Control
Participants: Zsuzsanna Ihar, University of Cambridge; Carolina Dominguez Guzmán, University of Amsterdam; Darcy Alexandra, University of Bern
This panel focuses on ecological practices of control and care (Puig de la Bellacasa, 2017) as a means to apprehend contested theories of futurity. Considering the tensions between statecraft/military technologies and everyday acts of maintenance and repair, the panelists will present research on data collection cultures in the Hebridean Sea, the care for water infrastructure in Northern Peru, and citizen science interventions in the Sonoran Desert. Across these three field sites, we will discuss the crisscrossing of expertise between different stakeholders, the interdependencies among diverse forms of monitoring, including those intended to protect non-human worlds, and the co-existence of different versions of care.
Organizers and Contact
Prof. Michaela Schäuble, Dr. Darcy Alexandra, Sophie Wagner (MA), Lucien Schönenberg (MA)
If you have any questions regarding the symposium, please email lucien.schoenenberg@unibe.ch