In Stanley Kubrick's 2001: A Space Odyssey, an ape uses a bone to kill an unarmed ape from an enemy group. The conqueror exultantly flings the murder weapon into the air. Whirling through space, the bone transforms into a spacecraft. Inside the craft resides the supposedly infallible supercomputer HAL 9000, making autonomous decisions that detrimentally impact the human crew. From bone to spaceship, from prehistoric tool to futuristic hyperintelligence, this short scene in film history can be understood as an allegory for humanity’s history with technology. Humans, through the production and appropriation of technological artifacts, compensate for their physical deficiencies, but they could also be replaced or even destroyed by a new, artificial intelligence. This fictional scene in Kubrick's 1968 film anticipated advances in machine learning as well as in automated decision-making. These imaginaries of the future not only influence our present engagement with technology but also shape our fantasies about the future of humanity and the role technology might play within it.
Rather than approaching technological advances through a framework that asks how technology will “save” humanity or render humans obsolete, the Big Data Lives Symposium, held at the University of Bern from October 19 to 21, 2023, highlighted the social lives of technology. The focus on social practices allows scholars to think about the multiplicity of ways people work and live with sensors, data, and AI technologies in everyday life.
The opening public keynote by Veronica Barassi (University of St. Gallen) not only reminded us of failures and bias in AI[1] but also tried to move beyond this critique to work towards a theory of AI Errors. For Barassi, there has been a “cultural turn in AI research,” exemplified by foundation models.[2] These models are pre-trained on massive datasets and are meant to be used across different contexts and situations, so that a single failure can affect diverse applications at once. According to Barassi, error is an integral part of these models: AI will always mistake an object for something it is not, as it only produces patterns and a statistical corpus of knowledge. In that sense, it is very unlikely that AI will ever be able to understand the complexity of social worlds and human experience. Making sense of unpredictability therefore remains something that humans do, not intelligent machines.
Speaking of (un)predictability, the first presentation on Friday morning reflected on predictive policing and other forms of data-driven technologies among police forces in the UK and US. Daniel Marciniak explained how AI reinforces pre-existing logics in policing. Starting from the premise that how crime is measured defines the actions and practices of the police, he described how ideas of AI and automation had been translated into a tool to manage police patrols on the ground. In short, AI is neither replacing police officers nor making them do things automatically; rather, it is translated into a management tool for hierarchical control that perpetuates pre-existing policing logics.
But aren’t unsupervised models[3] creating new forms of knowledge that would otherwise have remained hidden from humans? While that might be the case, this knowledge is not objective, argues Florent Castagnino, who works on algorithmic video surveillance in policing. Whereas supervised learning tends to replicate the previous categorizations and ways of thinking of security professionals, unsupervised machine learning does not define in advance what counts as suspicious behavior but instead recognizes patterns. However, what such systems flag as ‘suspicious’ or ‘abnormal’ is determined by the infrequency of a situation. In short, statistically rare events get conflated with the moral categories of ‘abnormality’ and ‘suspicion’.
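To make this point concrete in purely mechanical terms, the following minimal sketch (hypothetical, not drawn from any system discussed at the symposium) uses scikit-learn's IsolationForest on toy “events”: the model only scores how statistically easy a point is to isolate, and calling the resulting outliers “abnormal” or “suspicious” is an interpretive step added by humans.

```python
# Minimal illustrative sketch: an unsupervised model scores statistical rarity only.
# The data and the "suspicious" framing are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy "events" with two features (e.g. time spent in one spot, group size).
common = rng.normal(loc=[5.0, 3.0], scale=1.0, size=(500, 2))  # frequent situations
rare = rng.normal(loc=[12.0, 0.5], scale=0.5, size=(5, 2))     # infrequent situations
events = np.vstack([common, rare])

# IsolationForest gives lower scores to points that are easy to isolate,
# i.e. statistically rare ones. It knows nothing about "suspicion".
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)  # -1 = outlier, 1 = inlier

print(f"Flagged {np.sum(flags == -1)} of {len(events)} events as outliers")
# Labeling these outliers "abnormal" or "suspicious" is a human, moral judgment
# layered on top of a purely statistical notion of infrequency.
```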
Connecting to these social definitions of abnormality and suspicion, Lucien Schönenberg focused on police officers’ work with video surveillance cameras in CCTV control rooms and on how they classify the situations they observe. It will be interesting to see how algorithmic systems that automatically recognize and classify objects will change the work of video surveillance operators.
Relations with algorithmic infrastructures have become part of our everyday lives in many different forms, and they can become particularly intimate when they are connected to bodies and their fluids. Wearing a “smart” device in everyday life influences not only our expectations and experiences but also our behavior. For Minna Ruckenstein, self-tracking devices, with their pre-programmed definitions of “good” sleep, for example, come with an ideological stance that turns sleep into a “productive” activity. That numbers frame the experience is something Jeanette Pols expanded on further. Self-tracking can intensify feelings of being or not being in control; the body does not coherently live up to the numbers, and when certain metrics are (not) achieved, moralization comes into play. For instance, people start depending on sleep metrics rather than trusting an embodied feeling of having slept “enough,” and judge themselves for these outcomes. They might just as well trick (or cheat) their devices into registering certain numbers, for example by shaking them to produce “more steps.” Individuals and algorithmic devices co-evolve through this loop, which can feel empowering or intrusive. Stepping out of the loop, as Ruckenstein reminded us in the discussion, is therefore a necessary form of agency.
For type 1 diabetes patients who use (semi-)automated systems of sensors and pumps, stepping out of the loop and disengaging from the technology often isn’t an option, as Sophie Wagner demonstrated. Living with medical sensors demands a great deal of trust in and acceptance of the technology, while simultaneously requiring patients to remain skeptical and to rely on intuition and bodily sensations (for example, when they feel that the numbers displayed by the devices are wrong or lag behind). The co-evolution of patients and technology is a matter of fact here. Doctors are currently learning how to navigate the disease within this new setting, in which some information is readily available in the form of graphs and charts, while the patients’ experiential knowledge, due to its narrative form, remains to be excavated within intimate human relations.
Intimate encounters with technology and the practices of implementation, circumvention, and adaptation may also tell us how people and societies craft their futures. With an attention to feminist theory, the last panel of the symposium was devoted to thinking about the material and future engagements with technology and how theories of futurity are put into practice.
Szusanna Ihar analyzed a Scottish seascape in which militarized zones and ecological sanctuaries overlap. She told a story of the heritage of indigenous Scottish knowledge, passed along through song, in conversation with present-day sonic military infrastructure. The former perceives water as a form of knowledge in and of itself; the latter occupies waters for technoscientific data extraction. Military sonar and the acoustic deterrent devices used by fish farms as well as trawlers occupy much of the sonic spectrum in the sea, thus silencing the sounds of whales and other species.
To re-imagine future engagements with technology, the panelists proposed to shift the focus from “control” to “care,” while understanding the tensions between the two. Working on water management infrastructure in Northern Peru, Carolina Domínguez-Guzmán showed the limitations of an approach that emphasizes control as a dominant rationale in water management. She proposed a linguistic and ontological shift, an attention to a vocabulary of care, and a serious consideration of the way people live with spirits inhabiting watersheds and landscapes.
Darcy Alexandra, working in the US-Mexican borderlands, proposed a necessary re-imagining of this settler-colonial landscape known for its border wall, surveillance infrastructure, and ongoing threats of mineral extraction. Through ethnographic poetry and a short audiovisual portrait of the Sonoran Desert, she opened up an alternative view of the borderscape. Revealing more-than-human actors, water sanctuaries, and ecological practices of care, she invited us to consider the material impacts of statecraft technology while critically rethinking our assumptions about the region.
The symposium proposed empirically, analytically, and theoretically diverse approaches to sociotechnical networks, human-machine hybridity, and possibilities for re-thinking future engagements with technology. While Kubrick’s supercomputer HAL 9000 connects well to a societal horizon of expectations both enchanted and daunted by technological innovation, the symposium invited participants to leave these imaginaries aside. By focusing on everyday engagements with sensor-based technologies and on diverse forms of knowledge production and data collection, the participants could discuss technology creatively without pitting AI and humans against one another.
[1] Barassi, known for her book “Child Data Citizen”, has long worked on AI failures and bias. Critical Algorithm and Data Studies have shown how AI systems reproduce structural inequalities while remaining unexplainable and often unaccountable. One of the by-now classic examples is Amazon’s facial recognition software, which worked well for white men but very poorly for Black women (see https://www.ajl.org/about).
[2] More information on foundation models can be found here: https://en.wikipedia.org/wiki/Foundation_models.
[3] Unsupervised machine learning produces patterns without anyone telling the computer what kinds of patterns it has to look for. See also https://en.wikipedia.org/wiki/Unsupervised_learning