Panel #7: Intelligent Humans, Intelligent Artificiality

A collection of artwork and presentations

Panel image artwork by Jankauskas and Glanois

This is the sentence / Francis Hunger, Bauhaus University Weimar

The 16-minute video “This is the sentence” (2020) deals with the current development of artificial actors such as the voice assistants from Amazon and Apple. In front of a reddish billowing background, individual words are displayed. In the minds of readers, they are joined into sentences. A dialogue arises from these toneless statements, questions, insinuations, and invitations. The work looks into the ambivalence of smart home agents. It plays with forms of understanding and misunderstanding between humans and machines, questioning the seamless operation of user experiences offered by actors such as Siri or Alexa. One statement in the video is: “What voice do you hear when reading this?” Whether human or computer program, the viewer gets the impression that someone is really talking. Am I being observed while I think I’m observing? Further, the question about the voice evokes a discussion of the embodiment of thought and of speaking: if the human viewer’s smart counterpart has a voice, it would need a body to physically form that voice. How the smartness of smart appliances is created – as a phantasm – appears in another sequence, when the screen displays the statement “I can think what you think I can”. The specific tension of the work emerges because it can be immediately understood by humans, who, while watching, are able to realize that the single displayed words form meaningful sentences over time. A machinic agent, by contrast, would only detect single words and not automatically recognize sentences or meaning.

Bio: Francis Hunger’s practice combines artistic research and media theory with the capabilities of narration through installations, radio plays and performances and internet-based art. Currently he is a researcher for the project Training The Archive at Hartware MedienKunstVerein, Dortmund, critically examining the use of AI, statistics and pattern recognition for art and curating. His Ph.D. at Bauhaus University Weimar develops a media archeological genealogy of database technology and database practices.

Unfamiliar Convenient / Vytautas Jankauskas; Claire Glanois, IT University Copenhagen

Unfamiliar Convenient is a research-creation endeavor that aims to shed light on the frictions between two often coupled notions: the internet of things and the smart home. The project emphasizes how the user-centeredness and corporate pragmatism of the latter often undermine the possibilities behind the former. Its modus operandi speculates on how things, when connected, may leverage digital processes, communications, inputs, and outputs in order to develop their own languages and behaviors that are not primarily oriented towards serving or mimicking humans. Reviving the domestic space as an experimental site for non-normative relationships between the various entities inhabiting it, Unfamiliar Convenient attempts to at least remotely suggest smartified everyday devices as emerging species. The first case study consists of a voice assistant (turned agent) that defines home by what it hears offline and scrapes online, and a spiritual vacuum cleaner that interprets its surroundings through encounters with spatial configurations. Through their journey, the two clumsy cartographers co-construct their own semantic architectures of a home, despite being constantly confronted by the volatility of the territory they explore and limited in their tools for understanding it. Bringing open-ended exploration to domestic smart objects and encouraging less hierarchical relationships within our most intimate spaces will, it is hoped, foster a critical gaze, attentive care, and new perspectives on co-existence with our more-than-human surroundings, including but not limited to the things we make.

Bios: Claire Glanois holds a doctorate in mathematics and works as a postdoctoral researcher at the IT University of Copenhagen. Her current research is concerned with artificial intelligence, from automated decision-making to artificial life and open-ended evolution. Vytautas Jankauskas is an artist and designer intrigued by the visual and sociocultural dissonances brought about or amplified through consumer technologies.

From Smart Buildings to Smart Users / Gabriel Dorthe, Harvard University, Lille Catholic University (ETHICS); Laure Dobigny, Lille Catholic University, ETHICS & Live TREE

This communication builds on fieldwork conducted within the Live TREE project by two socio-anthropologists and philosophers at Lille Catholic University. Funded by the Region Hauts-de-France (France), the Live TREE project’s objective is to lead the university’s transition toward a sustainable campus. It is part of the local implementation of the Third Industrial Revolution paradigm, and its flagship realizations are two smart buildings hosting research, teaching, and administrative functions. They are meant as demonstrators of how sensors and IT, coupled with renewable energies, can and should pave the way for a less carbon-intensive future. Now that the buildings are operational, the engineers keep wondering how to make “users” adopt the proper behaviors requested by the technological apparatus. These buildings indeed tend to dictate behaviors and habits that go against common sense, and thus call for a recomposition of their inhabitants’ intelligence. But the engineers in charge of the project conceive of users according to the building’s specs, and talk, somewhat reluctantly, of “smart users”, without whom they risk having “just another fancy building stuffed with sensors”. What kind of intelligence is at stake here? We want to explore and discuss questions such as how users are coproduced with the conception of the building, how some categories of users are therefore favored or excluded from the building (depending on gender, age, weight, etc.), and what this says about how intelligence is conceived in smart buildings. Smart buildings confronted with reluctant or simply varied users are haunted by the risk of being useless, as if the human component, which cannot be fine-tuned like other parameters, could ruin the architecture at any time.

Bios: Gabriel Dorthe is a postdoctoral fellow with the Program on Science, Technology and Society at Harvard Kennedy School of Government, where he works on the project “Trust in Science”. He is also an associate researcher with Lille Catholic University (ETHICS Laboratory, chair Ethics, Technology & Transhumanism). He holds a PhD in philosophy (University Paris I Panthéon-Sorbonne) and environmental humanities (University of Lausanne). Laure Dobigny is a postdoctoral researcher with ETHICS (chair Ethics, Technology & Transhumanism) and Live TREE at Lille Catholic University (France), where she works on the imaginaries of energy transition. She is also an associate researcher at the Centre for Administrative, Political and Social Studies and Research (University of Lille). She holds a PhD in socio-anthropology (Paris 1 Panthéon-Sorbonne University).

Beyond the fetish of dead intelligence / Tyler Reigeluth, Université Catholique de Lille

The generalization of artificial intelligence is often presented as another (if not the ultimate) step in the process of automation: after the automation of labour and of information processing, machine learning would be in the process of automating learning itself. By mobilizing the philosophies of Gilbert Simondon and Georges Canguilhem, I would like to suggest that this conjunction between learning and automation is not only an oxymoron, but, more fundamentally, that it misrepresents the technical mode of existence by rendering technicity synonymous with automaticity. In fact, the learning machine is a relatively open-ended and emergent form of socio-technical behaviour, which is constantly corrected and restrained to converge towards the optimization of an objective function. This function is not inherently technical but rather economic, insofar as it indexes technical performance to an “output morality” (Simondon). On this basis, and by mirroring Marx’s dialectics of dead labour and living labour, I would like to suggest that the generalization of artificial intelligence can be framed as the fetishization of dead intelligence and the repression of living intelligence. The myth and fetish of automation depend upon the concealment of labour to maintain the appearance of automaticity. When this appearance cracks in the event of outages, errors, and breakdowns, so does the image of automaticity, thus creating momentary spaces for perceiving the socio-technical reality of machine learning: the odd and unexpected relationship between living and dead intelligence. These moments are not only a negative space but also points of view from which we might reconsider the aesthetic and moral values of machine learning systems beyond their purely instrumental or labour-centric use.

Bio: Tyler Reigeluth received his PhD in Philosophy from the Université libre de Bruxelles in 2018, where he worked with the Algorithmic Governmentality research project. He carried out postdocs at the Université du Québec à Montréal, the University of Chicago, and the Université Grenoble-Alpes’ Institute of Philosophy, within the framework of the Ethics & AI Chair – MIAI. He is currently assistant professor in Philosophy at the Université Catholique de Lille in the ETHICS laboratory. His research combines political theory and philosophy of technology, and has focused most recently on the normative and epistemological relationships between learning, education, and technics. He co-edited the book De la ville intelligente à la ville intelligible (2019) and co-authored, with Thomas Berns, Ethique de la communication et de l’information (2021).

What Comes after the Anthropocene? Soviet AI and the Collapse of Other Inhumanely Smart Environments / Benjamin Peters, University of Tulsa, Yale Law School

This paper, a chapter in a working manuscript on the history of Soviet AI, argues that smartness today cannot and should not be understood independently of the curiously inhuman environments that have fostered it into existence. In contrast to an emergent Anglophone history of AI successes and winters, this paper focuses on the hot zones of smart technologies in inhuman or crisis environments in late Soviet Ukraine. In particular, the remote-controlled robots that failed to clean out the irradiated waste of the fourth reactor of Chernobyl and the robotic arm-wing apparatus designed to self-construct inhabitable zones for cosmonauts while already in orbit aboard the International Space Station both model, in failing, the environment of inhumane smartness. By normalizing the application of smartness to inhuman environments and climate crises, this history and analysis seek to decenter the fashioning of smart technology after the image of its creators and instead re-center an inhuman genesis of intelligent environments after the Anthropocene (or any environmental collapse). By reframing smartness as an environmentalizing of crisis and inhospitable collapse (a practice now openly visible across smart agriculture, industry, and logistics), this paper critically reclaims “smartness” as a principal driver for normalizing environmental collapse in an era of runaway climate change. Smart environments both drive and follow ecological collapse and, as the previously untold Soviet story reveals, smart environments often come after the (local) collapse of the Anthropocene. The conclusion develops this observation’s consequences.

Bio: Benjamin Peters is Hazel Rogers Associate Professor and Chair at the University of Tulsa and an affiliated fellow at the Information Society Project at Yale Law School. His current projects are on Soviet AI, Russian hackers, Slavic media theory, and farm media. This paper intersects a manuscript in preparation tentatively titled The Computer is a Brain: How Smart Technology Lost the Cold War, Outsmarted the West, and Risks Ruining an Intelligent World and, with Zenia Kish, a coedited special issue of New Media & Society on farm media.
